Why is it recommended not to close a MongoDB connection anywhere in Node.js code?

Consider the following Node.js code:

function My_function1(_params) {
    db.once('open', function (err) {
        // Do some task 1
    });
}

function My_function2(_params) {
    db.once('open', function (err) {
        // Do some task 2
    });
}

See the following link for the best practice, which says not to close any connections:

https://groups.google.com/forum/#!topic/node-mongodb-native/5cPt84TUsVg

I have seen that the log file contains the following data:

Fri Jan 18 11:00:03 Trying to start Windows service 'MongoDB'
Fri Jan 18 11:00:03 Service running
Fri Jan 18 11:00:03 [initandlisten] MongoDB starting : pid=1592 port=27017 dbpath=\data\db\ 64-bit host=AMOL-KULKARNI
Fri Jan 18 11:00:03 [initandlisten] db version v2.2.1, pdfile version 4.5
Fri Jan 18 11:00:03 [initandlisten] git version: d6...e0685521b8bc7b98fd1fab8cfeb5ae
Fri Jan 18 11:00:03 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
Fri Jan 18 11:00:03 [initandlisten] options: { config: "c:\mongodb\mongod.cfg", logpath: "c:\mongodb\log\mongo.log", service: true }
Fri Jan 18 11:00:03 [initandlisten] journal dir=/data/db/journal
Fri Jan 18 11:00:03 [initandlisten] recover begin
Fri Jan 18 11:00:04 [initandlisten] recover lsn: 6624179
Fri Jan 18 11:00:04 [initandlisten] recover /data/db/journal/j._0
Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:59343 < lsn:6624179
Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:118828 < lsn:6624179
Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:238138 < lsn:6624179
Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:835658 < lsn:6624179
Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:955218 < lsn:6624179
Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:3467218 < lsn:6624179
Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:3526418 < lsn:6624179
Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:3646154 < lsn:6624179
Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:3705844 < lsn:6624179
Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section more...
Fri Jan 18 11:00:05 [initandlisten] recover cleaning up
Fri Jan 18 11:00:05 [initandlisten] removeJournalFiles
Fri Jan 18 11:00:05 [initandlisten] recover done
Fri Jan 18 11:00:10 [initandlisten] query MYDB.system.namespaces query: { options.temp: { $in: [ true, 1 ] } } ntoreturn:0 ntoskip:0 nscanned:5 keyUpdates:0  nreturned:0 reslen:20 577ms
Fri Jan 18 11:00:10 [initandlisten] waiting for connections on port 27017
Fri Jan 18 11:00:10 [websvr] admin web console waiting for connections on port 28017
Fri Jan 18 11:01:10 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 32ms
Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50076 #1 (1 connection now open)
Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50077 #2 (2 connections now open)
Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50078 #3 (3 connections now open)
Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50079 #4 (4 connections now open)
Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50080 #5 (5 connections now open)
Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50081 #6 (6 connections now open)
Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50082 #7 (7 connections now open)
Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50083 #8 (8 connections now open)
Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50084 #9 (9 connections now open)
Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50085 #10 (10 connections now open)
...........................................
Fri Jan 18 13:36:48 [initandlisten] connection accepted from 192.168.0.1:50092 #97 (97 connections now open)

Doesn't this create overhead on the server by opening multiple connections and not closing them? Does it handle connection pooling internally?

But the MongoDB docs mention that "This is normal behavior for applications that do not use request pooling".

Can somebody help me understand this?

Moralist asked 24/1, 2013 at 7:26 Comment(1)
Even this link says the same: "Keep one or more connections open and reuse them in your code." (in the last comment) github.com/mongodb/node-mongodb-native/issues/84 – Moralist

You open a Db connection once with MongoClient and reuse it across your application. If you need to use multiple databases, you use the .db function on the Db object to work on a different db over the same underlying pool of connections. A pool is kept so that a single blocking operation cannot freeze up your Node.js application. The default size is 5 connections in a pool.

http://mongodb.github.io/node-mongodb-native/driver-articles/mongoclient.html
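
As a minimal sketch of that pattern (using the current MongoClient API; the connection string, database, and collection names below are placeholders, not anything from the question):

const { MongoClient } = require('mongodb');

// one client for the whole application; the driver keeps a connection pool internally
const client = new MongoClient('mongodb://localhost:27017', { maxPoolSize: 5 });

async function main() {
  await client.connect();                 // connect once at startup
  const appDb  = client.db('MYDB');       // uses the shared pool
  const logsDb = client.db('logs');       // a second db, same pool, no extra connections

  await appDb.collection('users').insertOne({ name: 'test' });
  console.log(await logsDb.collection('events').countDocuments());
}

main()
  .catch(console.error)
  .finally(() => client.close());         // close only when the whole process is done

In a long-running server you would skip the final close() and keep the client alive for the lifetime of the process.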

I also forgot to add: as the other answer pointed out, setting up a new TCP connection is EXPENSIVE both time-wise and memory-wise, which is why you reuse connections. A new connection will also cause a new thread to be created on MongoDB, using memory on the db server as well.

Ballinger answered 30/1, 2013 at 15:41 Comment(1)
I've just built a cron task that reconnected to Mongo each time. It's a quick task that archives some data. Reconnecting each time, the task took ~15-25ms. Reusing the connection, it takes ~0-1ms. That's the real-world difference: a 15-25x speedup from reusing the connection. Of course, some may say 25ms is fast enough, but why eat more resources even on simple tasks? Just reuse the connection. Done. – Monkey

The MongoDB driver pools database connections to be more efficient, so it is not unusual to see many open connections in mongodb.log.

However, it is useful to close all connections when your app shuts down completely. The following code handles this:

const mongoose = require('mongoose');

// Gracefully close the Mongoose connection when the app receives Ctrl-C (SIGINT).
process.on('SIGINT', function () {
  mongoose.connection.close(function () {
    console.log('Mongoose disconnected on app termination');
    process.exit(0);
  });
});
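
If you use the native driver rather than Mongoose, a similar hedged sketch (assuming client is your application's shared MongoClient) would be:

process.on('SIGINT', async function () {
  await client.close();   // `client` is assumed to be the app-wide MongoClient
  console.log('MongoClient disconnected on app termination');
  process.exit(0);
});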
Federicofedirko answered 13/1, 2015 at 3:23 Comment(5)
What is "process" ?Berman
Is a native method of nodeBolanger
So the efficient and rigorous approach is: [1] Open a connection; [2] Use that same connection for all database operations until the script is scheduled to exit; [3] Close the connection only when the script is scheduled to exit. Have I understood this correctly?Laevorotatory
@DavidEdwards did you get your confimartion anywhere regarding this question? I was also thinking in the same way but am unsure whether it is 100% correctVivyan
@eugensunic ... I've discovered that if you fail to close a MongoDB connection in a node.js script, the script remains in Node's tick queue even after it's finished executing. The result is that the server keeps waiting for Node to complete, and because this never happens, you end up with timeout failures in AJAX.Laevorotatory

I am no Node.js expert; however, I think the reason is much the same across most languages.

Making a connection is:

one of the most heavyweight things that the driver does. It can take hundreds of milliseconds to set up a connection correctly, even on a fast network.

( http://php.net/manual/en/mongo.connecting.pools.php )

Granted, that is for PHP and the doc is a little out of date, but that part still applies even now across most, if not all, drivers.

Each connection can also use a separate thread, which causes obvious overhead.

It seems from:

http://mongodb.github.com/node-mongodb-native/driver-articles/mongoclient.html#the-url-connection-format

That Node.js still uses connection pooling to avoid the overhead of making new connections. This, of course, no longer applies to other drivers such as the PHP one.

It opens x connections (the default is 5) to your database server and hands work to a free connection when data is needed, reusing existing connections and avoiding that expensive setup process, which is what produces those logs:

https://docs.mongodb.com/manual/faq/diagnostics/#why-does-mongodb-log-so-many-connection-accepted-events
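
As a hedged sketch of what that looks like in practice (the URI, database, and collection names are placeholders; maxPoolSize is the option name in current drivers, while older releases called it poolSize):

const { MongoClient } = require('mongodb');

// ask the driver for a pool of 10 connections instead of the default 5
const uri = 'mongodb://localhost:27017/MYDB?maxPoolSize=10';

async function run() {
  const client = await MongoClient.connect(uri);
  // each query is handed to a free connection from the same pool
  const count = await client.db('MYDB').collection('items').countDocuments();
  console.log(count);
  await client.close();   // only once the process is finished with the database
}

run().catch(console.error);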

Haloid answered 24/1, 2013 at 9:9 Comment(0)

If you want to ensure that Mongo disconnects when the program terminates, and that only one connection is ever established during the runtime of your program, I recommend writing the following singleton (this is in Python, not Node, unfortunately, but the same concepts apply).

import atexit

from pymongo import MongoClient

import config  # assumed: a module of yours that holds CONNECTION_STRING


class MongoDB:
    '''define class attributes'''
    __instance = None

    @staticmethod
    def getInstance():
        """ Static access method. """
        # if the instance doesn't exist, invoke the constructor
        if MongoDB.__instance is None:
            MongoDB()
        # return the instance
        return MongoDB.__instance

    def __init__(self) -> None:
        """ Virtually private constructor. """
        if MongoDB.__instance is not None:
            raise Exception("Singleton cannot be instantiated more than once")
        else:
            print("Creating MongoDB connection")
            # set the instance and its attributes
            self.client = MongoClient(config.CONNECTION_STRING)
            MongoDB.__instance = self

    @staticmethod
    @atexit.register
    def closeConnection():
        '''
        Python's '__del__' (destructor dunder) doesn't always get called,
        mainly when the program is terminated by Ctrl-C, so this method is
        decorated with 'atexit', which ensures it is called on program termination.
        '''
        if MongoDB.__instance is not None:
            MongoDB.__instance.client.close()
            print("Closing connections")
I think this is a good design pattern to make sure all connections are closed at the end of the program, and that the same client instance is shared and established only once; as someone previously stated, connecting to the database is one of the most expensive tasks and should be avoided where possible.

Verdin answered 14/4, 2022 at 2:57 Comment(0)
