MongoDB connection fails on multiple app servers

We use MongoDB with the mgo driver for Go. There are two app servers, each running a Go binary that connects to MongoDB. MongoDB runs as a replica set, and each app connects to the primary or a secondary depending on the replica set's current state.
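
Roughly, the connection setup in our apps looks like the sketch below (the hostnames, database name, replica set name, and read mode are placeholders, not our real config):

    package main

    import (
        "log"
        "time"

        mgo "gopkg.in/mgo.v2"
    )

    func main() {
        // Dial with a seed list; mgo discovers the rest of the replica set
        // topology from any reachable member.
        session, err := mgo.DialWithTimeout(
            "10.10.0.5:27017,10.10.0.7:27017/appdb?replicaSet=rs0",
            10*time.Second,
        )
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Reads may be served by a secondary; writes still go to the primary,
        // so the session follows the replica set's current state.
        session.SetMode(mgo.SecondaryPreferred, true)

        // Bound blocking socket I/O so that a dead connection surfaces as an
        // error (such as the i/o timeout below) instead of hanging forever.
        session.SetSocketTimeout(1 * time.Minute)
        session.SetSyncTimeout(1 * time.Minute)
    }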

We experienced a SocketException handling request, closing client connection: 9001 socket exception on one of the mongo servers, which caused the connections from our apps to MongoDB to die. After that, the replica set continued to be functional, but the connection also died on our second app server (on which the error didn't happen).

In the golang logs it was manifested as:

read tcp 10.10.0.5:37698->10.10.0.7:27017: i/o timeout

Why did this happen? How can this be prevented?

As I understand it, mgo connects to the whole replica set from a single instance's URL (it detects the full topology from that one seed), but why did the death of the connection on one of the servers kill it on the second one as well?

Edit:

  1. The full package path used is "gopkg.in/mgo.v2".
  2. Unfortunately, I can't share the mongo log files here. But besides the SocketException, the mongo logs don't contain anything useful. There is an indication of some degree of lock contention, where the lock acquisition time is sometimes quite high, but nothing beyond that.
  3. MongoDB sometimes does some heavy indexing, but there weren't any unusual spikes recently, so it's nothing beyond normal.
Dissentious answered 9/10, 2018 at 17:21 Comment(6)
Could you share the mongodb log files, and mongostat output if possible? Also, please show us the socket options you defined?Albuminuria
Which mgo driver are you using? Please post the full package path you use to import it.Pyonephritis
Can you check whether your connection is doing some heavy ops on MongoDB?Rainer
@Pyonephritis answered in the editsDissentious
@LarsHendriks answered in the editsDissentious
@Astro answered in the editsDissentious

First, the mgo driver you are using, gopkg.in/mgo.v2, developed by Gustavo Niemeyer (hosted at https://github.com/go-mgo/mgo), is no longer maintained.

Instead, use the community-supported fork github.com/globalsign/mgo, which continues to be patched and to evolve.

Its changelog includes "Improved connection handling", which seems directly related to your issue.

Details can be read at https://github.com/globalsign/mgo/pull/5, which points to the original pull request https://github.com/go-mgo/mgo/pull/437:

If mongoServer fails to dial the server, it will close all sockets that are alive, whether they're currently in use or not. There are two cons:

  • In-flight requests will be interrupted rudely.

  • All sockets are closed at the same time and are likely to re-dial the server at the same time. Any occasional failure among the massive dial requests (a high-concurrency scenario) will make all sockets close again, and repeat... (It happened in our production environment.)

So I think sockets currently in use should be closed only once they go idle.
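
Independently of which fork you use, a common mgo pattern for surviving a dead socket is to call Session.Refresh() when an operation fails with a network error: it puts the session's reserved sockets back, so the next operation acquires (or dials) a healthy connection instead of reusing the dead one. A minimal sketch with a retry-once policy (the withRetry helper is my own illustration, not part of the driver):

    package mongoutil

    import (
        mgo "gopkg.in/mgo.v2"
    )

    // withRetry runs op and, if it fails, refreshes the session and retries
    // exactly once. Refresh discards the session's reserved sockets, so the
    // retried op does not reuse the connection that just timed out.
    func withRetry(session *mgo.Session, op func(*mgo.Session) error) error {
        if err := op(session); err == nil {
            return nil
        }
        session.Refresh()
        return op(session)
    }

In production you would want to retry only on network errors and cap the number of retries, but the Refresh() call is the essential part.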

Note that github.com/globalsign/mgo has a backward-compatible API: it basically just adds a few new things / features (besides the fixes and patches), which means you should be able to just change the import paths and everything should keep working without further changes.
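
In practice the migration is just an import path change; a minimal sketch (the address is a placeholder):

    package main

    import (
        "log"

        // Before (unmaintained):
        //   mgo "gopkg.in/mgo.v2"
        //   "gopkg.in/mgo.v2/bson"
        // After (maintained fork, backward-compatible API):
        mgo "github.com/globalsign/mgo"
    )

    func main() {
        session, err := mgo.Dial("10.10.0.5:27017") // placeholder address
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        // The rest of your mgo.v2 code should work unchanged.
    }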

Pyonephritis answered 15/10, 2018 at 21:10 Comment(2)
Thanks, switching drivers was my initial idea too, since I know the mgo driver is no longer maintained. However, I wanted to better understand the root cause of the problem. One idea I had is that because the mgo driver connects to the replica set directly and holds a connection to the primary, it failed when the primary failed.Dissentious
@Dissentious There are even more fixes in globalsign/mgo related to connectivity; the one I quoted seems to be directly related to your problem. Does switching to globalsign/mgo fix your problem?Pyonephritis
