MongoDB - PHP - MongoCursorException 'Cursor not found'
I have 2 collections: A (3.8M docs) and B (1.7M docs)

I have a PHP script that I run from the shell that:

  1. loops over each record in A
  2. ~60% of the time, does a findOne on B (using _id)
  3. does some basic math, creating a PHP array

Once the loop over all docs in A is done, it:

  4. loops over the PHP array
  5. upserts into collection C
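As a minimal sketch of the workflow above (the collection names A/B/C are from the post, but the `b_id` reference field, database name, and the legacy `mongo`-driver calls are my assumptions):

```php
<?php
// Sketch only -- field/db names are illustrative, not from the original post.
$mongo = new MongoClient();
$db    = $mongo->selectDB('mydb');

$results = array();

// Steps 1-3: loop over A, occasionally look up B, accumulate in a PHP array.
foreach ($db->A->find() as $doc) {          // this is the cursor that times out
    if (isset($doc['b_id'])) {              // ~60% of docs reference into B
        $related = $db->B->findOne(array('_id' => $doc['b_id']));
        // ... some basic math on $doc / $related ...
    }
    $results[(string) $doc['_id']] = 1;     // placeholder for the computed value
}

// Steps 4-5: upsert the accumulated values into C.
foreach ($results as $id => $value) {
    $db->C->update(
        array('_id' => new MongoId($id)),
        array('$set' => array('value' => $value)),
        array('upsert' => true)
    );
}
```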

During (1), I consistently get: PHP Fatal error: Uncaught exception 'MongoCursorException' with message 'Cursor not found'. The last item processed was #8187 of 3872494.

real    1m25.478s
user    0m0.076s
sys     0m0.064s

Running it again, with no change in code, the exception got thrown at item #19826 / 3872495

real    3m19.144s
user    0m0.120s
sys     0m0.072s

And again, #8181 / 387249

real    1m31.110s
user    0m0.036s
sys     0m0.048s

Yes, I realize that I can (and probably should) catch the exception... but why is it even being thrown? Especially at such different elapsed times and depths into the database.

If it helps, my setup is a 3-node replica set (2+arb). I took the secondary offline and tried with just the primary running. Same results (different number of results processed and times, but always throws the Cursor Not Found exception).

Celluloid answered 24/7, 2011 at 7:18 Comment(1)
Was this ultimately a cursor-timeout issue? Thanks - Beseem

Yes, I realize that I can (and probably should) catch the exception...

Yes, this is definitely the first thing to do. There are dozens of legitimate reasons for an exception to be thrown. For instance, what do you think happens when the primary goes offline and becomes unreachable?

... why is it even being thrown?

There are a couple of potential reasons, but let's cut straight to the error code you're seeing.

  • Official PHP docs are here.
  • Quote from that page: The driver was trying to fetch more results from the database, but the database did not have a record of the query. This usually means that the cursor timed out on the server side...

The MongoDB PHP driver has two different timeouts:

  • Connection timeout
  • Cursor timeout

You're hitting a cursor timeout. You can connect to the DB, but your query is "running out of time".

Possible fixes:

  1. Extend the cursor timeout. Or you can set it to zero and make it last forever.
  2. Do this job in batches. Get the first 1000 _ids from A, process them and then mark that you have done so. Then get the next 1000 _ids greater than your last run and so on.

I would suggest #2 along with handling the exception. Even if this doesn't completely solve the problem, it will help you isolate and mitigate it.

Jilolo answered 24/7, 2011 at 7:51 Comment(1)
cursor timeout solved my problem (timeout after 2k records fetched) - Lepidosiren

I know it's late and this may not be your solution but you can try using immortal(). As Gates VP noted, this page describes the exception.

The driver was trying to fetch more results from the database, but the database did not have a record of the query. This usually means that the cursor timed out on the server side: after a few minutes of inactivity, the database will kill a cursor (see MongoCursor::immortal() for information on preventing this).

I figured I'd post the entire description for others reaching this page, and because timeout() and immortal() are different: timeout() sets how long the driver waits for a response, while immortal() prevents the server from killing the cursor due to inactivity.
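The distinction in code, using the legacy `mongo` driver (database and collection names are assumptions):

```php
<?php
$mongo      = new MongoClient();
$collection = $mongo->selectDB('mydb')->selectCollection('A');

$cursor = $collection->find();

// Client-side: how long the driver waits for a reply before throwing
// a MongoCursorTimeoutException (milliseconds; -1 waits forever).
$cursor->timeout(30000);

// Server-side: stop MongoDB from reaping this cursor after its
// idle-timeout window, which is what causes "Cursor not found".
$cursor->immortal(true);
```

Note that an immortal cursor must still be exhausted or explicitly freed, or it will occupy server resources indefinitely.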

Sollars answered 18/10, 2011 at 6:40 Comment(0)

It could be a memory limit issue. Try experimenting with providing more memory and see if your results vary, which you can do with the -d option: php -d memory_limit=256M yourscript.php

That is a lot of documents, and it sounds like you're building a pretty large array of objects. PHP functions like memory_get_usage() let you profile your memory allocation at runtime, as do debugging extensions such as Xdebug or Zend's tooling.
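A quick way to see whether the growing PHP array is the culprit is to sample memory_get_usage() as items accumulate; this is pure PHP with no MongoDB needed (the array shape below is just a stand-in for the post's accumulated results):

```php
<?php
// Sample memory usage while filling a large array, to estimate how much
// a multi-million-entry result array would actually cost.
$results = array();
for ($i = 0; $i < 100000; $i++) {
    $results[$i] = array('value' => $i * 2);
    if ($i % 25000 === 0) {
        printf("at %6d items: %.1f MB\n", $i, memory_get_usage(true) / 1048576);
    }
}
printf("peak: %.1f MB\n", memory_get_peak_usage(true) / 1048576);
```

If the projected total exceeds memory_limit, raising the limit (or writing partial results to C as you go instead of buffering everything) is the fix.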

Bobbinet answered 24/7, 2011 at 7:49 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.