MongoDB - too much data for sort() with no index error
I am using MongoDB 1.6.3 to store a large collection (300k+ records), and I added a compound index:

db['collection_name'].getIndexes()
[
    {
        "name" : "_id_",
        "ns" : "db_name.event_logs",
        "key" : {
            "_id" : 1
        }
    },
    {
        "key" : {
            "updated_at.t" : -1,
            "community_id" : 1
        },
        "ns" : "db_name.event_logs",
        "background" : true,
        "name" : "updated_at.t_-1_community_id_1"
    }
]

However, when I try to run this code:

db['collection_name']
  .find({:community_id => 1})
  .sort(['updated_at.t', -1])
  .skip(@skip)
  .limit(@limit)

I am getting:

Mongo::OperationFailure (too much data for sort() with no index. add an index or specify a smaller limit)

What am I doing wrong?

Benelux answered 9/12/2010 at 14:07
Try adding a {community_id: 1, 'updated_at.t': -1} index. The query needs to match on community_id first, and then sort on updated_at.t.
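In the mongo shell, that index could be built like this (a sketch; ensureIndex is the MongoDB 1.x call, later renamed createIndex, and the collection name event_logs is taken from the ns field in the question):

```javascript
// Equality field first, sort field second; a background build
// avoids blocking other operations on a 300k-document collection.
db.event_logs.ensureIndex(
    { community_id: 1, "updated_at.t": -1 },
    { background: true }
);
```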

Stalactite answered 9/12/2010 at 16:18
Comment (Beaverbrook): The sort column must be the last column in the index. mongodb.org/display/DOCS/…
So it "feels" like you're using the index, but the index you have is a compound one, and I'm not sure the sort is smart enough to use only part of it.

So two problems:

  1. Based on your query, I would put community_id as the first part of the index, not the second. updated_at.t sounds like a field on which you'll do range queries. Indexes work better if the range query is the second bit.
  2. How many entries are going to come back from community_id => 1? If the number is not big, you may be able to get away with just sorting without an index.

So you may have to switch the index around and you may have to change the sort to use both community_id and updated_at.t. I know it seems redundant, but start there and check the Google Groups if it's still not working.
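A sketch of what the switched-around index looks like in the mongo shell (collection name assumed from the question); explain() lets you check whether the index is actually used for the sort:

```javascript
// Swapped index: equality on community_id first, sort field last.
db.event_logs.ensureIndex({ community_id: 1, "updated_at.t": -1 });

// In MongoDB 1.6-era explain() output, "cursor" should report a
// BtreeCursor over the new index, and "scanAndOrder" should be false,
// meaning the sort is served by the index rather than done in memory.
db.event_logs
  .find({ community_id: 1 })
  .sort({ "updated_at.t": -1 })
  .explain();
```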

Hemicellulose answered 9/12/2010 at 16:18
Even with an index, I think you can still get that error if your result set exceeds 4MB.

You can see the size by going into the mongodb console and doing this:

show dbs
// pick yours (e.g., production)
use db-production
db.articles.stats()

I ended up with results like this:

{
"ns" : "mdalert-production.encounters",
"count" : 89077,
"size" : 62974416,
"avgObjSize" : 706.9660630690302,
"storageSize" : 85170176,
"numExtents" : 8,
"nindexes" : 6,
"lastExtentSize" : 25819648,
"paddingFactor" : 1,
"flags" : 1,
"totalIndexSize" : 18808832,
"indexSizes" : {
    "_id_" : 3719168,
    "patient_num_1" : 3440640,
    "msg_timestamp_1" : 2981888,
    "practice_id_1" : 2342912,
    "patient_id_1" : 3342336,
    "msg_timestamp_-1" : 2981888
},
"ok" : 1
}
Apprehensive answered 16/6/2011 at 16:10
Comment (Pampuch): Isn't it db.articles.stats()?
Having a cursor batch size that is too large will cause this error. Setting the batch size does not limit the amount of data you can process; it only limits how much data is returned per round trip to the database. When you iterate past the end of a batch, the driver makes another trip to the database for the next one.
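For example, in the mongo shell (a sketch, reusing the collection name from the question), cursor.batchSize() controls how many documents come back per round trip:

```javascript
// Each round trip returns at most 100 documents; iterating past
// the end of a batch triggers the next fetch automatically.
var cursor = db.event_logs
  .find({ community_id: 1 })
  .sort({ "updated_at.t": -1 })
  .batchSize(100);

while (cursor.hasNext()) {
  printjson(cursor.next());
}
```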

Rolandorolandson answered 7/11/2013 at 18:38
