The amount of processing that can occur with the aggregation framework depends on your schema.
The aggregation framework can currently only output the equivalent of one document (for larger output you will want to watch: https://jira.mongodb.org/browse/SERVER-3253 ) and it will output in the form of:
{
    result: [ /* the result documents */ ],
    ok: 1 // or 0 on failure
}
So you have to make sure that what you get back out of your $group/$project is not so big that it exceeds that limit and you miss the results you need. Most of the time this will not be the case, and even a simple $group over millions of documents can produce a response smaller than 16MB.
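As a rough sketch of what that looks like (the collection and field names here, orders and status, are invented for the example), a simple $group run from the mongo shell comes back as a single reply document, so the whole result array plus the wrapper has to fit under the 16MB document limit:

// Hypothetical collection: count orders per status.
// In older (2.x) versions the shell helper returns the raw command reply,
// which is one BSON document containing the entire result array.
db.orders.aggregate([
    { $group: { _id: "$status", count: { $sum: 1 } } }
])
// Reply shape:
// { "result" : [ { "_id" : "shipped", "count" : 12345 }, ... ], "ok" : 1 }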
We have no idea of the size of your documents or the aggregation queries you wish to run, so we cannot advise further.
If any single aggregation operation consumes more than 10 percent of system RAM the operation will produce an error.
That is pretty self-explanatory really. If the working set for an operation is so large that it uses more than 10 percent of RAM ($group, computed fields, or $sort on computed or grouped fields), then it will not work.
Unless you misuse the aggregation framework to do your application logic for you, you should never really run into this problem.
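For example (again with made-up field names), sorting on a field that only exists inside the pipeline is the kind of operation that has to happen entirely in RAM, and on a big collection it is pipelines like this that can creep towards that 10 percent limit:

// Hypothetical: compute a total per document, then sort on it.
// "total" does not exist in the collection, so no index can help
// and the whole $sort is held in memory.
db.orders.aggregate([
    { $project: { total: { $multiply: ["$price", "$quantity"] } } },
    { $sort: { total: -1 } }
])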
The aggregation system currently stores $group operations in memory, which may cause problems when processing a larger number of groups.
Since $group is really hard not to do in memory (it groups on the field), operations on that group, such as $sort, are also in memory. This is where you can start to use up that 10% if you're not careful.
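A sketch of that pattern (field names invented for the example): every distinct group built by $group is held in memory, and a $sort over the grouped output then works over that same in-memory set:

// Hypothetical: one in-memory group per distinct customerId,
// then $sort runs over all of those groups in memory as well.
db.orders.aggregate([
    { $group: { _id: "$customerId", spent: { $sum: "$price" } } },
    { $sort: { spent: -1 } }
])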