MongoDB does have a query optimizer, and in most cases it's effective at picking the best of several possible plans. However, it's worth remembering that with the aggregate function, the sequence in which the various steps are executed is entirely under your control. The optimizer won't reorder steps into the optimal sequence to get you out of trouble.
Optimizing the order of steps comes down mainly to reducing the amount of data in the pipeline as early as possible – this reduces the amount of work that has to be done by each successive step. The corollary is that steps that perform a lot of work on data should be placed after any filtering steps.
Nowhere is this more important than in $lookup steps. Since each $lookup performs a query against a separate collection – hopefully using an index – we should delay it until the data has been filtered as far as possible. Consider an aggregation pipeline which generates a “top 10” list of product purchases by customer:
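The original pipeline isn't reproduced here, so the sketch below is a hypothetical reconstruction; the collection names (orders, customers, products) and field names are assumptions for illustration. The key point is that the $lookup stages come after the $limit:

```javascript
// Hypothetical "top 10 product purchases by customer" pipeline.
// Collection and field names (orders, customers, products,
// customerId, productId, quantity) are assumptions for illustration.
const top10Pipeline = [
  // Reduce the data as early as possible
  { $group: {
      _id: { customerId: "$customerId", productId: "$productId" },
      purchases: { $sum: "$quantity" }
  } },
  { $sort: { purchases: -1 } },
  { $limit: 10 },  // only 10 documents remain past this point
  // The lookups run last, so each executes at most 10 times
  { $lookup: {
      from: "customers", localField: "_id.customerId",
      foreignField: "_id", as: "customer"
  } },
  { $lookup: {
      from: "products", localField: "_id.productId",
      foreignField: "_id", as: "product"
  } },
  { $project: {
      purchases: 1,
      customerName: { $arrayElemAt: ["$customer.name", 0] },
      productName: { $arrayElemAt: ["$product.name", 0] }
  } }
];

// With a live connection: db.orders.aggregate(top10Pipeline)
```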
The final stages of the pipeline perform lookups on the customers and products collections to get customer and product names.
We could have run these lookups much earlier in the pipeline. For instance, this version returns exactly the same results, but performs the lookups earlier in the sequence:
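Again, the original code isn't shown here; a hypothetical sketch of the less efficient ordering (collection and field names are assumptions for illustration) simply moves the $lookup stages ahead of the $limit:

```javascript
// Hypothetical early-lookup version of the "top 10" pipeline.
// Collection and field names are assumptions for illustration.
const earlyLookupPipeline = [
  { $group: {
      _id: { customerId: "$customerId", productId: "$productId" },
      purchases: { $sum: "$quantity" }
  } },
  // The lookups now run once per grouped document,
  // not just for the final 10
  { $lookup: {
      from: "customers", localField: "_id.customerId",
      foreignField: "_id", as: "customer"
  } },
  { $lookup: {
      from: "products", localField: "_id.productId",
      foreignField: "_id", as: "product"
  } },
  { $sort: { purchases: -1 } },
  { $limit: 10 },
  { $project: {
      purchases: 1,
      customerName: { $arrayElemAt: ["$customer.name", 0] },
      productName: { $arrayElemAt: ["$product.name", 0] }
  } }
];
```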
The difference in performance is striking. By moving the $lookup stages a few steps earlier, we have created a far less scalable solution.
When the $lookup stages come before the $limit step, we have to perform as many lookups as there are matching documents. When they come after the $limit, we only have to perform 10. It’s an obvious but important optimization.
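To put rough numbers on that, here is a back-of-the-envelope calculation; the one-million figure is made up for illustration, not a measurement of the pipeline above:

```javascript
// Illustrative arithmetic: number of $lookup executions under each ordering.
// matchingDocs is a made-up figure; each document triggers two lookups
// (one against customers, one against products).
const matchingDocs = 1000000;
const limit = 10;
const lookupsPerDoc = 2;

const lookupsBeforeLimit = matchingDocs * lookupsPerDoc; // 2,000,000 lookups
const lookupsAfterLimit = limit * lookupsPerDoc;         // 20 lookups

console.log(lookupsBeforeLimit / lookupsAfterLimit);     // 100000x more work
```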
The aggregation framework is similar in nature to Pig (see this post). Both provide a procedural way of processing data which is philosophically different from what we have become familiar with in the SQL world. The main thing to remember is that you are in control of the execution plan in an aggregation pipeline. As the Pig programmers like to say, it uses “the query optimizer between your ears”!