MongoDB Performance Guide

Kevin Grüneberg
13 min read · Oct 7, 2022

While improving the overall performance of Parqet, a fintech startup that visualizes your portfolio and wealth, I learned quite a bit. Today, I’d like to share those learnings and give some tips on improving MongoDB performance.

Some code samples will be in JavaScript/TypeScript, however, they are also applicable to other languages. Let’s dive right in.

Tooling

In order to optimize the performance of your MongoDB queries, you first need to understand your current performance. To do so, you should measure or visualize your query executions.

Similar to other databases, MongoDB has an Explain functionality. This allows you to understand the execution plan and performance of your query in depth. There are several tools to help you:

MongoDB Compass

MongoDB Compass is a multi-platform GUI, developed by MongoDB Inc. I personally always use it and can highly recommend it for analyzing query performance or executing other common database tasks.

When connecting to your database, select “Explain Plan” to check your query performance and get some insights into what the query did exactly.

MongoDB Compass: Explain Plan

The visualization helps you understand the stages your query goes through, which stage took the longest, how many documents were inspected, if an index was used and so on. It’s great for debugging queries.

You may do the same in the aggregations view by hitting the “Explain” button.

MongoDB Compass: Explain Aggregation

MongoDB Atlas

MongoDB Atlas, the official cloud service for MongoDB, offers good insights into performance and even tries to give you advice on how to improve query performance. The Atlas Profiler provides insights into examined keys, execution time, …

MongoDB Atlas: Profiler

Additionally, Atlas also tries to advise you on how to gain some performance (fewer lookups, add an index, …).

MongoDB Atlas: Performance Advisor

While this won’t fix our queries, it will show us bottlenecks and give us a good starting point.

MongoDB Shell

In case you’re comfortable with CLIs, you may like the MongoDB Shell. By appending explain() to your query, you can get insights into your query.
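
For example, in mongosh, the executionStats verbosity mode includes timings and the number of examined documents:

db.assets.find({ isin: "US123456890" }).explain("executionStats")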

Monitoring

After you deploy changes, you should continue to monitor CPU load, storage, query times, … in order to be sure that your changes are actually beneficial.

I’m sure there are other third-party tools (not developed by MongoDB); I just never had to use one. Do you have any positive experience with other tools for optimizing your query performance or gaining insights?

I’d love to hear about it.

Indices

The fields you query should be covered by an index.

Rather than going through every single document (a collection scan) and testing whether it matches your query, an index speeds up the search. See the official docs for a deeper understanding of how an index works under the hood.

MongoDB adds an index on the _id field by default. Besides that, you need to add indices to your collections. Let’s take a sample collection with one million entries. Our data structure looks like this:

Sample document
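
The screenshot isn’t reproduced here, but based on the fields used throughout this guide, a document looks roughly like this (a hedged reconstruction; name and amount are assumed fields):

{
  _id: ObjectId("..."),
  isin: "US123456890",
  name: "Sample Asset",
  ipoDate: ISODate("2020-01-01"),
  lastPrice: 123.45,
  marketCap: 1000000000,
  dividends: [{ date: ISODate("2022-06-01"), amount: 0.5 }]
}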

Let’s first query the data without any index. Find assets matching ISIN:

db.assets.find({ isin: "US123456890" })

Query without index

The query took 398ms in total (see “Actual Query Execution Time (ms)”). You can see that MongoDB had to go through one million entries to look for the asset. Let’s add an appropriate index and try again.

db.assets.createIndex({ isin: 1 }) // ascending

Query with index

As you can see, the query performance increased immensely (sub-millisecond — it only shows 0ms), as our queries are covered by an index.

Drawbacks when adding an index include decreased write performance and increased storage space. You should keep that in mind and not index every single field known to mankind. However, commonly queried fields should definitely be covered by an index.

Compound Indexes

When querying multiple fields, MongoDB tries its best to work with the existing indices and optimize your query. When you have multiple fields that are commonly queried together, it might be a good idea to use a Compound Index. A Compound Index is an index that is based on multiple fields. Let’s say you commonly query IPO date and dividend dates.

Without the compound index (one index on IPO date):

db.assets.createIndex({ ipoDate: -1 })

Query without compound index

Even though we hit an index, MongoDB still has to go through ~568,000 entries, as those entries match the IPO date criterion. Let’s add a compound index covering both fields:

db.assets.createIndex({ ipoDate: -1, "dividends.date": -1 })

Query with compound index

With the compound index, MongoDB only goes through ~9,150 documents (all matching documents). The total query execution time also went from 1800ms to 755ms. Not optimal yet, but a great improvement already.

Indexing Arrays

It is also possible to index fields within arrays. Note that you cannot add a compound index on two array fields. When using an $exists query on the first array element, you can add an index on myArray.0 to cover the query. When you commonly query a property of the array’s elements, you can add an index on myArray.myProperty, and MongoDB will use it when matching that property inside the array.
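
A minimal mongosh sketch of both cases, using the generic myArray names from above (the items collection is a placeholder):

// Index the first array element to support $exists queries
db.items.createIndex({ "myArray.0": 1 })
db.items.find({ "myArray.0": { $exists: true } })

// Multikey index on a property of the array’s elements
db.items.createIndex({ "myArray.myProperty": 1 })
db.items.find({ "myArray.myProperty": "some value" })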

Projections

By default, MongoDB will return the full document. Projections allow you to reduce, add or modify the documents before fetching them. This enables you to optimize the document payload and ultimately boost query performance.

Removing fields from the document

Removing multiple fields from a document

When removing fields, all other fields will still be present.
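
In mongosh, exclusion projections against the assets collection from above look like this:

db.assets.find({}, { dividends: 0 })              // remove one field
db.assets.find({}, { dividends: 0, ipoDate: 0 })  // remove multiple fields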

Explicitly include a field

When including a field by using <field>: 1, other fields are omitted. MongoDB will always include the _id field unless you explicitly use _id: 0 in your projection.
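
A minimal sketch (name is an assumed field):

db.assets.find({}, { isin: 1, name: 1 })         // returns _id, isin and name
db.assets.find({}, { _id: 0, isin: 1, name: 1 }) // returns only isin and name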

Adding a field

Projections also allow you to add fields to documents, i.e. computing a value based on other fields.

This projection adds a marketShares field, computed from the last price and market cap fields.
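
Since MongoDB 4.4, find projections may contain aggregation expressions. A hedged sketch (lastPrice and marketCap are assumed field names):

db.assets.find({}, {
  isin: 1,
  // market shares outstanding = market cap / last price
  marketShares: { $divide: ["$marketCap", "$lastPrice"] }
})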

Reducing arrays

Projections are pretty flexible, so you can also return just a subset of an array if you’re only interested in the first few items.
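
For example, to return only the first three dividend entries per asset (using $slice):

db.assets.find({}, { dividends: { $slice: 3 } })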

Those were just a few examples of how to use projections to reduce or enhance the document payload before fetching.

We noticed that this had a great impact on the stability of our queries. The performance became more predictable, especially with documents that can hold a lot of data.

Covered Queries

Your queries can be super fast, however, MongoDB still has to read the actual data from disk in order to retrieve it. While this is fast, there is another way to make it even faster.

When you’re only interested in a subset of properties from your document and all fields are covered by an index or compound index, MongoDB can actually read the data off the index, further improving query performance. This is called a Covered Query.

This likely only saves you a few milliseconds, but it is still worth a look if you really want to squeeze out every little bit of performance.
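
A minimal sketch: with a compound index on isin and name (name being an assumed field), the following query can be answered entirely from the index:

db.assets.createIndex({ isin: 1, name: 1 })

// _id must be excluded, otherwise MongoDB has to fetch the document anyway
db.assets.find({ isin: "US123456890" }, { _id: 0, isin: 1, name: 1 })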

Data Structure

We’re dealing with a non-relational NoSQL database. Even though joining data is possible (using $lookup), you should avoid it where you can, as it’s not that performant. Thus, you need to think about your data structure and ideally group related data into a single document.

You may run into situations where the only proper way to optimize query performance is changing your data structure. Data structure changes can be a lot of effort, so it is worth taking the time to think about the data structure more thoroughly.

Create data “views”

When you do not need the data in real-time, you can also use aggregations to combine data from multiple collections into a single collection, greatly improving query performance. Imagine having the following collections:

A portfolio has n holdings and a holding has an asset. We’d like to query a portfolio for the assets to check if a portfolio contains specific assets.

You could do a lookup across the two collections, but that is not going to be fast and it is CPU-intensive. While you could argue that the data structure should have been different in the first place, sometimes we cannot simply change the data structure of a running system, at least not easily.

Let’s aggregate the data (i.e. hourly) and write it into a separate collection that is queryable and can also benefit from indices, without doing any lookups on the fly.

The following query groups all asset identifiers per portfolio and writes them to a separate portfolio_assets collection:
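
The original aggregation isn’t shown here; this is a hedged reconstruction, assuming holdings carry portfolioId and assetId references:

db.holdings.aggregate([
  // collect the distinct asset ids per portfolio
  { $group: { _id: "$portfolioId", assetIds: { $addToSet: "$assetId" } } },
  // replace the portfolio_assets collection with the result
  { $out: "portfolio_assets" }
])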

We may now query the new portfolio_assets collection. While the data is not available in real-time, the query performance likely went from 1–2 seconds to a few milliseconds. With MongoDB Atlas, you could run that as a CRON job, too.

Read Preference

If not configured otherwise, read operations are usually done on the primary node of the cluster.

When data recency is not crucial (a few milliseconds or seconds of delay are fine), you can explicitly tell MongoDB to read the data from a secondary node.
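
A sketch of two common ways to do this, via the connection string or per query in mongosh:

mongodb://localhost:27017/?readPreference=secondaryPreferred

db.assets.find({ isin: "US123456890" }).readPref("secondaryPreferred")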

While this won’t boost your query performance 10x, this will lead to your load being more evenly balanced across your cluster, making queries more stable and predictable.

Here’s a screenshot from before and after consciously using read preferences and preferring reads from secondary (replicated) instances for a bunch of queries. The primary is on the far right. After changing a bunch of frequently used queries to a different read preference, the load is more balanced across the cluster.

MongoDB Atlas: Impact of read preferences (more even load)

For additional options for read preferences, check out the official docs.

Write Concern

In a clustered setup, by default, all write operations are written to the primary node and to the oplog — from there, the operations will be applied to the replica nodes.

Write operations require an acknowledgement from the primary node. For non-crucial writes, you can also skip the acknowledgement (note that you will not be notified in case of a failure):

db.assets.insertOne({ ... }, { writeConcern: { w: 0 } })

Depending on your needs, you can significantly increase data throughput by changing the write concern. Use with caution, though. For additional options for the write concern, check out the official docs.

Optimize your aggregations

While Aggregations are a very powerful feature, you can easily mess up the performance.

The best way to improve your aggregations is by benchmarking them and checking the execution plans for possible optimizations. Try to reduce the data first (i.e. by grouping or using a $match filter) before doing lookups or applying further mutations. I cannot give you a general performance tip, as aggregations are highly individual.
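
As a small sketch of the reduce-first principle (sector is an assumed field): filtering first lets $match use the ipoDate index and shrinks the data every later stage has to touch.

db.assets.aggregate([
  { $match: { ipoDate: { $gte: ISODate("2020-01-01") } } }, // uses the ipoDate index
  { $group: { _id: "$sector", count: { $sum: 1 } } }        // groups the reduced set
])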

Through the MongoDB Atlas Profiler, we looked at the aggregations that took the most time to execute and optimized them (adding indices, reducing payloads, filtering, grouping, restructuring data, …).

On some occasions, it can even boost performance to not do everything in one huge aggregation, but to run multiple queries instead and do some of the processing in your application code.

Compression

If not configured otherwise, MongoDB will send/receive the data uncompressed. Uncompressed JSON is likely ~80–90% bigger than its compressed equivalent, meaning you’ll be fetching a lot more data and your network load increases.

Compression is always a trade-off between speed, processing resources (CPU) and size/network load. MongoDB supports the following compression algorithms:

  • zlib
  • zstd
  • snappy

You can tell MongoDB which compression to use when connecting to the server by appending the compressors query parameter.

mongodb://localhost:27017/?compressors=snappy

The MongoDB drivers should already handle this for you; here’s an example of how to use Google’s Snappy with the Node driver.
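
A minimal Node driver sketch (the optional snappy npm package must be installed alongside the driver):

import { MongoClient } from "mongodb";

// requires: npm install mongodb snappy
const client = new MongoClient("mongodb://localhost:27017", {
  compressors: ["snappy"],
});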

There is no general answer as to which compression algorithm is best. Some algorithms are faster, some are less CPU-intensive, some produce smaller data. It depends on your data and access patterns, so you should run benchmarks and find out which algorithm fits best.

We settled on Snappy (as we also use Snappy when writing to Redis). Network load decreased significantly and performance increased.

Parallelize fetching

Your query executes in 3ms, but it still takes 500ms for your application to get the data? Transferring data takes time, so the more data you query, the longer it’ll take your application to receive it from the MongoDB server (after querying).

You’re already using compression and projections to reduce the amount of data sent to your application. One more way to improve performance is by parallelizing your requests.

If you know you’re going to query 25 years of data, you can split it into 5-year chunks and send all five requests in parallel. MongoDB can easily handle those 5 queries and so can your application. Use with caution, though, as you can easily overload your application or MongoDB when your chunks are too large or too numerous.
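
A hedged Node driver sketch (the database name, the prices collection and its date field are assumptions):

import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
const db = client.db("parqet"); // hypothetical database name

// fetch 25 years of data as five parallel 5-year chunks
const startYears = [1998, 2003, 2008, 2013, 2018];
const chunks = await Promise.all(
  startYears.map((year) =>
    db
      .collection("prices")
      .find({
        date: {
          $gte: new Date(`${year}-01-01`),
          $lt: new Date(`${year + 5}-01-01`),
        },
      })
      .toArray()
  )
);
const prices = chunks.flat();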

We were able to reduce the total query time of some queries that retrieve > 6,000 documents by 3–4x by chunking the request and issuing the requests in parallel.

Connection Pooling

Most, if not all, MongoDB drivers support connection pooling.

With connection pooling, the application keeps connections to MongoDB open, waiting for queries. This speeds up queries, as the initial connection phase can be skipped. Depending on your setup and needs, you can modify the pool sizes in your application and possibly improve throughput.
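
The original configuration isn’t shown here; a hedged Node driver sketch matching the pool sizes described in the next paragraph:

import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017", {
  minPoolSize: 50,  // keep at least 50 connections open
  maxPoolSize: 250, // never open more than 250 connections
});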

With this configuration, our application will keep at least 50 open connections and can scale up to 250. If all 250 connections are in use, your application will wait for a connection to become idle before performing the query.

Also keep in mind that if your application is clustered, every single instance will use these settings. If you set a minimum pool size of 250, have 4 instances running and do a blue-green deployment, you will likely have 6–8 running instances for a while, already using a minimum of 1500–2000 connections. When using MongoDB Atlas, check the connection limit of your plan.

Advanced Client Configuration

Depending on your platform, it might be worth taking a look at the client configuration options. For example, in the case of the Node MongoDB driver, you can disable UTF-8 validation to improve performance.
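
A minimal sketch with the Node driver:

import { MongoClient } from "mongodb";

// skips UTF-8 validation when deserializing BSON; only do this if you trust your data
const client = new MongoClient("mongodb://localhost:27017", {
  enableUtf8Validation: false,
});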

Check the docs of your specific MongoDB driver for these kinds of options. Make sure to read up on the impact of each option, as the performance gain can come with downsides (check whether they affect you).

Sharding & Replication

When scaling server resources and optimizing queries no longer have a big impact and you have billions of documents, you may want to look into Sharding.

Sharding distributes your data across multiple machines, supporting very large datasets and high throughput, and helps you scale as your data grows. However, sharding requires additional computing resources and storage, which costs money.

As the costs for this were too high for us, we skipped it, so I cannot give you any personal insights.

TimeSeries

MongoDB introduced TimeSeries collections with version 5.0, and they received a lot of love in the 6.x release line. When dealing with time-series data, you should likely go for the TimeSeries collection type instead of a regular collection.

Note that you have to choose the collection type in the beginning; you cannot simply switch a regular collection to a TimeSeries collection afterwards. If you want to move your collection to TimeSeries, you need to migrate the data; there is an example in the official docs.
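
Creating a TimeSeries collection looks like this (a sketch; the field names are assumptions):

db.createCollection("prices", {
  timeseries: {
    timeField: "date",  // required: the field holding the timestamp
    metaField: "isin",  // optional: identifies the individual series
    granularity: "hours",
  },
})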

While MongoDB TimeSeries collections are definitely a better fit for time-series data than regular collections, MongoDB is not the fastest time-series database out there.

We evaluated MongoDB TimeSeries and compared it to Redis TimeSeries. The average time to fetch data with Redis TimeSeries was ~1–2ms; with MongoDB it was 20–30ms. While 20–30ms is not slow, it was still 10–20x slower for our use-case, and as the evaluation affected one of our most queried datasets, we decided not to use MongoDB for that in the future.

Atlas Search

In case you need a powerful and flexible full-text search, MongoDB offers Atlas Search.

However, this is only available on Atlas and not when you self-host the community edition of MongoDB. Given this vendor lock-in, I can only recommend using this feature if you’re sure that you will not move your MongoDB setup away from Atlas anytime soon or ever.

We did integrate Atlas Search and not only improved search performance, but also search results. Before, we had a simple text index, which also offers some sort of full-text search, but is not nearly as powerful as Atlas Search.

What’s pretty dope about Atlas Search is that you can define your search indices based on your current collections, which massively cuts down the development effort for the integration. The only thing you have to do is write an aggregation query against the search index.
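
A hedged sketch of such a query, assuming a search index named "default" on the assets collection:

db.assets.aggregate([
  {
    $search: {
      index: "default",
      text: { query: "apple", path: ["name", "isin"] },
    },
  },
  { $limit: 10 },
])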

Caching your queries

While this is not a MongoDB performance tip, it is an honorable mention. As this guide is mainly about performance, we cannot ignore the fact that caching is likely one of the best ways to improve your application’s performance. Caching your data in-memory or in a distributed cache like Redis can get your data access to sub-millisecond at all times.

Caching does not come for free: added complexity in terms of infrastructure and application code, caching and cache-invalidation strategies, and data recency all need to be considered.

We checked the most frequently queried (hottest) collections and tried to introduce or improve our caching strategies in order to query MongoDB less. Atlas gives you some nice real-time insights into your hottest collections, too.

MongoDB Atlas: Real-Time Metrics

Wrap-Up

It’s a wrap. I tried to summarize my learnings and the things I found most impactful when optimizing MongoDB query performance. I hope you got something out of it, even if it’s just a pointer on where to look next.

What were the most impactful things you’ve done to improve MongoDB performance? Have you found any other tricks?

Looking forward to hearing your stories and thanks for reading!
