Will count be faster if all the data is loaded into memory?

question

Looking at MongoDB's slow query log, I found many slow count queries, the longest taking half a minute.

solution

I don't want to add a lot of indexes (such as article title, article platform, account emotion, etc.). If I simply add memory (say 32GB ==> 128GB), will that speed up count query performance? That is, will count be faster if all the data is loaded into memory (even without an index)?

some current metrics:

  • data size: 20GB
  • physical memory: 32GB
db.serverStatus().wiredTiger.cache
"bytes currently in the cache" : 8458332579
"maximum bytes configured" : 11811160064

db.serverStatus().mem
{
    "bits" : 64,
    "resident" : 11327,
    "virtual" : 13123
}



May 27, 2021

Simple answer: no.
Memory's effect on a database is not negligible, but you seem to overestimate it. Consider this:
Suppose you have 1,000,000 documents and need to count those matching certain conditions. Without an index, you must compare all 1,000,000 documents against the criteria and then count the matches. To make those comparisons you first have to pull the data off disk, so expanding memory does help there. But then what? Comparing 1,000,000 documents still takes a lot of time, and CPU usage will climb accordingly. This is O(n) time complexity.
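To illustrate, here is a minimal sketch (with hypothetical documents and a made-up "platform" field, not the actual collection) of what an unindexed count amounts to: one comparison per document, for every document.

```javascript
// Hypothetical data: 1,000,000 documents with a "platform" field.
const docs = Array.from({ length: 1_000_000 }, (_, i) => ({
  id: i,
  platform: ["weibo", "wechat", "douyin"][i % 3],
}));

// An unindexed count is a full scan: one comparison per document, O(n).
function countByScan(docs, platform) {
  let n = 0;
  for (const d of docs) {
    if (d.platform === platform) n += 1; // runs 1,000,000 times
  }
  return n;
}

console.log(countByScan(docs, "weibo")); // → 333334
```

Loading everything into memory removes the disk reads, but the 1,000,000 comparisons in the loop remain.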
If the query fully hits an index, the process is greatly simplified, because locating the matching data is essentially a binary search, with O(log n) complexity (base 2). The time consumption of the two looks roughly like this (x-axis: data volume, y-axis: time):

[figure: a chart comparing linear O(n) growth with logarithmic O(log n) growth]
In a nutshell, under the same hardware conditions:

  • without index support, the more data there is, the longer a count takes
  • when the index is fully hit, a growing data volume should in theory cause only a negligible increase in query time
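The two bullets above can be sketched with a sorted array standing in for a B-tree index (hypothetical keys, a deliberate simplification of what WiredTiger actually does): counting a range of keys takes only two binary searches, about 2·log2(n) steps, no matter how large n grows.

```javascript
// A sorted array of keys stands in for a B-tree index on one field.
const keys = Array.from({ length: 1_000_000 }, (_, i) => i * 2); // 0, 2, 4, ...

// Lowest index whose key is >= target: classic binary search, O(log n).
function lowerBound(arr, target) {
  let lo = 0, hi = arr.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

// Counting keys in [from, to) takes two binary searches,
// not one comparison per document.
function countRange(arr, from, to) {
  return lowerBound(arr, to) - lowerBound(arr, from);
}

console.log(countRange(keys, 100, 200)); // → 50 (keys 100, 102, ..., 198)
```

At n = 1,000,000 that is roughly 40 key comparisons instead of 1,000,000, which is why an index, not more RAM, is what makes count fast.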