How to prevent a large number of queries from penetrating the cache when the Redis cache expires

Suppose we have a SQL query that takes 2 seconds to run.
We cache the result of this query in Redis with a 60-second expiration.
So when the 60 seconds are up, during the roughly 2 seconds it takes to rebuild the result (from second 60 to second 62), hundreds of requests may miss the cache and query the database directly.
How can I prevent this from happening?
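
For reference, here is a minimal cache-aside sketch of the situation described above. It assumes redis-py, a placeholder run_slow_query() standing in for the 2-second SQL query, and the key name report:result; these names are illustrative assumptions, not part of the question.

```python
import time
import redis

r = redis.Redis()

def run_slow_query():
    time.sleep(2)                 # stands in for the 2-second SQL query
    return "query result"

def get_report():
    cached = r.get("report:result")
    if cached is not None:
        return cached.decode()
    # Cache miss: every request arriving during the ~2 seconds the query
    # takes reaches this point and hits the database in parallel.
    result = run_slow_query()
    r.set("report:result", result, ex=60)   # 60-second expiration
    return result
```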

Mar.02,2021

This is a cache avalanche: when many cached entries expire within a short window, a large number of cache misses occur and all of the queries fall on the database.

Cache penetration, by contrast, refers to querying data that does not exist at all. When the cache misses, the database is queried; since the data cannot be found, nothing is written to the cache, so every request for that non-existent data goes to the database again, resulting in cache penetration.
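
To illustrate, a small sketch of the penetration path, again assuming redis-py and a hypothetical find_user() that returns None for ids missing from the database: because nothing is ever written to the cache for a missing row, every call falls through to the database.

```python
import redis

r = redis.Redis()
db_hits = 0

def find_user(user_id):
    """Stand-in for a database lookup; returns None for unknown ids."""
    global db_hits
    db_hits += 1
    return None                              # the row does not exist

def get_user(user_id):
    cached = r.get(f"user:{user_id}")
    if cached is not None:
        return cached.decode()
    row = find_user(user_id)
    if row is not None:                      # nothing is cached for a missing row...
        r.set(f"user:{user_id}", row, ex=60)
    return row

for _ in range(100):
    get_user(999)                            # ...so all 100 calls hit the database
print(db_hits)                               # -> 100
```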

Solutions to a cache avalanche:

  1. After the cache expires, control the number of threads that read the database and rebuild the cache by locking or queuing. For example, allow only one thread to query the data and write the cache for a given key while the other threads wait (see the sketch after this list).
  2. Use a cache reload mechanism to refresh the cache in advance, i.e. manually trigger cache loading before a burst of concurrent access is expected.
  3. Give different keys different expiration times so that cache expirations are spread out as evenly as possible.
  4. Use a secondary cache, or dual-cache strategy: A1 is the original cache and A2 is a copy. Set A1's expiration time to short-term and A2's to long-term; when A1 expires, serve the request from A2.
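
Below is a minimal sketch of solution 1 combined with the jittered expiration from solution 3, assuming redis-py. The key names report:result and lock:report:result, the lock and retry timings, and the run_slow_query() placeholder are assumptions for illustration, not part of the original answer.

```python
import random
import time
import redis

r = redis.Redis()

def run_slow_query():
    time.sleep(2)                 # stands in for the 2-second SQL query
    return "query result"

def get_report():
    cached = r.get("report:result")
    if cached is not None:
        return cached.decode()

    # Only the request that wins the lock rebuilds the cache (solution 1).
    # SET with nx=True and ex=10 atomically takes the lock if it is free
    # and gives it a safety timeout.
    if r.set("lock:report:result", "1", nx=True, ex=10):
        try:
            result = run_slow_query()
            # Jitter the expiration so keys do not all expire together (solution 3).
            r.set("report:result", result, ex=60 + random.randint(0, 30))
            return result
        finally:
            r.delete("lock:report:result")

    # The other requests wait briefly and retry the cache instead of
    # hitting the database.
    for _ in range(50):
        time.sleep(0.1)
        cached = r.get("report:result")
        if cached is not None:
            return cached.decode()
    # Fall back to the database if the cache is still empty after waiting.
    return run_slow_query()
```

The expiration on the lock key is there so that a crashed lock holder cannot block rebuilding forever; tune the lock timeout and the waiters' retry window to comfortably cover the query's duration.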

Source: https://blog.csdn.net/m0_3811.


Estimate the maximum acceptable lifetime of the cached data; the 60 seconds should not be chosen arbitrarily but according to the needs of the scenario, so set a longer expiration if the scenario allows it. I don't know of any other approach; I'm following this question and look forward to another answer.
