Why, after I deleted tens of millions of keys from Redis, does the memory usage stay the same?

I used to store 100 million keys in my Redis instance. There is a dump.rdb file on my machine that takes up 10 GB of space. Then I deleted 50 million keys, but this dump.rdb file still takes up 10 GB. I restarted my machine and checked Redis: the data has indeed been deleted, and only 50 million keys remain. How can I reduce the memory footprint?
There is only one RDB file under the redis directory.
Various searches on the Internet have not turned up the problem. None of these keys were set with an expiry, so they will not expire on their own, and I deleted the data manually.

Aug.10,2021

try this command: echo 3 > /proc/sys/vm/drop_caches


The fact that dump.rdb still occupies 10 GB shows that no snapshot was taken after your deletions. On restart, Redis restored the data from the old 10 GB snapshot, which is why everything is back to 10 GB. You can try running BGSAVE manually after deleting the keys.


It turned out to be my own blunder; I didn't expect the remaining data to still be so large.
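The snapshot behaviour described in the answer above can be illustrated with a small toy sketch. This is plain Python with a JSON file standing in for Redis and dump.rdb (the key names, value sizes, and file path are made up for illustration, not taken from the thread): deleting keys in memory does not touch the existing snapshot file on disk, and only writing a fresh snapshot, like the manual BGSAVE suggested above, makes the file shrink.

```python
import json
import os
import tempfile

def save_snapshot(db, path):
    """Write the whole in-memory dataset to disk, like Redis BGSAVE."""
    with open(path, "w") as f:
        json.dump(db, f)

path = os.path.join(tempfile.mkdtemp(), "dump.json")

# Populate a toy "database" and take an initial snapshot.
db = {f"key:{i}": "x" * 100 for i in range(10_000)}
save_snapshot(db, path)
size_before = os.path.getsize(path)

# Delete half the keys -- this only changes memory, not the old snapshot,
# so the file on disk keeps its original size.
for i in range(5_000):
    del db[f"key:{i}"]
assert os.path.getsize(path) == size_before

# Take a new snapshot (the manual BGSAVE step): now the file shrinks.
save_snapshot(db, path)
print(os.path.getsize(path) < size_before)  # True
```

The same logic explains the restart in the question: Redis loads whatever the last snapshot contains, so without a post-deletion BGSAVE (or AOF persistence) the old 10 GB dataset comes right back.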
