Technology selection and solution for efficient coupon issuing and redemption

Problem description

Recently, we need to build a high-speed coupon distribution system. The database is currently MySQL with sub-database/sub-table sharding, but it has hit a bottleneck on the issuing rate, and batch operations are complex (the sharding is keyed by userId modulo, which makes operating on batches of arbitrary, irregular userIds troublesome). We want to use Redis or MongoDB as the primary store, with a worker synchronizing to MySQL (MySQL would be kept only as a final safeguard and for data analysis, not used by the business path). I am not familiar with MongoDB; is there a good solution?

The environmental background of the problem and the methods you have tried

We considered using Redis directly, using a List structure to store each user's coupons, with a key like XXX_{userId} (a minimal sketch of this approach follows). There are some problems that are hard to solve, such as updating a coupon when it is used, and periodically deleting data that has expired, and so on.
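For reference, a minimal sketch of the List-based approach described above, using redis-py. The key prefix coupon:, the JSON fields (id, used), and the helper names are illustrative assumptions, not part of the original design; it mainly shows why updating a single coupon stored in a List is awkward.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def issue_coupon(user_id: int, coupon: dict) -> None:
    """Append a coupon (serialized as JSON) to the user's list."""
    r.rpush(f"coupon:{user_id}", json.dumps(coupon))

def list_coupons(user_id: int) -> list:
    """Read back all coupons for a user."""
    return [json.loads(raw) for raw in r.lrange(f"coupon:{user_id}", 0, -1)]

def use_coupon(user_id: int, coupon_id: str) -> bool:
    """Marking a coupon as used is awkward with a List: the element must be
    found by scanning, removed with LREM, and re-inserted in its new state.
    This is not atomic; a real implementation would need a Lua script or
    WATCH/MULTI to avoid races."""
    key = f"coupon:{user_id}"
    for raw in r.lrange(key, 0, -1):
        coupon = json.loads(raw)
        if coupon["id"] == coupon_id and not coupon.get("used"):
            r.lrem(key, 1, raw)               # remove the old element
            coupon["used"] = True
            r.rpush(key, json.dumps(coupon))  # write back the updated element
            return True
    return False
```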

Related code

// Please paste the code text below (do not replace the code with pictures)

What result do you expect? What error message do you actually see?

Is there a mature design scheme that can handle a large volume of high-speed coupon issuing (insertion) operations, while also providing high-performance concurrency and fast responses in scenarios such as viewing and using coupons?

May 10, 2021

After looking it over, it seems to me that there are only two problems you want to solve:

  1. High insertion performance
  2. Easy updates and deletions

From a MongoDB point of view, there is no problem meeting these two points.

  1. MongoDB's sharding mechanism lets data stay logically unified while being physically distributed across different servers. It is a simpler, easier-to-use solution than manual sub-database/sub-table sharding (a sketch of the setup follows this list).
  2. For performance, a simple reference: on my MacBook Pro (SSD), a single instance with no optimization, inserting small documents (about 100 bytes each) with bulk operations, reaches an insertion rate of roughly 100,000+ per second (see the bulk-insert sketch after this list). It can be scaled further by sharding across additional servers if necessary.
  3. It is not clear what kind of update you want to make, but MongoDB can update documents matching specified conditions while using a secondary index, which covers most scenarios. As for deleting data, MongoDB has a TTL index that automatically removes expired documents based on a time field (also shown in the sketch after this list).
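To make point 1 concrete, here is a minimal sketch of enabling sharding for a coupon collection with PyMongo, run against a mongos router. The database name coupondb, the collection name coupons, and the hashed shard key on userId are assumptions for illustration, not a prescription.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")  # must point at a mongos

# Enable sharding for the database, then shard the collection.
client.admin.command("enableSharding", "coupondb")
client.admin.command(
    "shardCollection",
    "coupondb.coupons",
    key={"userId": "hashed"},  # hashed key spreads writes evenly across shards
)
```

A hashed shard key on userId distributes writes evenly, which suits a write-heavy issuing workload; a ranged key would instead keep a given user's coupons physically close together.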
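And a minimal sketch of points 2 and 3 with PyMongo: bulk insertion, a conditional update that can be served by a secondary index, and a TTL index for automatic expiry. The collection, the field names (couponId, status, expireAt), and the index choices are assumptions made for this example.

```python
from datetime import datetime, timedelta, timezone
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
coupons = client["coupondb"]["coupons"]

# Secondary index so lookups/updates by (userId, couponId) avoid full scans.
coupons.create_index([("userId", ASCENDING), ("couponId", ASCENDING)])

# TTL index: documents are removed automatically once expireAt has passed.
coupons.create_index("expireAt", expireAfterSeconds=0)

# Bulk insertion: insert_many with ordered=False lets the server batch freely.
now = datetime.now(timezone.utc)
docs = [
    {
        "userId": 42,
        "couponId": f"C{i}",
        "status": "unused",
        "expireAt": now + timedelta(days=30),
    }
    for i in range(1000)
]
coupons.insert_many(docs, ordered=False)

# Conditional update when a coupon is redeemed; the filter hits the
# (userId, couponId) index, and the status check prevents double redemption.
result = coupons.update_one(
    {"userId": 42, "couponId": "C7", "status": "unused"},
    {"$set": {"status": "used", "usedAt": now}},
)
print("redeemed:", result.modified_count == 1)
```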

I don't know if your question has been answered. If there are more detailed requirements, you can describe them again.
