Storing flash-sale ("seckill") order data under high concurrency with Redis + PHP

In implementing a flash-sale (panic-buying) feature under high concurrency, I have a question about data storage: when should the data actually be written?
Ideas:
1. As soon as a user is judged to have grabbed an item successfully, immediately write the generated order data into the MySQL order table and decrement the stock field in the inventory table by 1;
2. As soon as a user is judged to have grabbed an item successfully, store the user's user_id in a Redis list (for example: a list named order whose values are user_ids). Then use crontab to insert the entries one by one into the MySQL order table, decrementing the stock field in the inventory table by 1 each time.

Expected behaviour:
The first approach is easy to understand. A simple implementation looks like this:

$num = 10; // total stock for the sale
 for ($i = 0; $i < $num; $i++)
    \Redis::lpush("goods_store", 1); // goods_store holds one entry per unit of stock
 // after the loop, goods_store contains ten 1s

Before the rush begins, the code above is run to seed the queue with the stock.

it"s time to rush to buy: (a large number of users request the following code to perform an action)

/* Before buying, pop one unit of stock from the Redis queue */
 $count = \Redis::lpop("goods_store"); // lpop is atomic, so each unit is handed out at most once
 if (!$count)
    return ""; // queue empty: sold out

 /* Write to MySQL */
  1. // generate the order record; order_num is the order number
    $data["order_num"] = *****************;
    $data["user_id"]   = ***;
    $data["goods_id"]  = **;
    .......
    $res = DB::table("order")->insert($data);

 2. // decrement the stock field num
  if ($res)
    DB::table("goods")->decrement("num", 1);

In the code above, when a user buys successfully, the order data is inserted into the MySQL order table immediately and the stock is decremented. Here is my question: with this approach, under heavy concurrency, if many users reach the insert-order / decrement-stock steps at the same time, will the concurrent writes put excessive pressure on the database server? And can they corrupt the data, for example the stock being decremented by the wrong amount? (Or are MySQL's individual INSERT/UPDATE statements atomic, so that this kind of error cannot occur?)
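For reference: a single UPDATE in MySQL is atomic, and a conditional decrement can also prevent overselling at the database level. A minimal sketch, assuming the same Laravel-style query builder as the code above; $goods_id and the affected-rows check are my additions, not part of the original:

    // Atomically decrement stock only while stock remains:
    // UPDATE goods SET num = num - 1 WHERE id = ? AND num > 0
    $affected = DB::table("goods")
        ->where("id", $goods_id)   // $goods_id: hypothetical id of the sale item
        ->where("num", ">", 0)
        ->decrement("num", 1);

    if ($affected === 0) {
        return ""; // another request took the last unit first; sold out
    }

Because the WHERE clause and the decrement execute as one statement, two concurrent requests can never both take the last unit, even without the Redis queue in front.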

That covers the concern that inserting the order and decrementing the stock immediately after each successful purchase puts too much pressure on the database server.

Hence the second idea: store the user's user_id in a Redis list (for example: a list named order whose values are user_ids), have a crontab timer pull the user_ids off the list one by one, generate the order data, insert it into the MySQL order table, and decrement the stock field by 1 each time.

The code is similar to the above:

/* Pop one unit of stock from the Redis queue, as before */
 $count = \Redis::lpop("goods_store"); // lpop is atomic
 if (!$count)
    return "";

   /* Queue the winner's user_id for later persistence */
    \Redis::lpush("order", $user_id);


Then the crontab job processes the queue:

   $user_id = \Redis::rpop("order"); // rpop takes only the key name

  1. // generate the order record; order_num is the order number
    $data["order_num"] = *****************;
    $data["user_id"]   = $user_id;
    $data["goods_id"]  = **;
    .......
    $res = DB::table("order")->insert($data);

 2. // decrement the stock field num
  if ($res)
    DB::table("goods")->decrement("num", 1);

My question about the second approach: after a successful grab, the user_id is queued first, and a timer later pulls it from the queue, generates the order data, inserts it into the MySQL order table, and decrements the stock. This does relieve the pressure on the database server. But my purchase flow is designed like this: after a successful grab, the user is shown a [Go to payment] button and taps through to the order list page (whose data is read from MySQL). Because the order data is generated by the timer before being inserted into MySQL, there is inevitably a delay. If the user taps [Go to payment] and lands on the order list page during that window, won't they be dismayed to find that no order has been generated yet?
[Is it even reasonable to design the purchase flow this way? Should the user perhaps not enter the order list and click pay at all, with the order data generated and inserted into the database only after the payment succeeds?]

These are my doubts about the two approaches; I hope someone experienced can explain, or that we can discuss them together.

Apr.21,2022

  1. Store the flash-sale quantity and related info as JSON in Redis, decrement the stock directly there, and operate only on the Redis JSON during the sale. When you need the true remaining count, read the database value plus the Redis remainder (see the sketch after this list).
  2. If a so-called seckill has a million units in stock, it is not really a seckill. Genuine seckill data is never large; after all the validation it can be written straight to the database.
  3. If the volume really is that big, consider small adjustments on top of your original design.
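A minimal sketch of the first suggestion, assuming a plain Redis string counter rather than a JSON structure; the key name goods:123:stock is hypothetical. DECR is atomic, so it can act as the in-memory stock gate while the database is written later:

    // Seed the counter once before the sale starts.
    \Redis::set("goods:123:stock", 10);

    // Hot path: the atomic decrement decides who wins.
    $left = \Redis::decr("goods:123:stock");
    if ($left < 0) {
        \Redis::incr("goods:123:stock"); // went below zero: undo and report sold out
        return "";
    }
    // Winner: persist later, e.g. via the queue from the question's second idea.
    // The "full" remaining count = stock already flushed to MySQL + $left in Redis.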

Write the order-generation logic that currently lives in crontab as a service, and have crontab trigger it periodically. When the user clicks pay, first check Redis for the user's key. If it exists, generate the order through the service and delete the key. If it does not, the order may already have been persisted by the crontab job; if it is in neither place, it is an invalid order.
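A sketch of the flow this answer describes; the per-user key pending:{user_id} and the OrderService class are hypothetical names used here for illustration (the answer does not specify them):

    // On the "Pay" click: decide where the order currently lives.
    $key = "pending:{$user_id}";
    if (\Redis::exists($key)) {
        // crontab has not persisted it yet: create the order now via the shared
        // service, then delete the key so crontab cannot create a duplicate.
        OrderService::createOrder($user_id);
        \Redis::del($key);
    } elseif (DB::table("order")->where("user_id", $user_id)->exists()) {
        // Already persisted by the crontab job: proceed straight to payment.
    } else {
        return ""; // in neither place: an invalid order
    }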


Your question rests on a big premise: high concurrency. But high concurrency is a very vague notion; how much concurrency counts as high? In that limiting case there is a great deal to consider, starting with whether the server configuration can take it at all: if your RAM is junk modules, no amount of clever design will survive high concurrency.
My views on your question:
First, apart from a few big e-commerce platforms, very few platforms ever face high concurrency (Baidu included: a while ago Baidu Cloud offered 6-month hosts for 11 yuan and there was no buying stampede). Studying this is almost the art of dragon-slaying, a skill with nowhere to use it. Even if you did join one of those big companies, even as an architect, Redis + PHP is not how this problem would be solved: PHP is not a language for concurrency problems, and Redis's inherent fragility keeps it from being used casually in a product serving hundreds of millions.
Second, for high concurrency one would consider Go first, just as for a website one would consider PHP before ASP. Note that I said consider first.
Third, there is no big problem with your thinking. Put simply: the hard disk is too slow and storage cannot keep up, so you stage writes in memory first and move them from memory to disk later. If the memory layer dies in between (Redis crashes), everything is lost. That is real, and there is no cure short of modifying the Redis source or doing secondary development on MySQL. Meanwhile Kafka already solves many of these problems by itself, so rolling your own costs more than it gains, and the rewrite might not even fix it.
Fourth, your question is typical of beginners who have just started writing code, myself included when I was first learning at college. It is a favourite of phpers halfway through a training course and about to hunt for jobs: they construct the scenario from their own imagination, but the view is too narrow and covers only one or two links. High concurrency is an end-to-end matter, from the network all the way to final storage. Many people think about it at the nginx layer alone; and if PHP, a language wrapped in many layers and not very efficient, is the thing solving it, that only shows it was never real concurrency to begin with. PHP handling tens of thousands of connections and thousands of concurrent requests without falling over? You probably have no feel for what thousands of concurrent requests means; look up how DDoS attacks work and you will see that the concurrency you usually imagine is not worth mentioning.
