Optimizing MySQL under continuous concurrent writes

Business scenario

A market-data (quotation) system writes and updates data 7×24 without interruption: every trade record must be persisted, and the K-line (candlestick) bars must be recalculated from the trade records and then updated.

Current solution

Celery is used for asynchronous task distribution. After the records are fetched, a process formats them and dispatches them to workers through MQ. Each project is already sharded into separate databases and tables. Workers run on gevent for concurrency and perform batched upserts with INSERT ... ON DUPLICATE KEY UPDATE.
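A minimal sketch of how such a batched upsert statement can be built. The table name `kline`, the column set, and the merge rules (keep `open`, merge `high`/`low`, overwrite `close`, accumulate `volume`) are assumptions for illustration, not taken from the original post:

```python
def build_kline_upsert(rows):
    """Build one multi-row INSERT ... ON DUPLICATE KEY UPDATE statement.

    rows: list of tuples (symbol, period, open_ts, open, high, low, close, volume).
    Returns (sql, flat_params) suitable for cursor.execute(sql, flat_params).
    """
    cols = ["symbol", "period", "open_ts", "open", "high", "low", "close", "volume"]
    one_row = "(" + ", ".join(["%s"] * len(cols)) + ")"
    placeholders = ", ".join([one_row] * len(rows))
    # Merge rules on key conflict. Note: VALUES() still works but is deprecated
    # since MySQL 8.0.20 in favor of row aliases (INSERT ... AS new ...).
    updates = (
        "high = GREATEST(high, VALUES(high)), "
        "low = LEAST(low, VALUES(low)), "
        "close = VALUES(close), "
        "volume = volume + VALUES(volume)"
    )
    sql = (
        "INSERT INTO kline ({}) VALUES {} "
        "ON DUPLICATE KEY UPDATE {}".format(", ".join(cols), placeholders, updates)
    )
    params = [value for row in rows for value in row]
    return sql, params
```

Batching many bars into one statement keeps the per-row overhead low, but every row the statement touches holds its index-record lock until the transaction commits, which matters for the problem described below.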

Problems encountered

Once MQ shows an inflow of more than 100 tasks per second, we hit "Lock wait timeout exceeded; try restarting transaction" errors. Increasing the number of consumers at that point only makes the lock contention fiercer, but without more consumers the tasks cannot be processed in time.
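One common mitigation for this error (MySQL errno 1205) is to keep each transaction short and retry it with jittered exponential backoff, so competing workers de-synchronize instead of piling onto the same row locks. A sketch, where `OperationalError` is a stand-in for the real driver's exception (e.g. `pymysql.err.OperationalError`):

```python
import random
import time

LOCK_WAIT_TIMEOUT = 1205  # MySQL errno for "Lock wait timeout exceeded"


class OperationalError(Exception):
    """Stand-in for the DB driver's operational error, carrying the errno."""
    def __init__(self, errno, msg):
        super().__init__(msg)
        self.errno = errno


def run_with_retry(txn, max_attempts=3, base_delay=0.05):
    """Run a transaction callable, retrying lock-wait timeouts with
    jittered exponential backoff; re-raise anything else immediately."""
    for attempt in range(max_attempts):
        try:
            return txn()
        except OperationalError as exc:
            if exc.errno != LOCK_WAIT_TIMEOUT or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
```

Independently of retries, sorting each batch by primary key before the upsert makes all workers acquire row locks in the same order, which reduces deadlocks between concurrent batches.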

Current plan

Put MongoDB in front of MySQL: K-line inserts and updates go to Mongo first, historical data is inserted into MySQL, and Mongo is cleared periodically.
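The idea behind putting a fast store in front of MySQL can be sketched with an in-memory stand-in: hot K-line updates are merged in the buffer, then drained to MySQL in bulk, so MySQL sees one row write per bar per flush instead of one per trade. All names and the bar-merge rules here are illustrative assumptions:

```python
import threading


class KlineBuffer:
    """In-memory stand-in for the proposed Mongo write buffer."""

    def __init__(self):
        self._lock = threading.Lock()
        self._bars = {}  # (symbol, period, open_ts) -> bar dict

    def apply_trade(self, symbol, period, open_ts, price, qty):
        """Merge one trade into its K-line bar."""
        key = (symbol, period, open_ts)
        with self._lock:
            bar = self._bars.get(key)
            if bar is None:
                self._bars[key] = {"open": price, "high": price, "low": price,
                                   "close": price, "volume": qty}
            else:
                bar["high"] = max(bar["high"], price)
                bar["low"] = min(bar["low"], price)
                bar["close"] = price
                bar["volume"] += qty

    def drain(self):
        """Atomically take a snapshot for bulk insertion into MySQL
        and clear the buffer (the 'periodically clear Mongo' step)."""
        with self._lock:
            bars, self._bars = self._bars, {}
        return bars
```

With Mongo, `apply_trade` would map to an upsert with `$max`/`$min`/`$set`/`$inc` operators on the bar document, and `drain` to a periodic job that bulk-inserts closed bars into MySQL and deletes them from Mongo.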

Is this scheme optimized enough, and what data store is best suited to the continuous inserts and updates of a market-data system?

Jun.07,2021