Data synchronization problem under high concurrency in MySQL

At present I've run into the following problem using MySQL. First, a description of the scenario (simplified a little to make it easier to answer):

Two machines use the same database account and read and write the same MySQL instance at the same time. Both of them perform the following operations:

  • Query a row of data by id.
  • Make incremental changes based on the original data.
  • Write the changed data back by id with an update.

So sometimes a situation like this occurs:

  • Machine 1 queries the row by id and gets the data.
  • Machine 2 queries the row by id and gets the data.
  • Machine 1 makes incremental changes based on the original data.
  • Machine 2 makes incremental changes based on the original data.
  • Machine 1 asks MySQL to store its data.
  • Machine 2 asks MySQL to store its data.

At this point, every trace of Machine 1's update has been erased. This process effectively needs to be serialized; that is, I want to keep both incremental changes.
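The interleaving above can be reproduced in plain Node.js, with an in-memory object standing in for the MySQL row (no real database involved; the field names are illustrative):

```javascript
// In-memory stand-in for the MySQL row; both "machines" share it.
const row = { id: 1, counter: 10 };

function readRow() {
  return { ...row }; // like: SELECT ... WHERE id = 1
}
function writeRow(data) {
  row.counter = data.counter; // like: UPDATE ... WHERE id = 1
}

// Both machines read BEFORE either writes — the interleaving in question.
const seenByMachine1 = readRow();
const seenByMachine2 = readRow();

seenByMachine1.counter += 5; // machine 1's incremental change
seenByMachine2.counter += 5; // machine 2's incremental change

writeRow(seenByMachine1);
writeRow(seenByMachine2); // overwrites machine 1's write

console.log(row.counter); // 15, not the expected 20: one update is lost
```

Both machines computed `10 + 5`, so the second write silently discards the first; the final value is 15 instead of 20.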

I tried to solve this with MySQL's transactions at the SERIALIZABLE isolation level, but that doesn't seem to help: a transaction guarantees there are no partial effects when one of its statements fails, but nothing fails here.

Of course, I don't have a deep understanding of transactions; maybe I'm just not using them well.

Also, the incremental changes here are time-consuming, depend on the data, and require additional calculation, so they cannot be expressed as a single UPDATE statement.

I'm using Node.js and Sequelize, and I haven't found a solution yet.

If you have any experience or suggestions, I'd be grateful for the advice.


This is actually the classic lost-update problem between transactions. There are two solutions:

1. Optimistic locking:
  • Start a transaction.
  • Query the row, including a column that can serve as proof it has not been updated by another transaction, such as a last-update timestamp or a version number.
  • Update with a WHERE condition on that column, and set it to a new value in the same statement: for example, WHERE last_update_time = the value read at the start of the transaction.
  • Commit.
The idea is that the WHERE condition of the UPDATE is evaluated against the current committed row, so it is not fooled by a stale read: if the data has been updated by another transaction in the meantime, the UPDATE matches zero rows, and you roll back (and retry).
Result: transaction A succeeds; transaction B is rolled back because its update condition no longer holds.
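In application terms, method 1 is an optimistic compare-and-set. A minimal in-memory sketch in Node.js (the `version` column and helper names are illustrative, not from the answer above):

```javascript
// In-memory stand-in for the row; `version` plays the role of last_update_time.
const row = { id: 1, counter: 10, version: 0 };

// Like: UPDATE t SET counter = ?, version = version + 1
//       WHERE id = ? AND version = ?   -- returns the affected-row count
function conditionalUpdate(newCounter, expectedVersion) {
  if (row.version !== expectedVersion) return 0; // someone else updated first
  row.counter = newCounter;
  row.version += 1;
  return 1;
}

// Both machines read the same snapshot, as in the problematic interleaving.
const seen1 = { ...row };
const seen2 = { ...row };

// Machine 1 writes first: its version check passes.
conditionalUpdate(seen1.counter + 5, seen1.version); // affected rows: 1

// Machine 2's stale write matches zero rows instead of erasing machine 1's.
const affected = conditionalUpdate(seen2.counter + 5, seen2.version); // 0

// Affected rows = 0 means "roll back and retry": re-read, then update again.
if (affected === 0) {
  const fresh = { ...row };
  conditionalUpdate(fresh.counter + 5, fresh.version);
}
console.log(row.counter); // 20: both increments survive
```

The retry loop is the price of optimistic locking: a losing writer re-reads fresh data and recomputes its change.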

2. Use SELECT ... FOR UPDATE to lock the row.
Result: transaction B waits for transaction A to commit before it proceeds.
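At the SQL level, method 2 looks like the following (table and column names are placeholders; with an index on `id`, InnoDB locks only the matching row, not the whole table):

```sql
START TRANSACTION;
-- Takes an exclusive lock on the row; a concurrent transaction running the
-- same statement for id = 1 blocks here until this transaction commits.
SELECT counter FROM t WHERE id = 1 FOR UPDATE;
-- ... compute the new value in the application, however long that takes ...
UPDATE t SET counter = /* new value */ WHERE id = 1;
COMMIT;
```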

Personally, I'd say both method 1 and method 2 solve the lost-update problem under concurrent requests, and it is best to combine the two.
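Since the question mentions Sequelize: method 2 maps directly onto its transaction API via the `lock` option. A sketch under assumptions — an existing `sequelize` instance and a `Counter` model with `id` and `counter` fields are placeholders for your own schema, and this is not runnable without a live MySQL connection:

```javascript
const { Transaction } = require('sequelize');

async function incrementBy(id, amount) {
  await sequelize.transaction(async (t) => {
    // SELECT ... FOR UPDATE: other writers block until this commits.
    const row = await Counter.findByPk(id, {
      transaction: t,
      lock: Transaction.LOCK.UPDATE,
    });
    row.counter = row.counter + amount; // the time-consuming incremental change
    await row.save({ transaction: t });
  });
}
```

The callback form commits automatically on success and rolls back if the callback throws, which keeps the lock's lifetime tied to the transaction.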
