Computing statistics over a large amount of data in MySQL is too slow, and PHP times out.

Excuse me, I want to build a data-statistics feature.
I first select the order_id values from the order table, then query the order_detail table with IN (order_id, ...) to pull the detail rows,
and then compute the statistics in PHP.
Now that business volume is surging, computing one month's statistics times out. Is there any good way to optimize this? Please give me some advice!
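For context, the current approach roughly looks like the following sketch. The connection details, the created_at date column, and the price/quantity fields are assumptions; only the two-step "IDs first, then IN (...), then aggregate in PHP" flow comes from the question:

```php
<?php
// Rough sketch of the approach described above (names are assumptions):
// 1) fetch the order ids for one month, 2) pull details with IN (...),
// 3) aggregate row by row in PHP — this is what times out as volume grows.
$pdo = new PDO('mysql:host=localhost;dbname=shop;charset=utf8mb4', 'user', 'pass');

$ids = $pdo->query(
    "SELECT order_id FROM `order`
     WHERE created_at >= '2021-02-01' AND created_at < '2021-03-01'"
)->fetchAll(PDO::FETCH_COLUMN);

$placeholders = implode(',', array_fill(0, count($ids), '?'));
$stmt = $pdo->prepare("SELECT * FROM order_detail WHERE order_id IN ($placeholders)");
$stmt->execute($ids);

$total = 0;
foreach ($stmt as $row) {
    $total += $row['price'] * $row['quantity']; // statistics computed in PHP
}
echo $total, PHP_EOL;
```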

Mar. 20, 2021

Don't query everything at once. For example, if you have 1,000,000 rows to aggregate, you can't read all 1,000,000 in a single request. Count them in pages (batches), or apply a default filter condition, so the request won't time out. A sketch follows below.
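A minimal sketch of the batched idea, using keyset pagination on order_id so each query only touches one slice of the data. The table/column names and the batch size are assumptions:

```php
<?php
// Aggregate order_detail in pages of $batch orders instead of one huge query.
$pdo = new PDO('mysql:host=localhost;dbname=shop;charset=utf8mb4', 'user', 'pass');

$batch  = 10000; // orders per page (assumption)
$lastId = 0;     // resume point; could also be persisted between requests
$total  = 0;

$stmt = $pdo->prepare(
    "SELECT order_id, SUM(price * quantity) AS amount
     FROM order_detail
     WHERE order_id > :last
     GROUP BY order_id
     ORDER BY order_id
     LIMIT " . (int) $batch
);

do {
    $stmt->execute([':last' => $lastId]);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    foreach ($rows as $row) {
        $total  += $row['amount'];
        $lastId  = $row['order_id']; // remember where this page ended
    }
} while (count($rows) === $batch);   // stop once a page comes back short

echo $total, PHP_EOL;
```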


MySQL does not handle very large data sets or heavy join queries well.

Method 1:

  1. Make sure the order_id column is indexed in both tables.
  2. Then, in PHP, loop over the orders and query each one with SELECT * FROM order_detail WHERE order_id = ? (see the sketch after this list). Note that this requires enough memory.
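A minimal sketch of step 2, reusing one prepared statement per order. The index from step 1, the date column, and the price/quantity fields are assumptions:

```php
<?php
// Assumes order_detail(order_id) is indexed (step 1), e.g.:
//   ALTER TABLE order_detail ADD INDEX idx_order_id (order_id);
$pdo = new PDO('mysql:host=localhost;dbname=shop;charset=utf8mb4', 'user', 'pass');

$orderIds = $pdo->query(
    "SELECT order_id FROM `order`
     WHERE created_at >= '2021-02-01' AND created_at < '2021-03-01'"
)->fetchAll(PDO::FETCH_COLUMN);

// One prepared statement, executed once per order_id.
$detail = $pdo->prepare("SELECT price, quantity FROM order_detail WHERE order_id = ?");

$total = 0;
foreach ($orderIds as $id) {
    $detail->execute([$id]);
    foreach ($detail as $row) {
        $total += $row['price'] * $row['quantity'];
    }
}
echo $total, PHP_EOL;
```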

Method 2:
1. Move the order and order_detail data into HBase and do the statistics there; data statistics and time-series data are its strong point.


Here is an idea: suppose you have 120,000 rows; split the work into, say, 4 queries. Each time you get a partial result, cache it in Redis together with the last order_id covered by that run (assuming order_id is auto-incrementing) under one key. Run a crontab job every morning to bring the cached totals up to date. When an HTTP request actually asks for the statistics, take the cached result and add in only the rows newer than the stored order_id. A sketch follows below. For reference.
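A minimal sketch of the cron side of this idea, assuming the phpredis extension, auto-incrementing order_id, and hypothetical key names stats:order_total / stats:last_order_id:

```php
<?php
// Cron job: aggregate only orders newer than the last order_id already counted,
// then store the running total and the new high-water mark back in Redis.
$pdo   = new PDO('mysql:host=localhost;dbname=shop;charset=utf8mb4', 'user', 'pass');
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$lastId = (int) $redis->get('stats:last_order_id');   // 0 on first run
$total  = (float) $redis->get('stats:order_total');   // 0.0 on first run

$stmt = $pdo->prepare(
    "SELECT COALESCE(MAX(order_id), :last) AS max_id,
            COALESCE(SUM(price * quantity), 0) AS amount
     FROM order_detail
     WHERE order_id > :last2"
);
$stmt->execute([':last' => $lastId, ':last2' => $lastId]);
$row = $stmt->fetch(PDO::FETCH_ASSOC);

$redis->set('stats:order_total', $total + $row['amount']);
$redis->set('stats:last_order_id', $row['max_id']);

// The HTTP endpoint then just reads stats:order_total (plus any rows added since
// this run, if it needs up-to-the-minute numbers) instead of scanning the tables.
```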
