How to quickly insert a 100,000-line nginx log into MySQL after parsing?

Hello, everyone.
My nginx log currently produces about 100,000 rows of data a day. I have already parsed each field with Python, and now I want to insert the rows into a MySQL database. Inserting them one at a time in a for loop is too slow. Is there a good way to do this?

Mar 04, 2021

batch insert, 1000 at a time
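
For MySQL, a batch insert just means one INSERT statement carrying many value tuples, so the round trip is paid once per batch instead of once per row. A minimal sketch, assuming pymysql and a three-column access_table (the connection details are placeholders):

    import pymysql

    # placeholder credentials; a sketch only
    conn = pymysql.connect(host='localhost', user='root',
                           password='...', database='logs')
    cursor = conn.cursor()

    batch = [
        ('2018-04-20 12:12:12', '/login/', '1.1.1.1'),
        ('2018-04-20 12:12:13', '/logout/', '2.2.2.2'),
    ]
    # one statement, one (%s, %s, %s) group per row
    placeholders = ', '.join(['(%s, %s, %s)'] * len(batch))
    sql = 'insert into access_table values ' + placeholders
    # flatten the rows so the parameter count matches the placeholders
    cursor.execute(sql, [value for row in batch for value in row])
    conn.commit()

    cursor.close()
    conn.close()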

executemany


First of all, as mentioned above, use executemany to insert in bulk: insert 500-1000 rows at a time, then commit.
Apart from that, I'm a little confused:
1. 100,000 raw log lines: after parsing, shouldn't they be aggregated before going into the database? Is there really still that much data at that point?
2. If you insert 100,000 rows a day, that is nearly 10 million rows within three months. With MySQL, efficiency drops noticeably once a table holds tens of millions of rows, so you could consider other storage such as Elasticsearch (see the sketch after the batch-insert code below).

Update: the batch-insert code would look roughly like this (not actually run):

    conn = ...
    cursor = ...
    with open('access.log', 'r') as f:
        c = 0
        data_list = []
        for line in f:  # iterate the file lazily instead of reading it all at once
            # parse the time, URL, and IP out of this line; placeholder values below
            data_list.append(('2018-04-20 12:12:12', '/login/', '1.1.1.1'))
            c += 1
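            # every 1000 rows, flush the batch to MySQL and start a new one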
            if c % 1000 == 0:
                cursor.executemany('insert into access_table values(%s, %s, %s)', data_list)
                conn.commit()
                data_list = []
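        # insert whatever is left over (fewer than 1000 rows)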
        if data_list:
            cursor.executemany('insert into access_table values(%s, %s, %s)', data_list)
            conn.commit()
    cursor.close()
    conn.close()
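
If you do end up moving to Elasticsearch as suggested in point 2, the official Python client ships a bulk helper. A minimal sketch, assuming the elasticsearch package, a local node, and an example index name nginx-access:

    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import bulk

    # the node URL and index name are examples only
    es = Elasticsearch(['http://localhost:9200'])

    def actions(rows):
        # rows: an iterable of (time, url, ip) tuples from the log parser
        for t, url, ip in rows:
            yield {
                '_index': 'nginx-access',
                '_source': {'time': t, 'url': url, 'ip': ip},
            }

    parsed_rows = [('2018-04-20 12:12:12', '/login/', '1.1.1.1')]  # parsed data goes here
    bulk(es, actions(parsed_rows), chunk_size=1000)  # 1000 documents per request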

try multiprocessing.dummy.Pool
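
A minimal sketch of that idea, assuming pymysql and the same three-column access_table as above (connection details are placeholders). multiprocessing.dummy.Pool is a thread pool behind the multiprocessing API, and each worker opens its own connection because a single MySQL connection must not be shared between threads:

    from multiprocessing.dummy import Pool  # threads, despite the module name
    import pymysql

    def insert_chunk(chunk):
        # each thread uses its own connection; sharing one is not thread-safe
        conn = pymysql.connect(host='localhost', user='root',
                               password='...', database='logs')  # placeholders
        with conn.cursor() as cursor:
            cursor.executemany(
                'insert into access_table values(%s, %s, %s)', chunk)
        conn.commit()
        conn.close()

    parsed_rows = [('2018-04-20 12:12:12', '/login/', '1.1.1.1')] * 100000
    chunks = [parsed_rows[i:i + 1000] for i in range(0, len(parsed_rows), 1000)]

    with Pool(4) as pool:  # 4 worker threads
        pool.map(insert_chunk, chunks)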
