How does mysql export a data table of 6 million records?

1. How does MySQL export a data table of 6 million records?
2. I've tried it; a direct export overloads the database.

Jun. 23, 2022

here's what I'm writing right now:

<?php
/**
 * Created by IntelliJ IDEA.
 * User: felix
 * Date: 2019-03-01
 * Time: 18:49
 */

set_time_limit(0);

$servername = 'localhost';
$username   = '';
$password   = '';
$dbname     = '';

$i = 57007;

$bingTime = microtime(true);
echo 'begin ' . $bingTime . ' s' . PHP_EOL;

while (9690183 > $i) {
    $i++;

    A:

    // open a fresh connection for every row; label A allows retrying after a failure
    $conn = new mysqli($servername, $username, $password, $dbname);

    /* check connection */
    if ($conn->connect_errno) {
        printf("Connect failed: %s\n", $conn->connect_error);
        sleep(3);
        goto A;
    }

    $pySql = "select count(*) as sum from wzjs_product_journal_21_6 where product_journal_id = {$i}";

    $result = $conn->query($pySql);

    if ($result === false) {
        printf("Invalid query: %s\nWhole query: %s\n", $conn->error, $pySql);
        $conn->close();
        goto A; // retry this id instead of calling fetch_assoc() on false
    }

    if ($result->fetch_assoc()['sum'] == 0) {
        $pySql = "INSERT INTO wzjs_product_journal_21_6 SELECT * FROM wzjs_product_journal_21 WHERE product_journal_id = {$i}";

        $result = $conn->query($pySql);

        if ($result === false) {
            printf("Invalid query: %s\nWhole query: %s\n", $conn->error, $pySql);
        }
    }

    usleep(5);

    $conn->close();

    usleep(5);
}

$endTime = microtime(true);
echo ' end ' . $endTime . ' s' . PHP_EOL;

$diffTime = $endTime - $bingTime;

echo 'Diff Time ' . $diffTime . ' s' . PHP_EOL;

echo 'i:' . $i . "\n";

copy one by one according to the primary key. I have more than 9690183 pieces of data to migrate because the previous table can't be indexed.
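Copying one id per connection means millions of connect/query/close round trips. A batched variant is a possible alternative (an assumption, not the author's script): group the primary keys into ranges and issue one `INSERT ... SELECT ... BETWEEN` per range. This sketch only prints the statements such a version would run; the table names come from the script above, and the batch size of 100000 is an arbitrary choice:

```shell
# Generate batched copy statements instead of one per id.
# start/end match the id range in copy.php; batch size is an assumption.
start=57007
end=9690183
batch=100000
i=$start
while [ "$i" -le "$end" ]; do
    upper=$((i + batch - 1))
    [ "$upper" -gt "$end" ] && upper=$end
    echo "INSERT INTO wzjs_product_journal_21_6 SELECT * FROM wzjs_product_journal_21 WHERE product_journal_id BETWEEN $i AND $upper;"
    i=$((upper + 1))
done > batch.sql
wc -l < batch.sql   # 97 batched statements for this id range
```

The resulting `batch.sql` could then be fed to the `mysql` client in one session, so a dropped connection costs at most one batch rather than stalling the whole loop.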

but for some reason, this error often comes up:

PHP Warning:  mysqli::mysqli(): (HY000/2002): No such file or directory in /data/wwwroot/www.scienceport.cn/copy_2.php on line 24

the goto above is intended to solve this problem; it's currently being tested. The reason the script queries first and then copies is that it always stopped partway before. Even now it takes a long time, and since I'm connected over SSH to Aliyun, the shell gets closed before long. In other words, you have to keep watching it, or the session disappears.
I run it like this:

php copy.php > copy.log &

while writing this up, I checked the log again, and the program is still alive.

March 04, 2019
this approach is also unreliable: more than 1 million rows still haven't been copied, and I don't know why. I'm looking for the cause of the failure.
The "No such file or directory" error seems to be an issue on Aliyun's side. In any case, there should be news today.

March 11, 2019
found that in some cases newly written rows are not added to the index, so data gets lost. The copy works correctly now.

March 12, 2019
https://my.oschina.net/guodap...
this link describes how to exit SSH and keep a job running.
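The usual pattern for that is `nohup`, which detaches the job from the terminal so it survives an SSH disconnect. A minimal sketch, shown with a stand-in command so it is self-contained; for the real migration the command would be `nohup php copy.php > copy.log 2>&1 &`:

```shell
# Detach a long-running job from the terminal so logout does not kill it.
# `sh -c 'echo copying...'` stands in for: php copy.php
nohup sh -c 'echo copying...' > copy.log 2>&1 &
echo $! > copy.pid          # remember the PID so progress can be checked later
wait "$(cat copy.pid)"      # only for this demo; a real run is left detached
cat copy.log
```

Afterwards, progress can be followed with `tail -f copy.log`, and the recorded PID checked with `ps -p "$(cat copy.pid)"`. Tools like `screen` or `tmux` achieve the same effect with a reattachable session.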


if it's just a simple query followed by an export, it can be read as a stream: the client runs a read-only query and writes the result rows to a file as they arrive.
if each row needs to be processed multiple times, a paged query is recommended.
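One way to page such a query is keyset pagination (an assumption on my part; `LIMIT ... OFFSET` also works but gets slower as the offset grows): each page asks for rows with a primary key greater than the last one seen. A real client would run, per page, something like `SELECT ... WHERE product_journal_id > :last_id ORDER BY product_journal_id LIMIT :page_size`. In this sketch a sorted id file stands in for the table so the loop is runnable:

```shell
# Keyset pagination over a stand-in table of primary keys.
seq 1 23 > ids.txt                      # stand-in for the table's sorted ids
last_id=0
page_size=10
pages=0
while :; do
    # stand-in for: SELECT ... WHERE id > last_id ORDER BY id LIMIT page_size
    page=$(awk -v last="$last_id" -v n="$page_size" \
        '$1 > last { print; if (++c == n) exit }' ids.txt)
    [ -z "$page" ] && break
    pages=$((pages + 1))
    last_id=$(printf '%s\n' "$page" | tail -n 1)    # resume point for next page
done
echo "pages=$pages last_id=$last_id"
```

Because each page seeks directly to `id > last_id` via the index, page cost stays flat instead of growing with the offset, which matters over 6 million rows.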


it should be a background task that reads page by page, keeps appending to an Excel or CSV file, and compresses everything into a zip archive once it finishes
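The flow described above can be sketched as follows. The three `echo` lines stand in for paged SELECT results, and `gzip` stands in for zip so the example runs anywhere:

```shell
# Page-append-then-compress flow: each "page" of results is appended
# to one CSV; the file is compressed only after the last page.
rm -f export.csv export.csv.gz
for page in 1 2 3; do
    # stand-in for "fetch one page and format it as CSV"
    echo "row_from_page_$page,value" >> export.csv
done
gzip -f export.csv                  # compress once every page is appended
gunzip -c export.csv.gz | wc -l     # sanity check: one row per page
```

Appending page by page keeps memory flat no matter how many rows the table has, and compressing only at the end avoids rewriting the archive on every page.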
