How does timeit determine the loop count shown in its output?

In the Python documentation for timeit (https://docs.python.org/3.6/library/timeit.html#basic-examples), there is this example code:

$ python3 -m timeit '"-".join(str(n) for n in range(100))'
10000 loops, best of 3: 30.2 usec per loop
$ python3 -m timeit '"-".join([str(n) for n in range(100)])'
10000 loops, best of 3: 27.5 usec per loop
$ python3 -m timeit '"-".join(map(str, range(100)))'
10000 loops, best of 3: 23.2 usec per loop

Where does the 10000 in "10000 loops, best of 3: 30.2 usec per loop" come from?

>>> import timeit
>>> timeit.timeit('"-".join(str(n) for n in range(100))', number=10000)
0.3018611848820001
>>> timeit.timeit('"-".join([str(n) for n in range(100)])', number=10000)
0.2727368790656328
>>> timeit.timeit('"-".join(map(str, range(100)))', number=10000)
0.23702679807320237

From the code above I can see that, in the first example, the 10000 runs of "-".join(str(n) for n in range(100)) took a total of about 0.3018 seconds, i.e. roughly 301,861 microseconds (usec), so each loop takes about 30.18 microseconds (usec).
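
As a rough sanity check of that arithmetic (the exact timing will vary by machine), the per-loop figure can be reproduced from the total returned by timeit.timeit:

import timeit

# Total time in seconds for 10000 executions of the statement.
total = timeit.timeit('"-".join(str(n) for n in range(100))', number=10000)

# Divide by the loop count and convert seconds to microseconds
# to get the per-loop value reported by the command-line tool.
per_loop_usec = total / 10000 * 1e6
print(per_loop_usec)  # roughly 30 usec per loop on the machine above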

Does the command-line form of timeit default to running 10000 loops?


The example in the documentation is as follows:

$ python -m timeit -s 'text = "sample string"; char = "g"'  'char in text'
10000000 loops, best of 3: 0.0877 usec per loop
$ python -m timeit -s 'text = "sample string"; char = "g"'  'text.find(char)'
1000000 loops, best of 3: 0.342 usec per loop

That is, when executed on the command line, timeit automatically determines how many times to run the loop. The shorter a single execution takes, the more times it is executed, so that the total measured time is long enough to be meaningful.
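
Here is a minimal sketch of roughly what the command-line tool does, using timeit.Timer.autorange() (available since Python 3.6) to pick the loop count automatically; the statement and the repeat count of 3 are simply taken from the first example above:

import timeit

timer = timeit.Timer('"-".join(str(n) for n in range(100))')

# autorange() keeps increasing the loop count until one measurement
# takes at least 0.2 seconds, then returns
# (number_of_loops, total_time_for_that_many_loops).
number, _ = timer.autorange()

# The command-line tool then reports the best of several repeats,
# divided by the loop count, as "usec per loop".
best = min(timer.repeat(repeat=3, number=number))
print("%d loops, best of 3: %.3g usec per loop" % (number, best / number * 1e6))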
