Are there restrictions in Windows 10 that affect Scrapy crawlers?

On a company computer that is joined to a domain and runs Windows 10, the crawl hits many retries: part of the data is collected, but some requests keep retrying indefinitely and the crawl cannot proceed. The cause is unknown.
It has nothing to do with proxy availability, and the same script runs without any problem under CentOS 7.
For example:

2018-04-25 08:44:42 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.autozi.com/goods/search.html?carModelId=1711281226033381&_=1524472973719&categoryId=148000000000000&categoryLevel=1> (failed 3 times): User timeout caused connection failure: Getting https://www.autozi.com/goods/search.html?carModelId=1711281226033381&_=1524472973719&categoryId=148000000000000&categoryLevel=1 took longer than 20.0 seconds..
2018-04-25 08:44:42 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.autozi.com/goods/search.html?carModelId=1711281226033381&_=1524472973719&categoryId=144000000000000&categoryLevel=1> (failed 3 times): User timeout caused connection failure: Getting https://www.autozi.com/goods/search.html?carModelId=1711281226033381&_=1524472973719&categoryId=144000000000000&categoryLevel=1 took longer than 20.0 seconds..
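To narrow this down, it may help to fetch one of the failing URLs from the same Windows 10 machine with a plain HTTP client outside Scrapy. If that also times out, the problem is more likely the network/domain environment (corporate firewall, proxy, DNS) than Windows or Scrapy; if it succeeds, the issue sits closer to the Scrapy/Twisted stack or to how the site treats that traffic. A minimal sketch, assuming the requests library is installed; the URL is copied from the log above:

# Quick check outside Scrapy: does a plain GET to the same URL also time out
# on this Windows 10 machine? (20-second timeout to match the Scrapy log.)
import requests

url = (
    "https://www.autozi.com/goods/search.html"
    "?carModelId=1711281226033381&_=1524472973719"
    "&categoryId=148000000000000&categoryLevel=1"
)

try:
    resp = requests.get(url, timeout=20)
    print(resp.status_code, len(resp.content))
except requests.exceptions.RequestException as exc:
    print("request failed:", exc)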

Your IP is most likely being blocked or rate-limited by the target site.
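If the site is indeed throttling or blocking the crawler's IP, slowing the crawl down usually cuts the retries. A sketch of Scrapy settings that reduce request pressure; the values are illustrative, not tuned for this site:

# settings.py -- illustrative values, adjust for the target site

# Slow down and limit concurrency so the site is less likely to throttle you.
DOWNLOAD_DELAY = 2
CONCURRENT_REQUESTS_PER_DOMAIN = 2

# Let Scrapy adapt the delay to the observed response latency.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1
AUTOTHROTTLE_MAX_DELAY = 30

# Give slow responses more time before they count as failures,
# and retry a little more before giving up.
DOWNLOAD_TIMEOUT = 60
RETRY_TIMES = 5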
