Why does my python3 crawler always succeed once and then get rejected once?

submitting.
-ok! Submission successful: article 47-
submitting.
server rejected, retrying.
switching IP: https://42.232.157.172:23528
submitting.
-ok! Submission successful: article 48-
submitting.
server rejected, retrying.
switching IP: https://39.81.145.23:39501
submitting.
-ok! Submission successful: article 49-
submitting.
server rejected, retrying.
switching IP: https://175.4.20.194:25601
submitting.
-ok! Submission successful: article 50-
submitting.
server rejected, retrying.
switching IP: https://114.230.147.179:21960

submitting.

I'm using high-anonymity (gaoni) HTTPS proxy IPs. My flow is: first requests.get, then requests.post. The first submission succeeds, but the second time it feels as if requests.post has turned into a GET request. Why is that?
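One common explanation for "my POST turned into a GET": after a successful POST many servers reply with a 302/303 redirect, and requests follows that redirect with a GET by default, which looks exactly like this symptom. A minimal sketch, assuming a requests.Session is passed in and using a hypothetical endpoint URL, that makes the redirect visible instead of silently following it:

```python
SUBMIT_URL = "https://example.com/submit"  # hypothetical endpoint

def is_post_redirect(status_code):
    # 301/302/303 replies are re-issued as a GET by most HTTP clients,
    # including requests' default redirect handling.
    return status_code in (301, 302, 303)

def submit(session, data, proxies=None):
    # `session` is expected to be a requests.Session (assumption).
    # allow_redirects=False keeps the client from silently following
    # the redirect with a GET, so the real server answer stays visible.
    resp = session.post(SUBMIT_URL, data=data,
                        proxies=proxies, allow_redirects=False)
    if is_post_redirect(resp.status_code):
        print("server redirected; following it would use GET:",
              resp.headers.get("Location"))
    return resp
```

If the log shows a 302 here, the alternating success/rejection is the redirected GET hitting the endpoint, not a broken POST.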

Mar.16,2021

Check whether the other site has a CSRF check — look for a special token in the cookies or in the GET response.
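That token check can be sketched like this. The cookie/field name csrf_token is an assumption — inspect the real site's cookies and form HTML to find the actual name:

```python
import re

def extract_csrf(html, cookies):
    # Prefer a token set in the cookies; fall back to a hidden form
    # field in the GET response body. "csrf_token" is a guessed name.
    if "csrf_token" in cookies:
        return cookies["csrf_token"]
    m = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
    return m.group(1) if m else None

page = '<input type="hidden" name="csrf_token" value="abc123">'
print(extract_csrf(page, {}))  # -> abc123
```

Whatever value comes back must be sent along with the POST data (or headers), otherwise the server rejects every submission that reuses a stale token.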


Also check whether there is a limit on request frequency from the same source IP.
For example, after you make a request, sleep for a few seconds before requesting again.
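The suggested back-off is a one-liner around the submission loop. The 3-second default is an assumption; tune it to the target site's limit:

```python
import time

def submit_all(articles, submit, delay=3.0):
    # Call `submit` for each article, pausing `delay` seconds between
    # requests so a same-IP frequency limit is not tripped.
    results = []
    for article in articles:
        results.append(submit(article))
        time.sleep(delay)
    return results
```

If the alternating pattern disappears with a delay, the server was rate-limiting rather than rejecting the request itself.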


I tried using my own IP, without proxies.
