Pyspider: importing bulk URLs from a file

I have a batch of irregular URLs stored in a file.
I want to crawl the page corresponding to each URL and extract specific content from it.
There is no need for recursive fetching on any of the URLs.

How can I implement this with pyspider?

Jul.12,2021

The URLs can be saved in a database and read back from there.
But how do you load these URLs? The page elements are also different on each page.
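A minimal sketch of how this could look, assuming the URLs sit one per line in a plain-text file (URL_FILE is a hypothetical path) and that the page title plus body text is enough to start from. on_start reads the file and queues each URL, and the callback does not crawl anything further, so there is no recursive fetching:

    from pyspider.libs.base_handler import *

    # Hypothetical path; assumes one URL per line in the file.
    URL_FILE = '/path/to/urls.txt'

    class Handler(BaseHandler):
        crawl_config = {}

        def on_start(self):
            # Queue every URL from the file; pyspider deduplicates tasks
            # by URL, so repeated lines are only fetched once.
            with open(URL_FILE) as f:
                for line in f:
                    url = line.strip()
                    if url:
                        self.crawl(url, callback=self.detail_page)

        def detail_page(self, response):
            # No further self.crawl() calls here, so each URL is fetched
            # exactly once and nothing is followed recursively.
            return {
                'url': response.url,
                'title': response.doc('title').text(),
                # Placeholder selector: page elements differ per site,
                # so adjust this (or branch on response.url) as needed.
                'content': response.doc('body').text(),
            }

Because the pages are not uniform, one option is to branch on response.url inside detail_page, or to pass a hint along with each task via self.crawl(url, save={...}, callback=...) and read it back from response.save in the callback. Note that on_start runs inside the pyspider processor, so the file must be readable from wherever that process runs.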
