Python scrapy.Request does not download the web page

I use scrapy.Request to fetch a page, but nothing happens.

import scrapy
def ret(response):
    print("start print")
    print(response.body)


url = "https://doc.scrapy.org/en/latest/intro/tutorial.html"
v = scrapy.http.Request(url=url, callback=ret)
print(url, v)

output:

https://doc.scrapy.org/en/latest/intro/tutorial.html
<GET https://doc.scrapy.org/en/latest/intro/tutorial.html>
The method ret is not executed at all, so the page content is never printed.


You have only constructed a Request object; on its own it does not open a network connection or download anything. That work is done by Scrapy's Downloader, driven by a Spider.
Refer to the official documentation:

Generally speaking, Request objects are generated in spiders and passed across the system until they reach the Downloader, which executes the request and returns a Response object, which travels back to the spider that issued the request.
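That flow can be illustrated with a toy sketch in plain Python. This is not Scrapy's actual internals; the engine and fake_downloader names here are made up for illustration, and no real HTTP request is made:

```python
# Toy illustration of the spider -> downloader -> callback flow.
# A Request is just a description of work; something else (the
# "engine" plus a downloader) must fetch it and call the callback.

class Request:
    def __init__(self, url, callback):
        self.url = url
        self.callback = callback

class Response:
    def __init__(self, url, body):
        self.url = url
        self.body = body

def fake_downloader(request):
    # Stand-in for the real downloader: returns a canned body
    # instead of performing an actual HTTP request.
    return Response(request.url, b"<html>tutorial page</html>")

def engine(requests):
    # Simplified engine loop: download each request, then feed the
    # resulting Response into that request's callback.
    for request in requests:
        response = fake_downloader(request)
        request.callback(response)

def ret(response):
    print("start print")
    print(response.body)

engine([Request("https://doc.scrapy.org/en/latest/intro/tutorial.html", ret)])
```

This is why constructing a Request and printing it does nothing: until an engine hands it to a downloader, the callback is never invoked.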

If you want it to actually run, you can define a spider like the following:

import scrapy
from scrapy.spiders import CrawlSpider

url = 'https://doc.scrapy.org/en/latest/intro/tutorial.html'


def ret(response):
    print('start print\n')
    print(response.body)


def errorcb(err):
    # err is a twisted.python.failure.Failure, not a string,
    # so print it directly instead of concatenating.
    print(err)


class MySpider(CrawlSpider):
    name = "test"

    def start_requests(self):
        return [scrapy.http.Request(url=url, callback=ret, errback=errorcb)]

Save it as a file named scrapy_cb.py, then run it with:

scrapy runspider scrapy_cb.py
