Scrapy's -o option saves an empty file

import time
from urllib.parse import urljoin

import scrapy
from scrapy import Request


class EastSpider(scrapy.Spider):
    name = "East"
    allowed_domains = ["****.com"]
    start_urls = ["http://finance.***.com/news.html"]

    def parse(self, response):
        # Follow the pagination links.
        nextUrl = response.xpath("//*[contains(@class, 'page-btn')]/@href")
        for url in nextUrl.extract():
            time.sleep(1)
            yield Request(urljoin(response.url, url))

        # Follow the article links and parse those pages too.
        contentUrl = response.xpath("//p[@class='title']/a/@href")
        for url in contentUrl.extract():
            time.sleep(1)
            yield Request(url, callback=self.parse)
The code is as above, but after running scrapy crawl East -o East.csv on the command line, East.csv is an empty file with nothing written in it.
People say the spider has to yield something, but I can't get it to work myself. I tried adding yield url and yield urls outside the for loops, which complained that the name is referenced before it is defined; adding them inside the for loops had no effect, and the file is still empty.

Mar. 30, 2021

No item pipeline is defined. For more information, please see https://www.cnblogs.com/wzjbg.
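
A note on why the file stays empty: the -o feed export (and any item pipeline) only ever sees the items the spider yields, and the parse method above yields only Request objects, so there is nothing to write. Below is a minimal sketch of a parse method that also yields items; the title and url field names are illustrative, and the xpaths are assumptions carried over from the question, not verified against the real page.

import scrapy
from urllib.parse import urljoin


class EastSpider(scrapy.Spider):
    name = "East"
    allowed_domains = ["****.com"]
    start_urls = ["http://finance.***.com/news.html"]

    def parse(self, response):
        # Yield one dict per article; the feed export turns each into a CSV row.
        # The xpath and field names here are assumptions, not taken from the real page.
        for link in response.xpath("//p[@class='title']/a"):
            yield {
                "title": link.xpath("text()").get(),
                "url": urljoin(response.url, link.xpath("@href").get()),
            }

        # Keep following pagination; Requests alone never write anything to the CSV.
        for href in response.xpath("//*[contains(@class, 'page-btn')]/@href").getall():
            yield scrapy.Request(urljoin(response.url, href), callback=self.parse)

With items being yielded, scrapy crawl East -o East.csv writes one row per item even without a custom pipeline; pipelines (registered via ITEM_PIPELINES in settings.py) are only needed for extra processing, such as cleaning items or storing them in a database. Also note that -o appends to an existing file, while -O (Scrapy 2.4+) overwrites it.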
