Scrapy-Selenium Pagination

Problem Description

Can anyone help me? I'm practicing and I can't figure out what I'm doing wrong with pagination! The spider only returns the first page, and sometimes the error below shows up. Even when it runs without the error, it still only returns the first page.

"The source list for Content Security Policy directive 'frame-src' contains an invalid source '*trackcmp.net'. It will be ignored.", source: https://naturaldaterra.com.br/hortifruti.html?page=2

import scrapy
from scrapy_selenium import SeleniumRequest

class ComputerdealsSpider(scrapy.Spider):
    name = 'produtos'
    
    def start_requests(self):
        yield SeleniumRequest(
            url='https://naturaldaterra.com.br/hortifruti.html?page=1',
            wait_time=3,
            callback=self.parse
        )

    def parse(self, response):

        for produto in response.xpath("//div[@class='gallery-items-1IC']/div"):
            yield {
                'nome_produto': produto.xpath(".//div[@class='item-nameContainer-1kz']/span/text()").get(),
                'valor_produto': produto.xpath(".//span[@class='itemPrice-price-1R-']/text()").getall(),
            }
            
        next_page = response.xpath("//button[@class='tile-root-1uO'][1]/text()").get()
        if next_page:
            absolute_url = f"https://naturaldaterra.com.br/hortifruti.html?page={next_page}"
            yield SeleniumRequest(
                url=absolute_url,
                wait_time=3,
                callback=self.parse
            )

Tags: selenium, web-scraping, scrapy, scrapy-selenium

Solution


The problem is that your XPath selector returns None instead of the next page number. (The Content-Security-Policy warning you quoted is just console noise emitted by the page itself and is unrelated to the pagination issue.) Consider changing it from

next_page = response.xpath("//button[@class='tile-root-1uO'][1]/text()").get()

to

next_page = response.xpath("//button[@class='tile-root_active-TUl tile-root-1uO']/following-sibling::button[1]/text()").get()
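
To see the difference, here is a minimal sketch using parsel, the selector library Scrapy uses under the hood. The HTML is invented for illustration, with class names copied from the selectors above; on the real page the plain page-number buttons carry extra classes, which is why the exact-class match [@class='tile-root-1uO'] finds nothing:

from parsel import Selector

# Invented, simplified pagination markup; the real class attributes may differ
html = """
<button class="tile-root-1uO extra-cls">1</button>
<button class="tile-root_active-TUl tile-root-1uO">2</button>
<button class="tile-root-1uO extra-cls">3</button>
"""
sel = Selector(text=html)

# Original selector: the exact @class match fails because of the extra classes -> None
print(sel.xpath("//button[@class='tile-root-1uO'][1]/text()").get())

# Fixed selector: anchor on the active page button and take its next sibling -> '3'
print(sel.xpath("//button[@class='tile-root_active-TUl tile-root-1uO']"
                "/following-sibling::button[1]/text()").get())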

For your future projects, consider using scrapy-playwright for scraping JS-rendered websites. It is faster and simpler to use. Here is a sample implementation of the scraper using scrapy-playwright:

import scrapy
from scrapy.crawler import CrawlerProcess


class ComputerdealsSpider(scrapy.Spider):
    name = 'produtos'

    def start_requests(self):

        yield scrapy.Request(
            url='https://naturaldaterra.com.br/hortifruti.html?page=1',
            meta={"playwright": True}
        )

    def parse(self, response):
        for produto in response.xpath("//div[@class='gallery-items-1IC']/div"):
            yield {
                'nome_produto': produto.xpath(".//div[@class='item-nameContainer-1kz']/span/text()").get(),
                'valor_produto': produto.xpath(".//span[@class='itemPrice-price-1R-']/text()").getall(),
            }
        # scrape next page
        next_page = response.xpath(
            "//button[@class='tile-root_active-TUl tile-root-1uO']/following-sibling::button[1]/text()").get()
        # stop paginating once there is no next page button (i.e. on the last page)
        if next_page:
            yield scrapy.Request(
                url='https://naturaldaterra.com.br/hortifruti.html?page=' + next_page,
                meta={"playwright": True}
            )


if __name__ == "__main__":
    process = CrawlerProcess(settings={
        "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
        "DOWNLOAD_HANDLERS": {
            "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        },
    })
    process.crawl(ComputerdealsSpider)
    process.start()
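
Note that running this requires scrapy-playwright and a Playwright browser to be installed first (pip install scrapy-playwright, then playwright install). Also, the DOWNLOAD_HANDLERS setting above only routes https:// requests through Playwright; if the site ever serves plain http:// URLs, register the same handler under an "http" key as well.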
