Scrapy, trying to crawl multiple pages

Problem description

I am new to Scrapy. In my first project I am trying to crawl a site with multiple pages. I get the data from the first page (index = 0), but I cannot get any data from the following pages:

https://www.leroymerlin.es/decoracion-navidena/arboles-navidad?sort=default>=4-col&offset=4&index=1

https://www.leroymerlin.es/decoracion-navidena/arboles-navidad?sort=default>=4-col&offset=4&index=2

https://www.leroymerlin.es/decoracion-navidena/arboles-navidad?sort=default>=4-col&offset=4&index=3

…

I have tried different approaches with Rules, but none of them work for me.

Here is my code:

from ..items import myfirstItem
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class myfirstSpider(CrawlSpider):
    name = 'myfirst'

    start_urls = ["https://www.leroymerlin.es/decoracion-navidena/arboles-navidad"]
    allowed_domains = ["leroymerlin.es"]

    rules = (
        # Follow the pagination "next" link.
        Rule(LinkExtractor(restrict_xpaths='//li[@class="next"]/a')),
        # Follow each product card and parse the product page.
        Rule(LinkExtractor(restrict_xpaths='//a[@class="boxCard"]'),
             callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        items = myfirstItem()
        items['product_name'] = response.css('.titleTechniqueSheet::text').extract()
        yield items

I have read thousands of posts on this same problem, but none of them worked for me. Any help?
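Before changing the Rules, it may help to confirm that the pagination and product links actually appear in the HTML Scrapy receives; if the site injects them with JavaScript, LinkExtractor will never see them. A quick check in scrapy shell (a sketch, using the listing URL and the XPaths from the spider above):

scrapy shell "https://www.leroymerlin.es/decoracion-navidena/arboles-navidad"
>>> # Does the "next" pagination link exist in the raw response?
>>> response.xpath('//li[@class="next"]/a').get()
>>> # Are the product cards present?
>>> response.xpath('//a[@class="boxCard"]').getall()

If both return empty results, the links are rendered client-side, and a Rule-based spider cannot follow them from the raw HTML.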

*Edit: following @Fura's suggestion, I found a better solution. This is what it looks like:

class myfirstSpider(CrawlSpider):
    name = 'myfirst'

    # Generate the paginated listing URLs up front instead of following "next" links.
    start_urls = [
        "https://www.leroymerlin.es/decoracion-navidena/arboles-navidad?index=%s" % page_number
        for page_number in range(1, 20)
    ]
    allowed_domains = ["leroymerlin.es"]

    rules = (
        # Product detail pages contain '/fp' in their URL.
        Rule(LinkExtractor(allow=r'/fp'), callback='parse_item'),
    )

    def parse_item(self, response):
        items = myfirstItem()
        items['product_name'] = response.css('.titleTechniqueSheet::text').extract()
        yield items
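For completeness, the spider imports an item class from items.py that is not shown in the question; a minimal sketch that matches the single field used above (the real project may define more fields):

import scrapy


class myfirstItem(scrapy.Item):
    # The only field parse_item fills in; extend as needed.
    product_name = scrapy.Field()

The spider can then be run with, e.g., scrapy crawl myfirst -o products.json to dump the scraped names.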

Tags: python, xpath, web-scraping, web-crawler
