Scrapy spider doesn't scrape anything

Problem description

I am trying to scrape this site https://www.bhp.com/media-and-insights/reports-and-presentations?q0_r=category%3dAnnual%2bReports but the spider doesn't return any loaded items. What am I missing?

I tested the XPaths with scrapy shell and they seem to work fine.
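For reference, the XPath logic itself can also be sanity-checked offline against a static snippet using only the standard library's limited XPath support (the markup below is illustrative, not the real page, which is rendered with JavaScript, so the served HTML may not match what the browser shows):

```python
# Offline check of the '//h2/a' extraction idea against hypothetical markup.
# xml.etree only supports a subset of XPath, so './/h2/a' stands in for '//h2/a'.
from xml.etree import ElementTree

html = """
<div>
  <h2><a href="/investors/annual-reporting">Annual Report 2018</a></h2>
  <div class="col-9"><p>Our yearly results.</p></div>
</div>
"""

root = ElementTree.fromstring(html)
titles = [a.text for a in root.findall('.//h2/a')]       # link texts
urls = [a.get('href') for a in root.findall('.//h2/a')]  # href attributes
print(titles)  # ['Annual Report 2018']
print(urls)    # ['/investors/annual-reporting']
```

If an XPath matches in the browser but not in a spider, the usual culprit is that the content is injected by JavaScript and absent from the raw response.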

Spider:

import scrapy
from third_stage.items import ThirdStageItem
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose
from urllib.parse import urljoin


class BhpSpider(scrapy.Spider):
    name = 'bhp'
    allowed_domains = ['web']
    start_urls = ['https://www.bhp.com/media-and-insights/reports-and-presentations?q0_r=category%3dAnnual%2bReports/']

    def parse(self, response):
        i = ItemLoader(item=ThirdStageItem(), response=response)
        i.add_xpath('title', '//h2/a/text()')
        i.add_xpath('description', '//*[@class="col-9"]/p/text()')
        i.add_xpath('info_url', '//h2/a/@href',
                    MapCompose(lambda i: urljoin(response.url, i)))
        return i.load_item()

Items:

import scrapy
from scrapy.item import Item
from scrapy.item import Field

class ThirdStageItem(Item):
    title = Field()
    description = Field()
    info_url = Field()
    pass

Settings:

BOT_NAME = 'third_stage'
SPIDER_MODULES = ['third_stage.spiders']
NEWSPIDER_MODULE = 'third_stage.spiders'
ROBOTSTXT_OBEY = False

Output:

2019-07-14 16:53:34 [scrapy.core.engine] INFO: Spider opened
2019-07-14 16:53:34 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-07-14 16:53:34 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2019-07-14 16:53:34 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.bhp.com/media-and-insights/reports-and-presentations?q0_r=category%3dAnnual%2bReports/> (referer: None)
2019-07-14 16:53:34 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.bhp.com/media-and-insights/reports-and-presentations?q0_r=category%3dAnnual%2bReports/>
{}
2019-07-14 16:53:34 [scrapy.core.engine] INFO: Closing spider (finished)
2019-07-14 16:53:34 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 288,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 6601,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 7, 14, 14, 53, 34, 791840),
 'item_scraped_count': 1,
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 7, 14, 14, 53, 34, 528793)}
2019-07-14 16:53:34 [scrapy.core.engine] INFO: Spider closed (finished)

Tags: python, python-3.x, web-scraping, scrapy

Solution

Your XPaths don't seem to work in Scrapy. I'm not sure what you want for the description, but you can find both the title and the info_url under this XPath: `response.xpath('//*[@class="lvl1"]/li/a')`. If you want to get multiple items (rather than one item containing all the data), you can change your parse method like this:

def parse(self, response):
    xpath_urls = response.xpath('//*[@class="lvl1"]/li/a')
    for xpath_url in xpath_urls:
        i = ItemLoader(item=ThirdStageItem(), response=response)
        title = xpath_url.xpath('./text()').extract_first()
        info_url = xpath_url.xpath('./@href').extract_first()
        i.add_value('title', title)
        i.add_value('info_url', urljoin(response.url, info_url))
        yield i.load_item()
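The `urljoin` call in the loop matters because the hrefs on the page are site-relative and must be resolved against the page URL before they are usable. A quick standard-library check (the paths here are illustrative):

```python
# urljoin resolves relative hrefs against the page URL, mirroring what a
# browser would do when following the link.
from urllib.parse import urljoin

page = 'https://www.bhp.com/media-and-insights/reports-and-presentations'

# A root-relative href replaces the whole path:
print(urljoin(page, '/investors/annual-reporting'))
# https://www.bhp.com/investors/annual-reporting

# A bare relative href resolves against the page's directory:
print(urljoin(page, 'annual-reporting'))
# https://www.bhp.com/media-and-insights/annual-reporting
```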

The item class you posted also doesn't match what you do in the spider. It should look like this:

class ThirdStageItem(Item):
    title = Field()
    description = Field()
    info_url = Field()
    pass
