Scrapy CrawlSpider skips links and does not return the response body

Problem description

I am currently trying to crawl this page: http://search.siemens.com/en/?q=iot

To do that I need to extract the links and parse them, which, as far as I have just learned, should be possible with the CrawlSpider class. But my implementation does not seem to work. For testing purposes I am trying to yield the response body of every page. Unfortunately, the spider only opens roughly every third link and does not give me a response body.

Any ideas what I am doing wrong?

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class SiemensCrawlSSpider(CrawlSpider):
    name = 'siemens_crawl_s'
    allowed_domains = ['search.siemens.com/en/?q=iot']
    start_urls = ['http://search.siemens.com/en/?q=iot']

    rules = (
        Rule(LinkExtractor(restrict_xpaths='.//dl[@id="search-resultlist"]/dt/a'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        yield response.body

Tags: python-3.x, scrapy, web-crawler

Solution


Set LOG_LEVEL = 'DEBUG' in settings.py and you can see that some requests are filtered out because of the allowed_domains parameter:

2019-05-10 00:38:27 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'www.siemens.com': <GET https://www.siemens.com/global/en/home/products/software/mindsphere-iot.html>
2019-05-10 00:38:27 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'www.industry.siemens.com.cn': <GET https://www.industry.siemens.com.cn/automation/cn/zh/pc-based-automation/industrial-iot/iok2k/Pages/iot.aspx>
2019-05-10 00:38:27 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'w3.siemens.com': <GET https://w3.siemens.com/mcms/pc-based-automation/en/industrial-iot>
2019-05-10 00:38:27 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'new.siemens.com': <GET https://new.siemens.com/global/en/products/services/iot-siemens.html>
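
For reference, a minimal settings.py sketch that enables this output ('DEBUG' is also Scrapy's default log level, so these lines may already be visible):

# settings.py
LOG_LEVEL = 'DEBUG'  # show DEBUG messages such as the offsite-filter lines above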

You can try allowed_domains = ['siemens.com', 'siemens.com.cn'],

or not set allowed_domains at all; a sketch of the adjusted spider follows the documentation link below.

https://docs.scrapy.org/en/latest/topics/spiders.html#scrapy.spiders.Spider.allowed_domains
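
Putting the suggestion together, a minimal sketch of the adjusted spider. The dict yielded from parse_item is my own addition, since a Scrapy callback has to yield items, dicts or requests rather than the raw bytes of response.body:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class SiemensCrawlSSpider(CrawlSpider):
    name = 'siemens_crawl_s'
    # Bare registered domains: subdomains such as www.siemens.com,
    # new.siemens.com and w3.siemens.com are then no longer filtered as offsite.
    allowed_domains = ['siemens.com', 'siemens.com.cn']
    start_urls = ['http://search.siemens.com/en/?q=iot']

    rules = (
        Rule(
            LinkExtractor(restrict_xpaths='.//dl[@id="search-resultlist"]/dt/a'),
            callback='parse_item',
            follow=True,
        ),
    )

    def parse_item(self, response):
        # Yield a dict instead of the raw response.body bytes,
        # so the scraped data shows up as items in the log/feed export.
        yield {'url': response.url, 'body': response.text}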

