Why won't my "very simple" CrawlSpider connect to any pages?

Question

I've gone through the documentation, but I must be overlooking something basic. It's just a spider that starts at http://quotes.toscrape.com/, then uses a single rule and a parse function to log links. But it doesn't crawl any pages, not even the start_urls.

Here is the code:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class Crawl_All(CrawlSpider):
    name = 'Crawl_All'
    strat_urls = ['http://quotes.toscrape.com/']
    rules = [
        Rule(LinkExtractor(), callback='Parse_for_new_url', follow=True),
            ]

    def Parse_for_new_url(self, response):
        self.logger.log('got a new url:', response.url)

Here is the output:

2020-02-27 13:58:55 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: Auto_Contest)
2020-02-27 13:58:55 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.7.6 (default, Jan  8 2020, 19:59:22) - [GCC 7.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform Linux-5.3.0-40-generic-x86_64-with-debian-buster-sid
2020-02-27 13:58:55 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'Auto_Contest', 'NEWSPIDER_MODULE': 'Auto_Contest.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['Auto_Contest.spiders']}
2020-02-27 13:58:55 [scrapy.extensions.telnet] INFO: Telnet Password: 928bba99b8a0c238
2020-02-27 13:58:56 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2020-02-27 13:58:56 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-02-27 13:58:56 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-02-27 13:58:56 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-02-27 13:58:56 [scrapy.core.engine] INFO: Spider opened
2020-02-27 13:58:56 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-02-27 13:58:56 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-02-27 13:58:56 [scrapy.core.engine] INFO: Closing spider (finished)
2020-02-27 13:58:56 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 2, 27, 12, 58, 56, 114277),
 'log_count/INFO': 9,
 'memusage/max': 54910976,
 'memusage/startup': 54910976,
 'start_time': datetime.datetime(2020, 2, 27, 12, 58, 56, 104321)}
2020-02-27 13:58:56 [scrapy.core.engine] INFO: Spider closed (finished)

EDIT: Solved. It turned out to be a simple typo: strat_urls should have been start_urls.

Tags: python, scrapy, web-crawler

Answer

You have a simple typo: strat_urls should be start_urls.
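
This also explains why the log shows "Closing spider (finished)" right after "Spider opened": the Spider base class already defines start_urls = [], so the misspelled strat_urls is just an unused extra attribute and the default start_requests() has nothing to yield. A simplified sketch of the relevant default behavior (not the exact Scrapy source):

from scrapy import Request

class Spider:
    start_urls = []  # fallback, so a misspelled strat_urls goes unnoticed

    def start_requests(self):
        # An empty start_urls yields zero requests, so the engine opens
        # the spider and immediately closes it as 'finished'.
        for url in self.start_urls:
            yield Request(url, dont_filter=True)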


You also have to call log() with two values:

  • the level of the message you are sending (i.e. WARNING, DEBUG, etc.),

  • a single string, so you have to concatenate 'got a new url: ' + response.url

You can also use the predefined level methods (warning(), debug(), etc.), which don't need the first argument but still take a single string:

self.logger.warning('got a new url:' + response.url)
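
Since Scrapy's spider logger is a standard Python logger, the generic log() call from above would look like this (a minimal sketch; logging.WARNING is the stdlib level constant, and %-style lazy formatting can replace the manual concatenation):

import logging

self.logger.log(logging.WARNING, 'got a new url: %s', response.url)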

Here is my code, which runs without creating a project:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class Crawl_All(CrawlSpider):
    name = 'Crawl_All'
    start_urls = ['http://quotes.toscrape.com/']
    rules = [Rule(LinkExtractor(), callback='Parse_for_new_url', follow=True),]

    def Parse_for_new_url(self, response):
        #print(response.url)
        self.logger.warning('got a new url:' + response.url)

from scrapy.crawler import CrawlerProcess

c = CrawlerProcess({
    'USER_AGENT': 'Mozilla/5.0',
    # save in file CSV, JSON or XML
    'FEED_FORMAT': 'csv',     # csv, json, xml
    'FEED_URI': 'output.csv', #
})
c.crawl(Crawl_All)
c.start()
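
Save it as a single script and run it with the plain Python interpreter; CrawlerProcess starts Twisted's reactor and blocks until the crawl finishes. Note that the callback only logs, so the configured output.csv stays empty. A hypothetical variation (not part of the original answer) that would actually write rows to the CSV feed:

    def Parse_for_new_url(self, response):
        # Yielding a dict hands the data to Scrapy's feed exporter,
        # which writes one row per crawled URL to output.csv.
        yield {'url': response.url}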
