scrapy not working for a YouTube search query? Returns 404

Problem description

I am trying to search YouTube for a given query and scrape the video information with Scrapy, but somehow, when I set the spider's start_urls to:

start_urls = [ ........, 'http://www.youtube.com/results?search_query=web+development', ........., ]

it says the request is forbidden by robots.txt and returns a 404 response. When I run the command scrapy shell url from outside the project, it returns a 200 response, yet the same command (scrapy shell url) run from inside the project returns 404. How can I get my spider to work here? What headers or anything else do I need to add? Many thanks in advance.

Here are the code and the log:

import scrapy

class YoutubeSpider(scrapy.Spider):
    name = 'youtube'
    allowed_domains = ['youtube.com']
    start_urls = [
        "http://youtube.com/results?search_query=web+development"
    ]

    def parse(self, response):
        print('*************success*******************************************')
        print(self.start_urls)




2019-09-07 18:43:15 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: emailExtractor)
2019-09-07 18:43:15 [scrapy.utils.log] INFO: Versions: lxml 4.4.0.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.7.0, Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c  28 May 2019), cryptography 2.7, Platform Windows-8.1-6.3.9600-SP0
2019-09-07 18:43:15 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'emailExtractor', 'LOG_FILE': 'D:/log.txt', 'NEWSPIDER_MODULE': 'emailExtractor.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['emailExtractor.spiders']}
2019-09-07 18:43:15 [scrapy.extensions.telnet] INFO: Telnet Password: 7d4ad6d6005e1b68
2019-09-07 18:43:15 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2019-09-07 18:43:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-09-07 18:43:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-09-07 18:43:16 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-09-07 18:43:16 [scrapy.core.engine] INFO: Spider opened
2019-09-07 18:43:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-09-07 18:43:16 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2019-09-07 18:43:17 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.youtube.com/robots.txt> (referer: None)
2019-09-07 18:43:17 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET http://www.youtube.com/results?search_query=web+development>
2019-09-07 18:43:17 [scrapy.core.engine] INFO: Closing spider (finished)
2019-09-07 18:43:17 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 1,
 'downloader/request_bytes': 224,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 679,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 1.585362,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 9, 7, 13, 13, 17, 679025),
 'log_count/DEBUG': 2,
 'log_count/INFO': 10,
 'response_received_count': 1,
 'robotstxt/forbidden': 1,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 9, 7, 13, 13, 16, 93663)}
2019-09-07 18:43:17 [scrapy.core.engine] INFO: Spider closed (finished)

Tags: python, scrapy

Solution


By default, a Scrapy project respects the site's robots.txt policy (see the documentation): the project template generated by scrapy startproject sets ROBOTSTXT_OBEY to True, which matches the 'ROBOTSTXT_OBEY': True entry in the overridden settings of your log. That is why the only request actually sent was the one for http://www.youtube.com/robots.txt (a 200), after which the search URL was dropped with "Forbidden by robots.txt"; the request never reached YouTube at all. It also explains the scrapy shell difference: inside the project your settings.py applies, while outside it Scrapy falls back to its global defaults, where ROBOTSTXT_OBEY is False. To change this behaviour, set ROBOTSTXT_OBEY to False in your project's settings.py file.
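A minimal sketch of both options, reusing the spider from the question (the settings.py path assumes the standard layout created by scrapy startproject, with the emailExtractor module name taken from your log):

# emailExtractor/settings.py -- project-wide: no spider will check robots.txt
ROBOTSTXT_OBEY = False

Or, to disable the check for this one spider while leaving the rest of the project untouched:

import scrapy

class YoutubeSpider(scrapy.Spider):
    name = 'youtube'
    allowed_domains = ['youtube.com']
    start_urls = [
        "http://youtube.com/results?search_query=web+development"
    ]
    # Per-spider override: skips the robots.txt check for this spider
    # only, without changing the project-wide default in settings.py.
    custom_settings = {
        'ROBOTSTXT_OBEY': False,
    }

    def parse(self, response):
        print(self.start_urls)

The per-spider form is the safer choice if the same project also crawls sites whose robots.txt you do want to honor.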

