Scrapy FormRequest not working on a credit card login form

Problem description

I can't get my Scrapy spider to crawl my Discover account page.

I'm new to Scrapy. I've read all the relevant documentation, but I can't seem to submit the form request correctly. I've added the form name, the user ID, and the password.

import scrapy

class DiscoverSpider(scrapy.Spider):
    name = "Discover"
    start_urls = ['https://www.discover.com']

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formname='loginForm',
            formdata={'userID': 'userID', 'password': 'password'},
            callback=self.after_login
        )

    def after_login(self, response):
        # Check that the login succeeded before going on.
        # response.body is bytes in Python 3, so compare against a
        # bytes literal (or use response.text for a str).
        if b"authentication failed" in response.body:
            self.logger.error("Login failed")
        return

After submitting the form, I expect the spider to crawl my account pages. Instead, the spider is redirected to "https://portal.discover.com/psv1/notification.html". Here is the spider's console output:

2018-12-26 11:39:46 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: 
MoneySpiders)
2018-12-26 11:39:46 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, 
libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.7.0, 
Python 3.7.0 (default, Jun 28 2018, 08:04:48) [MSC v.1912 64 bit (AMD64)], 
pyOpenSSL 18.0.0 (OpenSSL 1.0.2p  14 Aug 2018), cryptography 2.3.1, 
Platform Windows-10-10.0.17134-SP0
2018-12-26 11:39:46 [scrapy.crawler] INFO: Overridden settings: 
{'BOT_NAME': 'MoneySpiders', 'NEWSPIDER_MODULE': 'MoneySpiders.spiders', 
'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['MoneySpiders.spiders']}
2018-12-26 11:39:46 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2018-12-26 11:39:46 [scrapy.middleware] INFO: Enabled downloader 
middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-12-26 11:39:46 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-12-26 11:39:47 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-12-26 11:39:47 [scrapy.core.engine] INFO: Spider opened
2018-12-26 11:39:47 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 
0 pages/min), scraped 0 items (at 0 items/min)
2018-12-26 11:39:47 [scrapy.extensions.telnet] DEBUG: Telnet console 
listening on 
2018-12-26 11:39:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET 
https://www.discover.com/robots.txt> (referer: None)
2018-12-26 11:39:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET 
https://www.discover.com> (referer: None)
2018-12-26 11:39:48 [scrapy.core.engine] DEBUG: Crawled (200) <GET 
https://portal.discover.com/robots.txt> (referer: None)
2018-12-26 11:39:48 [scrapy.downloadermiddlewares.redirect] DEBUG: 
Redirecting (302) to <GET 
https://portal.discover.com/psv1/notification.html> from <POST 
https://portal.discover.com/customersvcs/universalLogin/signin>
2018-12-26 11:39:48 [scrapy.core.engine] DEBUG: Crawled (200) <GET 
https://portal.discover.com/psv1/notification.html> (referer: 
https://www.discover.com)
2018-12-26 11:39:48 [scrapy.core.scraper] ERROR: Spider error processing 
<GET https://portal.discover.com/psv1/notification.html> (referer: 
https://www.discover.com)

Tags: scrapy

Solution

From the response body I got this:

Your account cannot be accessed at this time. Outdated browsers can put your computer's security at risk. For the best experience on Discover.com, you may need to update your browser to the latest version and try again.

So it looks like the site does not recognize your spider as a valid browser. To work around this, you need to set an appropriate User-Agent, along with the other headers that browser commonly sends.
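The quickest way to do that in Scrapy is through the project settings (or the spider's `custom_settings` class attribute). A minimal sketch, assuming a Chrome-on-Windows identity; the exact User-Agent string and header values below are illustrative examples of what a real browser sends, not values the site publishes:

```python
# settings.py -- a minimal sketch; the User-Agent string and header
# values are examples of what a real browser sends, not values
# mandated by the site.
USER_AGENT = (
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
    'AppleWebKit/537.36 (KHTML, like Gecko) '
    'Chrome/71.0.3578.98 Safari/537.36'
)

# Headers merged into every request the spider sends.
DEFAULT_REQUEST_HEADERS = {
    'Accept': ('text/html,application/xhtml+xml,'
               'application/xml;q=0.9,*/*;q=0.8'),
    'Accept-Language': 'en-US,en;q=0.9',
}
```

To see which headers matter, open your browser's developer tools (Network tab), log in manually, and mirror the headers it sends. You can also set headers per request via the `headers=` argument of `Request`/`FormRequest`.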
