Scrapy handshake failure with TLSv1.0 through a proxy

Problem description

I am currently trying to develop a web crawler with Scrapy to scrape a website that cannot be reached from outside my company. The catch is that I have to go through a proxy; I got that working and was able to run my spider against http://quotes.toscrape.com. The problem is that the site I actually need to crawl uses TLS 1.0, and I have tried several solutions that did not work:

First solution:

import scrapy
from w3lib.http import basic_auth_header

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'https://10.20.106.170/page.aspx'
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse,
                meta={'proxy': 'http://<my_proxy_url>:<my_proxy_port>'},
                headers={'Proxy-Authorization': basic_auth_header('<my_id>', '<my_pwd>')})

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

Output:

    2018-09-26 14:38:00 [twisted] CRITICAL: Error during info_callback
Traceback (most recent call last):
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\twisted\protocols\tls.py", line 315, in dataReceived
    self._checkHandshakeStatus()
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\twisted\protocols\tls.py", line 235, in _checkHandshakeStatus
    self._tlsConnection.do_handshake()
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\OpenSSL\SSL.py", line 1906, in do_handshake
    result = _lib.SSL_do_handshake(self._ssl)
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\OpenSSL\SSL.py", line 1288, in wrapper
    callback(Connection._reverse_mapping[ssl], where, return_code)
--- <exception caught here> ---
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\twisted\internet\_sslverify.py", line 1102, in infoCallback
    return wrapped(connection, where, ret)
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\scrapy\core\downloader\tls.py", line 67, in _identityVerifyingInfoCallback
    verifyHostname(connection, self._hostnameASCII)
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\service_identity\pyopenssl.py", line 47, in verify_hostname
    cert_patterns=extract_ids(connection.get_peer_certificate()),
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\service_identity\pyopenssl.py", line 75, in extract_ids
    ids.append(DNSPattern(n.getComponent().asOctets()))
  File "C:\Users\1etiennr\Anaconda\lib\site-packages\service_identity\_common.py", line 156, in __init__
    "Invalid DNS pattern {0!r}.".format(pattern)
service_identity.exceptions.CertificateError: Invalid DNS pattern '10.20.106.170'.

2018-09-26 14:38:00 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://10.20.106.170/link.aspx> (failed 3 times): [<twisted.python.failure.Failure service_identity.exceptions.CertificateError: Invalid DNS pattern '10.20.106.170'.>]
2018-09-26 14:38:00 [scrapy.core.scraper] ERROR: Error downloading <GET https://10.20.106.170/link.aspx>: [<twisted.python.failure.Failure service_identity.exceptions.CertificateError: Invalid DNS pattern '10.20.106.170'.>]
2018-09-26 14:38:00 [scrapy.core.engine] INFO: Closing spider (finished)
2018-09-26 14:38:00 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 6,
 'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 6,
 'downloader/request_bytes': 1548,
 'downloader/request_count': 6,
 'downloader/request_method_count/GET': 6,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 9, 26, 12, 38, 0, 338000),
 'log_count/CRITICAL': 6,
 'log_count/DEBUG': 7, 

After finding out that the website uses TLS 1.0, I tried adding a custom setting like this:

import scrapy
from w3lib.http import basic_auth_header

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    custom_settings = {
        'DOWNLOADER_CLIENT_TLS_METHOD': 'TLSv1.0'
    }

    def start_requests(self):
        urls = [
            'https://10.20.106.170/page.aspx'
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse,
                meta={'proxy': 'http://<my_proxy_url>:<my_proxy_port>'},
                headers={'Proxy-Authorization': basic_auth_header('<my_id>', '<my_pwd>')})

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

Unfortunately, after doing this I get exactly the same error, and I don't know what else I can try to get unstuck.
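For reference, a standalone probe along the lines of the sketch below (it bypasses the proxy, so it assumes the host is reachable directly, and it deliberately skips certificate verification because the target is a bare IP address) can show whether a TLS 1.0 handshake with the server succeeds at all outside Scrapy:

import socket
import ssl

HOST = "10.20.106.170"  # internal host from the question
PORT = 443

# TLS 1.0-only client context; certificate and hostname checks are
# disabled on purpose, since the server presents a certificate for a
# bare IP address (the check the Scrapy traceback is complaining about).
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

raw_sock = socket.create_connection((HOST, PORT), timeout=10)
try:
    tls_sock = context.wrap_socket(raw_sock)
    print("Handshake OK, negotiated cipher: %s" % (tls_sock.cipher(),))
    tls_sock.close()
finally:
    raw_sock.close()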

If you have any ideas, I would be glad to hear them!

Thanks in advance.

Tags: python, ssl, web-scraping, proxy, scrapy

Solution


I believe this is a bug, and it has already been fixed in Scrapy version 1.5.1; upgrading to that release (or later) should resolve it.
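If upgrading is an option, a minimal sanity check like the sketch below (it only assumes that scrapy.__version__ is a plain dotted version string) can confirm the installed release before re-running the spider:

import scrapy

# Minimal version guard (a sketch): the "Invalid DNS pattern" failure on
# IP-address URLs is reported fixed in Scrapy 1.5.1, so fail fast when an
# older release is installed.
installed = tuple(int(part) for part in scrapy.__version__.split(".")[:3])
if installed < (1, 5, 1):
    raise RuntimeError("Scrapy %s is too old; upgrade to 1.5.1 or later"
                       % scrapy.__version__)
print("Scrapy %s should no longer hit this error" % scrapy.__version__)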

