python - aiohttp concurrent GET requests raise ClientConnectorError(8, 'nodename nor servname provided, or not known')
Question
I'm stumped by what appears to be an asyncio/aiohttp-related issue: when sending a large number of concurrent GET requests, over 85% of them raise aiohttp.client_exceptions.ClientConnectorError, ultimately rooted in
socket.gaierror(8, 'nodename nor servname provided, or not known')
This exception is not raised when sending a single GET request or performing low-level DNS resolution on the host/port.
Although my real code does a lot of customization, such as using a custom TCPConnector instance, I can reproduce the problem using only "default" aiohttp class instances and parameters, as shown below.
I traced the traceback: the root of the exception is DNS resolution. It comes from aiohttp.TCPConnector._resolve_host(), called by the _create_direct_connection method.
I have also tried:
- with (and without) aiodns
- sudo killall -HUP mDNSResponder
- passing family=socket.AF_INET to TCPConnector (this uses the int 2 rather than the default 0, though I'm fairly sure aiodns uses it regardless)
- ssl=True and ssl=False
All to no avail.
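For context, what the family=socket.AF_INET parameter changes at the level of the underlying getaddrinfo call can be shown with the stdlib alone (a sketch using localhost so it runs offline; it illustrates the parameter, it is not a fix):

```python
import socket

# family=0 (AF_UNSPEC, TCPConnector's default) may return both IPv4
# and IPv6 answers; family=socket.AF_INET (the int 2 on most
# platforms) restricts resolution to IPv4 only.
both = socket.getaddrinfo("localhost", 443, type=socket.SOCK_STREAM)
v4 = socket.getaddrinfo(
    "localhost", 443, family=socket.AF_INET, type=socket.SOCK_STREAM
)
print(all(info[0] == socket.AF_INET for info in v4))  # True
```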
The full reproducible code is below. The input URLs are at https://gist.github.com/bsolomon1124/fc625b624dd26ad9b5c39ccb9e230f5a.
import asyncio
import itertools

import aiohttp
import aiohttp.client_exceptions
from yarl import URL

ua = itertools.cycle(
    (
        "Mozilla/5.0 (X11; Linux i686; rv:64.0) Gecko/20100101 Firefox/64.0",
        "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.10; rv:62.0) Gecko/20100101 Firefox/62.0",
        "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.13; ko; rv:1.9.1b2) Gecko/20081201 Firefox/60.0",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36"
    )
)

async def get(url, session) -> str:
    async with await session.request(
        "GET",
        url=url,
        raise_for_status=True,
        headers={'User-Agent': next(ua)},
        ssl=False
    ) as resp:
        text = await resp.text(encoding="utf-8", errors="replace")
    print("Got text for URL", url)
    return text

async def bulk_get(urls) -> list:
    async with aiohttp.ClientSession() as session:
        htmls = await asyncio.gather(
            *(
                get(url=url, session=session)
                for url in urls
            ),
            return_exceptions=True
        )
    return htmls

# See https://gist.github.com/bsolomon1124/fc625b624dd26ad9b5c39ccb9e230f5a
with open("/path/to/urls.txt") as f:
    urls = tuple(URL(i.strip()) for i in f)

res = asyncio.run(bulk_get(urls))  # urls: Tuple[yarl.URL]
c = 0
for i in res:
    if isinstance(i, aiohttp.client_exceptions.ClientConnectorError):
        print(i)
        c += 1
print(c)          # 21205 !!!!! (85% failure rate)
print(len(urls))  # 24934
Printing each exception string from res looks like this:
Cannot connect to host sigmainvestments.com:80 ssl:False [nodename nor servname provided, or not known]
Cannot connect to host giaoducthoidai.vn:443 ssl:False [nodename nor servname provided, or not known]
Cannot connect to host chauxuannguyen.org:80 ssl:False [nodename nor servname provided, or not known]
Cannot connect to host www.baohomnay.com:443 ssl:False [nodename nor servname provided, or not known]
Cannot connect to host www.soundofhope.org:80 ssl:False [nodename nor servname provided, or not known]
# And so on...
Frustratingly, I can ping these hosts with no problem, and even call the low-level ._resolve_host() successfully:

Bash/shell:
[~/] $ ping -c 5 www.hongkongfp.com
PING www.hongkongfp.com (104.20.232.8): 56 data bytes
64 bytes from 104.20.232.8: icmp_seq=0 ttl=56 time=11.667 ms
64 bytes from 104.20.232.8: icmp_seq=1 ttl=56 time=12.169 ms
64 bytes from 104.20.232.8: icmp_seq=2 ttl=56 time=12.135 ms
64 bytes from 104.20.232.8: icmp_seq=3 ttl=56 time=12.235 ms
64 bytes from 104.20.232.8: icmp_seq=4 ttl=56 time=14.252 ms
--- www.hongkongfp.com ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 11.667/12.492/14.252/0.903 ms
Python:
In [1]: import asyncio
...: from aiohttp.connector import TCPConnector
...: from clipslabapp.ratemgr import default_aiohttp_tcpconnector
...:
...:
...: async def main():
...: conn = default_aiohttp_tcpconnector()
...: i = await asyncio.create_task(conn._resolve_host(host='www.hongkongfp.com', port=443))
...: return i
...:
...: i = asyncio.run(main())
In [2]: i
Out[2]:
[{'hostname': 'www.hongkongfp.com',
'host': '104.20.232.8',
'port': 443,
'family': <AddressFamily.AF_INET: 2>,
'proto': 6,
'flags': <AddressInfo.AI_NUMERICHOST: 4>},
{'hostname': 'www.hongkongfp.com',
'host': '104.20.233.8',
'port': 443,
'family': <AddressFamily.AF_INET: 2>,
'proto': 6,
'flags': <AddressInfo.AI_NUMERICHOST: 4>}]
My setup:
- Python 3.7.1
- aiohttp 3.5.4
- Occurs on both Mac OSX High Sierra and Ubuntu 18.04
Some info on the exception itself:
The exception is aiohttp.client_exceptions.ClientConnectorError, which wraps the underlying socket.gaierror (an OSError subclass).
Because I pass return_exceptions=True to asyncio.gather(), I can retrieve the exception instances themselves for inspection. Here is one example:
In [18]: i
Out[18]:
aiohttp.client_exceptions.ClientConnectorError(8,
'nodename nor servname provided, or not known')
In [19]: i.host, i.port
Out[19]: ('www.hongkongfp.com', 443)
In [20]: i._conn_key
Out[20]: ConnectionKey(host='www.hongkongfp.com', port=443, is_ssl=True, ssl=False, proxy=None, proxy_auth=None, proxy_headers_hash=None)
In [21]: i._os_error
Out[21]: socket.gaierror(8, 'nodename nor servname provided, or not known')
In [22]: raise i.with_traceback(i.__traceback__)
---------------------------------------------------------------------------
gaierror Traceback (most recent call last)
~/Scripts/python/projects/clab/lib/python3.7/site-packages/aiohttp/connector.py in _create_direct_connection(self, req, traces, timeout, client_error)
954 port,
--> 955 traces=traces), loop=self._loop)
956 except OSError as exc:
~/Scripts/python/projects/clab/lib/python3.7/site-packages/aiohttp/connector.py in _resolve_host(self, host, port, traces)
824 addrs = await \
--> 825 self._resolver.resolve(host, port, family=self._family)
826 if traces:
~/Scripts/python/projects/clab/lib/python3.7/site-packages/aiohttp/resolver.py in resolve(self, host, port, family)
29 infos = await self._loop.getaddrinfo(
---> 30 host, port, type=socket.SOCK_STREAM, family=family)
31
/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py in getaddrinfo(self, host, port, family, type, proto, flags)
772 return await self.run_in_executor(
--> 773 None, getaddr_func, host, port, family, type, proto, flags)
774
/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/thread.py in run(self)
56 try:
---> 57 result = self.fn(*self.args, **self.kwargs)
58 except BaseException as exc:
/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py in getaddrinfo(host, port, family, type, proto, flags)
747 addrlist = []
--> 748 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
749 af, socktype, proto, canonname, sa = res
gaierror: [Errno 8] nodename nor servname provided, or not known
The above exception was the direct cause of the following exception:
ClientConnectorError Traceback (most recent call last)
<ipython-input-22-72402d8c3b31> in <module>
----> 1 raise i.with_traceback(i.__traceback__)
<ipython-input-1-2bc0f5172de7> in get(url, session)
19 raise_for_status=True,
20 headers={'User-Agent': next(ua)},
---> 21 ssl=False
22 ) as resp:
23 return await resp.text(encoding="utf-8", errors="replace")
~/Scripts/python/projects/clab/lib/python3.7/site-packages/aiohttp/client.py in _request(self, method, str_or_url, params, data, json, cookies, headers, skip_auto_headers, auth, allow_redirects, max_redirects, compress, chunked, expect100, raise_for_status, read_until_eof, proxy, proxy_auth, timeout, verify_ssl, fingerprint, ssl_context, ssl, proxy_headers, trace_request_ctx)
474 req,
475 traces=traces,
--> 476 timeout=real_timeout
477 )
478 except asyncio.TimeoutError as exc:
~/Scripts/python/projects/clab/lib/python3.7/site-packages/aiohttp/connector.py in connect(self, req, traces, timeout)
520
521 try:
--> 522 proto = await self._create_connection(req, traces, timeout)
523 if self._closed:
524 proto.close()
~/Scripts/python/projects/clab/lib/python3.7/site-packages/aiohttp/connector.py in _create_connection(self, req, traces, timeout)
852 else:
853 _, proto = await self._create_direct_connection(
--> 854 req, traces, timeout)
855
856 return proto
~/Scripts/python/projects/clab/lib/python3.7/site-packages/aiohttp/connector.py in _create_direct_connection(self, req, traces, timeout, client_error)
957 # in case of proxy it is not ClientProxyConnectionError
958 # it is problem of resolving proxy ip itself
--> 959 raise ClientConnectorError(req.connection_key, exc) from exc
960
961 last_exc = None # type: Optional[Exception]
ClientConnectorError: Cannot connect to host www.hongkongfp.com:443 ssl:False [nodename nor servname provided, or not known
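As an aside, the return_exceptions=True pattern used above generalizes to any mix of succeeding and failing tasks. A minimal self-contained sketch (no network; maybe_fail is a made-up stand-in for the failing GETs):

```python
import asyncio

async def maybe_fail(n: int) -> int:
    # Half of the tasks raise, mimicking the failed GETs in the bulk run.
    if n % 2:
        raise OSError(8, "nodename nor servname provided, or not known")
    return n

async def main():
    # With return_exceptions=True, gather() returns exception instances
    # in-place instead of propagating the first failure.
    results = await asyncio.gather(
        *(maybe_fail(n) for n in range(10)),
        return_exceptions=True,
    )
    # Partition results (order is preserved): successes vs. exceptions.
    oks = [r for r in results if not isinstance(r, Exception)]
    errs = [r for r in results if isinstance(r, Exception)]
    return oks, errs

oks, errs = asyncio.run(main())
print(len(oks), len(errs))  # 5 5
```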
Why I don't think this is an OS-level DNS resolution problem:
I can successfully ping the IP addresses of my ISP's DNS servers, as given in (Mac OSX) System Preferences > Network > DNS:
[~/] $ ping -c 2 75.75.75.75
PING 75.75.75.75 (75.75.75.75): 56 data bytes
64 bytes from 75.75.75.75: icmp_seq=0 ttl=57 time=16.478 ms
64 bytes from 75.75.75.75: icmp_seq=1 ttl=57 time=21.042 ms
--- 75.75.75.75 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 16.478/18.760/21.042/2.282 ms
[~/] $ ping -c 2 75.75.76.76
PING 75.75.76.76 (75.75.76.76): 56 data bytes
64 bytes from 75.75.76.76: icmp_seq=0 ttl=54 time=33.904 ms
64 bytes from 75.75.76.76: icmp_seq=1 ttl=54 time=32.788 ms
--- 75.75.76.76 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 32.788/33.346/33.904/0.558 ms
[~/] $ ping6 -c 2 2001:558:feed::1
PING6(56=40+8+8 bytes) 2601:14d:8b00:7d0:6587:7cfc:e2cc:82a0 --> 2001:558:feed::1
16 bytes from 2001:558:feed::1, icmp_seq=0 hlim=57 time=14.927 ms
16 bytes from 2001:558:feed::1, icmp_seq=1 hlim=57 time=14.585 ms
--- 2001:558:feed::1 ping6 statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 14.585/14.756/14.927/0.171 ms
[~/] $ ping6 -c 2 2001:558:feed::2
PING6(56=40+8+8 bytes) 2601:14d:8b00:7d0:6587:7cfc:e2cc:82a0 --> 2001:558:feed::2
16 bytes from 2001:558:feed::2, icmp_seq=0 hlim=54 time=12.694 ms
16 bytes from 2001:558:feed::2, icmp_seq=1 hlim=54 time=11.555 ms
--- 2001:558:feed::2 ping6 statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 11.555/12.125/12.694/0.569 ms
Solution
After further investigation, this issue does not appear to be caused by aiohttp/asyncio directly, but rather by limits in two places:
- the capacity/rate limiting of the DNS server
- the system-level maximum number of open files.
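(An aside beyond the steps below: the process-local open-file limit can also be inspected, and the soft limit raised up to the hard limit, from Python itself via the stdlib resource module, no /etc edits required. A sketch:)

```python
import resource

# Per-process open-file limits (RLIMIT_NOFILE). Every concurrent
# socket consumes one file descriptor, so thousands of in-flight
# requests can exhaust the soft limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)

# Raising the soft limit up to the hard limit needs no root.
target = hard if hard != resource.RLIM_INFINITY else 4096
new_soft = min(max(soft, 4096), target)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```

Note this only helps with errno 24 ("Too many open files"), not with the DNS-side errno 8 failures.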
First, for anyone looking for a beefier DNS server (I will probably not go that route), the big-name options seem to be:
- 1.1.1.1 (Cloudflare)
- 8.8.8.8 (Google Public DNS)
- Amazon Route 53
(For someone like me who's short on networking concepts, this was a good introduction to DNS.)
The first thing I did was run the above on a beefed-up AWS EC2 instance: an h1.16xlarge running IO-optimized Ubuntu. I can't say this helped on its own, but it certainly didn't hurt. I'm not too familiar with the default DNS server that EC2 instances use, but when replicating the script above, the OSError with errno == 8 disappeared.
However, this surfaced a new exception: OSError with errno 24, "Too many open files". My patchwork solution (which I don't claim is the most sustainable or safest) was to raise the maximum open-file limits. I did that as follows:
sudo vim /etc/security/limits.conf
# Add these lines
root soft nofile 100000
root hard nofile 100000
ubuntu soft nofile 100000
ubuntu hard nofile 100000

sudo vim /etc/sysctl.conf
# Add this line
fs.file-max = 2097152

sudo sysctl -p

sudo vim /etc/pam.d/common-session
# Add this line
session required pam_limits.so

sudo reboot
Admittedly I'm feeling around in the dark here, but combining this with an asyncio.Semaphore(1024) (example here) resulted in exactly 0 of the two exceptions above being raised:
# Then call this from bulk_get() with an asyncio.Semaphore(n)
async def bounded_get(sem, url, session) -> str:
    async with sem:
        return await get(url, session)
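To see that the semaphore actually caps concurrency, here is a self-contained sketch with a dummy fake_get standing in for the aiohttp call (the names fake_get, state, and limit are invented for illustration):

```python
import asyncio

async def fake_get(url, sem, state):
    # Same shape as bounded_get above: acquire the semaphore before
    # doing any work. A sleep stands in for the HTTP round trip, and
    # we record how many coroutines are inside the semaphore at once.
    async with sem:
        state["active"] += 1
        state["peak"] = max(state["peak"], state["active"])
        await asyncio.sleep(0.01)  # the "HTTP round trip"
        state["active"] -= 1
        return url

async def bulk_get(urls, limit):
    sem = asyncio.Semaphore(limit)
    state = {"active": 0, "peak": 0}
    results = await asyncio.gather(*(fake_get(u, sem, state) for u in urls))
    return results, state["peak"]

results, peak = asyncio.run(bulk_get([f"url-{i}" for i in range(50)], limit=8))
print(peak)  # never exceeds 8
```

Bounding in-flight requests this way throttles both the file descriptors held open and the rate of getaddrinfo calls hitting the resolver, which is why it addresses both failure modes at once.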
Of the roughly 25k input URLs, only about 100 GET requests returned exceptions, mostly because those sites themselves were genuinely broken, and the total time to completion was within a few minutes, which is acceptable in my book.