python - Downloading thousands of images using Asyncio and Aiohttp
Problem description
I have been trying to download thousands of images to my local file system, but it is not working correctly: once I have downloaded about 5,000 images, separated into directories, an exception called asyncio.exceptions.TimeoutError is raised.
The first time I executed the script below, it downloaded 16,000 images, but each time I run it again the number of downloaded images decreases, and at the moment I only get about 5,000 images.
This is the script I implemented:
import os
import asyncio
import aiofiles
import async_timeout
from aiohttp import ClientSession
from generator import generate_hash
from logger import logger
from typing import List, Dict, Any

async def download_file(session: Any, remote_url: str, filename: str) -> None:
    try:
        async with async_timeout.timeout(120):
            async with session.get(remote_url) as response:
                if response.status == 200:
                    async with aiofiles.open(filename, mode='wb') as f:
                        async for data in response.content.iter_chunked(1024):
                            await f.write(data)
                else:
                    logger.error(f"Error to get (unknown) from Remote Server")
    except asyncio.TimeoutError:
        logger.error(f"Timeout error to download (unknown) into Local Server")
        raise

async def download_files(images: List[Dict[str, Any]], path: str) -> None:
    headers = {"user-agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"}
    async with ClientSession(headers=headers) as session:
        tasks = [asyncio.ensure_future(download_file(session, image['resource'], get_filename(image, path))) for image in images]
        await asyncio.gather(*tasks)

def download_images(images: List[Dict[str, Any]], path: str) -> None:
    try:
        loop = asyncio.get_event_loop()
        future = asyncio.ensure_future(download_files(images, path))
        loop.run_until_complete(future)
        logger.info(f'Images from Remote Server have been downloaded successfully')
    except Exception as error:
        logger.error(f'Error to download images from Remote Server: {error}')
        raise

def get_filename(image: Dict[str, Any], path: str) -> str:
    image_dir = '{}/{}'.format(path, image['id'])
    image_file = '{}.jpg'.format(generate_hash(image['resource']))
    if not os.path.exists(image_dir):
        os.makedirs(image_dir)
    return os.path.join(image_dir, image_file)

def main():
    images = [
        {
            'id': '10755431',
            'resource': 'http://image1.jpg'
        },
        {
            'id': '10755432',
            'resource': 'http://image2.jpg'
        },
        {
            'id': '101426201',
            'recurso': 'http://image3.jpg'
        }
    ]
    IMAGES_PATH = '/home/stivenramireza'
    download_images(images, IMAGES_PATH)

if __name__ == "__main__":
    main()
I got this error:
ERROR:root:Timeout error to download /home/stivenramireza/10755431/664e3bdd10cd69452774f38ec822a9eb.jpg into Local Server
ERROR:root:Error to download images from Remote Server:
Traceback (most recent call last):
File "/home/stivenramireza/storage/main.py", line 17, in download_file
async for data in response.content.iter_chunked(1024):
File "/home/stivenramireza/.local/lib/python3.8/site-packages/aiohttp/streams.py", line 39, in __anext__
rv = await self.read_func()
File "/home/stivenramireza/.local/lib/python3.8/site-packages/aiohttp/streams.py", line 368, in read
await self._wait('read')
File "/home/stivenramireza/.local/lib/python3.8/site-packages/aiohttp/streams.py", line 296, in _wait
await waiter
asyncio.exceptions.CancelledError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 70, in <module>
main()
File "main.py", line 67, in main
download_images(images, IMAGES_PATH)
File "/home/stivenramireza/storage/main.py", line 34, in download_images
loop.run_until_complete(future)
File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/home/stivenramireza/storage/main.py", line 28, in download_files
await asyncio.gather(*[asyncio.ensure_future(download_file(session, image['recurso'], get_filename(image, path))) for image in images])
File "/home/stivenramireza/storage/main.py", line 20, in download_file
logger.error(f"Error to get (unknown) from Remote Server")
File "/home/stivenramireza/.local/lib/python3.8/site-packages/async_timeout/__init__.py", line 55, in __aexit__
self._do_exit(exc_type)
File "/home/stivenramireza/.local/lib/python3.8/site-packages/async_timeout/__init__.py", line 92, in _do_exit
raise asyncio.TimeoutError
asyncio.exceptions.TimeoutError
What should I do?
Thanks in advance.
Solution
Your download_file function catches the timeout error and re-raises it. Your download_files function uses asyncio.gather(), which exits at the first exception and propagates it to the caller. It is reasonable to assume that when downloading a large number of files, sooner or later one of them will time out, and when that happens your whole program gets interrupted.
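That default gather() behavior can be reproduced in isolation. A minimal sketch with hypothetical tasks (not part of the original script), where one simulated download raises a timeout:

```python
import asyncio

async def task(i):
    # one of the tasks fails; the rest complete normally
    if i == 2:
        raise asyncio.TimeoutError(f"task {i} timed out")
    await asyncio.sleep(0)
    return i

async def run_batch():
    try:
        # by default, gather() propagates the first exception to the
        # caller, so a single failure aborts the whole batch
        return await asyncio.gather(*(task(i) for i in range(5)))
    except asyncio.TimeoutError as exc:
        return f"whole batch aborted: {exc}"

print(asyncio.run(run_batch()))  # → whole batch aborted: task 2 timed out
```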
What should I do about it?
That depends on what you want your program to do when a timeout occurs. For example, you might want to retry that file, or you might want to give up. But you most likely don't want a single file's timeout to interrupt the entire download.
While re-raising an exception you've caught is the right thing to do in many situations, it is not the right thing here. You can change the raise at the end of download_file to return (remote_url, filename), which will cause gather() to return a list of the failed downloads, which you can then try to download again.
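A self-contained sketch of that pattern, with the network call simulated by asyncio.sleep so the example runs without aiohttp (the URLs and the 0.1 s timeout are made up for illustration):

```python
import asyncio

async def fetch(remote_url):
    # simulated network call: 'slow' URLs hang past the timeout
    await asyncio.sleep(1.0 if "slow" in remote_url else 0)
    return b"data"

async def download_file(remote_url, filename):
    try:
        await asyncio.wait_for(fetch(remote_url), timeout=0.1)
        return None  # success
    except asyncio.TimeoutError:
        # instead of re-raising, report the failure to the caller
        return (remote_url, filename)

async def download_files(images):
    tasks = [download_file(img["resource"], img["id"] + ".jpg") for img in images]
    results = await asyncio.gather(*tasks)
    # gather() now runs every task to completion; collect the
    # failures so they can be retried in a second pass
    return [r for r in results if r is not None]

images = [
    {"id": "1", "resource": "http://fast/image1.jpg"},
    {"id": "2", "resource": "http://slow/image2.jpg"},
]
failed = asyncio.run(download_files(images))
print(failed)  # → [('http://slow/image2.jpg', '2.jpg')]
```

The same shape applies to the question's code: download_file returns the failed pair instead of re-raising, and download_files filters gather()'s results into a retry list.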