Script to extract all images from a webpage

Problem description

I'm trying to extract all the images from a webpage with the following code, but it raises the error "'NoneType' object has no attribute 'group'". Can someone tell me what's wrong here?

import re
import requests
from bs4 import BeautifulSoup

site = 'http://pixabay.com'

response = requests.get(site)

soup = BeautifulSoup(response.text, 'html.parser')
img_tags = soup.find_all('img')

urls = [img['src'] for img in img_tags]


for url in urls:
    filename = re.search(r'/([\w_-]+[.](jpg|gif|png))$', url)
    with open(filename.group(1), 'wb') as f:
        if 'http' not in url:
            # sometimes an image source can be relative 
            # if it is provide the base url which also happens 
            # to be the site variable atm. 
            url = '{}{}'.format(site, url)
        response = requests.get(url)
        f.write(response.content)

Tags: python, python-3.x, beautifulsoup

Solution


Edit: for context, since the original question has been updated by someone else and the original code was changed, the pattern the user was originally using was r'/([\w_-]+.)$'. That was the initial problem, and knowing it will make the following answer make more sense:

I went with a similar pattern, r'/([\w_.-]+)$'. The pattern you were using doesn't allow the path to contain a . anywhere except as the last character, because a . outside of [] means "any character", and you had it right before $ (the end of the string). So I moved the . inside [], which makes it a literal . within the character class. That allows the pattern to capture the image filename at the end of the URL.
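To see the difference, here is a quick comparison of the two patterns against a made-up image path (the URL is just an example for illustration):

```python
import re

url = '/static/img/logo.png'

# Original pattern: '.' sits outside [] right before '$', so it means
# "any single character", and [\w_-]+ cannot cross the literal dot in
# the filename. The search finds nothing and returns None.
old = re.search(r'/([\w_-]+.)$', url)
print(old)  # None -> calling .group(1) here raises the AttributeError

# Fixed pattern: '.' inside [] is a literal dot, so the whole filename
# including its extension is matched and captured.
new = re.search(r'/([\w_.-]+)$', url)
print(new.group(1))  # logo.png
```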

import re
import requests
from bs4 import BeautifulSoup

site = 'http://pixabay.com'

response = requests.get(site)

soup = BeautifulSoup(response.text, 'html.parser')
img_tags = soup.find_all('img')

urls = [img['src'] for img in img_tags]

for url in urls:
    filename = re.search(r'/([\w_.-]+)$', url)
    if not filename:
        # guard against sources the pattern can't match
        # (otherwise filename is None and .group(1) raises)
        print("Regex didn't match with the url: {}".format(url))
        continue
    with open(filename.group(1), 'wb') as f:
        if 'http' not in url:
            # sometimes an image source can be relative;
            # if it is, prepend the base url, which also happens
            # to be the site variable at the moment
            url = '{}{}'.format(site, url)
        response = requests.get(url)
        f.write(response.content)
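As a side note, the 'http' not in url check for relative sources is fragile (it misses protocol-relative sources like //cdn.example.com/a.png, and matches any URL merely containing "http"). A sketch of a more robust alternative, using the standard library's urljoin (the example URLs are hypothetical):

```python
from urllib.parse import urljoin

site = 'http://pixabay.com'

# urljoin leaves absolute URLs untouched and resolves relative
# and protocol-relative sources against the base URL.
print(urljoin(site, '/static/img/logo.png'))       # http://pixabay.com/static/img/logo.png
print(urljoin(site, 'http://example.com/a.png'))   # http://example.com/a.png
print(urljoin(site, '//cdn.example.com/a.png'))    # http://cdn.example.com/a.png
```

Replacing the if 'http' not in url block with url = urljoin(site, url) would cover all three cases in one line.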
