Python: scraping only the text from multiple web pages

Question

I am new to Python and currently learning web scraping. The task is to scrape the first 5 pages of the Dell community's Inspiron questions. I have code that runs and returns the information I need, but I cannot get just the text: my current code returns text plus HTML. I have tried putting .text at various points in the code, but I only get errors when I do.

The most common error is: "AttributeError: ResultSet object has no attribute 'text'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?"

Here is my code:

from bs4 import BeautifulSoup
from time import sleep
import requests

pages = ['https://www.dell.com/community/Inspiron/bd-p/Inspiron',
         'https://www.dell.com/community/Inspiron/bd-p/Inspiron/page/2',
         'https://www.dell.com/community/Inspiron/bd-p/Inspiron/page/3',
         'https://www.dell.com/community/Inspiron/bd-p/Inspiron/page/4',
         'https://www.dell.com/community/Inspiron/bd-p/Inspiron/page/5']

data = []

for page in pages:
    r = requests.get(page)
    soup = BeautifulSoup(r.content, 'html.parser')
    rows = soup.select('tbody tr')
    
    for row in rows:
        d = dict()
        d['title'] = soup.find_all ('a', attrs = {'class': 'page-link lia-link-navigation lia-custom-event'})
        d['author'] = soup.find_all ('span', attrs = {'class': 'login-bold'})
        d['time'] = soup.find_all ('span', attrs = {'class': 'local-time'})
        d['kudos'] = soup.find_all ('div', attrs = {'class': 'lia-component-messages-column-message-kudos-count'})
        d['messages'] = soup.find_all ('div', attrs = {'class': 'lia-component-messages-column-message-replies-count'})
        d['views'] = soup.find_all ('div', attrs = {'class': 'lia-component-messages-column-topic-views-count'})
        d['solved'] = soup.find_all ('td', attrs = {'aria-label': 'triangletop lia-data-cell-secondary lia-data-cell-icon'})
        d['latest']= soup.find_all ('span', attrs = {'cssclass': 'lia-info-area-item'})
        data.append(d)
    
    sleep(10)
print(data[0])

Any help is greatly appreciated. Thanks!

Tags: python, python-3.x, web-scraping, beautifulsoup

Solution


find_all returns a list of HTML elements. If you want to print the text of each element, you need to iterate over each list you created with find_all and apply the .text attribute to each individual entry. For example:

titles = soup.find_all('a', attrs={'class': 'page-link lia-link-navigation lia-custom-event'})
for title in titles:
    print(title.text)
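Applied to the question's loop, there is a second issue worth noting: each iteration searches the whole soup rather than the current row, so every dictionary gets every match on the page. Searching within row using find() (one match per row) and then taking .text fixes both problems. Here is a sketch of that pattern, using a small inline HTML snippet as a stand-in for the Dell page, whose real markup and class names may differ:

```python
from bs4 import BeautifulSoup

# Minimal stand-in for one forum listing table (hypothetical markup).
html = """
<tbody>
  <tr>
    <td><a class="page-link">Laptop will not boot</a>
        <span class="login-bold">user1</span></td>
  </tr>
  <tr>
    <td><a class="page-link">Fan noise</a>
        <span class="login-bold">user2</span></td>
  </tr>
</tbody>
"""

soup = BeautifulSoup(html, "html.parser")
data = []
for row in soup.select("tbody tr"):
    d = {}
    # Search inside the current row, not the whole soup, and use find()
    # to get a single element instead of a ResultSet.
    title = row.find("a", attrs={"class": "page-link"})
    author = row.find("span", attrs={"class": "login-bold"})
    # Guard against missing elements before reading the text.
    d["title"] = title.get_text(strip=True) if title else None
    d["author"] = author.get_text(strip=True) if author else None
    data.append(d)

print(data)
```

get_text(strip=True) is equivalent to .text with surrounding whitespace removed, which is usually what you want when writing the results to CSV later.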
