Reading a list of links with Beautiful Soup

Problem description

I have been trying to read the links from a list of URLs that I successfully extracted. My problem is that I get a `TypeError Traceback (most recent call last)` when I try to read the whole list. However, when I read a single link, the `urlopen(urls).read()` line executes without any problem.

from urllib.request import urlopen

import requests
from bs4 import BeautifulSoup

response = requests.get('some_website')
doc = BeautifulSoup(response.text, 'html.parser')
headlines = doc.find_all('h3')

links = doc.find_all('a', {'rel': 'bookmark'})
for link in links:
    print(link['href'])

for urls in links:
    raw_html = urlopen(urls).read()  # <----- this row here
    articles = BeautifulSoup(raw_html, "html.parser")
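The `TypeError` happens because `find_all` returns bs4 `Tag` objects, while `urlopen()` expects a URL string (or a `Request` object). Extracting the `href` attribute first resolves it; a minimal offline sketch (the HTML snippet is a hypothetical stand-in for the real page):

```python
from bs4 import BeautifulSoup

html = '<a rel="bookmark" href="https://example.com/post-1">Post 1</a>'
doc = BeautifulSoup(html, 'html.parser')

# find_all returns Tag objects, not strings; passing a Tag straight
# to urlopen() raises TypeError. Pull out the href string first:
links = doc.find_all('a', {'rel': 'bookmark'})
urls = [link['href'] for link in links]
print(urls)  # ['https://example.com/post-1']
```

Each entry in `urls` is now a plain string, which is what `urlopen(url).read()` accepts.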

Tags: python, web-scraping, beautifulsoup

Solution


Consider using BeautifulSoup with `requests.Session()` to gain the efficiency of re-using the same connection, and add headers:

import requests
from bs4 import BeautifulSoup

with requests.Session() as s:

    url = 'https://newspunch.com/category/news/us/'
    headers = {'User-Agent': 'Mozilla/5'}
    r = s.get(url, headers=headers)
    soup = BeautifulSoup(r.text, 'lxml')

    # Collect the href string of every <a rel="bookmark"> on the listing page
    links = [item['href'] for item in soup.select('[rel=bookmark]')]

    # Re-use the same session (and underlying connection) for each article
    for link in links:
        r = s.get(link, headers=headers)
        soup = BeautifulSoup(r.text, 'lxml')
        print(soup.select_one('.entry-title').text)
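The same selectors can be verified offline before hitting the live site; a minimal sketch with an inline HTML snippet (the markup is a hypothetical stand-in for the real article pages):

```python
from bs4 import BeautifulSoup

# Hypothetical markup mirroring the structure the selectors above expect
html = '''
<article>
  <h1 class="entry-title">Example headline</h1>
  <a rel="bookmark" href="https://example.com/article-1">permalink</a>
</article>
'''
soup = BeautifulSoup(html, 'html.parser')

# CSS attribute selector: every element with rel="bookmark"
links = [item['href'] for item in soup.select('[rel=bookmark]')]
print(links)  # ['https://example.com/article-1']

# CSS class selector: the first element with class="entry-title"
print(soup.select_one('.entry-title').text)  # Example headline
```

Testing the selectors this way separates parsing mistakes from network problems when the scraper misbehaves.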
