Scraping a news website with Python

Problem description

I'm trying to gather some news articles. I have a larger list of ~3k articles from this site, selected according to my criteria, and (bearing in mind that I'm new to Python) I put together this script to scrape them:

import pandas as pd
import bs4

from urllib.request import urlopen
from bs4 import BeautifulSoup

import csv
# get the URL list
list1 = []

a = 'https://www.dnes.bg/sofia/2019/03/13/borisov-se-pohvali-prihodite-ot-gorivata-sa-sys-7-poveche.404467'
b = 'https://www.dnes.bg/obshtestvo/2019/03/13/pazim-ezika-si-pravopis-pod-patronaja-na-radeva.404462'
c = 'https://www.dnes.bg/politika/2019/01/03/politikata-nekanen-gost-na-praznichnata-novogodishna-trapeza.398091'
list1.append(a)
list1.append(b)
list1.append(c)
# define the variables
#url = "https://www.dnes.bg/politika/2019/01/03/politikata-nekanen-gost-na-praznichnata-novogodishna-trapeza.398091"
list2 = list1 #[0:10]
#type(list2)

href = []
title = []
subtitle = []
time = []
article = []
art1 = []

#
#dd = soup.find("div", "art_author").text
#dd

filename = "scraped.csv"
f = open(filename, "w")
#headers = "href;title;subtitle;time;article\n"
headers = "title;subtitle;time;article\n"
f.write(headers)


for url in list2:
    html = urlopen(url)
    soup = BeautifulSoup(html, 'lxml').decode('windows-1251')

    href = url
    title = soup.find("h1", "title").string
    #title = soup.find("h1", "title").string
    #title.extend(soup.find("h1", "title").string) # the title string
    subtitle = soup.find("div", "descr").string
    #subtitle.extend(soup.find("div", "descr").string) # the subtitle string
    time = soup.find("div", "art_author").text
    #time.extend(soup.find("div", "art_author").text)
    #par = soup.find("div", id="art_start").find_all("p")
    art1.extend(soup.find("div", id="art_start").find_all("p"))

    for a in art1:
        #article.extend(art1.find_all("p"))
        article = ([a.text.strip()])
        break

    #href = "".join(href)    
    title = "".join(title)
    subtitle = "".join(subtitle)
    time = "".join(time)
    article = "".join(article)

    #f.write(href + ";" + title + ";" + subtitle + ";" + time + ";" + article + "\n")
    f.write(title + ";" + subtitle + ";" + time + ";" + article +"\n")
f.close()

The main problem right now is that I get an error:

  File "<ipython-input-12-9a796b182a82>", line 24, in <module>
    title = soup.find("h1", "title").string
TypeError: slice indices must be integers or None or have an __index__ method

I really can't find a way around it.

The second problem is that whenever I do manage to scrape a site, some cells come up empty, which means I'll have to find a way to get at the content that is loaded via Ajax.

I'm using Anaconda version 2018.12.

Tags: python, beautifulsoup, scrape

Solution


OK. I fixed the problem of your soup object being stored as a string, so that you can use bs4 to parse the html. I also chose to use pandas' .to_csv(), because I'm more familiar with it, but it gives you the output you need. The root cause of the error: calling .decode('windows-1251') on the soup returns a plain str, so the later soup.find("h1", "title") resolves to the built-in str.find(sub, start, end), which expects integer slice indices rather than the string "title".
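A minimal reproduction of the same TypeError, using a plain string as a stand-in for what soup becomes after the .decode() call:

soup = "<h1 class='title'>...</h1>"  # after .decode(), soup is just a str
soup.find("h1", "title")            # str.find(sub, start, end): "title" is passed as start
# TypeError: slice indices must be integers or None or have an __index__ method

With the soup kept as an actual soup object, the reworked script: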

import pandas as pd
from bs4 import BeautifulSoup
import requests


# get the URL list (three sample articles here; the full ~3k list
# can be sliced the same way, e.g. list1[0:10], to test on a subset)
list1 = [
    'https://www.dnes.bg/sofia/2019/03/13/borisov-se-pohvali-prihodite-ot-gorivata-sa-sys-7-poveche.404467',
    'https://www.dnes.bg/obshtestvo/2019/03/13/pazim-ezika-si-pravopis-pod-patronaja-na-radeva.404462',
    'https://www.dnes.bg/politika/2019/01/03/politikata-nekanen-gost-na-praznichnata-novogodishna-trapeza.398091',
]
list2 = list1

rows = []
for url in list2:
    # fetch with requests and keep the soup as a soup object --
    # no .decode() call, so .find() is bs4's find, not str.find
    html = requests.get(url)
    soup = BeautifulSoup(html.text, 'html.parser')

    title = soup.find("h1", "title").text
    subtitle = soup.find("div", "descr").text
    time = soup.find("div", "art_author").text

    # gather the article paragraphs, skipping the inline ad-loader
    # snippets the site embeds as <p> tags
    art1 = soup.find("div", id="art_start").find_all("p")
    article = []
    for a in art1:
        if 'googletag.cmd.push' not in a.text:
            article.append(a.text.strip())
    article = ' '.join(article)

    rows.append([title, subtitle, time, article])

# build the DataFrame in one go (DataFrame.append was removed in pandas 2.0)
results = pd.DataFrame(rows, columns=['title', 'subtitle', 'time', 'article'])
results.to_csv("scraped.csv", index=False, encoding='utf-8-sig')
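One note on the to_csv() call: encoding='utf-8-sig' writes a UTF-8 byte order mark at the start of the file, which lets Excel auto-detect the encoding and display the Cyrillic text correctly when the CSV is opened directly; plain 'utf-8' is fine for purely programmatic consumers.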

Output:

print(results.to_string())
                                               title                                           subtitle                                               time                                            article
0  Борисов се похвали: Приходите от горивата са с...  Мерките за изсветляване на сектора действат, к...  Обновена: 13 мар 2019 13:24 | 13 мар 2019 11:3...  Приходите от горивата са със 7% повече. Това с...
1  "Пазим езика си": Правопис под патронажа на Ра...  Грамотността зависи не само от училището, смят...  Обновена: 13 мар 2019 11:34 | 13 мар 2019 11:2...  За втора поредна година Сдружение "Живата вода...
2  Политиката – "неканен гост" на празничната нов...  Основателни ли бяха критиките на президента Ру...               3 яну 2019 10:45, Цветелин Димитров   Оказа ли се политиката "неканен гост" на празн...
