Scraping Wikipedia information (tables)

Problem Description

I need to collect, per region, the information in the Elenco dei comuni (list of communes) on Wikipedia. I want to build a data structure that associates each comune with its region, for example:

'Abbateggio': 'Pescara' -> Abruzzo

I tried to fetch the information with BeautifulSoup and requests as follows:

from bs4 import BeautifulSoup as bs
import requests

with requests.Session() as s:  # use session object for efficiency of tcp re-use
    s.headers = {'User-Agent': 'Mozilla/5.0'}
    r = s.get('https://it.wikipedia.org/wiki/Comuni_d%27Italia')
    soup = bs(r.text, 'html.parser')
    for ele in soup.find_all('h3')[:6]:
        tx = bs(str(ele), 'html.parser').find('span', attrs={'class': "mw-headline"})
        if tx is not None:
            print(tx['id'])

But it does not work (it returns an empty list). The markup I see with Chrome's Inspect tool is the following:

<span class="mw-headline" id="Elenco_dei_comuni_per_regione">Elenco dei comuni per regione</span> (table)

<a href="/wiki/Comuni_dell%27Abruzzo" title="Comuni dell'Abruzzo">Comuni dell'Abruzzo</a> 

(this field changes for each region)

followed by a <table class="wikitable sortable query-tablesortes">

Can you suggest how to achieve this result? Any help or advice would be appreciated.

Edit:

Example:

I have a word: comunediabbateggio. This word contains Abbateggio. I would like to know which region that comune can be associated with, if it exists. The information from Wikipedia is needed to build a dataset that lets me check such a field and associate the commune/city with its region. What I expect is:

WORD                         REGION/STATE
comunediabbateggio           Pescara

I hope this helps. Sorry if it was not clear. Another example, which may be easier to follow in English, is the following:

Besides the Italian link above, you can also consider the following: https://en.wikipedia.org/wiki/List_of_comuni_of_Italy. For each region (Lombardy, Veneto, Sicily, ...) I need to collect the list of communes of its provinces. If you click a List of communes of ... link, a table listing the communes of that province appears, e.g. https://en.wikipedia.org/wiki/List_of_communes_of_the_Province_of_Agrigento
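For illustration, the kind of lookup I have in mind would behave like this (a minimal sketch; comune_to_region is a hypothetical dict that would still have to be built from the Wikipedia tables):

# Hypothetical mapping, to be built from the Wikipedia tables:
# comune name (lowercase) -> (province, region)
comune_to_region = {'abbateggio': ('Pescara', 'Abruzzo')}

def lookup(word):
    """Return the (province, region) of the first comune contained in word."""
    for comune, prov_region in comune_to_region.items():
        if comune in word.lower():
            return prov_region
    return None

print(lookup('comunediabbateggio'))  # ('Pescara', 'Abruzzo')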

Tags: python, web-scraping, beautifulsoup

Solution
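The idea is to parse the table of contents of the English page to locate the per-region sections, follow each List of communes of ... link found under those sections, and let pandas.read_html extract the commune table on each province page: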


import re
import requests
from bs4 import BeautifulSoup
import pandas as pd
from tqdm import tqdm

target = "https://en.wikipedia.org/wiki/List_of_comuni_of_Italy"


def main(url):
    with requests.Session() as req:
        r = req.get(url)
        soup = BeautifulSoup(r.content, 'html.parser')

        # Second-level table-of-contents entries (TOC numbers such as
        # "1.1", "2.3"); the section title sits in the <span> that
        # follows each TOC number.
        provinces = [item.find_next("span").text for item in soup.findAll(
            "span", class_="tocnumber", text=re.compile(r"\d[.]\d"))]

        # Section ids use underscores instead of spaces.
        search = [item.replace(" ", "_") for item in provinces]

        # For each section headline, collect the province names from the
        # links in the <ul> that follows it
        # ("List of communes of the Province of X" -> "X").
        nested = []
        for item in search:
            for a in soup.findAll("span", id=item):
                goes = [b.text.split("of ")[-1]
                        for b in a.find_next("ul").findAll("a")]
                nested.append(goes)

        dictionary = dict(zip(provinces, nested))

        # Build absolute URLs; url[:24] is "https://en.wikipedia.org".
        urls = [f'{url[:24]}{b.get("href")}' for item in search for a in soup.findAll(
            "span", id=item) for b in a.find_next("ul").findAll("a")]
    return urls, dictionary


def parser():
    links, dics = main(target)
    com = []
    for link in tqdm(links):
        try:
            # Read the first table on the province page and keep the
            # commune names (second column), dropping the trailing
            # totals row.
            df = pd.read_html(link)[0]
            com.append(df[df.columns[1]].to_list()[:-1])
        except ValueError:
            # pd.read_html raises ValueError when a page has no table.
            com.append(["N/A"])
    # Walk the flat list of commune lists in link order, turning each
    # region's province list into {province: [commune, ...]}.
    com = iter(com)
    for x in dics:
        b = dics[x]
        dics[x] = dict(zip(b, com))
    print(dics)


parser()
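If the goal is the WORD -> REGION/STATE lookup from the question, the nested result can be flattened into a commune -> (province, region) map. A minimal sketch, assuming parser() is changed to return dics instead of printing it:

# Assumes `dics` has the shape built above:
# {region: {province: [commune, ...], ...}, ...}
def flatten(dics):
    lookup = {}
    for region, provinces in dics.items():
        for province, communes in provinces.items():
            for commune in communes:
                lookup[str(commune).lower()] = (province, region)
    return lookup

lookup = flatten(dics)

word = 'comunediabbateggio'
match = next(((c, pr) for c, pr in lookup.items() if c in word), None)
print(match)  # e.g. ('abbateggio', ('Pescara', 'Abruzzo'))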
