Collecting information by scraping

Problem description

I am trying to collect politicians' names by scraping Wikipedia. What I need is to scrape all the political parties from this page: https://it.wikipedia.org/wiki/Categoria:Politici_italiani_per_partito, and then, for each of them, scrape the names of all the politicians within that party (for every party listed at the link I mentioned above).

I wrote the following code:

from bs4 import BeautifulSoup as bs
import requests

res = requests.get("https://it.wikipedia.org/wiki/Categoria:Politici_italiani_per_partito")
soup = bs(res.text, "html.parser")
array1 = {}
possible_links = soup.find_all('a')
for link in possible_links:
    url = link.get("href", "")
    if "/wiki/Provenienza" in url: # It is incomplete, as I should scrape also links including word "Politici di/dei"
        res1=requests.get("https://it.wikipedia.org"+url)
        print("https://it.wikipedia.org"+url)
        soup = bs(res1, "html.parser")
        possible_links1 = soup.find_all('a')
        for link in possible_links1:
            url_1 = link.get("href", "")
            array1[link.text.strip()] = url_1

But it doesn't work, because it doesn't collect the names for each party. It does collect all the political parties (from the Wikipedia page I mentioned above); however, when I try to scrape each party's page, it doesn't collect the names of the politicians within that party.

I hope you can help me.

Tags: python, python-3.x, web-scraping

Solution
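
The immediate bug in your loop: bs(res1, "html.parser") is handed the requests.Response object itself rather than the page's HTML, so the party pages are never actually parsed and the inner find_all has nothing to search. A minimal fix for that one line, keeping the rest of your loop unchanged, is to parse res1.text:

res1 = requests.get("https://it.wikipedia.org" + url)
soup = bs(res1.text, "html.parser")  # parse the HTML body, not the Response object

Beyond that one-line fix, there is a cleaner overall approach.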


You can collect the URLs and party names from the first page, then loop over those URLs and add the list of associated politician names to a dictionary keyed by the party name. You also gain some efficiency by using a Session object, which reuses the underlying TCP connection across requests.

from bs4 import BeautifulSoup as bs
import requests

results = {}

with requests.Session() as s:  # reuse the underlying TCP connection across requests
    s.headers = {'User-Agent': 'Mozilla/5.0'}
    r = s.get('https://it.wikipedia.org/wiki/Categoria:Politici_italiani_per_partito')
    soup = bs(r.content, 'lxml')
    party_info = {i.text: 'https://it.wikipedia.org' + i['href'] for i in soup.select('.CategoryTreeItem a')}  # map party name -> party category URL

    for party, link in party_info.items():
        r = s.get(link)
        soup = bs(r.content, 'lxml')
        results[party] = [i.text for i in soup.select('.mw-content-ltr .mw-content-ltr a')]  # politician names listed on the party's category page
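
To sanity-check the result, you can print how many names were collected per party and peek at a few entries (a minimal check; the exact entries depend on Wikipedia's current markup, and the broad a selector may also pick up a few navigation links worth filtering out):

for party, names in results.items():
    print(f"{party}: {len(names)} names collected")
    print(names[:5])  # first few entries for inspection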
