Scraping a table from multiple pages and adding data from the linked detail pages

Problem description

I'm very new to Python and I hope you can help me with a problem. I want to scrape the table from this link: http://creationdentreprise.sn/rechercher-une-societe?field_rc_societe_value=&field_ninea_societe_value=&denomination=&field_localite_nid=All&field_siege_societe_value=&field_forme_juriduqe_nid=All&field_secteur_nid=All&field_date_crea_societe_value

As you can see on the site, the last column of every row has a "Voir détails" link. What I actually want is to create 3 new columns, "Région", "Capital", and "Objet Social", by following that link and adding the values to the table of general information.

My code already extracts the table from the different pages:

from bs4 import BeautifulSoup as bsoup
import requests as rq
import re

base_url = 'http://www.creationdentreprise.sn/rechercher-une-societe?field_rc_societe_value=&field_ninea_societe_value=&denomination=&field_localite_nid=All&field_siege_societe_value=&field_forme_juriduqe_nid=All&field_secteur_nid=All&field_date_crea_societe_value='
r = rq.get(base_url)

soup = bsoup(r.text)

page_count_links = soup.find_all("a",href=re.compile(r".http://www.creationdentreprise.sn/rechercher-une-societe?field_rc_societe_value=&field_ninea_societe_value=&denomination=&field_localite_nid=All&field_siege_societe_value=&field_forme_juriduqe_nid=All&field_secteur_nid=All&field_date_crea_societe_value=&page=.*"))
try: 
    num_pages = int(page_count_links[-1].get_text())
except IndexError:
    num_pages = 1


url_list = ["{}&page={}".format(base_url, str(page)) for page in range(1, 3)]

with open("results.txt","w") as acct:
    for url_ in url_list:
        print("Processing {}...".format(url_))
        r_new = rq.get(url_)
        soup_new = bsoup(r_new.text)
        for tr in soup_new.find_all('tr'): 
            stack = []
            for td in tr.findAll('td'):
                stack.append(td.text.replace('\n', '').replace('\t', '').strip())
            acct.write(", ".join(stack) + '\n')

My query returns a table with these columns:

Dénomination - Date Création - Siège social - Forme Juridique - Secteur d'activité

How can I change my script so that it adds the 3 new columns, like this:

Dénomination - Date Création - Siège social - Forme Juridique - Secteur d'activité - Région - Capital - Objet Social

Thanks for your help.

Tags: python, database, web-scraping, beautifulsoup, python-requests

Solution


You have to extract the link and then fetch and parse the HTML behind that link. Essentially you end up with a nested loop, structured much the same way as your initial loop.

from bs4 import BeautifulSoup as bsoup
import requests as rq
import re

base_url = 'http://www.creationdentreprise.sn/rechercher-une-societe?field_rc_societe_value=&field_ninea_societe_value=&denomination=&field_localite_nid=All&field_siege_societe_value=&field_forme_juriduqe_nid=All&field_secteur_nid=All&field_date_crea_societe_value='
r = rq.get(base_url)

soup = bsoup(r.text, 'html.parser')

page_count_links = soup.find_all("a",href=re.compile(r".http://www.creationdentreprise.sn/rechercher-une-societe?field_rc_societe_value=&field_ninea_societe_value=&denomination=&field_localite_nid=All&field_siege_societe_value=&field_forme_juriduqe_nid=All&field_secteur_nid=All&field_date_crea_societe_value=&page=.*"))
try: 
    num_pages = int(page_count_links[-1].get_text())
except IndexError:
    num_pages = 1


url_list = ["{}&page={}".format(base_url, str(page)) for page in range(1, 3)]

with open("results.txt","w") as acct:
    for url_ in url_list:
        print("Processing {}...".format(url_))
        r_new = rq.get(url_)
        soup_new = bsoup(r_new.text, 'html.parser')
        for tr in soup_new.find_all('tr'): 
            stack = []

            # set link_ext to None
            link_ext = None

            # try to get link in last column. If not present, pass
            try:
                link_ext = tr.select('a')[-1]['href']
            except (IndexError, KeyError):
                pass

            for td in tr.findAll('td'):
                stack.append(td.text.replace('\n', '').replace('\t', '').strip())

            # if a link was extracted from last column, use it to get html from link and parse wanted data
            if link_ext is not None:
                r_link = rq.get('http://creationdentreprise.sn' + link_ext)
                soup_link_ext = bsoup(r_link.text, 'html.parser')
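                # on the detail page each field appears as a label ("Région:", "Capital:",
                # "Objet social:") with its value in the following sibling element;
                # find the label text, go up to its parent tag, then read the next
                # sibling's text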
                region = soup_link_ext.find(text=re.compile('Région:')).parent.nextSibling.text
                capital = soup_link_ext.find(text=re.compile('Capital:')).parent.nextSibling.text
                objet = soup_link_ext.find(text=re.compile('Objet social:')).parent.nextSibling.text

                stack = stack + [region, capital, objet]

            acct.write(", ".join(stack) + '\n') 
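
One side note on the output: some of the scraped fields can themselves contain commas, so joining them with ", " makes the results file ambiguous. A minimal, hypothetical variant of the writing step using the standard csv module (the header names come from your question; the row values are made-up placeholders) could look like this:

import csv

# hypothetical header plus one made-up row, just to show the quoting behaviour
header = ["Dénomination", "Date Création", "Siège social", "Forme Juridique",
          "Secteur d'activité", "Région", "Capital", "Objet Social"]
row = ["Société Exemple, SARL", "01/01/2019", "Dakar", "SARL",
       "Commerce", "Dakar", "1 000 000", "Négoce"]

with open("results.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)       # csv.writer quotes fields that contain commas
    writer.writerow(header)
    writer.writerow(row)         # in the scraper this would be writer.writerow(stack)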

Also, I noticed this in your first question yesterday but didn't mention it: your page_count_links and num_pages aren't used for anything in the code. Why do you have them?
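
If the idea was to loop over every result page instead of just the first two, a minimal sketch reusing that num_pages value (assuming the same &page= query parameter as above; the exact range may need adjusting to how the site numbers its pages) could be:

url_list = ["{}&page={}".format(base_url, page) for page in range(1, num_pages + 1)]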

Just curious, why do you have 2 user accounts with the same screen name?

