
Problem Description

I am trying to download reports from a company website, https://www.investorab.com/investors-media/reports-presentations/. Eventually, I would like to download all available reports.

I have very little web-scraping experience, so I am having trouble defining the right search pattern. Previously I needed to pull out all links that pointed to PDFs, i.e. I could use soup.select('div[id="id-name"] a[data-type="PDF"]'). But for this website, no data type is listed for the links. How do I select all the links under "Reports and presentations"? Here is what I tried, but it returns an empty list:

from bs4 import BeautifulSoup
import requests

url = "https://www.investorab.com/investors-media/reports-presentations/"
response = requests.get(url)
soup = BeautifulSoup(response.text, 'lxml')

# Select all reports, publication_dates
reports = soup.select('div[class="html not-front not-logged-in no-sidebars page-events-archive i18n-en"] a[href]')
pub_dates = soup.select('div[class="html not-front not-logged-in no-sidebars page-events-archive i18n-en"] div[class="field-content"]')

I would also like to select all the publication dates, but that too ends up as an empty list. Any help pointing me in the right direction would be appreciated.
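As an aside, a selector of the form div[class="..."] only matches when the element's class attribute equals that exact string, so stacking several class names into one attribute selector is fragile; the usual dot-class selector matches an element that merely has the class. A minimal illustration, using made-up markup:

from bs4 import BeautifulSoup

# Hypothetical markup: the div carries an extra class besides field-content.
html = '<div class="field-content views-row"><a href="/report.pdf">Report</a></div>'
soup = BeautifulSoup(html, 'html.parser')

# [class="..."] must equal the entire class attribute, so this finds nothing:
print(soup.select('div[class="field-content"] a'))   # []
# The dot selector matches elements that have the class among others:
print(soup.select('div.field-content a'))            # [<a href="/report.pdf">Report</a>]

That said, as the solution below explains, the real problem on this page is that the report list is not present in the fetched HTML at all.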

Tags: python-3.x, web-scraping, beautifulsoup

Solution


What you need to do is page through the results, or, as I did, simply loop over the year parameter. The report archive is not served by investorab.com itself but from vp053.alertir.com (it appears to be embedded in the page), which is why your selectors return empty lists. Once you have each year's listing, get the link for each report, then within each link find the PDF link. You then use that PDF link to write the file.
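To see for yourself where the data lives, one quick check is to list any embedded frames on the original page (a minimal sketch; it assumes the archive is pulled in via an iframe rather than inlined):

from bs4 import BeautifulSoup
import requests

# Print the source of each iframe on the original page (assumption:
# the report archive is embedded from a separate host).
response = requests.get('https://www.investorab.com/investors-media/reports-presentations/')
soup = BeautifulSoup(response.text, 'html.parser')
for frame in soup.find_all('iframe'):
    print(frame.get('src'))

With the archive's own URL in hand, the full script: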

from bs4 import BeautifulSoup
import requests
import os

# Gets all the links
linkList = []
url = 'https://vp053.alertir.com/v3/en/events-archive?'
for year in range(1917,2021):

    query = 'type%5B%5D=report&type%5B%5D=annual_report&type%5B%5D=cmd&type%5B%5D=misc&year%5Bvalue%5D%5Byear%5D=' + str(year)

    response = requests.get(url + query)
    soup = BeautifulSoup(response.text, 'html.parser')

    links = soup.find_all('a', href=True)
    linkList += [link['href'] for link in links if 'v3' in link['href']]
    print('Gathered links for year %s.' % year)

# Go to each link and download the PDFs within them
print('Downloading PDFs...')
folder_location = 'C:/test/pdfDownloads/'
os.makedirs(folder_location, exist_ok=True)  # create the target folder (and parents) once, up front

for link in linkList:
    url = 'https://vp053.alertir.com' + link
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')

    # Each event page links its documents directly as .pdf files
    for pdflink in soup.select("a[href$='.pdf']"):
        try:
            filename = os.path.join(folder_location,pdflink['href'].split('/')[-1])
            with open(filename, 'wb') as f:
                f.write(requests.get('https://vp053.alertir.com' + pdflink['href']).content)
                print('Saved: %s' % pdflink['href'].split('/')[-1])
        except Exception as ex:
            print('%s not saved. %s' % (pdflink['href'], ex))
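
One readability note: the hand-encoded query string above can be generated by requests itself. The sketch below should be equivalent, assuming the endpoint accepts repeated type[] parameters; requests percent-encodes the brackets just as in the manual version:

import requests

url = 'https://vp053.alertir.com/v3/en/events-archive'
for year in range(1917, 2021):
    # requests builds the query string, encoding 'type[]' as type%5B%5D
    # and repeating it once per list element.
    params = {
        'type[]': ['report', 'annual_report', 'cmd', 'misc'],
        'year[value][year]': year,
    }
    response = requests.get(url, params=params)
    # ... parse response.text with BeautifulSoup as above

For large PDFs, you could also pass stream=True to requests.get and write each file in chunks via response.iter_content instead of holding the whole document in memory.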
