Follow sub-links and download PDF files

Problem description

I have code that downloads PDF files from a given page: https://webpage.com/products/waste-water/. That page contains many links in the format https://webpage.com/product/, and each of those pages has PDF files on it.

How can I add the ability to visit each sub-page whose link has the format https://webpage.com/product/ and download the PDF files from there?

My current code:

import os
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

url = "https://webpage.com/products/waste-water/"

# If the folder does not exist, the script creates it automatically
folder_location = r'C:\temp\webscraping'
if not os.path.exists(folder_location):
    os.mkdir(folder_location)

response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
for link in soup.select("a[href$='.pdf']"):
    # Name the PDF files using the last portion of each link, which is unique in this case
    filename = os.path.join(folder_location, link['href'].split('/')[-1])
    with open(filename, 'wb') as f:
        f.write(requests.get(urljoin(url, link['href'])).content)
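
For reference, this loop only picks up .pdf links that appear on the listing page itself; reaching the PDFs on the product pages needs one extra request per sub-link. A minimal sketch of that extension, continuing from the script above (the a[href*='/product/'] selector is an assumed pattern for the sub-page links, not verified against the actual site):

# Sketch: collect the sub-page URLs first, then repeat the PDF loop on each one.
# Assumes the sub-page links can be matched with a[href*='/product/'].
sub_pages = {urljoin(url, a['href']) for a in soup.select("a[href*='/product/']")}

for page_url in sub_pages:
    page_soup = BeautifulSoup(requests.get(page_url).text, "html.parser")
    for link in page_soup.select("a[href$='.pdf']"):
        filename = os.path.join(folder_location, link['href'].split('/')[-1])
        with open(filename, 'wb') as f:
            f.write(requests.get(urljoin(page_url, link['href'])).content)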

Edit:

Here is the actual link:

https://www.nordicwater.com/products/waste-water/

Tags: python, web-scraping, beautifulsoup

Solution


import requests
from bs4 import BeautifulSoup

main = "https://www.nordicwater.com/products/waste-water/"


def Get_Links():
    # Collect the product sub-page links from the main listing page.
    r = requests.get(main).text
    soup = BeautifulSoup(r, 'html.parser')
    links = []
    for item in soup.findAll("a", {'class': 'ap-area-link'}):
        links.append(item.get("href"))
    return links


def Parse_Links():
    # Visit each sub-page and gather the PDF links from its download section.
    pdf = set()
    for url in Get_Links():
        r = requests.get(url).text
        soup = BeautifulSoup(r, 'html.parser')
        for item in soup.findAll("div", {'class': 'dl-items'}):
            for link in item.findAll("a"):
                link = link.get("href")
                if link:
                    pdf.add(link)
    return pdf


def Save():
    # Download each PDF; item[55:] drops the first 55 characters of the URL
    # (the site prefix), leaving just the file name used for printing and saving.
    for item in Parse_Links():
        print(f"Downloading File: {item[55:]}")
        r = requests.get(item)
        with open(f"{item[55:]}", 'wb') as f:
            f.write(r.content)
    print("done")


Save()

Output:

(screenshot of the console output: one "Downloading File: …" line per PDF, ending with "done")
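
As a side note, the item[55:] slice in Save() relies on every PDF URL sharing the same 55-character prefix. A slightly more defensive variant (a sketch with a hypothetical save_to() helper, not part of the original answer) derives the file name from the last path segment of the URL and writes into the same folder the question uses; it reuses Parse_Links() and requests from the code above:

import os
from urllib.parse import urlsplit


def save_to(folder=r'C:\temp\webscraping'):
    # Same download loop as Save(), but the file name is taken from the
    # URL path instead of a fixed-length slice, and files go into `folder`.
    os.makedirs(folder, exist_ok=True)
    for item in Parse_Links():
        name = os.path.basename(urlsplit(item).path)  # e.g. "datasheet.pdf"
        print(f"Downloading File: {name}")
        with open(os.path.join(folder, name), 'wb') as f:
            f.write(requests.get(item).content)
    print("done")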

