Check whether a string has a .pdf extension

Problem description

I'm new to web scraping and I have two problems. The first is that I need to scrape a specific section of a website that contains anchor tags. I only want the anchor tags that link to PDF files, along with their titles, but unfortunately the section also contains normal (non-PDF) links. That's my first problem.

The second problem is that the output has unwanted line breaks. Both problems occur with the same code, shown below.

website.html

<div>
<a href="www.url.com/somethin.pdf">pdf
link</a>

<a href="www.url.com/somethin.pdf">pdf
link</a>

<a href="www.url.com/somethin">normal
link</a>
</div>

scraper.py

import requests
from bs4 import BeautifulSoup

page = requests.get('https://www.privacy.gov.ph/advisories/')
soup = BeautifulSoup(page.content,'html.parser')

section = soup.find("section", {"class": "news_content"})
for link in section.find_all("a"):
   pdf =  link['href'].replace("..", "")
   title =  link.text.strip()
   print("title: " + title + "\t")
   print("pdf_link: " + pdf + "\t")
   print('\n')

If you run this code, you will see that the titles from this HTML contain unwanted line breaks.
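The reason `.strip()` in the script above does not remove the break is that the newline sits *inside* the anchor text ("pdf\nlink"), not at its edges; `.strip()` only trims leading and trailing whitespace. A minimal standalone sketch of the difference:

```python
# The anchor text spans two lines in the HTML, so the extracted
# title contains an internal newline.
title = "pdf\nlink"

stripped = title.strip()              # only trims the ends of the string
normalized = " ".join(title.split())  # collapses ALL runs of whitespace

print(repr(stripped))    # 'pdf\nlink'  (internal newline survives)
print(repr(normalized))  # 'pdf link'
```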

Tags: python, web-scraping, beautifulsoup

Solution


You can use a regular expression to match only the hrefs that end with the .pdf extension. As for the unwanted line breaks, I'm not sure what you mean; I can only assume you mean the two blank lines between each pair of prints. If that assumption is correct, it's because each print() call ends on a new line, so print('\n') moves to a new line and then prints another newline on top of it. If you only want one blank line, remove that last print() call and change the "\t" on the pdf_link line to "\n".

import requests
from bs4 import BeautifulSoup
import re

page = requests.get('https://www.privacy.gov.ph/advisories/')
soup = BeautifulSoup(page.content,'html.parser')

section = soup.find("section", {"class": "news_content"})
links = section.find_all(href=re.compile(r"\.pdf$"))  # <---- SEE HERE

for link in links:
    pdf = link['href'].replace("..", "")
    title = link.text.strip().replace('\n', '')
    print("title: " + title)
    print("pdf_link: " + pdf + "\n")

Output:

title: Updated Templates on Security Incident and Personal Data Breach Reportorial Requirements 
pdf_link: https://www.privacy.gov.ph/wp-content/files/attachments/nwsltr/Final_Advisory18-02_6.26.18.pdf        

title: Guidelines on Privacy Impact Assessments   
pdf_link: https://www.privacy.gov.ph/wp-content/files/attachments/nwsltr/NPC_AdvisoryNo.2017-03.pdf     

title: Access to Personal Data Sheets of Government Personnel 
pdf_link: https://www.privacy.gov.ph/wp-content/files/attachments/nwsltr/NPC_Advisory_No.2017-02.pdf  
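As an aside, the same filtering can be done without the re module by using BeautifulSoup's CSS selector support: the selector a[href$=".pdf"] matches anchors whose href ends with ".pdf". This is a sketch run against the sample HTML from the question rather than the live page:

```python
from bs4 import BeautifulSoup

# Sample markup from the question: two PDF links and one normal link.
html = """
<div>
<a href="www.url.com/somethin.pdf">pdf
link</a>
<a href="www.url.com/somethin">normal
link</a>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# a[href$=".pdf"] is a CSS "ends with" attribute selector.
for link in soup.select('a[href$=".pdf"]'):
    title = " ".join(link.text.split())  # collapse the internal line break
    print("title:", title)
    print("pdf_link:", link["href"])
```

Splitting and rejoining the text also normalizes the internal line break, which plays the same role as the replace('\n', '') call in the solution above.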
