Scraping URLs from a web page with Python returns {{link}}

Problem description

I am scraping URLs from a web page like this:

from bs4 import BeautifulSoup
import requests

url = "https://www.investing.com/search/?q=Axon&tab=news"
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(response.content, "html.parser")

for s in soup.find_all('div', {'class': 'articleItem'}):
    for a in s.find_all('div', {'class': 'textDiv'}):
        for b in a.find_all('a', {'class': 'title'}):
            print(b.get('href'))

The output is as follows:

/news/stock-market-news/axovant-updates-on-parkinsons-candidate-axolentipd-1713474
/news/stock-market-news/digital-alley-up-24-on-axon-withdrawal-from-patent-challenge-1728115
/news/stock-market-news/axovant-sciences-misses-by-009-763209
/analysis/microns-mu-shares-gain-on-q3-earnings-beat-upbeat-guidance-200529289
/analysis/axon,-espr,-momo,-zyne-200182141
/analysis/factors-likely-to-impact-axon-enterprises-aaxn-q4-earnings-200391393
{{link}}
{{link}}

The problems are:

  1. Not all URLs are extracted.
  2. Note the last two entries: why is this happening?

Is there a fix for these two issues?

Tags: python, web-scraping, beautifulsoup, python-requests

Solution


One way to solve this is to use Selenium and scroll to the bottom of the page:

driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

Once Selenium has scrolled to the bottom of the page, you read the page source, close Selenium, and parse the page source with BeautifulSoup. You can also do the parsing with Selenium itself.
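As for the {{link}} entries: the static HTML that requests downloads still contains a couple of un-rendered client-side template rows, whose placeholders (such as {{link}}) are only filled in by JavaScript when the page runs in a real browser. If you stay with plain requests, a simple workaround (a sketch reusing the soup object from the question) is to skip any href that still looks like a placeholder:

# Sketch: reuses `soup` from the requests/BeautifulSoup snippet above.
# Rows whose href still contains '{{' are un-rendered template stubs.
for b in soup.select('div.articleItem div.textDiv a.title'):
    href = b.get('href')
    if href and '{{' not in href:
        print(href)

That only removes the bogus rows, though; loading the results that are missing still requires the scrolling approach below.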

First, Selenium plus bs4:

from selenium import webdriver
from bs4 import BeautifulSoup

import time

PAUSE_TIME = 1  # seconds to wait after each scroll for new results to load

driver = webdriver.Firefox(executable_path='c:/program/geckodriver.exe')
driver.get('https://www.investing.com/search/?q=Axon&tab=news')

# Keep scrolling to the bottom until the page height stops growing,
# i.e. no more results are being lazy-loaded.
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(PAUSE_TIME)
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

page_source = driver.page_source
driver.close()

soup = BeautifulSoup(page_source, "html.parser")

for s in soup.find_all('div', {'class': 'articleItem'}):
    for a in s.find_all('div', {'class': 'textDiv'}):
        for b in a.find_all('a', {'class': 'title'}):
            print(b.get('href'))
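As a side note (an assumption on my part, not something the original answer does): if you do not want a Firefox window popping up while this runs, geckodriver supports headless mode via options, roughly like this:

from selenium import webdriver

# Hypothetical tweak: run Firefox without a visible window
options = webdriver.FirefoxOptions()
options.add_argument('-headless')
driver = webdriver.Firefox(executable_path='c:/program/geckodriver.exe',
                           options=options)

The rest of the script stays the same.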

And the Selenium-only version:

from selenium import webdriver

import time

PAUSE_TIME = 1  # seconds to wait after each scroll for new results to load

driver = webdriver.Firefox(executable_path='c:/program/geckodriver.exe')
driver.get('https://www.investing.com/search/?q=Axon&tab=news')

# Same scroll loop as above: stop once the page height no longer grows.
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(PAUSE_TIME)
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

# Parse the loaded page with Selenium itself instead of BeautifulSoup
for s in driver.find_elements_by_css_selector('div.articleItem'):
    for a in s.find_elements_by_css_selector('div.textDiv'):
        for b in a.find_elements_by_css_selector('a.title'):
            print(b.get_attribute('href'))
driver.close()
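One caveat: the find_elements_by_css_selector helpers above are Selenium 3 style and were removed in Selenium 4. If you are on a current Selenium, the equivalent loop (same selectors, just the newer locator API) would look like this:

from selenium.webdriver.common.by import By

# Selenium 4 replacement for the removed find_elements_by_* helpers
for s in driver.find_elements(By.CSS_SELECTOR, 'div.articleItem'):
    for a in s.find_elements(By.CSS_SELECTOR, 'div.textDiv'):
        for b in a.find_elements(By.CSS_SELECTOR, 'a.title'):
            print(b.get_attribute('href'))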

Note that you have to install Selenium and download geckodriver to run this. If your geckodriver lives somewhere other than c:/program, change:

driver = webdriver.Firefox(executable_path='c:/program/geckodriver.exe')

to your geckodriver path.
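Likewise, Selenium 4 removed the executable_path argument; there the driver path is passed through a Service object instead (a sketch, assuming the same path as above):

from selenium import webdriver
from selenium.webdriver.firefox.service import Service

# Selenium 4 style: wrap the geckodriver path in a Service object
driver = webdriver.Firefox(service=Service('c:/program/geckodriver.exe'))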

