Unable to extract web page data with Beautiful Soup

Problem description

  from urllib.request import urlopen
  from bs4 import BeautifulSoup as bs

  url = "https://www.telegraph.co.uk/formula-1/2018/08/25/f1-live-belgian-grand-prix-2018-qualifying-latest-updates/"
  soup = bs(urlopen(url), "lxml")
  divs = soup.find_all('div')
  base_url = "https://www.telegraph.co.uk"
  images = []
  print(divs)
  # output: []

I get empty output. I think this page is loaded dynamically. How can I extract the divs from this page?

Tags: python-3.x, beautifulsoup, web-crawler

Solution


The page content is loaded dynamically by JavaScript, so you have to use Selenium to render it first. You can do something like this:

from bs4 import BeautifulSoup
from selenium import webdriver  # you need to install selenium
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
# make sure chromedriver is on your PATH
driver = webdriver.Chrome(options=options)
url = ("https://www.telegraph.co.uk/"
       "formula-1/2018/08/25/f1-live-belgian"
       "-grand-prix-2018-qualifying-latest-updates/")
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'lxml')
driver.quit()
divs = soup.find_all('div')
print(divs)
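The question also initializes `base_url` and an empty `images` list, presumably to collect image URLs from the parsed page. As a minimal sketch of that next step (the `src` values below are hypothetical, not taken from the actual page), relative image paths can be resolved against `base_url` with `urllib.parse.urljoin`:

```python
from urllib.parse import urljoin

base_url = "https://www.telegraph.co.uk"

# Hypothetical src values like those found in a page's <img> tags
srcs = [
    "/content/dam/formula-1/qualifying.jpg",  # relative: base gets prefixed
    "https://cdn.telegraph.co.uk/full.jpg",   # already absolute: left unchanged
]

# urljoin prefixes relative paths and passes absolute URLs through
images = [urljoin(base_url, src) for src in srcs]
print(images)
```

In practice you would build `srcs` from the rendered page, e.g. `[img.get('src') for img in soup.find_all('img')]`, filtering out entries where `src` is missing.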
