Unable to scrape information from two different depths simultaneously using selenium

Problem description

I've written a script in Python using selenium to scrape name and reputation from the landing page inside the get_names() function, and then click the links of different posts to reach the inner pages in order to parse the title from there using the get_additional_info() function.

All the information I'm trying to parse is available on the landing page and the inner pages. Moreover, none of it is dynamic, so selenium is definitely overkill. However, my intention is to make use of selenium to scrape information from two different depths in the same run.

In the script below, if I comment out the name and rep lines, I can see that the script clicks the links on the landing page and parses the titles from the inner pages flawlessly.

However, when I run the script as is, I get selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document, pointing at the name = item.find_element_by_css_selector(...) line.

How can I get rid of this error and make the script run according to the logic I've already implemented?

What I have tried so far:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

lead_url = 'https://stackoverflow.com/questions/tagged/web-scraping'

def get_names():
    driver.get(lead_url)
    for count, item in enumerate(wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,".summary")))):
        usableList = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,".summary .question-hyperlink")))

        # These two lines raise StaleElementReferenceException after the first
        # iteration: `item` was located on a DOM that driver.back() has since replaced.
        name = item.find_element_by_css_selector(".user-details > a").text
        rep = item.find_element_by_css_selector("span.reputation-score").text

        driver.execute_script("arguments[0].click();",usableList[count])
        wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,"h1 > a.question-hyperlink")))

        title = get_additional_info()
        print(name,rep,title)

        driver.back()
        wait.until(EC.staleness_of(usableList[count]))

def get_additional_info():
    title = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,"h1 > a.question-hyperlink"))).text
    return title

if __name__ == '__main__':
    driver = webdriver.Chrome()
    wait = WebDriverWait(driver,5)
    get_names()

Tags: python, python-3.x, selenium, selenium-webdriver, web-scraping

Solution


Keeping broadly with your design pattern: don't work with item. Use count to index into a list of elements grabbed fresh from the current page, e.g.

driver.find_elements_by_css_selector(".user-details > a")[count].text

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

lead_url = 'https://stackoverflow.com/questions/tagged/web-scraping'

def get_names():
    driver.get(lead_url)
    for count, item in enumerate(wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,".summary")))):
        usableList = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,".summary .question-hyperlink")))

        # Re-locate the elements on every iteration and index with count,
        # so the references always belong to the current DOM.
        name = driver.find_elements_by_css_selector(".user-details > a")[count].text
        rep = driver.find_elements_by_css_selector("span.reputation-score")[count].text

        driver.execute_script("arguments[0].click();",usableList[count])
        wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,"h1 > a.question-hyperlink")))

        title = get_additional_info()
        print(name,rep,title)

        driver.back()
        wait.until(EC.staleness_of(usableList[count]))

def get_additional_info():
    title = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,"h1 > a.question-hyperlink"))).text
    return title

if __name__ == '__main__':
    driver = webdriver.Chrome()
    wait = WebDriverWait(driver,5)
    get_names()
