Python - Selenium web scraping, but it only repeats the first item the correct number of times

Problem description

For quite a while now I have been trying to parse a list out of TradingView and have tried everything I can think of. This is my code so far:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time

WEBDRIVER_PATH = 'chromedriver.exe'

driver = webdriver.Chrome(WEBDRIVER_PATH)
URL = 'https://tradingview.com/markets/stocks-usa/market-movers-most-volatile/'
driver.get(URL)
print(driver.title)

# wait for the data to load
time.sleep(5)

stocks = []
for result in driver.find_elements_by_xpath('//*[@id="js-screener-container"]/div/table/tbody/tr'):
    stock = result.find_element_by_xpath('//*[@id="js-screener-container"]/div/table/tbody/tr/td/div/div/span[2]').text
    stocks.append({'stock': stock})

print(stocks)

The problem is that it only returns the first item in the list, repeated the correct number of times. I have seen many cases where this is solved by adding a dot "." to this part:

stock = result.find_element_by_xpath('//*[@id="js-screener-container"]/div/table/tbody/tr/td/div/div/span[2]').text

So that it looks like this:

stock = result.find_element_by_xpath('.//*[@id="js-screener-container"]/div/table/tbody/tr/td/div/div/span[2]').text

But that breaks my code and gives me this error:

    stock = result.find_element_by_xpath('.//*[@id="js-screener-container"]/div/table/tbody/tr/td/div/div/span[2]').text
  File "C:\Python\lib\site-packages\selenium\webdriver\remote\webelement.py", line 351, in find_element_by_xpath
    return self.find_element(by=By.XPATH, value=xpath)
  File "C:\Python\lib\site-packages\selenium\webdriver\remote\webelement.py", line 659, in find_element
    {"using": by, "value": value})['value']
  File "C:\Python\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
    return self._parent.execute(command, params)
  File "C:\Python\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "C:\Python\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":".//*[@id="js-screener-container"]/div/table/tbody/tr/td/div/div/span[2]"}
  (Session info: chrome=91.0.4472.77)


Process finished with exit code 1

Can someone help me get any further?

Kind regards

Tags: python, python-3.x, selenium, selenium-webdriver, web-scraping

Solution

An XPath that starts with // is evaluated from the document root even when you call find_element_by_xpath on result, so every iteration of the loop matches the same first row. Prefixing the full path with a dot does not help either, because the #js-screener-container element is not a descendant of the row, which is why you get NoSuchElementException. The inner XPath has to be relative to the row and contain only the part of the path below the <tr>. Try this:


for result in driver.find_elements_by_xpath('//*[@id="js-screener-container"]/div/table/tbody/tr'):
    stock = result.find_element_by_xpath('.//td/div/div/span[2]').text
    stocks.append({'stock': stock})

print(stocks)
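
As a side note, newer Selenium 4 releases have removed the find_element_by_xpath / find_elements_by_xpath helpers, and an explicit wait is more robust than a fixed time.sleep(5). The sketch below is an untested adaptation of the same idea for Selenium 4, assuming the page still uses the js-screener-container table from the question:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # recent Selenium versions can locate chromedriver on their own

URL = 'https://tradingview.com/markets/stocks-usa/market-movers-most-volatile/'
driver.get(URL)

# wait until the screener rows are actually in the DOM instead of sleeping a fixed time
rows = WebDriverWait(driver, 15).until(
    EC.presence_of_all_elements_located(
        (By.XPATH, '//*[@id="js-screener-container"]/div/table/tbody/tr')
    )
)

stocks = []
for row in rows:
    # relative XPath: search only inside the current row
    stock = row.find_element(By.XPATH, './/td/div/div/span[2]').text
    stocks.append({'stock': stock})

print(stocks)
driver.quit()

The explicit wait returns as soon as the rows appear and raises a TimeoutException after 15 seconds if they never do, which removes the guesswork of a hard-coded sleep.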
