How to fix the "No tables found" error with Selenium

Problem description

I am trying to pull the data from a table on a web page into a pandas DataFrame. The site is https://nextgenstats.nfl.com/stats/passing/2019/1. I am using Selenium with the Chrome webdriver. I believe my problem is that I cannot identify the element ID of the table, and since I have no experience with HTML it is hard to troubleshoot.

I tried pandas' built-in read_html() function but got a "No tables found" error. I switched to Selenium with the Chrome webdriver and still got the same error. I also tried adding a delay to let the page load, but that did not seem to help.

import pandas as pd
from selenium import webdriver

driver = webdriver.Chrome()
# scrape webpage
driver.implicitly_wait(10)
driver.get('https://nextgenstats.nfl.com/stats/passing/2019/1')

html = driver.page_source
tables = pd.read_html(html)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-29-e2e5cae6f0b5> in <module>
      1 url = f'https://nextgenstats.nfl.com/stats/passing/2019/1'
----> 2 tats_list = pd.read_html(url)

C:\Python37-32\lib\site-packages\pandas\io\html.py in read_html(io, match, flavor, header, index_col, skiprows, attrs, parse_dates, thousands, encoding, decimal, converters, na_values, keep_default_na, displayed_only)
   1103         na_values=na_values,
   1104         keep_default_na=keep_default_na,
-> 1105         displayed_only=displayed_only,
   1106     )

C:\Python37-32\lib\site-packages\pandas\io\html.py in _parse(flavor, io, match, attrs, encoding, displayed_only, **kwargs)
    910             break
    911     else:
--> 912         raise_with_traceback(retained)
    913 
    914     ret = []

C:\Python37-32\lib\site-packages\pandas\compat\__init__.py in raise_with_traceback(exc, traceback)
     45     if traceback == Ellipsis:
     46         _, _, traceback = sys.exc_info()
---> 47     raise exc.with_traceback(traceback)
     48 
     49 

ValueError: No tables found
import pandas as pd
from selenium import webdriver

driver = webdriver.Chrome()
# scrape webpage
driver.implicitly_wait(10)

html = driver.page_source
# find table by using suspected table id
tables = driver.find_element_by_id("gs-data-table")

---------------------------------------------------------------------------
NoSuchElementException                    Traceback (most recent call last)
<ipython-input-31-5596e919e5ff> in <module>
      5 html = driver.page_source
      6 
----> 7 tables = driver.find_element_by_id("gs-data-table")

C:\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py in find_element_by_id(self, id_)
    358             element = driver.find_element_by_id('foo')
    359         """
--> 360         return self.find_element(by=By.ID, value=id_)
    361 
    362     def find_elements_by_id(self, id_):

C:\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py in find_element(self, by, value)
    976         return self.execute(Command.FIND_ELEMENT, {
    977             'using': by,
--> 978             'value': value})['value']
    979 
    980     def find_elements(self, by=By.ID, value=None):

C:\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py in execute(self, driver_command, params)
    319         response = self.command_executor.execute(driver_command, params)
    320         if response:
--> 321             self.error_handler.check_response(response)
    322             response['value'] = self._unwrap_value(
    323                 response.get('value', None))

C:\Python37-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py in check_response(self, response)
    240                 alert_text = value['alert'].get('text')
    241             raise exception_class(message, screen, stacktrace, alert_text)
--> 242         raise exception_class(message, screen, stacktrace)
    243 
    244     def _value_or_default(self, obj, key, default):

NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="gs-data-table"]"}
  (Session info: chrome=77.0.3865.120)

I expect the output to be a DataFrame of the table on the web page, but I cannot get past this error. Any help with an alternative approach, or with identifying the table's element ID, would be great. Thank you.

Tags: python, pandas, selenium, selenium-chromedriver

Solution


The table on this page is rendered by JavaScript and does not have the id gs-data-table; locate its body through its wrapper class with an XPath instead. Try the following solution:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome(r'C:\chromedriver.exe')
driver.maximize_window()
driver.get("https://nextgenstats.nfl.com/stats/passing/2019/1")
driver.implicitly_wait(10)

# The table has no usable id attribute; locate its body by class instead
table_body = driver.find_element_by_xpath("//div[@class='el-table__body-wrapper']//tbody")

rows = table_body.find_elements(By.TAG_NAME, "tr")  # all rows in the table
for row in rows:
    # Get the second column of each row (indexing starts from 0, so 1 is column 2)
    col = row.find_elements(By.TAG_NAME, "td")[1]
    print(col.text)  # print the text of the cell
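
If the end goal is still a pandas DataFrame, a variation on the same idea is to wait explicitly for the table body to render and then pass the page source to read_html. This is a minimal sketch, assuming the same el-table__body-wrapper class as the XPath above and a 20-second timeout; the page may split the header and body into separate <table> elements, so inspect the list that read_html returns.

import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(r'C:\chromedriver.exe')
driver.get("https://nextgenstats.nfl.com/stats/passing/2019/1")

# Wait until at least one row of the stats table is present in the DOM.
# implicitly_wait() only applies to find_element calls, not to page_source.
WebDriverWait(driver, 20).until(
    EC.presence_of_element_located(
        (By.CSS_SELECTOR, "div.el-table__body-wrapper tbody tr")
    )
)

# The JavaScript-rendered table is now in the page source, so read_html can find it.
tables = pd.read_html(driver.page_source)
for df in tables:
    print(df.shape)

driver.quit()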
