Turn a table from Selenium into a pandas DataFrame?

Problem description

I am trying to scrape a table with 45 columns and 7 rows. The table is loaded via Ajax and I cannot access the API directly, so I need to use Selenium in Python. I am close to getting what I want, but I don't know how to turn my Selenium find_elements results into a pandas DataFrame. So far my code looks like this:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
import time

driver = webdriver.Chrome()
url = "http://www.hctiming.com/myphp/resources/login/browse_results.php?live_action=yes&smartphone_action=no" #a redirect to a login page occurs
driver.get(url)
driver.find_element(By.ID, "open").click()

user = driver.find_element(By.NAME, "username")
password = driver.find_element(By.NAME, "password")
user.clear()
user.send_keys("MyUserNameWhichIWillNotShare")
password.clear()
password.send_keys("myPasswordWhicI willNotShare")
driver.find_element(By.NAME, "submit").click()

try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.LINK_TEXT, "Results Services")) # I must first click in this line
    )
    element.click()

    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.LINK_TEXT, "View Live")) # Then I must click in this link. Now I have access to the result database
    )
    element.click()

except:
    driver.quit()

time.sleep(5) # I have set a 5-second sleep. There must be a better way to accomplish this; I just want to make sure the table is loaded before I try to scrape it

columns = len(driver.find_elements(By.XPATH, "/html/body/div[2]/div/form[3]/div[2]/div[1]/div/div/div/div[2]/div[4]/section[1]/div[2]/div/div/table/thead/tr[2]/th"))
rows = len(driver.find_elements(By.XPATH, "/html/body/div[2]/div/form[3]/div[2]/div[1]/div/div/div/div[2]/div[4]/section[1]/div[2]/div/div/table/tbody/tr"))
print(columns, rows)

The last line prints 45 and 7, so this part seems to work. However, I don't understand how to turn these elements into a DataFrame. Thank you.

Tags: python, pandas, selenium

Solution


It's hard to say without seeing the data structure, but if the table is simple, you can try parsing it directly with pandas read_html:

df = pd.read_html(driver.page_source)[0]
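
Note that read_html returns a list with one DataFrame per table element it finds, so if the page contains more than one table you may need to pick the right index. A small sketch (wrapping the source in StringIO, which recent pandas versions expect instead of a raw HTML string):

from io import StringIO

tables = pd.read_html(StringIO(driver.page_source))
print(len(tables))  # how many tables pandas found on the page
df = tables[0]      # pick the index that corresponds to your 45x7 table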

You can also build the DataFrame by iterating over all the table cells with a suitably parameterized XPath:

# collect the rows in a list and build the DataFrame once at the end
data = []
for i in range(rows):
    row = []
    for c in range(columns):
        cell = driver.find_element(By.XPATH, f"/html/body/div[2]/div/form[3]/div[2]/div[1]/div/div/div/div[2]/div[4]/section[1]/div[2]/div/div/table/tbody/tr[{i+1}]/td[{c+1}]")
        row.append(cell.text)  # take the cell's text, not the WebElement itself
    data.append(row)
df = pd.DataFrame(data)
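
If you also want meaningful column names, you can take them from the header cells your code already counts. This is a sketch, assuming the th texts in thead/tr[2] really are the column labels:

# assumption: the second thead row holds the column labels, as in the counting code above
header = driver.find_elements(By.XPATH, "/html/body/div[2]/div/form[3]/div[2]/div[1]/div/div/div/div[2]/div[4]/section[1]/div[2]/div/div/table/thead/tr[2]/th")
df.columns = [th.text for th in header]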
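
As an aside on the time.sleep(5) in the question: an explicit wait is usually more reliable, because it returns as soon as the rows are actually present and fails loudly after the timeout if they never appear. A minimal sketch reusing the question's row XPath:

# wait up to 15 seconds for at least one table row to be present
WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.XPATH, "/html/body/div[2]/div/form[3]/div[2]/div[1]/div/div/div/div[2]/div[4]/section[1]/div[2]/div/div/table/tbody/tr"))
)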
