Clicking a "download csv" button with Selenium and Beautiful Soup

Problem description

I am trying to download csv files from this site: https://invasions.si.edu/nbicdb/arrivals?state=AL&submit=Search+database&begin=2000-01-01&end=2020-11-11&type=General+Cargo&bwms=any

To do this, I need to click the CSV button, which downloads the CSV file. However, I need to do this for several links, which is why I want to use Selenium to automate the task of clicking the link.

The code I currently have runs, but it doesn't actually download the csv file to the specified folder (or anywhere else).

Here is the code I have so far:

import selenium
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
import time

options = webdriver.ChromeOptions() 
options.add_argument("download.default_directory=folder") # Set the download Path
driver = webdriver.Chrome(options=options)

url = 'https://invasions.si.edu/nbicdb/arrivals?state=AL&submit=Search+database&begin=2000-01-01&end=2020-11-11&type=General+Cargo&bwms=any'

driver.get(url)

python_button = driver.find_element_by_xpath('//*[contains(concat( " ", @class, " " ), concat( " ", "csvbutton", " " ))]')
python_button.click()

I would appreciate any help! Thanks

Tags: python, selenium, csv, selenium-webdriver, web-scraping

Solution

You can solve your problem as follows:

import requests
from bs4 import BeautifulSoup

# url of initial page with data
url = 'https://invasions.si.edu/nbicdb/arrivals?state=AL&submit=Search+database&begin=2000-01-01&end=2020-11-11&type=General+Cargo&bwms=any'
# name of csv file where to store downloaded csv data
csv_file_name = '/Users/eilyasov/Documents/arrivals_data.csv'

# get html content of initial page
html_data = requests.get(url=url) \
                    .content
# generate BeautifulSoup object from the html content of the initial page
soup = BeautifulSoup(markup=html_data, features='html.parser')
# extract url extension of downloadable csv file
csv_url_extension = soup.find(name='a', attrs={'class': 'csvbutton'}) \
                        .get(key='href')
# construct url of downloadable csv file
csv_url = 'https://invasions.si.edu' + csv_url_extension
# get the content of the downloadable csv file and save it to disk
response = requests.get(url=csv_url)
if response.status_code == 200:
    with open(csv_file_name, 'wb') as file:
        file.write(response.content)
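Since you mentioned needing to do this for several links, the same approach can be wrapped in a small helper. A minimal sketch, assuming the other pages differ only in the `state` query parameter (the `download_state_csv` name and the state codes in the usage comment are illustrative, not from the site's documentation):

```python
import requests
from bs4 import BeautifulSoup

BASE_URL = ('https://invasions.si.edu/nbicdb/arrivals?state={state}'
            '&submit=Search+database&begin=2000-01-01&end=2020-11-11'
            '&type=General+Cargo&bwms=any')

def download_state_csv(state, csv_file_name):
    """Fetch the arrivals page for one state and save its CSV, if present."""
    html_data = requests.get(url=BASE_URL.format(state=state)).content
    soup = BeautifulSoup(markup=html_data, features='html.parser')
    link = soup.find(name='a', attrs={'class': 'csvbutton'})
    if link is None:  # no CSV button on this page
        return False
    csv_url = 'https://invasions.si.edu' + link.get(key='href')
    response = requests.get(url=csv_url)
    if response.status_code != 200:
        return False
    with open(csv_file_name, 'wb') as file:
        file.write(response.content)
    return True

# usage, one file per state code:
# for state in ['AL', 'CA', 'FL']:
#     download_state_csv(state, f'arrivals_{state}.csv')
```

Checking `link is None` matters here: if a page has no csvbutton link, `soup.find` returns `None` and calling `.get` on it would raise an `AttributeError` mid-loop.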
