python - How do I scrape multiple web pages starting from a single page with Selenium?
Question
Recently I've been trying to pull a large set of prices off a website by starting from one page, with each item linked off that starting page. I want to run a script that clicks an item's box, scrapes that item's price and description, then returns to the starting page and continues the loop. However, I run into an obvious problem after scraping the first item: once I return to the starting page, the container is no longer defined, so I get a stale element error that breaks the loop and keeps me from getting the rest of the items. Here is the sample code I'm using, which I hoped would scrape the items one after another.
driver = webdriver.Chrome(r'C:\Users\Hank\Desktop\chromedriver_win32\chromedriver.exe')
driver.get('https://steamcommunity.com/market/search?q=&category_440_Collection%5B%5D=any&category_440_Type%5B%5D=tag_misc&category_440_Quality%5B%5D=tag_rarity4&appid=440#p1_price_asc')

import time
time.sleep(5)

from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium.webdriver.support.expected_conditions import presence_of_element_located
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import StaleElementReferenceException

action = ActionChains(driver)
next_button = wait(driver, 10).until(EC.element_to_be_clickable((By.ID, 'searchResults_btn_next')))

def prices_and_effects():
    action = ActionChains(driver)
    imgs = wait(driver, 5).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, 'img.market_listing_item_img.economy_item_hoverable')))
    for img in imgs:
        ActionChains(driver).move_to_element(img).perform()
        print([my_element.text for my_element in wait(driver, 10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "div.item_desc_description div.item_desc_descriptors#hover_item_descriptors div.descriptor")))])
    prices = driver.find_elements(By.CSS_SELECTOR, 'span.market_listing_price.market_listing_price_with_fee')
    for price in prices:
        print(price.text)

def unusuals():
    unusuals = wait(driver, 5).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '.market_listing_row.market_recent_listing_row.market_listing_searchresult')))
    for unusual in unusuals:
        unusual.click()
        time.sleep(2)
        next_button = wait(driver, 10).until(EC.element_to_be_clickable((By.ID, 'searchResults_btn_next')))
        next_button.click()
        time.sleep(2)
        back_button = wait(driver, 10).until(EC.element_to_be_clickable((By.ID, 'searchResults_btn_prev')))
        back_button.click()
        time.sleep(2)
        prices_and_effects()
        ref_val = wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_start'))).text
        while next_button.get_attribute('class') == 'pagebtn':
            next_button.click()
            wait(driver, 10).until(lambda driver: wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_start'))).text != ref_val)
            prices_and_effects()
            ref_val = wait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResults_start'))).text
            time.sleep(2)
        driver.execute_script("window.history.go(-1)")
        time.sleep(2)
        unusuals = wait(driver, 5).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '.market_listing_row.market_recent_listing_row.market_listing_searchresult')))

unusuals()
However, after successfully scraping the first item, it returns to the page and throws a stale element error. The error makes sense to me, but is there any way to work around it so I can keep the functions and still use the loop?
Solution
Selenium is overkill for this. You can imitate the HTTP GET requests the browser itself makes to the same API when it renders the page. Note that you should not make more than 100,000 requests to the Steam API per day. In addition, if requests come too frequently, the Steam servers will infer this and stop responding until some timeout expires, even if you haven't hit the 100,000-requests-per-day limit. That's why I've added some time.sleeps after each request, for good measure.
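The fixed time.sleep(1) after each request can also be factored into a small helper that enforces a minimum interval between calls, whatever function it wraps. A minimal sketch; the one-second interval mirrors the sleeps used here, and `polite_get` is a hypothetical name (stubbed so it runs without network access):

```python
import time
from functools import wraps

def throttle(min_interval):
    """Decorator: enforce at least `min_interval` seconds between calls."""
    def decorator(func):
        last_call = [0.0]  # mutable cell so the wrapper can update it
        @wraps(func)
        def wrapper(*args, **kwargs):
            elapsed = time.monotonic() - last_call[0]
            if elapsed < min_interval:
                time.sleep(min_interval - elapsed)  # wait out the remainder
            last_call[0] = time.monotonic()
            return func(*args, **kwargs)
        return wrapper
    return decorator

@throttle(1.0)
def polite_get(url):
    # In a real script this body would be: return requests.get(url)
    return url
```

Calling `polite_get` in a tight loop then spaces the requests at least one second apart automatically, instead of scattering sleeps through the code.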
First, you make a request to the market listing page (the page that shows all the items). Then, for each item in the list of results, we extract the item's name, make a request to that item's overview page, and extract the item's item_id from the HTML with a regular expression. We then make one more request to https://steamcommunity.com/market/itemordershistogram to get that item's latest price information.
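The item_id appears in an inline Market_LoadOrderSpread(...) call in the overview page's HTML, so extracting it is a one-line regex. A quick illustration (the sample HTML fragment below is made up for demonstration):

```python
import re

# Same pattern as in the full script below
item_id_pattern = r"Market_LoadOrderSpread\( (?P<item_id>\d+) \)"

# Illustrative fragment of the inline script on an item's overview page
sample_html = "<script>Market_LoadOrderSpread( 336896 );</script>"

match = re.search(item_id_pattern, sample_html)
print(match.group("item_id"))  # -> 336896
```

That captured value is what gets passed as the item_nameid query-string parameter in the final request.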
Feel free to play around with the start and count query-string parameters in the params dictionary. Right now it only prints the information of the first ten items:
def main():
    import requests
    from bs4 import BeautifulSoup
    import re
    import time

    url = "https://steamcommunity.com/market/search/render/"
    params = {
        "query": "",
        "start": "0",
        "count": "10",
        "search_descriptions": "0",
        "sort_column": "price",
        "sort_dir": "asc",
        "appid": "440",
        "category_440_Collection[]": "any",
        "category_440_Type[]": "tag_misc",
        "category_440_Quality[]": "tag_rarity4"
    }

    response = requests.get(url, params=params)
    response.raise_for_status()
    time.sleep(1)

    item_id_pattern = r"Market_LoadOrderSpread\( (?P<item_id>\d+) \)"

    soup = BeautifulSoup(response.json()["results_html"], "html.parser")

    for result in soup.select("a.market_listing_row_link"):
        url = result["href"]
        product_name = result.select_one("div")["data-hash-name"]

        try:
            response = requests.get(url)
            response.raise_for_status()
            time.sleep(1)
            item_id_match = re.search(item_id_pattern, response.text)
            assert item_id_match is not None
        except Exception:
            print(f"Skipping {product_name}")
            continue

        url = "https://steamcommunity.com/market/itemordershistogram"
        params = {
            "country": "DE",
            "language": "english",
            "currency": "1",
            "item_nameid": item_id_match.group("item_id"),
            "two_factor": "0"
        }

        response = requests.get(url, params=params)
        response.raise_for_status()
        time.sleep(1)

        data = response.json()

        highest_buy_order = float(data["highest_buy_order"]) / 100.0

        print(f"The current highest buy order for \"{product_name}\" is ${highest_buy_order}")

    return 0

if __name__ == "__main__":
    import sys
    sys.exit(main())
Output:
The current highest buy order for "Unusual Cadaver's Cranium" is $12.16
The current highest buy order for "Unusual Backbreaker's Skullcracker" is $13.85
The current highest buy order for "Unusual Hard Counter" is $13.04
The current highest buy order for "Unusual Spiky Viking" is $14.26
The current highest buy order for "Unusual Carouser's Capotain" is $12.72
The current highest buy order for "Unusual Cyborg Stunt Helmet" is $12.89
The current highest buy order for "Unusual Stately Steel Toe" is $12.67
The current highest buy order for "Unusual Bloke's Bucket Hat" is $12.71
The current highest buy order for "Unusual Pugilist's Protector" is $12.94
The current highest buy order for "Unusual Shooter's Sola Topi" is $13.25