Webscraping - Selenium BeautifulSoup - Looping Through Pagination

Problem Description

I wanted to play around with Selenium a little (just to learn something new; I've asked a few questions about BeautifulSoup before and got some great advice).

Anyway, I'm simply trying to loop through the pages, grab `div.details`, and print how many it finds (as an initial test). The problem is that it seems to just sit on the first page and reload it; it gets stuck in the loop.

How would I change this so that it loops through page 1, then page 2, and then stops?

from bs4 import BeautifulSoup
import requests
import csv
import pandas
from pandas import DataFrame
import re
import os
import locale
os.environ["PYTHONIOENCODING"] = "utf-8"


from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

page = 1

driver = webdriver.Chrome(ChromeDriverManager().install())
url="https://www.gunstar.co.uk/view-trader/global-rifle-snipersystems/58782?page={page}"





#grab all links which contain the href specifed

with requests.Session() as session:
  while True:
    res=session.get(url.format(page=page))
    soup=BeautifulSoup(res.content,'html.parser')
    gun_details = soup.select('div.details')
    if soup.select("nav_next") is None:
        break
    page += 1
    driver.get(url) #navigate to the page
print(len(gun_details))

Tags: python, web-scraping, beautifulsoup

Solution


You don't need Selenium for the navigation; you can do it with requests alone.

from bs4 import BeautifulSoup
import requests
import os
os.environ["PYTHONIOENCODING"] = "utf-8"

page = 1
url = "https://www.gunstar.co.uk/view-trader/global-rifle-snipersystems/58782?page={}"

with requests.Session() as session:
  while True:
    print(url.format(page))
    res = session.get(url.format(page))
    soup = BeautifulSoup(res.content, 'html.parser')
    gun_details = soup.select('div.details')
    print(len(gun_details))
    # ".nav_next" (note the dot) selects elements by class; select()
    # returns a list, so check its length to detect the last page
    if len(soup.select(".nav_next")) == 0:
        break
    page += 1

I've added print statements; the console output is shown below.

https://www.gunstar.co.uk/view-trader/global-rifle-snipersystems/58782?page=1
10
https://www.gunstar.co.uk/view-trader/global-rifle-snipersystems/58782?page=2
4
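As an aside on why the original loop never exited: `soup.select()` always returns a list (possibly empty), never `None`, so the check `if soup.select("nav_next") is None:` can never be true. On top of that, `"nav_next"` without a leading dot is a tag-name selector, not a class selector. A minimal sketch illustrating both points, using a made-up HTML fragment:

```python
from bs4 import BeautifulSoup

# Hypothetical page fragment with a "Next" link marked by class="nav_next"
html = '<a class="nav_next" href="?page=2">Next</a>'
soup = BeautifulSoup(html, "html.parser")

# "nav_next" (no dot) is a tag-name selector: it looks for a
# <nav_next> element, which does not exist here
print(soup.select("nav_next"))          # []

# select() returns an empty list when nothing matches, never None,
# so an `is None` check can never stop the loop
print(soup.select("nav_next") is None)  # False

# ".nav_next" (with the dot) matches by class, so the break condition
# should test the length of this list instead
print(len(soup.select(".nav_next")))    # 1
```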
