Web scraping multiple pages when the URL changes by adding 'offset=[# here]'

Problem description

from bs4 import BeautifulSoup
import pandas as pd
import requests

r = requests.get('https://reelgood.com/source/netflix')
soup = BeautifulSoup(r.text, 'html.parser')

title = soup.find_all('tr', attrs={'class': 'cM'})  # one 'cM' row per title

records = []
for t in title:
    # walk across the row's <td> cells to pull out each column
    movie = t.find(attrs={'class': 'cI'}).text
    year = t.find(attrs={'class': 'cJ'}).findNext('td').text
    rating = t.find(attrs={'class': 'cJ'}).findNext('td').findNext('td').text
    score = t.find(attrs={'class': 'cJ'}).findNext('td').findNext('td').findNext('td').text
    rottenTomatoe = t.find(attrs={'class': 'cJ'}).findNext('td').findNext('td').findNext('td').findNext('td').text
    episodes = t.find(attrs={'class': 'c0'}).text[:3]
    records.append([movie, year, rating, score, rottenTomatoe, episodes])

df = pd.DataFrame(records, columns=['movie', 'year', 'rating', 'score', 'rottenTomatoe', 'episodes'])

The code above gives me 49 records, which is just the first page. I want to scrape all 43 pages. Each time you go to the next page for the next 50 videos, the URL gains an offset parameter: going from page one to page two adds "?offset=150", and the offset then increases by 100 with each further page. Here is what the URL looks like on the last page (you can see offset=4250): https://reelgood.com/source/netflix?offset=4250
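To make that pattern concrete, the offsets would run 150, 250, ..., 4250, so the 43 page URLs could be generated like this (just a sketch of the arithmetic described above; the exact range is taken from the numbers quoted here):

base = 'https://reelgood.com/source/netflix'
# page 1 has no offset; pages 2..43 use offset=150, 250, ..., 4250 (as described above)
page_urls = [base] + [f'{base}?offset={n}' for n in range(150, 4251, 100)]
print(len(page_urls))  # 43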

Any help on how to build a result set covering all of the pages would be much appreciated. Thank you.

Tags: python, web-scraping, beautifulsoup

Solution


I think the easiest approach is to grab the "more content" link, which carries class='eH'.

It is the only element on the page with that class value, and once you reach offset=4250 the link disappears.

So the loop would look something like this:

from bs4 import BeautifulSoup
import pandas as pd
import requests

records = []
keep_looping = True
url = "https://reelgood.com/source/netflix"
while keep_looping:
    r = requests.get(url)
    soup = BeautifulSoup(r.text, "html.parser")
    # grab the content of the current page and store it, exactly as in the single-page code above
    title = soup.find_all('tr', attrs={'class': 'cM'})
    for t in title:
        movie = t.find(attrs={'class': 'cI'}).text
        year = t.find(attrs={'class': 'cJ'}).findNext('td').text
        rating = t.find(attrs={'class': 'cJ'}).findNext('td').findNext('td').text
        score = t.find(attrs={'class': 'cJ'}).findNext('td').findNext('td').findNext('td').text
        rottenTomatoe = t.find(attrs={'class': 'cJ'}).findNext('td').findNext('td').findNext('td').findNext('td').text
        episodes = t.find(attrs={'class': 'c0'}).text[:3]
        records.append([movie, year, rating, score, rottenTomatoe, episodes])
    # find the link to the next page; if the tag does not exist, url_tag will be None,
    # and we tell the while-loop to stop by setting the keep_looping flag to False
    url_tag = soup.find('a', class_='eH')
    if not url_tag:
        keep_looping = False
    else:
        # the href is not an absolute URL but something like "/source/netflix?offset=150"
        url = "https://reelgood.com" + url_tag.get('href')

df = pd.DataFrame(records, columns=['movie', 'year', 'rating', 'score', 'rottenTomatoe', 'episodes'])
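Following the "more content" link like this, rather than hard-coding 43 pages or the offset values, means the loop stops on its own once the link disappears at offset=4250, and it keeps working even if the number of pages changes.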
