Scraping h3 from a div using Python

Problem description

Using Python 3.6, I want to scrape the H3 titles inside the DIVs on this page:

https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1

Note that the page number changes, incrementing by 1.

I'm struggling to return, or even identify, the titles.

from requests import get
from bs4 import BeautifulSoup

url = 'https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1'
response = get(url)

html_soup = BeautifulSoup(response.text, 'lxml')

# Look for each rental card in the returned HTML
movie_containers = html_soup.find_all('div', class_='card card--rentals')
print(type(movie_containers))
print(len(movie_containers))

I've also tried looping through them:

# Loop over each card's content div and print the title inside it
for div in html_soup.select("div.card__content"):
    print(div.select_one("h3.card__title").text.strip())

Any help would be great.

Thanks,

I'm expecting the title of every film on each page as a result, including the link to the film, e.g. https://player.bfi.org.uk/rentals/film/watch-akenfield-1975-online

Tags: python, html, web-scraping, beautifulsoup, scrape

Solution


The page loads its content via XHR from another URL, so a plain GET of the search page misses it. You can mimic the XHR POST request the page makes and alter the JSON that is posted. If you change size, you get more results.

import requests

# Request body copied from the XHR POST the search page makes; "size" controls how many results come back
data = {"size":1480,"from":0,"sort":"sort_title","aggregations":{"genre":{"terms":{"field":"genre.raw","size":10}},"captions":{"terms":{"field":"captions"}},"decade":{"terms":{"field":"decade.raw","order":{"_term":"asc"},"size":20}},"bbfc":{"terms":{"field":"bbfc_rating","size":10}},"english":{"terms":{"field":"english"}},"audio_desc":{"terms":{"field":"audio_desc"}},"colour":{"terms":{"field":"colour"}},"mono":{"terms":{"field":"mono"}},"fiction":{"terms":{"field":"fiction"}}},"min_score":0.5,"query":{"bool":{"must":{"match_all":{}},"must_not":[],"should":[],"filter":{"term":{"pillar.raw":"rentals"}}}}}
r = requests.post('https://search-es.player.bfi.org.uk/prod-films/_search', json=data).json()

# Each hit's _source holds the film title and its relative URL
for film in r['hits']['hits']:
    print(film['_source']['title'], 'https://player.bfi.org.uk' + film['_source']['url'])
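If you would rather page through in smaller fixed-size batches than make one very large request, a minimal sketch follows. It assumes the endpoint honours the from/size paging used in the request above; the batch size of 500 is arbitrary, and the aggregations block is omitted on the assumption that it isn't needed just to list titles and links.

import requests

search_url = 'https://search-es.player.bfi.org.uk/prod-films/_search'
batch_size = 500   # arbitrary page size (assumption)
films = []

# Same query as above, paged via from/size; aggregations omitted on the
# assumption they are not required just to list titles and links.
data = {"size": batch_size, "from": 0, "sort": "sort_title", "min_score": 0.5,
        "query": {"bool": {"must": {"match_all": {}}, "must_not": [], "should": [],
                           "filter": {"term": {"pillar.raw": "rentals"}}}}}

with requests.Session() as s:
    while True:
        r = s.post(search_url, json=data).json()
        hits = r['hits']['hits']
        if not hits:
            break
        for film in hits:
            films.append((film['_source']['title'],
                          'https://player.bfi.org.uk' + film['_source']['url']))
        data['from'] += batch_size

print(len(films))

This avoids having to know the total count in advance, at the cost of a few more requests.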

The actual result count for rentals is in the JSON, at r['hits']['total'], so you can make an initial request starting with a size much higher than you expect, check whether another request is needed, and then collect any remainder by changing from and size to mop up anything outstanding.

import requests
import pandas as pd

initial_count = 10000  # deliberately higher than the expected number of rentals
results = []

def add_results(r):
    # Pull the title and full URL out of each hit in the response
    for film in r['hits']['hits']:
        results.append([film['_source']['title'], 'https://player.bfi.org.uk' + film['_source']['url']])

with requests.Session() as s:
    data = {"size": initial_count,"from":0,"sort":"sort_title","aggregations":{"genre":{"terms":{"field":"genre.raw","size":10}},"captions":{"terms":{"field":"captions"}},"decade":{"terms":{"field":"decade.raw","order":{"_term":"asc"},"size":20}},"bbfc":{"terms":{"field":"bbfc_rating","size":10}},"english":{"terms":{"field":"english"}},"audio_desc":{"terms":{"field":"audio_desc"}},"colour":{"terms":{"field":"colour"}},"mono":{"terms":{"field":"mono"}},"fiction":{"terms":{"field":"fiction"}}},"min_score":0.5,"query":{"bool":{"must":{"match_all":{}},"must_not":[],"should":[],"filter":{"term":{"pillar.raw":"rentals"}}}}}
    r = s.post('https://search-es.player.bfi.org.uk/prod-films/_search', json=data).json()
    total_results = int(r['hits']['total'])
    add_results(r)

    # If there are more results than the first request covered, fetch the remainder
    if total_results > initial_count:
        data['size'] = total_results - initial_count
        data['from'] = initial_count
        r = s.post('https://search-es.player.bfi.org.uk/prod-films/_search', json=data).json()
        add_results(r)

df = pd.DataFrame(results, columns=['Title', 'Link'])
print(df.head())
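If you want to keep the results rather than just print them, you could write the DataFrame out to a file. A minimal sketch, where the filename bfi_rentals.csv is only an example:

# Save the collected titles and links to disk; the filename is just an example.
df.to_csv('bfi_rentals.csv', index=False)
print(f'Saved {len(df)} films to bfi_rentals.csv')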
