Scraping a page with requests does not return all HTML tags

Problem description

I am trying to scrape this page to extract the details of every <li> tag inside <ol id="prices">. The problem is that the HTML that comes back contains some empty tags: specifically, inside each <li>, the contents of the <div class="shop cf"> tag are not returned. I am using requests and BeautifulSoup for this, as follows:

import requests
import time
from bs4 import BeautifulSoup

headers = {
    "Connection": "keep-alive",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36"
}

url = "https://www.skroutz.gr/s/11706397/Guy-Laroche-Linda-Red.html"
page = requests.get(url, headers=headers)

# I also tried the following two lines in order to "wait" for the page to
# load. This cannot work: time.sleep(seconds) runs first and passes None as
# the params argument, and requests never executes JavaScript anyway, so
# there is nothing to wait for.
#seconds = 10
#page = requests.get(url, time.sleep(seconds), headers=headers)

soup = BeautifulSoup(page.content, 'html.parser')

eshops_grid = soup.find("ol", id="prices")
eshops_product = eshops_grid.find_all("li", class_='cf card js-product-card')
for eshop in eshops_product:
    eshop_name = eshop.find("div", class_="shop-name").text
    print(eshop_name)  # I need to print the eshop_name for each eshop
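
Before trying another tool, it is worth confirming that the markup really is missing from the server response rather than being dropped by the parser. A minimal check, reusing the page object from the snippet above ('shop-name' is the class the code searches for):

# If this prints 0, the server response never contained the shop names,
# i.e. they are filled in later by JavaScript in the browser.
print(page.text.count('shop-name'))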

I need to do this with the requests library. Still, to rule the library out, I also tried selenium, and the same problem occurred.

from selenium import webdriver
from pyvirtualdisplay import Display
from bs4 import BeautifulSoup

# Start a virtual display so the browser runs without a visible window
print('- Opening a browser that is not visible')
display = Display(visible=0, size=(1920, 1080))
display.start()

driver = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver")

url = 'https://www.skroutz.gr/s/11706397/Guy-Laroche-Linda-Red.html?o=%CE%9C%CF%80%CE%BF%CF%85%CF%81%CE%BD%CE%BF%CF%8D%CE%B6%CE%B9%20Guy%20Laroche%20Linda%20Red'
driver.get(url)

page = driver.page_source
soup = BeautifulSoup(page, 'html.parser')

eshops_grid = soup.find("ol", id="prices")
eshops_product = eshops_grid.find_all("li", class_='cf card js-product-card')
for eshop in eshops_product:
    eshop_name = eshop.find("div", class_="shop-name").text
    print(eshop_name)  # I need to print the eshop_name for each eshop

driver.quit()
display.stop()

Is there a way to get the full contents of each <li> so that the eshop_name can be extracted and printed?
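
For completeness: requests cannot execute JavaScript, so content rendered client-side will never show up in page.content; with Selenium, the usual remedy is an explicit wait, so that page_source is only read after the target elements exist. A minimal sketch, assuming the div.shop-name elements from the code above do eventually appear in the rendered DOM:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver")
driver.get("https://www.skroutz.gr/s/11706397/Guy-Laroche-Linda-Red.html")

try:
    # Block for up to 15 seconds until at least one shop name is rendered
    WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "div.shop-name"))
    )
    for element in driver.find_elements(By.CSS_SELECTOR, "div.shop-name"):
        print(element.text)
finally:
    driver.quit()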

Tags: python, web-scraping, beautifulsoup, requests

Solution


This is what you want. The shop details never arrive in the static HTML: the page appears to fill in the <div class="shop cf"> blocks client-side with JavaScript, which is why both requests and a too-early page_source read see empty tags. Part of the data is embedded in the page's inline <script> blocks, though (e.g. SKR.page.first_shop_name), and that can be extracted directly:

import requests
from bs4 import BeautifulSoup as bs

headers = {
    'authority': 'www.skroutz.gr',
    'cache-control': 'max-age=0',
    'sec-ch-ua': '"Google Chrome"; v="83"',
    'sec-ch-ua-mobile': '?0',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'sec-fetch-site': 'none',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-user': '?1',
    'sec-fetch-dest': 'document',
    'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
    # Note: the cookie and if-none-match values below were copied from one
    # browser session; they are session-specific and may need refreshing.
    'cookie': '_helmet_couch=eyJzZXNzaW9uX2lkIjoiNjgzNzhmMmNmNjI5OTcxNjI5NzU2ZWNmMTM5MzE5MmIiLCJidWNrZXRfaWQiOiJmNTk1ZGRhYy00ZmVhLTQ5NmYtODNkNS00OWQzODgzMWFhYTAiLCJsYXN0X3NlZW4iOjE1OTEyNjgwNTUsInZvbCI6MSwiX2NzcmZfdG9rZW4iOiI1a3Yxb3FKTmhXTCs1YUxzdjYzRFk3TlNXeGs5TlhXYmZhM0UzSmtEL0NBPSJ9--22dfbfe582c0f3a7485e20d9d3932b32fbfb721b',
    'if-none-match': 'W/"e6fb8187391e99a90270c2351f9d17cd"',
}

params = (
    ('o', '\u039C\u03C0\u03BF\u03C5\u03C1\u03BD\u03BF\u03CD\u03B6\u03B9 Guy Laroche Linda Red'),
)

response = requests.get('https://www.skroutz.gr/s/11706397/Guy-Laroche-Linda-Red.html', headers=headers, params=params)

data = bs(response.text, 'lxml')
# The shop name is not in the HTML that BeautifulSoup sees; it is assigned
# inside an inline script. Take the sixth <script> tag and cut out the value
# assigned to SKR.page.first_shop_name. (The [5] index is brittle: it breaks
# if the page adds or reorders its script tags.)
s = data.find_all('script')[5].text.split('SKR.page.first_shop_name = ')[1].split(';')[0].replace('"', '')
print(s)

The output is:

Spitishop
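
The same idea can be made less fragile by searching the whole document for the SKR.page.first_shop_name assignment instead of hard-coding the script index. A sketch under the same assumption, namely that the assignment is present in the raw HTML; it still yields only the first shop, since the rest of the shop list is rendered client-side:

import re
import requests

url = 'https://www.skroutz.gr/s/11706397/Guy-Laroche-Linda-Red.html'
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36',
}

html = requests.get(url, headers=headers).text

# Search the inline scripts anywhere in the page rather than relying on
# the position of a particular <script> tag.
match = re.search(r'SKR\.page\.first_shop_name\s*=\s*"([^"]*)"', html)
if match:
    print(match.group(1))  # e.g. Spitishop
else:
    print('first_shop_name not found - the page layout may have changed')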
