Unsuccessful web page scraping with selenium and scrapy

Problem description

I am trying to scrape a page (hereafter, the main page) using selenium + scrapy.

All content on that page is loaded with JavaScript as you scroll down. In the parse method I scrape each particular product page (the a.product-list__item.normal.size-normal links from the main page). I found a scroll-down solution for the webdriver here, but it doesn't seem to work: after calling the ScrollUntilLoaded method (in start_requests), only 29 URL tags show up. All product pages are also handled by the webdriver, because they too are loaded by JavaScript (parse method).

But that is not the only problem. Out of those 29 pages, data is scraped from only 24. So I added a wait.until for the product's image before extracting data from the page, but that didn't help either.

What could be the reason for this behavior? Where is the problem, in Selenium or in the website itself?

import time
import scrapy
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

class SilpoSpider(scrapy.Spider):
    name = 'SilpoSpider'

    def __init__(self):
        self.driver = webdriver.Chrome()
        self.wait = WebDriverWait(self.driver, 10)

    def ScrollUntilLoaded(self):
        """scroll webdriver`s content (web page) to the bottom
        the purpose of this method is to load all content that loads with javascript"""
        check_height = self.driver.execute_script("return document.body.scrollHeight;")
        while True:
            self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            try:
                self.wait.until(lambda driver: self.driver.execute_script("return document.body.scrollHeight;")  > check_height)
                check_height = self.driver.execute_script("return document.body.scrollHeight;") 
            except TimeoutException:
                break

    def start_requests(self):
        # load all content from the page with references to all products
        self.main_url = 'https://silpo.ua/offers'
        self.driver.get(self.main_url)
        self.ScrollUntilLoaded()
        # get all URLs to all particular products pages
        urls = [ref.get_attribute('href') \
            for ref in self.driver.find_elements_by_css_selector('a.product-list__item.normal.size-normal')]
        # len(urls) == 29
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

        self.driver.quit()

    def parse(self, response):
        self.driver.get(response.url)
        self.wait.until(
            EC.presence_of_element_located((By.CSS_SELECTOR, ".image-holder img"))
        )
        yield {"image": self.driver.find_element_by_css_selector(".image-holder img").get_attribute('src'),
            "name": self.driver.find_element_by_css_selector('h1.heading3.product-preview__title span').text,
            "banknotes": int(self.driver.find_element_by_css_selector('.product-price__integer').text),
            "coins": int(self.driver.find_element_by_css_selector('.product-price__fraction').text),
            "old_price": float(self.driver.find_element_by_css_selector('.product-price__old').text),
            "market":"silpo"
            }

Tags: python, selenium, web-scraping, scrapy

Solution


Get rid of your existing ScrollUntilLoaded() method entirely and try the following one in its place. It turns out the method above does not actually scroll at all. It would also help to give the page more time to load.

def ScrollUntilLoaded(self):
    while True:
        footer = self.wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "h4.footer__site-map-heading")))
        current_len = len(self.wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "a.product-list__item"))))
        try:
            self.driver.execute_script("arguments[0].scrollIntoView();", footer)
            self.wait.until(lambda driver: len(self.driver.find_elements_by_css_selector("a.product-list__item")) > current_len)
        except TimeoutException:
            break
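
On newer Selenium 4 releases the find_elements_by_css_selector helper used above is no longer available, and the answer also suggests giving the page more time to load. Below is a minimal sketch of both adjustments, assuming the rest of the spider stays unchanged; the scroll_wait name and the 30-second timeout are my own assumptions, not part of the original answer.

import scrapy
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

class SilpoSpider(scrapy.Spider):
    name = 'SilpoSpider'

    def __init__(self):
        self.driver = webdriver.Chrome()
        self.wait = WebDriverWait(self.driver, 10)         # short wait for product pages
        self.scroll_wait = WebDriverWait(self.driver, 30)  # longer wait for lazy loading (assumed value)

    def ScrollUntilLoaded(self):
        while True:
            # the footer sits below the product grid, so scrolling it into view
            # triggers the lazy loader to fetch the next batch of products
            footer = self.scroll_wait.until(
                EC.visibility_of_element_located((By.CSS_SELECTOR, "h4.footer__site-map-heading")))
            current_len = len(self.driver.find_elements(By.CSS_SELECTOR, "a.product-list__item"))
            try:
                self.driver.execute_script("arguments[0].scrollIntoView();", footer)
                # wait until more product links are present than before the scroll
                self.scroll_wait.until(
                    lambda driver: len(driver.find_elements(By.CSS_SELECTOR, "a.product-list__item")) > current_len)
            except TimeoutException:
                # no new products appeared within the timeout: assume the page is fully loaded
                break

Keeping a separate, longer wait just for the lazy-loading check lets the per-product waits in parse stay short.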
