Scrapy POST Request for a "Load More" Button

Problem Description

I'm trying to scrape product names and prices.

There is a "load more" button at the bottom of the page. I experimented with the form data in Postman, and the 'productBeginIndex' and 'resultsPerPage' fields seem to control how many products are displayed.

However, I'm not sure what's wrong with my code: no matter how I adjust the values, it still returns 24 products. I also tried FormRequest.from_response(), but it still only returns 24 products.

import scrapy


class PriceSpider(scrapy.Spider):
    name = "products"
    def parse(self, response):
        return [scrapy.FormRequest(url="https://www.fairprice.com.sg/baby-child",
                                   method='POST',
                                   formdata= {'productBeginIndex': '1', 'resultsPerPage': '1', },
                                   callback=self.logged_in)]

    def logged_in(self, response):
        # here you would extract links to follow and return Requests for
        # each of them, with another callback
        name = response.css("img::attr(title)").extract()
        price = response.css(".pdt_C_price::text").extract()

        for item in zip(name, price):
            scraped_info = {
                "title": item[0],
                "value": item[1],
            }
            yield scraped_info

Can someone point out what I'm missing? And how can I implement a loop to extract all the products in this category?

Thank you very much!

Tags: ajax, post, pagination, scrapy

Solution


You should POST to /ProductListingView rather than /baby-child (a GET request works as well).
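To illustrate the GET variant just mentioned, here is a minimal sketch that sends the same kind of parameters as a query string. The spider name, the reduced parameter set and the callback name are my own assumptions (the values are taken from the POST form data in the full answer below), so treat it as a starting point rather than tested code.

import scrapy
from urllib.parse import urlencode

class ListingGetSpider(scrapy.Spider):
  # Hypothetical spider demonstrating the GET variant; the parameter values are
  # copied from the POST form data in the full answer below and may need
  # adjusting for other categories.
  name = "ListingGetSpider"

  def start_requests(self):
    params = {
      "categoryId": "3074457345616686371",
      "storeId": "10151",
      "catalogId": "10201",
      "langId": "-1",
      "beginIndex": "0",
      "resultsPerPage": "24",
    }
    url = "https://www.fairprice.com.sg/ProductListingView?" + urlencode(params)
    yield scrapy.Request(url, callback=self.parse_listing)

  def parse_listing(self, response):
    # Same selectors as in the answer's spider below.
    for title, price in zip(response.css("img::attr(title)").extract(),
                            response.css(".pdt_C_price::text").extract()):
      yield {"title": title.strip(), "value": price.strip()}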

To scrape all the items, modify the beginIndex parameter in a loop and yield a new request for each page, as the spider below does. (By the way, modifying productBeginIndex has no effect.)

We don't know the total number of products, so a safe approach is to crawl one batch of products at a time. By modifying custom_settings you can easily control where to start and how many products to scrape: for example, with BEGIN_PAGE = 0, END_PAGE = 2 and RESULTS_PER_PAGE = 2, the spider below requests beginIndex 0 and 2, i.e. the first four products in two batches.

As for writing the output to a CSV file, refer to "Scrapy pipeline to export csv file in the right format" (the link is in the code comment below).

For convenience, I've added a PriceItem class below, which you can move into items.py. With the command scrapy crawl PriceSpider -t csv -o test.csv you will get a test.csv file. Alternatively, you can try CsvItemExporter.
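Since the answer only links out for the CSV part, here is a minimal pipeline sketch using Scrapy's CsvItemExporter. The pipeline class name, output file name and field order are my own choices, not the linked answer's exact code.

from scrapy.exporters import CsvItemExporter

class CsvExportPipeline:
  # Writes every scraped item to products.csv; the exporter expects a binary file.
  def open_spider(self, spider):
    self.file = open("products.csv", "wb")
    self.exporter = CsvItemExporter(self.file, fields_to_export=["title", "value"])
    self.exporter.start_exporting()

  def close_spider(self, spider):
    self.exporter.finish_exporting()
    self.file.close()

  def process_item(self, item, spider):
    self.exporter.export_item(item)
    return item

Enable it with something like ITEM_PIPELINES = {"myproject.pipelines.CsvExportPipeline": 300} in settings.py (the module path is hypothetical; use wherever you place the class).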

# OUTPUTS
# 2018-08-15 16:00:08 [PriceSpider] INFO: ['Nestle Nan Optipro Gro Growing Up Milk Formula -Stage 3', 'Friso Gold Growing Up Milk Formula - Stage 3']
# 2018-08-15 16:00:08 [PriceSpider] INFO: ['\n\t\t\t\t\t$199.50\n\t\t\t\t', '\n\t\t\t\t\t$79.00\n\t\t\t\t']
# 2018-08-15 16:00:08 [PriceSpider] INFO: ['Aptamil Gold+ Toddler Growing Up Milk Formula - Stage 3', 'Aptamil Gold+ Junior Growing Up Milk Formula - Stage 4']
# 2018-08-15 16:00:08 [PriceSpider] INFO: ['\n\t\t\t\t\t$207.00\n\t\t\t\t', '\n\t\t\t\t\t$180.00\n\t\t\t\t']
#
# \n and \t are not a big deal, just strip() them

import scrapy

class PriceItem(scrapy.Item):
  title = scrapy.Field()
  value = scrapy.Field()

class PriceSpider(scrapy.Spider):
  name = "PriceSpider"

  custom_settings = {
    "BEGIN_PAGE" : 0,
    "END_PAGE" : 2,
    "RESULTS_PER_PAGE" : 2,
  }

  def start_requests(self): 
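    # The form fields below appear to mirror the ProductListingView request the
    # site itself sends; ddkey, categoryId, storeId and catalogId look specific
    # to this category/store, so adjust them for other listing pages.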

    formdata = {
      "sType" : "SimpleSearch",
      "ddkey" : "ProductListingView_6_-2011_3074457345618269512",
      "ajaxStoreImageDir" : "%2Fwcsstore%2FFairpriceStorefrontAssetStore%2F",
      "categoryId" : "3074457345616686371",
      "emsName" : "Widget_CatalogEntryList_701_3074457345618269512",
      "beginIndex" : "0",
      "resultsPerPage" : str(self.custom_settings["RESULTS_PER_PAGE"]),
      "disableProductCompare" : "false",
      "catalogId" : "10201",
      "langId" : "-1",
      "enableSKUListView" : "false",
      "storeId" : "10151",
    }

    # loop to scrape different pages
    for i in range(self.custom_settings["BEGIN_PAGE"], self.custom_settings["END_PAGE"]):
      formdata["beginIndex"] = str(self.custom_settings["RESULTS_PER_PAGE"] * i)

      yield scrapy.FormRequest(
        url="https://www.fairprice.com.sg/ProductListingView",
        formdata = formdata,
        callback=self.logged_in
      )

  def logged_in(self, response):
    name = response.css("img::attr(title)").extract()
    price = response.css(".pdt_C_price::text").extract()

    self.logger.info(name)
    self.logger.info(price)

    # Output to CSV: refer to https://stackoverflow.com/questions/29943075/scrapy-pipeline-to-export-csv-file-in-the-right-format
    for item in zip(name, price):
      yield PriceItem(
        title = item[0].strip(),
        value = item[1].strip()
      )
