How to scrape a page that uses if-none-match and cookies with scrapy or any other tool?

Problem Description

I am trying to scrape an API that returns a JSON object, but it only returns the JSON the first time; after that it returns nothing. I am using the 'if-none-match' header together with cookies, but I would like to do this without cookies, because I have many such APIs to scrape.

Here is my spider code:

import scrapy
from scrapy import Spider, Request
import json
from scrapy.crawler import CrawlerProcess

header_data = {'authority': 'shopee.com.my',
    'method': 'GET',
    'scheme': 'https',
    'accept': '*/*',
    'if-none-match-': '*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9',
    'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36',
    'x-requested-with': 'XMLHttpRequest',
    'x-shopee-language': 'en',
    'Cache-Control': 'max-age=0',
    }


class TestSales(Spider):
    name = "testsales"
    allowed_domains = ['shopee.com', 'shopee.com.my', 'shopee.com.my/api/']
    cookie_string = {'SPC_U':'-', 'SPC_IA':'-1' , 'SPC_EC':'-' , 'SPC_F':'7jrWAm4XYNNtyVAk83GPknN8NbCMQEIk', 'REC_T_ID':'476673f8-eeb0-11ea-8919-48df374df85c', '_gcl_au':'1.1.1197882328.1599225148', '_med':'refer', '_fbp':'fb.2.1599225150134.114138691', 'language':'en', '_ga':'GA1.3.1167355736.1599225151', 'SPC_SI':'mall.gTmrpiDl24JHLSNwnCw107mao3hd8qGP', 'csrftoken':'2ntG40uuWzOLUsjv5Sn8glBUQjXtbGgo', 'welcomePkgShown':'true', '_gid':'GA1.3.590966412.1602427202', 'AMP_TOKEN':'%24NOT_FOUND', 'SPC_CT_21c6f4cb':'1602508637.vtyz9yfI6ckMZBdT9dlICuAYf7crlEQ6NwQScaB2VXI=', 'SPC_CT_087ee755':'1602508652.ihdXyWUp3wFdBN1FGrKejd91MM8sJHEYCPqcgmKqpdA=', '_dc_gtm_UA-61915055-6':'1', 'SPC_R_T_ID':'vT4Yxil96kYSRG2GIhtzk8fRJldlPJ1/szTbz9sG21nTJr4zDoOnnxFEgYe2Ea+RhM0H8q0m/SFWBMO7ktpU5Kim0CJneelIboFavxAVwb0=', 'SPC_T_IV':'hhHcCbIpVvuchn7SbLYeFw==', 'SPC_R_T_IV':'hhHcCbIpVvuchn7SbLYeFw==', 'SPC_T_ID':'vT4Yxil96kYSRG2GIhtzk8fRJldlPJ1/szTbz9sG21nTJr4zDoOnnxFEgYe2Ea+RhM0H8q0m/SFWBMO7ktpU5Kim0CJneelIboFavxAVwb0='}

    custom_settings = {
        'AUTOTHROTTLE_ENABLED' : 'True',
        # The initial download delay
        'AUTOTHROTTLE_START_DELAY' : '0.5',
        # The maximum download delay to be set in case of high latencies
        'AUTOTHROTTLE_MAX_DELAY' : '10',
        # The average number of requests Scrapy should be sending in parallel to
        # each remote server
        'AUTOTHROTTLE_TARGET_CONCURRENCY' : '1.0',
        # 'DNSCACHE_ENABLED' : 'False',
        # 'COOKIES_ENABLED': 'False',
    }
            
        

    def start_requests(self):
        subcat_url = '/Baby-Toddler-Play-cat.27.23785'
        id = subcat_url.split('.')[-1]  # the trailing number of the category URL is the match_id
        header_data['path'] = f'/api/v2/search_items/?by=sales&limit=50&match_id={id}&newest=0&order=desc&page_type=search&version=2'
        header_data['referer'] = f'https://shopee.com.my{subcat_url}?page=0&sortBy=sales'
        url = f'https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id={id}&newest=0&order=desc&page_type=search&version=2'

        yield Request(url=url, headers=header_data, #cookies=self.cookie_string,
                        cb_kwargs={'subcat': 'baby toddler play cat', 'category': 'baby and toys' })



    def parse(self, response, subcat, category):
        # pass
        try:
            jdata = json.loads(response.body)
        except Exception as e:
            print(f'exception: {e}')
            print(response.body)
            return None

        items = jdata['items']

        for item in items:
            name = item['name']
            image_path = item['image']
            absolute_image = f'https://cf.shopee.com.my/file/{image_path}_tn'
            print(f'this is  absolute image {absolute_image}')
            subcategory = subcat
            monthly_sold = 'pending'
            price = float(item['price'])/100000
            total_sold = item['sold']
            location = item['shop_location']
            stock = item['stock']

            print(name)
            print(price)
            print(total_sold)
            print(location)
            print(stock)


app = CrawlerProcess()
app.crawl(TestSales)
app.start()

Here is the page URL you can open in a browser: https://shopee.com.my/Baby-Toddler-Play-cat.27.23785?page=0&sortBy=sales

Here is the API URL, which you can also find in the developer tools for that page: https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id=23785&newest=0&order=desc&page_type=search&version=2

Please tell me how to deal with the caching or the if-none-match header, because I do not understand how to handle it. Thanks in advance!

Tags: caching, web-scraping, scrapy, if-none-match

Solution


All the API GET request needs is the category identifier, i.e. the match_id parameter, and the starting item number, i.e. the newest parameter.

With the URL template https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id={category_id}&newest={start_item_number}&order=desc&page_type=search&version=2 you can reach the API endpoint for any category.

In this case there is no need to manage cookies or even headers; the API is not restricted at all.
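Put together as a spider, a minimal sketch could look like the following. The spider name, the page range, and the yielded field names are my own choices for illustration; the match_id 23785, the item fields, and the price scaling come from the question's code.

import json
import scrapy
from scrapy.crawler import CrawlerProcess

# URL template from the answer: only match_id and newest (the paging offset) change.
API_TEMPLATE = ('https://shopee.com.my/api/v2/search_items/'
                '?by=sales&limit=50&match_id={category_id}&newest={start}'
                '&order=desc&page_type=search&version=2')


class SalesApiSpider(scrapy.Spider):
    name = 'sales_api'

    def start_requests(self):
        category_id = 23785                  # Baby-Toddler-Play category from the question
        for start in range(0, 150, 50):      # first three pages, 50 items per page
            yield scrapy.Request(
                url=API_TEMPLATE.format(category_id=category_id, start=start),
                headers={'X-Requested-With': 'XMLHttpRequest'},
                dont_filter=True,
            )

    def parse(self, response):
        for item in json.loads(response.text).get('items') or []:
            yield {
                'name': item['name'],
                'price': float(item['price']) / 100000,  # API reports price multiplied by 100000
                'sold': item['sold'],
                'location': item['shop_location'],
            }


if __name__ == '__main__':
    process = CrawlerProcess()
    process.crawl(SalesApiSpider)
    process.start()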

Update:

This works for me in the scrapy shell:

from scrapy import Request

url = 'https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id=23785&newest=50&order=desc&page_type=search&version=2'

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:81.0) Gecko/20100101 Firefox/81.0",
    "Accept": "*/*",
    "Accept-Language": "en-US,en;q=0.5",
    "X-Requested-With": "XMLHttpRequest",
}


request = Request(
    url=url,
    method='GET',
    dont_filter=True,
    headers=headers,
)

fetch(request)
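After fetch(request) returns, the shell binds the result to response, so you can quickly confirm the endpoint answers without cookies. A rough check, assuming the same item fields the question's parse() reads:

import json

data = json.loads(response.text)    # `response` is set by fetch() in the scrapy shell
print(len(data['items']))           # up to 50 items for this page
print(data['items'][0]['name'])     # name of the top-selling item on the page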
