
Problem description

How can I speed up my Scrapy code with multithreading/multiprocessing? I have attached my code below. I am not familiar with threading in Python and don't know where to start, so I would appreciate any help with this code.

import scrapy
import logging

domain = 'https://www.spdigital.cl/categories/view/'
categories = [
   '334' , '335', '553', '607', '336', '340', '339', '540', '486', '489', '485', '598', '347', '562','348', '349', '353', '351', '352', '532', '350',
'477', '475', '476', '474', '559','355', '356', '580', '337', '357', '358', '360', '374', '363', '362', '361', '338', '344', '593', '359', '604',
'478', '507', '509', '508', '510', '512', '600', '590', '511', '459','564', '376', '375', '558', '341', '377', '378', '484', '554', '567', '563', '379', '342', '343',
'370', '481', '365', '556', '364', '541', '555', '492', '570','579', '576', '574', '575', '572', '578', '577', '588', '573',
'596', '597', '601', '595','387', '468', '536', '391', '390', '589', '389','399', '394', '396', '397', '398', '392', '592', '401', '402', '530', '560',
'407', '406', '408', '404', '403', '405','413', '411', '414', '410', '409', '412','418', '599', '603', '465', '415', '487', '416', '382', '419', '417', '479',
'515', '582', '518', '514', '581', '583', '517', '519', '520','420', '421', '422', '423', '424', '425', '521', '557', '538', '428', '430', '432', '434', '436', '433', '435', '427', '437', '429', '482',
'544', '552', '545', '546', '550', '547', '551', '549', '548','491', '535', '494', '493', '472', '471', '470', '534', '537',
'587', '586', '585','602', '569', '561','438', '446', '488', '439', '496', '440', '566', '445', '447', '565','547', '448', '449', '450', '451', '452', '531', '453', '454', '456', '455',
'501', '505', '506', '504', '502', '498', '500', '503', '369','527', '460', '529', '606', '528', '591', '462', '526', '525', '605', '463', '464',
]
class ProductosSpider(scrapy.Spider):
    name = 'productos'
    allowed_domains = ['www.spdigital.cl']

    def start_requests(self):
        for i in categories:
            yield scrapy.Request( url = domain + i, callback = self.parse, headers = { 
                'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/78.0.3904.108 Chrome/78.0.3904.108 Safari/537.36'
            })

    def parse(self, response):
        for product in response.xpath( '//div[@class="span8 grid-style-mosaic"]/div/div[@class="span2 product-item-mosaic"]' ):

            yield {
                'product_name': product.xpath( './/div[@class="name"]/a/text() | //div[@class="name"]/a/span/@data-original-title' ).get(),
                'product_brand': product.xpath( './/div[@class="brand"]/text()' ).get(),
                'product_url': response.urljoin(product.xpath('.//div[@class="name"]/a/@href').get()),
                'product_original': product.xpath( './/div[@class="cash-price"]/text()' ).get(),
                'product_discount': product.xpath( './/span[@class="cash-previous-price-value"]/text()' ).get()
            }

        next_page = response.xpath('//a[@class="next"]/@href').get()

        # Only follow the link if a next page actually exists;
        # calling response.urljoin(None) would raise a TypeError.
        if next_page:
            yield scrapy.Request( url = response.urljoin(next_page), callback = self.parse, headers = { 
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/78.0.3904.108 Chrome/78.0.3904.108 Safari/537.36'
        })

Tags: python, web-scraping, concurrency, scrapy

Solution


Scrapy is single-threaded, so it does not support multithreading. However, because it is built on Twisted, Scrapy already executes requests asynchronously. To speed up your crawl, increase the number of concurrent requests by modifying `CONCURRENT_REQUESTS` and `CONCURRENT_REQUESTS_PER_DOMAIN` in your `settings.py`; the defaults are 16 and 8. Reading more about concurrent requests in the Scrapy documentation would be worthwhile.
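As a concrete illustration, the settings change above might look like the sketch below. The specific values are only examples, not recommendations; tune them to what the target site tolerates, since raising concurrency too far can get your crawler throttled or blocked.

```python
# settings.py -- illustrative values only, adjust for the target site

# Total concurrent requests Scrapy's downloader will perform (default: 16)
CONCURRENT_REQUESTS = 32

# Concurrent requests against any single domain (default: 8);
# this is the binding limit here, since all requests go to www.spdigital.cl
CONCURRENT_REQUESTS_PER_DOMAIN = 16
```

With a single target domain, `CONCURRENT_REQUESTS_PER_DOMAIN` is the setting that actually governs throughput; raising `CONCURRENT_REQUESTS` alone would change nothing in this spider.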
