How to properly run a spider from an external Python script and get its item output

Problem description

I'm building a few scrapers, and now I'm trying to write a script that runs the appropriate spider with a URL collected from a database, but I can't find a way to do it.

This is what I have in my spider:

import scrapy

# path assumed from the project layout; adjust to wherever ShopScrapersItems lives
from Ruby.Ruby.items import ShopScrapersItems


class ElCorteIngles(scrapy.Spider):
    name = 'ElCorteIngles'
    url = ''
    DEBUG = False

    def start_requests(self):
        if self.url != '':
            yield scrapy.Request(url=self.url, callback=self.parse)

    def parse(self, response):
        # Get product name
        try:
            self.p_name = response.xpath('//*[@id="product-info"]/h2[1]/a/text()').get()
        except:
            print(f'{CERROR} Problem while getting product name from website - {self.name}')

        # Get product price
        try:
            self.price_no_cent = response.xpath('//*[@id="price-container"]/div/span[2]/text()').get()
            self.cent = response.xpath('//*[@id="price-container"]/div/span[2]/span[1]/text()').get()
            self.currency = response.xpath('//*[@id="price-container"]/div/span[2]/span[2]/text()').get()
            if self.currency is None:
                self.currency = response.xpath('//*[@id="price-container"]/div/span[2]/span[1]/text()').get()
                self.cent = None
        except:
            print(f'{CERROR} Problem while getting product price from website - {self.name}')

        # Join self.price_no_cent with self.cent
        try:
            if self.cent is not None:
                self.price = str(self.price_no_cent) + str(self.cent)
                self.price = self.price.replace(',', '.')
            else:
                self.price = self.price_no_cent
        except:
            print(f'{CERROR} Problem while joining price with cents - {self.name}')

        # Return data
        if self.DEBUG:
            print([self.p_name, self.price, self.currency])

        data_collected = ShopScrapersItems()
        data_collected['url'] = response.url
        data_collected['p_name'] = self.p_name
        data_collected['price'] = self.price
        data_collected['currency'] = self.currency

        yield data_collected
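
`ShopScrapersItems` itself isn't shown in the question; given the four fields assigned above, the item class in the project's `items.py` presumably looks something like this minimal sketch:

# items.py (assumed; only the fields the spider above fills in)
import scrapy


class ShopScrapersItems(scrapy.Item):
    url = scrapy.Field()
    p_name = scrapy.Field()
    price = scrapy.Field()
    currency = scrapy.Field()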

Normally, when I run the spider from the console, I do:

scrapy crawl ElCorteIngles -a url='https://www.elcorteingles.pt/electrodomesticos/A26601428-depiladora-braun-senso-smart-5-5500/'
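
As an aside on why this works: Scrapy's base `Spider.__init__` copies any `-a name=value` arguments onto the spider instance as attributes, which is how `self.url` gets filled in above. A sketch of the equivalent explicit constructor (not the actual code, just what the default behaviour amounts to):

class ElCorteIngles(scrapy.Spider):
    name = 'ElCorteIngles'

    def __init__(self, url='', *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.url = url  # '-a url=...' (or crawl(..., url=...)) lands here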

Now I need a way to do the same thing from an external script and get the output of the `yield data_collected`.

This is what I currently have in my external script:

import scrapy
from scrapy.crawler import CrawlerProcess
import sqlalchemy as db
# Import internal libraries
from Ruby.Ruby.spiders import *

# Variables
engine = db.create_engine('mysql+pymysql://DATABASE_INFO')

class Worker(object):

    def __init__(self):
        self.crawler = CrawlerProcess({})

    def scrape_new_links(self):
        conn = engine.connect()

        # Get all new links from DB and scrape them
        query = 'SELECT * FROM Ruby.New_links'
        result = conn.execute(query)
        for x in result:
            telegram_id = x[1]
            email = x[2]
            phone_number = x[3]
            url = x[4]
            spider = x[5]
            
            # In this case the spider will be ElCorteIngles and the url
            # https://www.elcorteingles.pt/electrodomesticos/A26601428-depiladora-braun-senso-smart-5-5500/

            self.crawler.crawl(spider, url=url)
            self.crawler.start()

Worker().scrape_new_links()

I also don't know if passing `url=url` to `self.crawler.crawl()` is the proper way to give the URL to the spider, but let me know what you think. All the data from the `yield` is returned by a pipeline. I don't think any extra info is necessary, but if you need anything, let me know!
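
One note on the script above (an aside, not from the original post): keyword arguments passed to `crawl()` are indeed forwarded to the spider's constructor, so `url=url` does reach the spider. The catch is that `CrawlerProcess.start()` starts the blocking Twisted reactor and can only be called once, so it belongs after the loop rather than inside it. A sketch of that adjustment to `Worker.scrape_new_links`, assuming the process is created with the project settings so the spider name from the database can be resolved:

    def scrape_new_links(self):
        conn = engine.connect()
        result = conn.execute('SELECT * FROM Ruby.New_links')

        # queue one crawl per row; keyword args are handed to the spider
        for x in result:
            url, spider = x[4], x[5]
            self.crawler.crawl(spider, url=url)

        # start() blocks and may only be called once, after everything is queued
        self.crawler.start()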

Tags: python, scrapy

Solution


Scrapy works asynchronously. Ignore my imports, but this is a JSON API I made for Scrapy. You need to make a custom runner using the `item_scraped` signal. Originally there was a klein endpoint, and when the spider finished it returned a JSON list. I think that's what you want here, just without the klein endpoint, so I took it out. My spider was called GshopSpider, so I replaced it with your spider's name.

By using a deferred we can register a callback and send a signal each time an item is scraped. So with this code we collect every item into a list via the signal, and when the spider finishes we have a callback set up to `return_spider_output`.

# server.py
import json

from scrapy import signals
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor

# import your own spider here instead of mine
from Ruby.Ruby.spiders import ElCorteIngles


class MyCrawlerRunner(CrawlerRunner):
    def crawl(self, crawler_or_spidercls, *args, **kwargs):
        # keep all items scraped
        self.items = []

        crawler = self.create_crawler(crawler_or_spidercls)

        # collect each item as it is scraped
        crawler.signals.connect(self.item_scraped, signals.item_scraped)

        dfd = self._crawl(crawler, *args, **kwargs)

        # when the crawl finishes, pass the collected items down the chain
        dfd.addCallback(self.return_items)
        return dfd

    def item_scraped(self, item, response, spider):
        self.items.append(item)

    def return_items(self, result):
        return self.items


def return_spider_output(output):
    return json.dumps([dict(item) for item in output])


if __name__ == "__main__":
    settings = get_project_settings()
    runner = MyCrawlerRunner(settings)
    # pass the spider class (not an instance); keyword args go to the spider
    deferred = runner.crawl(
        ElCorteIngles,
        url='https://www.elcorteingles.pt/electrodomesticos/A26601428-depiladora-braun-senso-smart-5-5500/',
    )
    deferred.addCallback(return_spider_output)
    deferred.addCallback(print)  # the JSON string ends up here
    # CrawlerRunner does not start the Twisted reactor for you
    deferred.addBoth(lambda _: reactor.stop())
    reactor.run()
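
To tie this back to the Worker in the question, here is a sketch of driving `MyCrawlerRunner` from the database loop: one crawl is queued per row, run sequentially via `inlineCallbacks`, and each deferred resolves to the item list collected above. It assumes the `server.py` module above, the question's SQLAlchemy engine, and that the stored spider names can be resolved through the project settings:

# worker.py - sketch, reusing MyCrawlerRunner / return_spider_output from server.py
import sqlalchemy as db
from twisted.internet import reactor, defer
from scrapy.utils.project import get_project_settings

from server import MyCrawlerRunner, return_spider_output

engine = db.create_engine('mysql+pymysql://DATABASE_INFO')


@defer.inlineCallbacks
def scrape_new_links(runner):
    conn = engine.connect()
    result = conn.execute('SELECT * FROM Ruby.New_links')
    for x in result:
        url, spider = x[4], x[5]
        # each crawl's deferred resolves to the list collected by MyCrawlerRunner
        items = yield runner.crawl(spider, url=url)
        print(url, return_spider_output(items))  # placeholder: use the JSON here
    reactor.stop()


if __name__ == '__main__':
    runner = MyCrawlerRunner(get_project_settings())
    scrape_new_links(runner)
    reactor.run()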
