Problem updating data in a sqlite DB from a Scrapy crawl

Problem description

I need some help updating my database with data scraped by Scrapy. What I want to do is run a db select for each scraped url to see whether the database already has a record with that url (the url is the primary key), and then, depending on whether the row exists, run an update or, if it does not exist, call an insert function. I am only testing the db select so far, but it seems to keep returning the first row: when I print the fetched row, it only ever prints the first row in the database. It should also print the second row, the one with hamilton beach in the url.
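
A minimal standalone sketch of that intended flow (the price_monitor table with url as primary key and a new_price column is inferred from the query in pipelines.py further down; everything else is illustrative, not code from the post):

import sqlite3

conn = sqlite3.connect("price_monitor.db")
curr = conn.cursor()
curr.execute("CREATE TABLE IF NOT EXISTS price_monitor "
             "(url TEXT PRIMARY KEY, new_price TEXT)")

def update_or_insert(url, price):
    # The ? placeholder binds the actual url value; sqlite3 expects the
    # parameters as a sequence, hence the one-element tuple.
    curr.execute("SELECT url, new_price FROM price_monitor WHERE url = ?",
                 (url,))
    if curr.fetchone() is None:
        # No row for this url yet: insert one.
        curr.execute("INSERT INTO price_monitor (url, new_price) VALUES (?, ?)",
                     (url, price))
    else:
        # Row exists: update the stored price.
        curr.execute("UPDATE price_monitor SET new_price = ? WHERE url = ?",
                     (price, url))
    conn.commit()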

Output:

(price_monitor) C:\Users\\Documents\python_venv\price_monitor\price_monitor>scrapy crawl price_monitor
2019-06-23 15:19:20 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: price_monitor)
2019-06-23 15:19:20 [scrapy.utils.log] INFO: Versions: lxml 4.3.3.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.0, Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 16:07:46) [MSC v.1900 32 bit (Intel)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b  26 Feb 2019), cryptography 2.6.1, Platform Windows-10-10.0.17134-SP0
2019-06-23 15:19:20 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'price_monitor', 'NEWSPIDER_MODULE': 'price_monitor.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['price_monitor.spiders'], 'USER_AGENT': 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'}
2019-06-23 15:19:20 [scrapy.extensions.telnet] INFO: Telnet Password: 5fcea59cefb35de8
2019-06-23 15:19:20 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2019-06-23 15:19:20 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-06-23 15:19:20 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-06-23 15:19:20 [scrapy.middleware] INFO: Enabled item pipelines:
['price_monitor.pipelines.PriceMonitorPipeline']
2019-06-23 15:19:20 [scrapy.core.engine] INFO: Spider opened
2019-06-23 15:19:20 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-06-23 15:19:20 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-06-23 15:19:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/robots.txt> (referer: None)
2019-06-23 15:19:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.amazon.com/BLACK-DECKER-CM4202S-Programmable-Coffeemaker/dp/B07GWKT87L> from <GET https://www.amazon.com/dp/B07GWKT87L/?coliid=I36XKNB8MLE3&colid=KRASGH7290D0&psc=0&ref_=lv_ov_lig_dp_it#customerReview>
2019-06-23 15:19:21 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.amazon.com/Hamilton-Beach-46310-Programmable-Coffee/dp/B07684BPLB> from <GET https://www.amazon.com/Hamilton-Beach-46310-Programmable-Coffee/dp/B07684BPLB/ref=sr_1_10?keywords=coffee+maker&qid=1559098604&s=home-garden&sr=1-10>
2019-06-23 15:19:22 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/BLACK-DECKER-CM4202S-Programmable-Coffeemaker/dp/B07GWKT87L> (referer: None)
print scrapeitem
{'email': 'cobi@noemail.com',
 'name': 'Cobi Maguire',
 'price': '$49.99',
 'title': 'BLACK+DECKER CM4202S Select-A-Size Easy Dial Programmable '
          'Coffeemaker, Extra Large 80 ounce Capacity, Stainless Steel',
 'url': 'https://www.amazon.com/dp/B07GWKT87L/?coliid=I36XKNB8MLE3&colid=KRASGH7290D0&psc=0&ref_=lv_ov_lig_dp_it#customerReview'}
print item from process_item
{'email': 'cobi@noemail.com',
 'name': 'Cobi Maguire',
 'price': '$49.99',
 'title': 'BLACK+DECKER CM4202S Select-A-Size Easy Dial Programmable '
          'Coffeemaker, Extra Large 80 ounce Capacity, Stainless Steel',
 'url': 'https://www.amazon.com/dp/B07GWKT87L/?coliid=I36XKNB8MLE3&colid=KRASGH7290D0&psc=0&ref_=lv_ov_lig_dp_it#customerReview'}
Inside get data
printing item
<class 'list'>
['BLACK+DECKER CM4202S Select-A-Size Easy Dial Programmable Coffeemaker, Extra Large 80 ounce Capacity, Stainless Steel', '$49.99', 'Cobi Maguire', 'cobi@noemail.com', 'https://www.amazon.com/dp/B07GWKT87L/?coliid=I36XKNB8MLE3&colid=KRASGH7290D0&psc=0&ref_=lv_ov_lig_dp_it#customerReview']
print url
https://www.amazon.com/dp/B07GWKT87L/?coliid=I36XKNB8MLE3&colid=KRASGH7290D0&psc=0&ref_=lv_ov_lig_dp_it#customerReview
Printing rows
('https://www.amazon.com/dp/B07GWKT87L/?coliid=I36XKNB8MLE3&colid=KRASGH7290D0&psc=0&ref_=lv_ov_lig_dp_it#customerReview', '$49.99')
process item run
2019-06-23 15:19:22 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.amazon.com/BLACK-DECKER-CM4202S-Programmable-Coffeemaker/dp/B07GWKT87L>
{'email': 'cobi@noemail.com',
 'name': 'Cobi Maguire',
 'price': '$49.99',
 'title': 'BLACK+DECKER CM4202S Select-A-Size Easy Dial Programmable '
          'Coffeemaker, Extra Large 80 ounce Capacity, Stainless Steel',
 'url': 'https://www.amazon.com/dp/B07GWKT87L/?coliid=I36XKNB8MLE3&colid=KRASGH7290D0&psc=0&ref_=lv_ov_lig_dp_it#customerReview'}
2019-06-23 15:19:22 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.com/Hamilton-Beach-46310-Programmable-Coffee/dp/B07684BPLB> (referer: None)
print scrapeitem
{'email': 'ryan@noemail.com',
 'name': 'Ryan Murphy',
 'price': '$34.99',
 'title': 'Hamilton Beach 46310 Programmable Coffee Maker, 12 Cups, Black',
 'url': 'https://www.amazon.com/Hamilton-Beach-46310-Programmable-Coffee/dp/B07684BPLB/ref=sr_1_10?keywords=coffee+maker&qid=1559098604&s=home-garden&sr=1-10'}
print item from process_item
{'email': 'ryan@noemail.com',
 'name': 'Ryan Murphy',
 'price': '$34.99',
 'title': 'Hamilton Beach 46310 Programmable Coffee Maker, 12 Cups, Black',
 'url': 'https://www.amazon.com/Hamilton-Beach-46310-Programmable-Coffee/dp/B07684BPLB/ref=sr_1_10?keywords=coffee+maker&qid=1559098604&s=home-garden&sr=1-10'}
Inside get data
printing item
<class 'list'>
['Hamilton Beach 46310 Programmable Coffee Maker, 12 Cups, Black', '$34.99', 'Ryan Murphy', 'ryan@noemail.com', 'https://www.amazon.com/Hamilton-Beach-46310-Programmable-Coffee/dp/B07684BPLB/ref=sr_1_10?keywords=coffee+maker&qid=1559098604&s=home-garden&sr=1-10']
print url
https://www.amazon.com/Hamilton-Beach-46310-Programmable-Coffee/dp/B07684BPLB/ref=sr_1_10?keywords=coffee+maker&qid=1559098604&s=home-garden&sr=1-10
Printing rows
('https://www.amazon.com/dp/B07GWKT87L/?coliid=I36XKNB8MLE3&colid=KRASGH7290D0&psc=0&ref_=lv_ov_lig_dp_it#customerReview', '$49.99')
process item run
2019-06-23 15:19:22 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.amazon.com/Hamilton-Beach-46310-Programmable-Coffee/dp/B07684BPLB>
{'email': 'ryan@noemail.com',
 'name': 'Ryan Murphy',
 'price': '$34.99',
 'title': 'Hamilton Beach 46310 Programmable Coffee Maker, 12 Cups, Black',
 'url': 'https://www.amazon.com/Hamilton-Beach-46310-Programmable-Coffee/dp/B07684BPLB/ref=sr_1_10?keywords=coffee+maker&qid=1559098604&s=home-garden&sr=1-10'}
2019-06-23 15:19:22 [scrapy.core.engine] INFO: Closing spider (finished)
2019-06-23 15:19:22 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1888,
 'downloader/request_count': 5,
 'downloader/request_method_count/GET': 5,
 'downloader/response_bytes': 261364,
 'downloader/response_count': 5,
 'downloader/response_status_count/200': 3,
 'downloader/response_status_count/301': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 6, 23, 19, 19, 22, 755529),
 'item_scraped_count': 2,
 'log_count/DEBUG': 7,
 'log_count/INFO': 9,
 'response_received_count': 3,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 4,
 'scheduler/dequeued/memory': 4,
 'scheduler/enqueued': 4,
 'scheduler/enqueued/memory': 4,
 'start_time': datetime.datetime(2019, 6, 23, 19, 19, 20, 885727)}
2019-06-23 15:19:22 [scrapy.core.engine] INFO: Spider closed (finished)

pipelines.py

import sqlite3

class PriceMonitorPipeline(object):

    def __init__(self):
        self.create_connection()
        self.create_table()

    def create_connection(self):
        self.conn = sqlite3.connect("price_monitor.db")
        self.curr = self.conn.cursor()

    def process_item(self, item, spider):

        print("print item from process_item")
        print(item)
        self.get_data(item)
        print("process item run")
        return item

    def get_data(self, item):
        """ Check if the row already exists for this url """
        print("Inside get data")
        item = list(item.values())
        print("printing item")
        print(type(item))
        print(item)

        url = item[-1]
        print("print url")
        print(url)
        self.curr.execute("""select url, new_price from price_monitor WHERE url=url""",
                              {'url': url})

        rows = self.curr.fetchone()

        print("Printing rows")
        print(rows)

items.py

import scrapy

class AmazonItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    url = scrapy.Field()
    title = scrapy.Field()
    price = scrapy.Field()
    name = scrapy.Field()
    email = scrapy.Field()

spider.py

import scrapy
import json
import sys

from ..items import AmazonItem

class MySpider(scrapy.Spider):
    name = 'price_monitor'


    def start_requests(self):
        with open('C:\\Users\\Documents\\python_venv\\price_monitor\\price_monitor\\products.json') as f:
            data = json.load(f)

            itemdatalist = data['itemdata']

            for item in itemdatalist:
                yield scrapy.Request(url=item['url'], callback=self.parse, meta={'item': item}) 


    def parse(self, response):
#        for url in MySpider.start_urls:

        item = response.meta["item"]
        scrapeitem = AmazonItem()

        title = response.css('span#productTitle::text').extract_first()
        title = title.strip()
        price = response.css('span#priceblock_ourprice::text').extract_first()

        scrapeitem['title'] = title
        scrapeitem['price'] = price

        scrapeitem['name'] = item["name"] # from response.meta
        scrapeitem['email'] = item["email"] # from response.meta            
        scrapeitem['url'] = item['url']

        print("print scrapeitem")
        print(scrapeitem)
        yield scrapeitem

Tags: python-3.x, sqlite, scrapy

Solution


Did you mean something like

self.curr.execute("""select url, new_price from price_monitor WHERE url=?""", (url,))

instead of

self.curr.execute("""select url, new_price from price_monitor WHERE url=url""", {'url': url})

To me, your version looks like it is literally comparing the url column to itself: WHERE url=url is true for every row, so fetchone() always returns the first row in the table. Note also that with a ? placeholder the parameters must be passed as a sequence such as the tuple (url,), while a dict like {'url': url} only works with a named placeholder, i.e. WHERE url=:url.
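
For completeness, here is get_data rewritten with a real placeholder, plus the update-or-insert branch the question describes. That branch is a sketch rather than code from the post, and it assumes the price_monitor table has url as its primary key and a new_price column:

    def get_data(self, item):
        """ Update the row for this url if it exists, insert it otherwise """
        url = item['url']
        price = item['price']
        # A named placeholder (:url) matches the dict parameter style
        # already used in the question:
        self.curr.execute("""select url, new_price from price_monitor WHERE url = :url""",
                          {'url': url})
        row = self.curr.fetchone()
        if row is None:
            # First time this url is seen: insert a new row.
            self.curr.execute("insert into price_monitor (url, new_price) values (?, ?)",
                              (url, price))
        else:
            # Row already exists: update the stored price.
            self.curr.execute("update price_monitor set new_price = ? where url = ?",
                              (price, url))
        self.conn.commit()

On SQLite 3.24 or newer, the two branches can also be collapsed into a single statement: insert ... on conflict(url) do update set new_price = excluded.new_price.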

