Scrapy insert into PostgreSQL error

Problem description

I'm fairly new to Python, and I'm trying to use Scrapy to insert scraped data into my PostgreSQL database.

I managed to scrape the listings from this website, but when I try to insert them into PostgreSQL I get the following error:

self.cur.execute("""insert into real_estate(estate_title,estate_address,estate_area,estate_description,estate_price,estate_type,estate_tag,estate_seller_name,estate_seller_address,estate_seller_phone,estate_seller_mobile,estate_seller_email) values(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)""",(item['estate_title'],item['estate_address'],item['estate_area'],item['estate_description'],item['estate_price'],item['estate_type'],item['estate_tag'],item['estate_seller_name'],item['estate_seller_address'],item['estate_seller_phone'],item['estate_seller_mobile'],item['estate_seller_email']))
psycopg2.InternalError: current transaction is aborted, commands ignored until end of transaction block

I know something in my query is wrong, but I can't find where. I've been trying to fix this for two days and still can't spot the mistake. I hope someone can help me.
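
The "current transaction is aborted" message is usually a secondary symptom: an earlier statement on the same connection already failed, and psycopg2 refuses all further commands until `rollback()` is called, so the original error gets masked. A minimal sketch of a wrapper (the name `safe_insert` is illustrative, not part of Scrapy or psycopg2) that logs the first failure and rolls back so the connection stays usable:

```python
import logging

def safe_insert(connection, cursor, sql, params):
    """Run one INSERT; on failure, log the real root cause and roll
    back so later statements are not silently ignored.

    `connection` and `cursor` are any psycopg2-style objects; this
    helper is a sketch, not part of the original pipeline."""
    try:
        cursor.execute(sql, params)
        connection.commit()
        return True
    except Exception as exc:
        logging.error("insert failed: %s", exc)  # the first, real error
        connection.rollback()  # clears the aborted-transaction state
        return False
```

With something like this in `process_item`, the log shows the statement that actually failed instead of the follow-on aborted-transaction error.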

My spider:

import scrapy
from ..items import real_estateItem


class RealEstateSpider(scrapy.Spider):
    name = "estate"
    start_urls = [
        'https://batdongsan.com.vn/nha-dat-ban/p1',
    ]

    def parse(self, response):
        # follow links to each estate's detail page
        for href in response.css('div.p-title h3 a::attr(href)'):
            yield response.follow(href, self.parse_estate)

        # follow pagination links
        for href in response.css('div.background-pager-right-controls a:nth-child(2)::attr(href)'):
            yield response.follow(href, self.parse)

    def parse_estate(self, response):
        for extract_with_css in response.css('div#product-detail'):
            estate_title = extract_with_css.css('h1[itemprop="name"]::text').extract_first()
            estate_address = extract_with_css.css('div.table-detail div.row:nth-child(2) div.right::text').extract_first()
            estate_price = extract_with_css.css('span.gia-title.mar-right-15 strong::text').extract_first()
            estate_area = extract_with_css.css('span.gia-title strong::text').extract_first()
            estate_description = extract_with_css.css('div.pm-desc::text').extract_first()
            estate_tag = response.css('div.tagpanel a::attr(href)').extract()
            estate_type = extract_with_css.css('div.table-detail div.row:nth-child(1) div.right::text').extract_first()
            estate_seller_name = extract_with_css.css('div#LeftMainContent__productDetail_contactName div.right::text').extract_first()
            # ::text added here so the seller address/phone come back as text, not raw HTML
            estate_seller_address = extract_with_css.css('div#LeftMainContent__productDetail_contactAddress div.right::text').extract_first()
            estate_seller_phone = extract_with_css.css('div#LeftMainContent__productDetail_contactPhone div.right::text').extract_first()
            estate_seller_mobile = extract_with_css.css('div#LeftMainContent__productDetail_contactMobile div.right::text').extract_first()
            estate_seller_email = extract_with_css.css('div#contactEmail a::text').extract_first()

            # yield inside the loop so every field is guaranteed to be bound
            yield real_estateItem(
                estate_title=estate_title,
                estate_address=estate_address,
                estate_area=estate_area,
                estate_description=estate_description,
                estate_price=estate_price,
                estate_type=estate_type,
                estate_tag=estate_tag,
                estate_seller_name=estate_seller_name,
                estate_seller_address=estate_seller_address,
                estate_seller_phone=estate_seller_phone,
                estate_seller_mobile=estate_seller_mobile,
                estate_seller_email=estate_seller_email,
            )

My pipelines.py:

import psycopg2


class TutorialPipeline(object):
    def open_spider(self, spider):
        hostname = 'localhost'
        username = 'postgres'
        password = '123'  # your password
        database = 'postgres'
        self.connection = psycopg2.connect(host=hostname, user=username, password=password, dbname=database)
        self.cur = self.connection.cursor()

    def close_spider(self, spider):
        self.cur.close()
        self.connection.close()

    def process_item(self, item, spider):
        self.cur.execute("""insert into real_estate(estate_title,estate_address,estate_area,estate_description,estate_price,estate_type,estate_tag,estate_seller_name,estate_seller_address,estate_seller_phone,estate_seller_mobile,estate_seller_email) values(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)""",
                         (item['estate_title'], item['estate_address'], item['estate_area'], item['estate_description'], item['estate_price'], item['estate_type'], item['estate_tag'], item['estate_seller_name'], item['estate_seller_address'], item['estate_seller_phone'], item['estate_seller_mobile'], item['estate_seller_email']))
        self.connection.commit()
        return item
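
One value worth double-checking before the insert: `estate_tag` is built with `extract()`, which returns a Python list, while every other field is a single string. psycopg2 adapts a Python list to a PostgreSQL array, so inserting it into a plain text column can fail, and that failure then aborts the whole transaction. A small sketch, assuming the column is text (the helper name `flatten_tags` is illustrative):

```python
def flatten_tags(tags):
    """Join a list of tag hrefs into one comma-separated string;
    strings and None pass through unchanged.  Illustrative helper,
    not part of the original pipeline."""
    if isinstance(tags, list):
        return ", ".join(tags)
    return tags
```

In `process_item` this would run as `item['estate_tag'] = flatten_tags(item['estate_tag'])` before calling `self.cur.execute(...)`.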

My items.py:

import scrapy


class real_estateItem(scrapy.Item):
    estate_title = scrapy.Field()
    estate_address = scrapy.Field()
    estate_area = scrapy.Field()
    estate_description = scrapy.Field()
    estate_price = scrapy.Field()
    estate_type = scrapy.Field()
    estate_tag = scrapy.Field()
    estate_seller_name = scrapy.Field()
    estate_seller_address = scrapy.Field()
    estate_seller_phone = scrapy.Field()
    estate_seller_mobile = scrapy.Field()
    estate_seller_email = scrapy.Field()

These are my columns:

(screenshot: the real_estate table columns in the database)

Edit: I found my mistake. The rows were in fact being inserted into the database, but every value began with a newline, so the data looked empty unless I clicked into the field. I applied strip() to every extracted value in my parse methods, and now the values show on the first line instead of being hidden on the second.

(screenshot: the inserted rows shown in pgAdmin 4)

Tags: python, postgresql, web-scraping

Solution


I found my mistake. The rows were being inserted into the database all along, but each value started with a newline, so the data appeared empty unless I clicked into the field. Applying strip() to every extracted value in the parse methods fixed it; the values now show on the first line instead of being hidden on the second.
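
Instead of repeating `.strip()` on every field, the cleanup can be centralised in one small helper. A sketch (the name `clean_text` is illustrative), which also guards against `extract_first()` returning `None` when a selector matches nothing:

```python
def clean_text(value):
    """Strip surrounding whitespace/newlines from an extracted value.
    extract_first() returns None on no match, so guard before strip().
    Illustrative helper, not part of the original spider."""
    if value is None:
        return None
    return value.strip()
```

In the spider this would wrap each extraction, e.g. `estate_title = clean_text(extract_with_css.css('h1[itemprop="name"]::text').extract_first())`.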

