Scrapy Spider Gives "Spider error processing" With "Traceback (most recent call last)"

Problem Description

I'm facing a web scraping error and have been stuck on it for the past two days. Can anyone please guide me through this Scrapy error?

The error says: Spider error processing <GET http://books.toscrape.com/catalogue/category/books/historical-fiction_4/index.html> (referer: http://books.toscrape.com/) Traceback (most recent call last):

Here is the error message from the command-prompt output:

2021-09-28 22:16:24 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2021-09-28 22:16:25 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/> (referer: None)
2021-09-28 22:16:25 [scrapy.core.scraper] DEBUG: Scraped from <200 http://books.toscrape.com/>
{'Category_Name': 'Historical Fiction', 'Kategorylink': 'http://books.toscrape.com/catalogue/category/books/historical-fiction_4/index.html'}
2021-09-28 22:16:25 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/category/books/historical-fiction_4/index.html> (referer: http://books.toscrape.com/)
2021-09-28 22:16:26 [scrapy.core.scraper] ERROR: Spider error processing <GET http://books.toscrape.com/catalogue/category/books/historical-fiction_4/index.html> (referer: http://books.toscrape.com/)
Traceback (most recent call last):
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\utils\defer.py", line 120, in iter_errback
    yield next(it)
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\utils\python.py", line 353, in __next__
    return next(self.data)
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\utils\python.py", line 353, in __next__
    return next(self.data)
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\spidermw.py", line 56, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\spidermw.py", line 56, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 342, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\spidermw.py", line 56, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\spidermiddlewares\urllength.py", 
line 40, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\spidermw.py", line 56, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\Abu Bakar Siddique\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\spidermw.py", line 56, in _evaluate_iterable
    for r in iterable:
  File "D:\tutorials\WEB scrapping\web scraping practice projects\scrapybooksspider\scrapybooksspider\spiders\selnext.py", line 23, in info_parse
    Category_Name=response.request.meta('category_name')
TypeError: 'dict' object is not callable
2021-09-28 22:16:26 [scrapy.core.engine] INFO: Closing spider (finished)
2021-09-28 22:16:26 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 529,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 11464,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'elapsed_time_seconds': 1.373366,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2021, 9, 29, 5, 16, 26, 107603),
 'httpcompression/response_bytes': 101403,
 'httpcompression/response_count': 2,
 'item_scraped_count': 1,
 'log_count/DEBUG': 3,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'spider_exceptions/TypeError': 1,
 'start_time': datetime.datetime(2021, 9, 29, 5, 16, 24, 734237)}
2021-09-28 22:16:26 [scrapy.core.engine] INFO: Spider closed (finished)

Here is my code:

import scrapy
from scrapy.http import HtmlResponse
import requests
from bs4 import BeautifulSoup

class ScrapSpider(scrapy.Spider):
    name = 'scrapp'
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        categ=response.xpath('//div[@class="side_categories"]/ul[@class="nav nav-list"]/li/ul/li')
        # for category in  categ:
        Category_Name=categ.xpath('.//a[contains(text(),"Historical Fiction")]/text()').get().replace('\n',"").strip()
        Kategorylink=categ.xpath('.//a[contains(text(),"Historical Fiction")]/@href').get().replace('\n',"").strip()
        yield{
           'Category_Name':Category_Name,
           'Kategorylink':response.urljoin(Kategorylink)
        }
        yield scrapy.Request(url=response.urljoin(Kategorylink),callback=self.info_parse,meta={'category_name':Category_Name,'category_link':Kategorylink})  
    
    def info_parse(self,response):
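        # This next line is what raises the TypeError in the traceback:
        # meta is a dict, so it cannot be called like a function.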
        Category_Name=response.request.meta('category_name')
        Kategorylink=response.request.meta('category_link')      
        Book_Frame=response.xpath('//section/div/ol/li/article[@class="product_pod"]/h3/a/@href')
        for books in Book_Frame:            
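            # Note: 'books' is a Selector, not a URL string, and no meta is
            # passed along here; both issues are fixed in the solution below.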
            yield scrapy.Request(url=response.urljoin(books),callback=self.book_info)           
           
        

    def book_info(self,response):
        Category_Name=response.request.meta('category_name')
        Kategorylink=response.request.meta('category_link')
        name= response.xpath('//*[@class="price_color"]/text()').get()
        yield{
            'Category_Name':Category_Name,
            'Categorylink':Kategorylink,
            'Books':name

        }

Awaiting your kind support. Thanks!

Tags: python-3.x, web-scraping, scrapy

Solution


You have 3 problems:

  1. Change response.request.meta(...) to response.meta.get(...). meta is a dict, not a callable, which is exactly what TypeError: 'dict' object is not callable in your traceback means (see the sketch after this list).
  2. In yield scrapy.Request(url=response.urljoin(books),callback=self.book_info), look at the 'books' values to see why you cannot join them: they are Selector objects, not URL strings. You should change it to response.follow(url=books, callback=self.book_info), which accepts a Selector directly.
  3. You forgot to pass the meta along to the book_info function.
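
To see why fix 1 matters: response.meta (and response.request.meta) is a plain Python dict, so calling it with parentheses raises exactly the TypeError shown in the traceback. A minimal, self-contained sketch; the dict literal below is only a stand-in for the real response.meta:

# Stand-in for response.meta, which Scrapy exposes as a plain dict
meta = {'category_name': 'Historical Fiction'}

try:
    meta('category_name')  # what the spider did: calling a dict
except TypeError as err:
    print(err)             # prints: 'dict' object is not callable

print(meta['category_name'])      # correct: subscript access
print(meta.get('category_name'))  # correct: .get() returns None if missing

The corrected spider:
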
import scrapy

class ScrapSpider(scrapy.Spider):
    name = 'scrapp'
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        categ=response.xpath('//div[@class="side_categories"]/ul[@class="nav nav-list"]/li/ul/li')
        # for category in  categ:
        Category_Name=categ.xpath('.//a[contains(text(),"Historical Fiction")]/text()').get().replace('\n',"").strip()
        Kategorylink=categ.xpath('.//a[contains(text(),"Historical Fiction")]/@href').get().replace('\n',"").strip()
        yield{
            'Category_Name':Category_Name,
            'Kategorylink':response.urljoin(Kategorylink)
        }
        yield scrapy.Request(url=response.urljoin(Kategorylink),callback=self.info_parse,meta={'category_name':Category_Name,'category_link':Kategorylink})

    def info_parse(self,response):
        Category_Name=response.meta.get('category_name')
        Kategorylink=response.meta.get('category_link')
        Book_Frame=response.xpath('//section/div/ol/li/article[@class="product_pod"]/h3/a/@href')
        for books in Book_Frame:
            yield response.follow(url=books, callback=self.book_info, meta={'category_name':Category_Name,'category_link':Kategorylink})

    def book_info(self,response):
        Category_Name=response.meta.get('category_name')
        Kategorylink=response.meta.get('category_link')
        name= response.xpath('//*[@class="price_color"]/text()').get()
        yield{
            'Category_Name':Category_Name,
            'Categorylink':Kategorylink,
            'Books':name
        }
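
To verify the fix end to end, one option is to run the spider from a plain Python script and export the scraped items to a file. A minimal sketch, assuming the corrected ScrapSpider class above is defined (or imported) in the same script; the feed file name books.json is just an example:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(settings={
    # FEEDS exports every scraped item; here to a JSON file
    'FEEDS': {'books.json': {'format': 'json'}},
    'LOG_LEVEL': 'INFO',
})
process.crawl(ScrapSpider)
process.start()  # blocks until the crawl finishes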
