Scrapy not populating output after pagination

Problem description

I wrote a simple spider to extract some book data. I have done this before, but now it is not working and I cannot figure out what the problem is. I am using Python 3.7.

RomHistSpider.py

import scrapy
from romanceshistoricos.items import Livro 

class RomHistSpider(scrapy.Spider):
    name = 'RomHistSpider'
    allowed_domains = ['ebook-romanceshistoricos.blogspot.com']
    start_urls = [
            'https://ebook-romanceshistoricos.blogspot.com/',
            ]

    def parse(self, response):

        livros = response.xpath('//h3/a/@href').extract()
        for livro in livros:
            absolute_url = response.urljoin(livro)
            yield scrapy.Request(absolute_url, callback=self.parse)

        titulo = response.xpath('//div[1]/h3/text()').extract_first()
        serie = response.xpath('//*[@class="post-body entry-content"]/b[1]/text()').extract_first()
        sinopse = ''.join(response.xpath('//*[@id="post-body-8314965505352045179"]/div/b/span/text()').extract())
        tags = response.xpath('//div[1]/div/div/div/div[1]/div[3]/div[2]/span/a/text()').extract()
        endereco = response.url

        item = Livro()
        item['titulo'] = titulo.strip() 
        if serie is not None:
            item['serie'] =  serie.strip() 
        else:
            item['serie'] = 'n/a' 
        item['sinopse'] = sinopse.partition('Capítulo')[0]
        item['tags'] = ', '.join(tags)
        item['url'] = endereco

        return item

        nxt_url = response.xpath(
                '//*[@id="Blog1_blog-pager-older-link"]/@href').extract_first()
        if nxt_url is not None:
            yield response.follow(nxt_url, callback=self.parse)

$ scrapy crawl RomHistSpider -o test.csv

2018-08-20 15:48:58 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: romanceshistoricos)
2018-08-20 15:48:58 [scrapy.utils.log] INFO: Versions: lxml 4.2.4.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0dev0, Python 3.7.0 (default, Jul 15 2018, 10:44:58) - [GCC 8.1.1 20180531], pyOpenSSL 18.0.0 (OpenSSL 1.1.0i  14 Aug 2018), cryptography 2.3.1, Platform Linux-4.14.64-1-MANJARO-x86_64-with-arch-Manjaro-Linux
2018-08-20 15:48:58 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'romanceshistoricos', 'EDITOR': '/usr/bin/nano', 'HTTPCACHE_ENABLED': True, 'HTTPCACHE_EXPIRATION_SECS': 86400, 'NEWSPIDER_MODULE': 'romanceshistoricos.spiders', 'SPIDER_MODULES': ['romanceshistoricos.spiders']}
2018-08-20 15:48:58 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2018-08-20 15:48:58 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2018-08-20 15:48:58 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-08-20 15:48:58 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-08-20 15:48:58 [scrapy.core.engine] INFO: Spider opened
2018-08-20 15:48:58 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-08-20 15:48:58 [scrapy.extensions.httpcache] DEBUG: Using filesystem cache storage in /home/porco/code/scrap_env/romanceshistoricos/.scrapy/httpcache
2018-08-20 15:48:58 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/> (referer: None) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/o-despertar-de-belle.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/07/uma-paixao-francesa.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/o-sosia-do-duque.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/o-fim-do-inverno.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/o-desafio-do-highlander.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/a-falsa-noiva-do-major.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/lagrimas-do-coracao.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/uma-rosa-na-batalha.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/casualmente-valentina.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/o-mar-em-teus-olhos.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/as-brumas-da-memoria.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://ebook-romanceshistoricos.blogspot.com/2018/08/me-apaixonei-por-um-lorde.html> (referer: https://ebook-romanceshistoricos.blogspot.com/) ['cached']
2018-08-20 15:48:58 [scrapy.core.engine] INFO: Closing spider (finished)
2018-08-20 15:48:58 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 4147,
 'downloader/request_count': 13,
 'downloader/request_method_count/GET': 13,
 'downloader/response_bytes': 1085061,
 'downloader/response_count': 13,
 'downloader/response_status_count/200': 13,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 8, 20, 18, 48, 58, 787491),
 'httpcache/hit': 13,
 'log_count/DEBUG': 14,
 'log_count/INFO': 8,
 'memusage/max': 52195328,
 'memusage/startup': 52195328,
 'request_depth_max': 1,
 'response_received_count': 13,
 'scheduler/dequeued': 13,
 'scheduler/dequeued/memory': 13,
 'scheduler/enqueued': 13,
 'scheduler/enqueued/memory': 13,
 'start_time': datetime.datetime(2018, 8, 20, 18, 48, 58, 131156)}
2018-08-20 15:48:58 [scrapy.core.engine] INFO: Spider closed (finished)

Tags: python, scrapy

Solution


First, don't use return for your item; use yield instead. Because parse already contains yield statements, it is a generator, and the value passed to a generator's return is simply discarded, so Scrapy never receives the item. Your crawl stats confirm this: there is no item_scraped_count at all. The return also makes everything after it unreachable, which is why the pagination request is never issued. Next, I would suggest splitting your parse method into two parts:

def parse(self, response):

    # Follow every book link on the listing page to its detail page.
    livros = response.xpath('//h3/a/@href').extract()
    for livro in livros:
        absolute_url = response.urljoin(livro)
        yield scrapy.Request(absolute_url, callback=self.parse_details)

    # Queue the next listing page, if there is one.
    nxt_url = response.xpath(
            '//*[@id="Blog1_blog-pager-older-link"]/@href').extract_first()
    if nxt_url is not None:
        yield response.follow(nxt_url, callback=self.parse)

def parse_details(self, response):

    # Extract the item fields from a single book page.
    titulo = response.xpath('//div[1]/h3/text()').extract_first()
    serie = response.xpath('//*[@class="post-body entry-content"]/b[1]/text()').extract_first()
    sinopse = ''.join(response.xpath('//*[@id="post-body-8314965505352045179"]/div/b/span/text()').extract())
    tags = response.xpath('//div[1]/div/div/div/div[1]/div[3]/div[2]/span/a/text()').extract()
    endereco = response.url

    item = Livro()
    item['titulo'] = titulo.strip()
    if serie is not None:
        item['serie'] = serie.strip()
    else:
        item['serie'] = 'n/a'
    item['sinopse'] = sinopse.partition('Capítulo')[0]
    item['tags'] = ', '.join(tags)
    item['url'] = endereco

    yield item
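
One more selector issue worth flagging, separate from the pagination bug: the sinopse XPath hardcodes the ID of a single post (post-body-8314965505352045179), so it can only ever match that one article and will produce an empty synopsis for every other book. Below is a minimal sketch of a more generic alternative, assuming the Blogger template wraps every article in the same post-body entry-content container that the serie selector already relies on:

# Hypothetical replacement for the hardcoded-ID synopsis selector in
# parse_details. It anchors on the post-body class rather than one post's
# numeric ID, so it should match the synopsis block on any article page
# using this template:
sinopse = ''.join(
    response.xpath(
        '//div[contains(@class, "post-body")]//b/span/text()'
    ).extract()
)

With the yield fix and the split parse in place, re-running scrapy crawl RomHistSpider -o test.csv should produce rows in test.csv and report an item_scraped_count entry in the final stats.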
