Problem scraping craigslist.org

Problem description

I wrote a spider, and for some reason it isn't working.

I'm new to this, but from the log it looks like the page loads successfully? I have tested my XPath selectors in the browser and they work fine. I checked the craigslist.org/robots.txt file, and it doesn't explicitly disallow what I'm doing.

Does anyone know what's going on?

Could it be related to the user-agent string? Is the spider being served a different version of the page?

Spider

import scrapy

class RentalsCrawler(scrapy.Spider):
    name = "rentals"
    allowed_domains = [
        'craigslist.org'
    ]
    custom_settings = {
        'DOWNLOAD_DELAY': 2,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 2,
    }
    handle_httpstatus_list = [404]
    def start_requests(self):
        start = 0
        nopgs = 1
        pages = []
        for i in range(0, nopgs):
            i = i * 120 + start
            pages.append('https://vancouver.craigslist.ca/search/apa?s=' + str(i))
        for page in pages:
            yield scrapy.Request(url=page, callback=self.parse)
    def parse(self, response):
        prc_path = '//span[@class="result-meta"]/span[@class="result-price"]/text()'
        sqf_path = '//span[@class="result-meta"]/span[@class="housing"]/text()'
        loc_path = '//span[@class="result-meta"]/span[@class="result-hood"]/text()'
        prc_resp = response.xpath(prc_path).extract_first()
        sqf_resp = response.xpath(sqf_path).extract_first()
        loc_resp = response.xpath(loc_path).extract_first()
        objct = { 'prc': prc_resp }
        if sqf_resp:
            objct['sqf'] = sqf_resp
        if loc_resp:
            objct['loc'] = loc_resp
        yield objct
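
One thing worth noting about the `parse` above: `extract_first()` applied to a page-level XPath returns only the first matching node, even when the listing page contains many results, which would explain getting a single item per page. A minimal, standalone illustration of "first match" vs. "all matches" semantics, using only the standard library on invented markup shaped like craigslist's result rows (the sample HTML and values here are made up for the demonstration):

```python
import xml.etree.ElementTree as ET

# Invented markup mimicking a craigslist search results page.
page = ET.fromstring("""
<ul>
  <li><span class="result-price">$2400</span></li>
  <li><span class="result-price">$1800</span></li>
  <li><span class="result-price">$2100</span></li>
</ul>
""")

# Analogous to extract_first(): only the first price on the whole page.
first = page.find('.//span[@class="result-price"]').text

# Analogous to extract(): every price on the page.
all_prices = [el.text for el in page.findall('.//span[@class="result-price"]')]

print(first)       # $2400
print(all_prices)  # ['$2400', '$1800', '$2100']
```

In Scrapy the same distinction applies between `response.xpath(...).extract_first()` and `response.xpath(...).extract()`.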

Log

(base) C:\Users\Anthony\tutorial\tutorial\spiders>scrapy runspider rentals.py -o rentals.json
2018-06-07 15:58:23 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: tutorial)
2018-06-07 15:58:23 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 17.5.0, Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.0.2o  27 Mar 2018), cryptography 2.2.2, Platform Windows-10-10.0.17134-SP0
2018-06-07 15:58:23 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'tutorial', 'FEED_FORMAT': 'json', 'FEED_URI': 'rentals.json', 'NEWSPIDER_MODULE': 'tutorial.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_LOADER_WARN_ONLY': True, 'SPIDER_MODULES': ['tutorial.spiders']}
2018-06-07 15:58:23 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats']
2018-06-07 15:58:23 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-06-07 15:58:23 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-06-07 15:58:23 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-06-07 15:58:23 [scrapy.core.engine] INFO: Spider opened
2018-06-07 15:58:23 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-06-07 15:58:23 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-06-07 15:58:23 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://vancouver.craigslist.ca/robots.txt> (referer: None)
2018-06-07 15:58:24 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://vancouver.craigslist.ca/search/apa?s=0> (referer: None)
2018-06-07 15:58:24 [scrapy.core.engine] INFO: Closing spider (finished)
2018-06-07 15:58:24 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 468,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 36594,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 6, 7, 22, 58, 24, 237666),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2018, 6, 7, 22, 58, 23, 792075)}
2018-06-07 15:58:24 [scrapy.core.engine] INFO: Spider closed (finished)

Output

An empty JSON file.

scrapy.cfg

# Automatically created by: scrapy startproject
#
# For more information about the [deploy] section see:
# https://scrapyd.readthedocs.io/en/latest/deploy.html

[settings]
default = tutorial.settings

[deploy]
#url = http://localhost:6800/
project = tutorial

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for tutorial project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'tutorial'

SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tutorial (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'tutorial.middlewares.TutorialSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'tutorial.middlewares.TutorialDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'tutorial.pipelines.TutorialPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Log (with `yield objct`)

(base) C:\Users\Anthony\tutorial\tutorial\spiders>scrapy runspider rentals.py -o rentals.json
2018-06-07 17:33:16 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: tutorial)
2018-06-07 17:33:16 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.4.0, w3lib 1.19.0, Twisted 17.5.0, Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.0.2o  27 Mar 2018), cryptography 2.2.2, Platform Windows-10-10.0.17134-SP0
2018-06-07 17:33:16 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'tutorial', 'FEED_FORMAT': 'json', 'FEED_URI': 'rentals.json', 'NEWSPIDER_MODULE': 'tutorial.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_LOADER_WARN_ONLY': True, 'SPIDER_MODULES': ['tutorial.spiders']}
2018-06-07 17:33:16 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats']
2018-06-07 17:33:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-06-07 17:33:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-06-07 17:33:16 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-06-07 17:33:16 [scrapy.core.engine] INFO: Spider opened
2018-06-07 17:33:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-06-07 17:33:16 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-06-07 17:33:16 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://vancouver.craigslist.ca/robots.txt> (referer: None)
2018-06-07 17:33:16 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://vancouver.craigslist.ca/search/apa?s=0> (referer: None)
2018-06-07 17:33:17 [scrapy.core.scraper] DEBUG: Scraped from <200 https://vancouver.craigslist.ca/search/apa?s=0>
{'prc': '$2400', 'sqf': '\n                    1br -\n                    895ft', 'loc': ' (North Vancouver)'}
2018-06-07 17:33:17 [scrapy.core.engine] INFO: Closing spider (finished)
2018-06-07 17:33:17 [scrapy.extensions.feedexport] INFO: Stored json feed (1 items) in: rentals.json
2018-06-07 17:33:17 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 468,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 37724,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 6, 8, 0, 33, 17, 36724),
'item_scraped_count': 1,
'log_count/DEBUG': 4,
'log_count/INFO': 8,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2018, 6, 8, 0, 33, 16, 533959)}
2018-06-07 17:33:17 [scrapy.core.engine] INFO: Spider closed (finished)

Conclusion

I finally have code that outputs what I expected. Unfortunately, with my original XPath approach the script lumped all the prices into one list, all the square footages into another, and all the locations into a third. I prefer XPath, and I'm sure there is a way to keep using it while still splitting each listing into its own dict.

import scrapy

class RentalsCrawler(scrapy.Spider):
    name = "rentals"
    allowed_domains = [
        'craigslist.org',
        'kajiji.ca'
    ]
    custom_settings = {
        'DOWNLOAD_DELAY': 2,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 2,
    }
    handle_httpstatus_list = [404]
    def start_requests(self):
        start = 0
        nopgs = 1
        pages = []
        for i in range(0, nopgs):
            i = i * 120 + start
            pages.append('https://vancouver.craigslist.ca/search/apa?s=' + str(i))
        for page in pages:
            yield scrapy.Request(url=page, callback=self.parse)
    def parse(self, response):
        for li in response.css('ul.rows li p span.result-meta'):
            prc = li.css('span.result-price::text').extract_first()
            sqf = li.css('span.housing::text').extract_first()
            loc = li.css('span.result-hood::text').extract_first()
            objct = { 'prc': prc }
            if sqf:
                objct['sqf'] = sqf
            if loc:
                objct['loc'] = loc
            yield objct
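
To keep XPath instead of CSS, the same pattern works: iterate over the result nodes and apply a relative XPath (one starting with `.` or `.//`) to each node, so every query is scoped to a single listing. A standalone sketch of the idea using only the standard library, on invented markup shaped like craigslist's result rows (in Scrapy itself this would be `response.xpath('//span[@class="result-meta"]')` followed by `sel.xpath('.//span[@class="result-price"]/text()')` on each selector):

```python
import xml.etree.ElementTree as ET

# Invented markup mimicking two craigslist result rows.
page = ET.fromstring("""
<ul>
  <li><span class="result-meta">
    <span class="result-price">$2400</span>
    <span class="result-hood"> (North Vancouver)</span>
  </span></li>
  <li><span class="result-meta">
    <span class="result-price">$1800</span>
  </span></li>
</ul>
""")

items = []
for meta in page.findall('.//span[@class="result-meta"]'):
    # Relative queries, scoped to this one listing.
    objct = {'prc': meta.find('span[@class="result-price"]').text}
    hood = meta.find('span[@class="result-hood"]')
    if hood is not None:
        objct['loc'] = hood.text
    items.append(objct)

print(items)
# [{'prc': '$2400', 'loc': ' (North Vancouver)'}, {'prc': '$1800'}]
```

The key point is that an absolute XPath (`//span[...]`) always searches the whole page, while a relative one searches only within the node it is called on.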

Tags: python, python-3.x, python-2.7, scrapy

Solution

Is your code sample complete? If so, you may just be missing a line at the end of `parse` that yields the item into the current Scrapy job. I forget whether you have to yield an actual Scrapy `Item`, but start by trying `yield objct`.

i.e.

def parse(self, response):
    ...
    objct['key'] = response.xpath("/my/clever/xpath")
    ...
    yield objct
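
For background: `parse` is a generator, and Scrapy collects every value it yields into the output feed. A minimal plain-Python sketch (the row values are made up) of why one `yield` per listing produces one record per listing:

```python
# A generator-style parse: one yield per row means one item per row
# when the consumer (here list(); in Scrapy, the feed exporter) drains it.
def parse(rows):
    for row in rows:
        yield {'prc': row}

items = list(parse(['$2400', '$1800']))
print(items)  # [{'prc': '$2400'}, {'prc': '$1800'}]
```

If `parse` never reaches a `yield`, the generator simply produces nothing, and the feed file ends up empty.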
