Scrapy stops crawling and yielding items after a while, but keeps running

Problem description

I wrote some scrapy code that is supposed to loop through a set of cities, go to a specific page for each city, grab all the data from the table on that page, and then iterate through every page of that table for each city. The code runs, but after a while it seems to time out or something, and I start seeing this in my log:

2020-12-16 18:47:47 [yjs] INFO: Parsing table and getting job data for page url http://www.yingjiesheng.com/other-morejob-1372.html
2020-12-16 18:48:27 [scrapy.extensions.logstats] INFO: Crawled 113 pages (at 2 pages/min), scraped 111 items (at 2 items/min)
2020-12-16 18:49:27 [scrapy.extensions.logstats] INFO: Crawled 113 pages (at 0 pages/min), scraped 111 items (at 0 items/min)
2020-12-16 18:50:27 [scrapy.extensions.logstats] INFO: Crawled 113 pages (at 0 pages/min), scraped 111 items (at 0 items/min)
2020-12-16 18:51:27 [scrapy.extensions.logstats] INFO: Crawled 113 pages (at 0 pages/min), scraped 111 items (at 0 items/min)
2020-12-16 18:52:27 [scrapy.extensions.logstats] INFO: Crawled 113 pages (at 0 pages/min), scraped 111 items (at 0 items/min)

This seems to happen at a random point each time. The first time I ran it, it started after 66 pages. Here is my spider code:

import datetime
import re

import scrapy

# YjsTable is the scrapy.Item subclass used in parse_tab (defined in the project's items module)

URLROOT = "https://www.yingjiesheng.com/"
CITIES = {"beijing": "北京"}

class YjsSpider(scrapy.Spider):
    name = "yjs"

    def start_requests(self):
        # loop through cities and pass info
        for key, value in CITIES.items():
            self.logger.info('Starting requests for %s', key)
            url = URLROOT + str(key)
            yield scrapy.Request(
                url=url, callback=self.retrieve_tabsuffix, 
                meta={'city': key, 'city_ch': value},
                encoding='gb18030'
            )

    def retrieve_tabsuffix(self, response):
        city = response.meta['city']
        city_ch = response.meta['city_ch']

        morepages = response.xpath(
            '//*[contains(concat( " ", @class, " " ), concat( " ", "mbth", " " ))]')
        morepage_html = morepages.css("a::attr(href)").get()
        if morepage_html and "-morejob-" in morepage_html:
            jobpage_one = f"{URLROOT}{city}-morejob-1.html"
        elif morepage_html and "list_" in morepage_html:
            jobpage_one = f"{URLROOT}{city}/list_1.html"
        else:
            # avoid a NameError when no recognisable pagination link is found
            self.logger.warning('No pagination link found for %s', city)
            return
        yield response.follow(
            url=jobpage_one,
            callback=self.retrieve_tabhtmls,
            meta={'city': city, 'city_ch': city_ch},
            encoding='gb18030')


    def retrieve_tabhtmls(self, response):
        city = response.meta['city']
        city_ch = response.meta['city_ch']
        self.logger.info('Response encoding is %s', response.encoding)

        # htmls
        listhtmls = response.xpath(
                '//*[contains(concat( " ", @class, " " ), concat( " ", "clear", " " ))]').get()
 
        totalrecords = response.xpath(
            '//*[contains(concat( " ", @class, " " ), concat( " ", "act", " " ))]').get()
        self.logger.info("totalrecords: %s", totalrecords)

        # identify the last page number
        listhtmls = listhtmls.split("a href=\"")
        for listhtml in listhtmls:
            if "last page" in listhtml:
                lastpagenum = re.findall(r"\d+", listhtml)[0]
        morejobpages = list(range(1, int(lastpagenum) + 1))
        self.logger.info("total number tables %s", lastpagenum)

        self.logger.info('Getting all table page URLs for %s', city)
        morejobpages_urls = [
                "http://www.yingjiesheng.com/{}/list_{}.html".format(city, i) for i in morejobpages]

        self.logger.info(morejobpages)
        yield from response.follow_all(
            urls=morejobpages_urls,
            callback=self.parse_tab,
            meta={'city': city, 'city_ch': city_ch,
                  'totalrecords': totalrecords},
            encoding='gb18030')
    

    def parse_tab(self, response):
        city = response.meta['city']
        city_ch = response.meta['city_ch']
        totalrecords = response.meta['totalrecords']
        self.logger.info('Parsing table and getting job data for page url %s', response.url)

        # table content
        tabcontent = response.xpath(
            '//*[(@id = "tb_job_list")]')
        # list of rows
        tabrows = tabcontent.css("tr.jobli").getall()

        item = YjsTable()
        item['table'] = tabrows
        item['time_scraped'] = datetime.datetime.now().strftime(
                "%m/%d/%Y %H:%M:%S")
        item['city'] = city
        item['city_ch'] = city_ch
        item['totalrecords'] = totalrecords
        item['pageurl'] = response.url
        yield item

The only post I've found that seems to run into the same problem was pulling from a SQL database, which I'm not doing.

Does anyone know why scrapy would work for a while and then suddenly stop requesting pages and scraping data, while continuing to run?

Edit: I re-ran it with debug logging enabled and got this:

2020-12-17 10:35:47 [scrapy.extensions.logstats] INFO: Crawled 41 pages (at 0 pages/min), scraped 39 items (at 0 items/min)
2020-12-17 10:35:49 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://www.yingjiesheng.com/app/job.php?Action=FullTimeMore&Location=guangzhou&Source=Other&Page=86> from <GET http://www.yingjiesheng.com/guangzhou-morejob-86.html>
2020-12-17 10:36:06 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://www.yingjiesheng.com/guangzhou-morejob-86.html> from <GET http://www.yingjiesheng.com/app/job.php?Action=FullTimeMore&Location=guangzhou&Source=Other&Page=86>
2020-12-17 10:36:24 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET http://www.yingjiesheng.com/app/job.php?Action=FullTimeMore&Location=guangzhou&Source=Other&Page=85> from <GET http://www.yingjiesheng.com/guangzhou-morejob-85.html>

So it looks like I'm being redirected, but the redirect never successfully scrapes the information before moving on to the next page. Does anyone know how to make scrapy keep retrying a page until it succeeds? Or is there a better way to handle this?

Tags: python, web-scraping, scrapy, web-crawler

Solution


First, check your logging configuration so you get more useful output for this situation. The simplest change is to set LOG_LEVEL = 'DEBUG' so you can see everything that is happening; it currently appears to be set to INFO.
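For example, a minimal sketch of that change in settings.py, or per spider via custom_settings:

# settings.py -- raise the log verbosity for the whole project
LOG_LEVEL = 'DEBUG'

# or, only for this spider, without touching the project settings:
class YjsSpider(scrapy.Spider):
    name = "yjs"
    custom_settings = {'LOG_LEVEL': 'DEBUG'}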

What may be happening is that the spider keeps issuing requests, but they are all being rejected (404, 503, and so on), so they never count as crawled "pages".
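One way to confirm whether that is happening is to attach an errback to your requests so that rejected or failed requests show up explicitly in the log. A rough sketch, inside the spider class, where log_failure is a made-up method name:

    def start_requests(self):
        for key, value in CITIES.items():
            yield scrapy.Request(
                url=URLROOT + str(key),
                callback=self.retrieve_tabsuffix,
                errback=self.log_failure,  # called when the request errors out
                meta={'city': key, 'city_ch': value},
            )

    def log_failure(self, failure):
        # failure.request is the Request that could not be completed; this makes
        # 404s, 503s, timeouts and other failures visible instead of silent
        self.logger.error('Request failed: %s -- %s', failure.request.url, repr(failure))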

You may also have some pages with very long response times. Scrapy won't appear to stop working, because it is asynchronous by nature, so the periodic log lines keep appearing even while scrapy is still waiting for a proper response.
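If slow responses are part of the problem, the download timeout and retry settings are the knobs to look at. A sketch with purely illustrative values:

# settings.py -- illustrative values, not recommendations
DOWNLOAD_TIMEOUT = 30   # give up on a response after 30 seconds (Scrapy's default is 180)
RETRY_ENABLED = True    # retry failed requests (enabled by default)
RETRY_TIMES = 3         # extra attempts per failed request
DOWNLOAD_DELAY = 1      # slow down a little to be gentler on the site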

You can also configure a scrapy project to behave this way on purpose (never finish, keep running), but from what you've shared that doesn't look like the case here. Even so, it's worth checking what your extensions, pipelines, and middlewares are doing, so you can be sure none of them is interfering with your spider.
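Scrapy logs the enabled extensions, downloader middlewares, spider middlewares, and item pipelines at startup, so the log itself tells you what is active. To rule a component out, you can disable it explicitly; a sketch assuming a hypothetical pipeline path:

# settings.py -- setting a component's order to None disables it
ITEM_PIPELINES = {
    'myproject.pipelines.SomePipeline': None,  # hypothetical path, replace with your own
}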

You can always kill the spider, but it's better to make sure it stops on its own, because then it also dumps its stats, which say a lot about what happened during the run.
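A single Ctrl-C (or SIGTERM) asks Scrapy to finish gracefully and still dump those stats; a second one kills it immediately. You can also read the stats from the spider itself, for example with the closed() hook:

    def closed(self, reason):
        # called when the spider finishes; the stats include response counts,
        # retries, redirects and exceptions seen during the run
        self.logger.info('Spider closed (%s): %s', reason, self.crawler.stats.get_stats())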

