scrapy - data from the following pages

Problem description

I have a problem. How do I download data after moving on to the next page? It only downloads from the first page. I'm pasting my code:

# -*- coding: utf-8 -*-
from scrapy import Spider
from scrapy.http import Request


class PronobelSpider(Spider):
    name = 'pronobel'
    allowed_domains = ['pronobel.pl']
    start_urls = ['http://pronobel.pl/praca-opieka-niemcy/']

    def parse(self, response):

        jobs = response.xpath('//*[@class="offer offer-immediate"]')
        for job in jobs:
            title = job.xpath('.//*[@class="offer-title"]/text()').extract_first()
            start_date = job.xpath('.//*[@class="offer-attr offer-departure"]/text()').extract_first()
            place = job.xpath('.//*[@class="offer-attr offer-localization"]/text()').extract_first()
            language = job.xpath('.//*[@class="offer-attr offer-salary"]/text()').extract()[1]

            print(title)
            print(start_date)
            print(place)
            print(language)

        next_page_url = response.xpath('//*[@class="page-nav nav-next"]/a/@href').extract_first()
        absolute_next_page_url = response.urljoin(next_page_url)
        yield Request(absolute_next_page_url)

I only get data from the first page.

Tags: scrapy

Solution


Your problem isn't with crawling the next page, it's with your selectors. First of all, when selecting elements by class, CSS selectors are the recommended way. What is happening is that the other pages have no elements with the offer-immediate class, so your XPath //*[@class="offer offer-immediate"] matches nothing there.
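
To illustrate the difference, here is a minimal, self-contained sketch using scrapy.Selector on made-up HTML (not the real pronobel.pl markup): the exact-match XPath only finds elements whose class attribute is literally "offer offer-immediate", while the CSS class selector matches every element that carries the offer class:

from scrapy.selector import Selector

html = '<div class="offer">A</div><div class="offer offer-immediate">B</div>'
sel = Selector(text=html)

# The XPath compares the whole class attribute string, so it only matches B.
print(sel.xpath('//*[@class="offer offer-immediate"]/text()').extract())   # ['B']

# The CSS class selector matches any element that has the class "offer",
# regardless of what other classes it also carries.
print(sel.css('.offer::text').extract())                                   # ['A', 'B']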

I made a few changes to your code, see below:

from scrapy import Spider
from scrapy.http import Request


class PronobelSpider(Spider):
    name = 'pronobel'
    allowed_domains = ['pronobel.pl']
    start_urls = ['http://pronobel.pl/praca-opieka-niemcy/']

    def parse(self, response):
        jobs = response.css('div.offers-list div.offer')
        for job in jobs:
            title = job.css('a.offer-title::text').extract_first()
            start_date = job.css('div.offer-attr.offer-departure::text').extract_first()
            place = job.css('div.offer-attr.offer-localization::text').extract_first()
            language = job.css('div.offer-attr.offer-salary::text').extract()[1]
            yield {
                'title': title,
                'start_date': start_date,
                'place': place,
                'language': language,
                'url': response.url,
            }

        next_page_url = response.css('li.page-nav.nav-next a::attr(href)').extract_first()
        # Only follow the link if there actually is a next page; on the last page
        # the selector returns None and urljoin() would fail.
        if next_page_url:
            absolute_next_page_url = response.urljoin(next_page_url)
            yield Request(absolute_next_page_url)
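
If you want to try the revised spider without the scrapy CLI, a minimal sketch of running it from a plain Python script (assuming the class above is importable and a Scrapy version with the FEEDS setting, 2.1+) would be:

from scrapy.crawler import CrawlerProcess

# Collect the yielded dicts into a JSON file; the file name here is just an example.
process = CrawlerProcess(settings={
    'FEEDS': {'offers.json': {'format': 'json'}},
})
process.crawl(PronobelSpider)
process.start()  # blocks until the crawl is finished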
