Sorting scraped results side by side into one row

Problem Description

I'm using Python/Scrapy to scrape data from a web page. The page basically consists of 15 blocks, each containing various pieces of information, and my spider iterates over them to scrape specific content from each block. I'm happy with the content of the results, but not with how the data is presented: I want all the scraped information belonging to one block to show up in a single row. As you can see from the screenshot below, results from the same block are not presented side by side, which is what I want.

def parse(self, response):
    # Each loop yields a separate, single-field item.
    for i in response.css('span.dir'):
        yield {'address': i.css('b::text').extract()}
    for l in response.css('div.datos'):
        yield {'area': l.css('i::text').extract()}
    for x in response.css('div.opciones'):
        yield {'price stable': x.css('span.eur::text').extract()}
    for o in response.css('div.opciones'):
        yield {'price drop': o.css('div.mp_pvpant.baja::text').extract()}
    for y in response.css('div.opciones'):
        yield {'price decreased': y.css('span.eur_m::text').extract()}
    for u in response.css('div.datos'):
        yield {'link': u.css('a::attr(href)').extract_first()}
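
Each yield above creates its own standalone item, and the feed exporter writes one row per item, which is why the fields end up scattered across separate rows. To get them side by side, all six fields of a block have to be emitted together in a single yield.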

[Screenshot: results of the scrape]

Tags: python, python-3.x, for-loop, scrapy, scrapy-spider

Solution

If every block yields the same number of results, you can collect each field into its own list and then zip the lists together, like this:

def parse(self, response):
    # First pass: collect each field into its own list, in page order.
    addresses = []
    areas = []
    prices_stable = []
    prices_drop = []
    prices_decreased = []
    links = []
    for i in response.css('span.dir'):
        addresses.append(i.css('b::text').extract())
    for l in response.css('div.datos'):
        areas.append(l.css('i::text').extract())
    for x in response.css('div.opciones'):
        prices_stable.append(x.css('span.eur::text').extract())
    for o in response.css('div.opciones'):
        prices_drop.append(o.css('div.mp_pvpant.baja::text').extract())
    for y in response.css('div.opciones'):
        prices_decreased.append(y.css('span.eur_m::text').extract())
    for u in response.css('div.datos'):
        links.append(u.css('a::attr(href)').extract_first())

    # Second pass: zip the lists so the n-th entry of each one, i.e. all
    # the fields of the n-th block, is emitted as a single combined item.
    for address, area, price_stable, price_drop, price_decreased, link in zip(
            addresses, areas, prices_stable, prices_drop, prices_decreased, links):
        yield {
            'address': address,
            'area': area,
            'price_stable': price_stable,
            'price_drop': price_drop,
            'price_decreased': price_decreased,
            'link': link,
        }
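
One caveat: zip stops at the shortest list, so if any block lacks one of the selected elements entirely, the lists fall out of alignment and trailing rows are silently dropped. A more robust pattern is to iterate over the per-block container and run every sub-selector relative to it. The sketch below assumes each of the 15 blocks is wrapped in a common container element; the selector 'div.bloque' is a hypothetical placeholder and has to be replaced with whatever actually wraps one block on the target page:

def parse(self, response):
    # 'div.bloque' is a hypothetical placeholder for the element that
    # actually wraps one block on the page being scraped.
    for block in response.css('div.bloque'):
        # Every selector is relative to the current block, so all the
        # fields of one block always end up in the same item (row).
        yield {
            'address': block.css('span.dir b::text').extract(),
            'area': block.css('div.datos i::text').extract(),
            'price_stable': block.css('div.opciones span.eur::text').extract(),
            'price_drop': block.css('div.opciones div.mp_pvpant.baja::text').extract(),
            'price_decreased': block.css('div.opciones span.eur_m::text').extract(),
            'link': block.css('div.datos a::attr(href)').extract_first(),
        }

Because the selectors are scoped to one block, a missing field just produces an empty list (or None from extract_first()) for that block instead of shifting every later row.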
