python - Only getting two results per page
Problem description
First of all, thank you very much for your help!
I don't know why I only get two results per page. Could you please help me? Here is the code:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from mercado.items import MercadoItem


class MercadoSpider(CrawlSpider):
    name = 'mercado'
    item_count = 0
    allowed_domain = ['https://www.amazon.es']
    start_urls = ['https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page=1&keywords=febi&ie=UTF8&qid=1535314254']

    rules = {
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//*[h2]')),
             callback='parse_item', follow=False)
    }

    def start_requests(self):
        yield scrapy.Request("https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page=1&keywords=febi&ie=UTF8&qid=1535314254", self.parse_item)
        for i in range(2, 400):
            yield scrapy.Request("https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page=" + str(i) + "&keywords=febi&ie=UTF8&qid=1535314254", self.parse_item)

    def parse_item(self, response):
        for mercado in response.xpath('//*[h2]'):
            ml_item = MercadoItem()
            ml_item['articulo'] = response.xpath("@title").extract()[0]
            ml_item['precio'] = response.xpath("@href").extract()[0]
            yield ml_item
Solution
You need to search relative to your mercado element:
def parse_item(self, response):
    for mercado in response.xpath('//*[h2]'):
        ml_item = MercadoItem()
        ml_item['articulo'] = mercado.xpath("@title").extract()[0]
        ml_item['precio'] = mercado.xpath("@href").extract()[0]
        yield ml_item