Scrapy: iterating over POST requests

Problem description

My code is working, but I'm not sure exactly how it works, and now I need to extend it.

After logging in, I want to loop over POST requests to the same URL.

    import json

    import lxml.etree
    import lxml.html
    import scrapy
    from scrapy.exceptions import DontCloseSpider


    class myspider(scrapy.Spider):
        name = 'myspider'
        start_urls = ['login_url']   # placeholder: the login page URL
        target_url = 'target_url'    # placeholder: the URL to POST to after login

        # submit the login form
        def parse(self, response):
            return scrapy.FormRequest.from_response(
                response,
                formdata={'user': 'x', 'pass': 'y'},
                callback=self.after_login
            )

        # runs once the login form has been submitted
        def after_login(self, response):
            if "authentication failed" in response.text:
                self.logger.error("Login failed")
                return
            yum = response.xpath('//span[@id="userName"]/text()').get()

        # connect spider_idle so the spider is kept alive for a second pass
        @classmethod
        def from_crawler(cls, crawler, *args, **kwargs):
            spider = super(myspider, cls).from_crawler(crawler, *args, **kwargs)
            crawler.signals.connect(spider.spider_idle,
                                    signal=scrapy.signals.spider_idle)
            return spider

        # second pass: fires when the spider has no more requests queued
        def spider_idle(self):
            self.crawler.signals.disconnect(self.spider_idle,
                                            signal=scrapy.signals.spider_idle)

            # placeholder payload for the POST request
            mydata = {'param1': 'value1', 'param2': 'value2', 'param3': 'param3'}
            self.crawler.engine.crawl(scrapy.Request(
                self.target_url,
                method='POST',
                body=json.dumps(mydata),
                headers={'Content-Type': 'application/json'},
                callback=self.parse_page2
            ), self)

            raise DontCloseSpider

        # parse the result of the second pass
        def parse_page2(self, response):
            self.logger.info("Visited %s", response.url)
            # drop scripts, comments and <head>, keep only the visible text,
            # then parse that text as JSON
            root = lxml.html.fromstring(response.body)
            lxml.etree.strip_elements(root, lxml.etree.Comment, "script", "head")
            data = lxml.html.tostring(root, method="text", encoding=str)

            texts = json.loads(data)
            res = {}

            # do something with the result

            return res

This code works: I log in and crawl the next URL with the logged-in session. The login succeeds, the items at that URL are extracted successfully, then after the first pass (the idle method) it crawls the next URL and finally parses the result.

But I don't know: is this the best way to crawl after logging in? Is there more mature code for this purpose, or a good technical explanation (mine is too superficial)? And finally, how can this code iterate over target_url to do more crawling with different POST requests? I tried adding this to the idle method, but it still fails.

One failed attempt:

    multi_param = self.allparam.split("-")
    for param in multi_param:
        self.logger.info("Visited %s", target_url)
        mydata={'param1': param1, 'param2': param2, 'param3': 'param3'}    
        self.crawler.engine.crawl(scrapy.Request(
            url=target_url,
            method='POST',
            body=json.dumps(mydata),
            dont_filter=True,
            callback=self.parse_page2
        ), self)

Another failed attempt:

I removed the from_crawler classmethod and added another scrape right after login, but it failed because it didn't have the login session.. :(
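That attempt looked roughly like this (a sketch, with the same names as the spider above; the payload values are placeholders):

    # sketch of the second attempt: yield the POST straight from the
    # login callback (payload values are placeholders)
    def after_login(self, response):
        if "authentication failed" in response.text:
            self.logger.error("Login failed")
            return
        yield scrapy.Request(
            self.target_url,
            method='POST',
            body=json.dumps({'param1': 'value1', 'param2': 'value2'}),
            headers={'Content-Type': 'application/json'},
            callback=self.parse_page2
        )

As far as I understand, Scrapy's cookies middleware should keep the session cookies from the login response, so I expected this to work.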

Thanks for any help!

Tags: python, post, scrapy

Solution


I think you need to add some code where spider_idle sends the second request, something like this:

    def spider_idle(self):
        allparam = self.listparam.split("-")
        for param in allparam:
            mydata = {'param1': param}
            self.crawler.engine.crawl(scrapy.Request(
                url=self.target_url,
                method='POST',
                body=json.dumps(mydata),
                dont_filter=True,
                headers={'Content-Type': 'application/json'},
                callback=self.parse_page2
            ), self)

This will iterate over your POST requests; hope it helps.
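Here self.listparam and self.target_url are assumed to already be set as attributes on your spider; one way is from the command line, e.g. scrapy crawl myspider -a listparam=A-B-C, since -a arguments become spider attributes. As a side note, on Scrapy 1.8+ you could also let scrapy.http.JsonRequest build the JSON body and Content-Type header for you; a minimal sketch of the same loop:

    from scrapy.http import JsonRequest  # available since Scrapy 1.8

    def spider_idle(self):
        for param in self.listparam.split("-"):
            # JsonRequest serializes `data` into the request body and sets
            # the Content-Type: application/json header automatically
            self.crawler.engine.crawl(JsonRequest(
                url=self.target_url,
                data={'param1': param},
                method='POST',
                dont_filter=True,
                callback=self.parse_page2
            ), self)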

