Why doesn't my Scrapy spider work correctly with Flask?

Problem description

I have a Flask application that takes a URL from the user, then crawls that website and returns the links found on it. Previously I had a problem where the crawler would only run once and never run again after that; I solved that by using CrawlerRunner instead of CrawlerProcess. This is what my code looks like:

from flask import Flask, render_template, request, redirect, url_for, session, make_response
from flask_executor import Executor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from twisted.internet import reactor
from urllib.parse import urlparse
from uuid import uuid4
import sys, urllib3, requests, urllib.parse

app = Flask(__name__)
executor = Executor(app)

http = urllib3.PoolManager()
runner = CrawlerRunner()

list = set([])
list_validate = set([])
list_final = set([])

@app.route('/', methods=["POST", "GET"])
def index():
    if request.method == "POST":
        url_input = request.form["usr_input"]

        # Modifying URL
        if 'https://' in url_input and url_input[-1] == '/':
            url = str(url_input)
        elif 'https://' in url_input and url_input[-1] != '/':
            url = str(url_input) + '/'
        elif 'https://' not in url_input and url_input[-1] != '/':
            url = 'https://' + str(url_input) + '/'
        elif 'https://' not in url_input and url_input[-1] == '/':
            url = 'https://' + str(url_input)
        # Validating URL
        try:
            response = requests.get(url)
            error = http.request("GET", url)
            if error.status == 200:
                parse = urlparse(url).netloc.split('.')
                base_url = parse[-2] + '.' + parse[-1]
                start_url = [str(url)]
                allowed_url = [str(base_url)]

                # Crawling links
                class Crawler(CrawlSpider):
                    name = "crawler"
                    start_urls = start_url
                    allowed_domains = allowed_url
                    rules = [Rule(LinkExtractor(), callback='parse_links', follow=True)]

                    def parse_links(self, response):
                        base_url = url
                        href = response.xpath('//a/@href').getall()
                        list.add(urllib.parse.quote(response.url, safe=':/'))
                        for link in href:
                            if base_url not in link:
                                list.add(urllib.parse.quote(response.urljoin(link), safe=':/'))
                        for link in list:
                            if base_url in link:
                                list_validate.add(link)

                def start_spider():
                    d = runner.crawl(Crawler)

                    # Validate the crawled links once the crawl finishes and
                    # write the working ones to a file
                    def start(d):
                        for link in list_validate:
                            error = http.request("GET", link)
                            if error.status == 200:
                                list_final.add(link)
                        original_stdout = sys.stdout
                        with open('templates/file.txt', 'w') as f:
                            sys.stdout = f
                            for link in list_final:
                                print(link)
                        sys.stdout = original_stdout

                    d.addCallback(start)

                def run():
                    reactor.run(0)

                unique_id = uuid4().__str__()
                executor.submit_stored(unique_id, start_spider)
                executor.submit(run)
                return redirect(url_for('crawling', id=unique_id))

            elif error.status != 200:
                return render_template('index.html')

        except requests.ConnectionError as exception:
            return render_template('index.html')
    else:
        return render_template('index.html')

@app.route('/crawling-<string:id>')
def crawling(id):
    if not executor.futures.done(id):
        return render_template('start-crawl.html', refresh=True)
    else:
        executor.futures.pop(id)
        return render_template('finish-crawl.html')

I also have this code in start-crawl.html that refreshes the page every 5 seconds:

{% if refresh %}
    <meta http-equiv="refresh" content="5">
{% endif %}

The problem is that start-crawl.html is only rendered during the crawl, not during the validation. So basically what is happening is that it takes the URL and crawls it while rendering start-crawl.html, and then it moves on to finish-crawl.html while the validation is still running.

I believe the problem may be in start_spider(), at the line d.addCallback(start). I think that line is being executed in the background in a way I don't want. What I believe is happening is that inside start_spider(), d = runner.crawl(Crawler) executes, and then d.addCallback(start) happens in the background, which is why it takes me to finish-crawl.html while the validation is still going. I want the whole function to execute in the background, not just that part; that is why I have executor.submit_stored(unique_id, start_spider).
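To make what I mean concrete, here is a rough sketch (hypothetical, not my actual code, and it assumes the reactor is already running in some other thread) of chaining the validation onto the Deferred and blocking start_spider() until it fires, so the whole job only counts as done after validation:

# Hypothetical sketch, not the code I am running: chain the validation onto
# the Deferred and keep start_spider() blocked until everything has finished.
# Assumes the reactor is already running in another thread.
import threading

def start_spider():
    finished = threading.Event()

    def validate(_result):
        # Runs in the reactor thread once the crawl is done
        for link in list_validate:
            if http.request("GET", link).status == 200:
                list_final.add(link)
        finished.set()  # only now is the whole job really done

    def schedule():
        d = runner.crawl(Crawler)      # must be started from the reactor thread
        d.addCallback(validate)

    reactor.callFromThread(schedule)
    finished.wait()                    # keeps the Executor future pending until validation ends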

I want this code to take a URL, then crawl and validate it while start-crawl.html is rendered, and only render finish-crawl.html once all of that is finished.

Anyway, if that is not the problem, does anyone know what is and how to fix it? Please ignore the complexity of this code and anything that is not "programming convention". Thanks in advance, everyone.

Tags: python, flask, scrapy

Solution


Looking at the code, I can see that everything should work if you call the run() function at some point, because right now it never gets called. Also, as mentioned in the comments, you should move the classes and functions out of the route into separate files - basically, you should restructure the code so that the whole stack works properly, and if you need to store state, use a temporary file or at least SQLite for the queue and the results.
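For illustration, a minimal sketch of that restructuring might look something like this. It is only an outline under a few assumptions: the spider is assumed to live in its own module (here called crawler.py) and to accept a start_url argument, the jobs.db file and the start_crawl/mark_done helpers are made-up names, and the reactor is started once in a background daemon thread so that runner.crawl() actually gets driven.

# Illustrative sketch only: spider in its own module, SQLite for job state,
# and the Twisted reactor started exactly once in a background thread.
import sqlite3
import threading
from uuid import uuid4

from flask import Flask, redirect, render_template, request, url_for
from scrapy.crawler import CrawlerRunner
from twisted.internet import reactor

from crawler import Crawler  # hypothetical module containing the CrawlSpider

app = Flask(__name__)
runner = CrawlerRunner()

# Drive the reactor in a daemon thread -- this is the "call run() at some point" part.
threading.Thread(target=reactor.run, args=(False,), daemon=True).start()

def init_db():
    with sqlite3.connect("jobs.db") as db:
        db.execute("CREATE TABLE IF NOT EXISTS jobs (id TEXT PRIMARY KEY, done INTEGER)")

init_db()

def mark_done(_result, job_id):
    # Callback chained onto the crawl Deferred; records that the job finished.
    with sqlite3.connect("jobs.db") as db:
        db.execute("UPDATE jobs SET done = 1 WHERE id = ?", (job_id,))

def start_crawl(job_id, url):
    # Must run on the reactor thread; the Deferred fires when the spider is done.
    d = runner.crawl(Crawler, start_url=url)  # assumes Crawler accepts start_url
    d.addCallback(mark_done, job_id)

@app.route("/", methods=["POST", "GET"])
def index():
    if request.method == "POST":
        job_id = uuid4().hex
        with sqlite3.connect("jobs.db") as db:
            db.execute("INSERT INTO jobs VALUES (?, 0)", (job_id,))
        reactor.callFromThread(start_crawl, job_id, request.form["usr_input"])
        return redirect(url_for("crawling", id=job_id))
    return render_template("index.html")

@app.route("/crawling-<string:id>")
def crawling(id):
    # Poll SQLite for the job state instead of an Executor future.
    with sqlite3.connect("jobs.db") as db:
        row = db.execute("SELECT done FROM jobs WHERE id = ?", (id,)).fetchone()
    if row and row[0]:
        return render_template("finish-crawl.html")
    return render_template("start-crawl.html", refresh=True)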

