How to crawl a website with Scrapy in Python to get all the links on the site?

Problem description

I am a beginner in Python. I am using Scrapy to recursively crawl all the links on a site, and I want to map each link to the text found at that link.

For this I need to define my own spider class, which can accept as arguments a name and the list of websites to crawl, and I want to build a dictionary mapping each link to the text found on the site; but I am missing some concepts about objects and classes in Python. In the code below I tried to run Scrapy by creating an object, but it gives me errors.

Please help me create an object of the class (passing arguments with the names of the web pages/sites to crawl) and build a dictionary of the form {'URL': 'all text found at that URL'}.

#rinku
import scrapy
# class LinkExtractor():
class MyntraSpider(scrapy.Spider):
    name = "Myntra"
    # allowed_domains = ["myntra.com"]
    # start_urls = [
    #     "http://www.myntra.com/",
    # ]
    # name = "Linker"

    # def __init__(allowed_domains = [], start_urls = []):          
       #  self.allowed_domains = allowed_domains
       #  self.start_urls = start_urls 

    def __init__(self, allowed_domains=None, start_urls=None):
        super().__init__()

        # self.name = name
        if allowed_domains is None:
            self.allowed_domains = []
        else:
            self.allowed_domains = allowed_domains

        if start_urls is None:
            self.start_urls = []
        else:
            self.start_urls = start_urls  

    def parse(self, response):
        hxs = scrapy.Selector(response)
        # extract all links from page
        all_links = hxs.xpath('*//a/@href').extract()
        # iterate over links
        for link in all_links:
            yield scrapy.http.Request(url=link, callback=print_this_link)

    def print_this_link(self, link):
        print("Link --> {this_link}".format(this_link=link))


m1 = MyntraSpider(["myntra.com"], ["http://www.myntra.com/"])

# m1 = MyntraSpider("Linker",["myntra.com"], ["http://www.myntra.com/",])

The output I get does not print any links:

(venv) C:\Users\Carthaginian\Desktop\projectLink\crawler>scrapy crawl Myntra
2019-08-14 13:32:51 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: crawler)
2019-08-14 13:32:51 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.7.0, Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 22:20:52) [MSC v.1916 32 bit (Intel)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c  28 May 2019), cryptography 2.7, Platform Windows-10-10.0.17134-SP0
2019-08-14 13:32:51 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'crawler', 'NEWSPIDER_MODULE': 'crawler.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['crawler.spiders']}
2019-08-14 13:32:51 [scrapy.extensions.telnet] INFO: Telnet Password: 3109504fb87f6b47
2019-08-14 13:32:51 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2019-08-14 13:32:52 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-08-14 13:32:52 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-08-14 13:32:52 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-08-14 13:32:52 [scrapy.core.engine] INFO: Spider opened
2019-08-14 13:32:52 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-08-14 13:32:52 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-08-14 13:32:52 [scrapy.core.engine] INFO: Closing spider (finished)
2019-08-14 13:32:52 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.015957,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 8, 14, 8, 2, 52, 585291),
 'log_count/INFO': 10,
 'start_time': datetime.datetime(2019, 8, 14, 8, 2, 52, 569334)}
2019-08-14 13:32:52 [scrapy.core.engine] INFO: Spider closed (finished)

Tags: python, scrapy

Solution


To run it with arguments you have to use __init__:

class MyntraSpider(scrapy.Spider):

    def __init__(self, name, allowed_domains=None, start_urls=None):
        super().__init__()

        self.name = name

        if allowed_domains is None:
            self.allowed_domains = []
        else:
            self.allowed_domains = allowed_domains

        if start_urls is None:
            self.start_urls = []
        else:
            self.start_urls = start_urls 

And when you run it directly (without using the scrapy command):

m1 = MyntraSpider("Myntra", ["myntra.com"], ["http://www.myntra.com/"])

then Python will execute something like

MyntraSpider.__init__(m1, "Myntra", ["myntra.com"], ["http://www.myntra.com/"])

If you generated a project to run the crawler, then you do not create an instance yourself; Scrapy uses the spider automatically, and you have to send the data on the command line:

scrapy crawl MyntraSpider -a name=Myntra -a allowed_domains=myntra.com -a start_urls=http://www.myntra.com/

But it sends them as strings, so you may have to convert them into lists inside __init__, e.g. using split().
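A minimal sketch of that conversion (assuming the values are passed as comma-separated strings, which is a convention here, not something Scrapy enforces):

import scrapy

class MyntraSpider(scrapy.Spider):
    name = "Myntra"

    def __init__(self, allowed_domains=None, start_urls=None, **kwargs):
        super().__init__(**kwargs)
        # -a always delivers values as plain strings, so split the
        # comma-separated value into a list (empty list if not given)
        self.allowed_domains = allowed_domains.split(',') if allowed_domains else []
        self.start_urls = start_urls.split(',') if start_urls else []

With this, scrapy crawl Myntra -a allowed_domains=myntra.com -a start_urls=http://www.myntra.com/ should populate both lists.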


EDIT: working code, after using

full_link = response.urljoin(link)

to convert relative URLs into absolute URLs (see the short urljoin illustration below),

and after adding self. to the callback: callback=self.print_this_link.

There is no need to create hxs = scrapy.Selector(response), because response.xpath() works just as well.
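As a side note, response.urljoin(link) behaves essentially like urljoin(response.url, link) from the standard library; a quick illustration with made-up paths:

from urllib.parse import urljoin

# link relative to the site root
print(urljoin('http://www.myntra.com/shop/men', '/men-tshirts'))
# http://www.myntra.com/men-tshirts

# link relative to the current page (replaces the last path segment)
print(urljoin('http://www.myntra.com/shop/men', 'jeans'))
# http://www.myntra.com/shop/jeans

# an already absolute link passes through unchanged
print(urljoin('http://www.myntra.com/shop/men', 'http://example.com/x'))
# http://example.com/x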

It is a standalone script that works without creating a project. It yields items which are saved in output.csv.

import scrapy

class MySpider(scrapy.Spider):

    name = "MySpider"

    def __init__(self, allowed_domains=None, start_urls=None):
        super().__init__()

        # self.name = name
        if allowed_domains is None:
            self.allowed_domains = []
        else:
            self.allowed_domains = allowed_domains

        if start_urls is None:
            self.start_urls = []
        else:
            self.start_urls = start_urls  

    def parse(self, response):
        print('[parse] url:', response.url)

        # extract all links from page
        all_links = response.xpath('*//a/@href').extract()

        # iterate over links
        for link in all_links:
            print('[+] link:', link)
            #yield scrapy.http.Request(url="http://www.myntra.com" + link, callback=self.print_this_link)
            full_link = response.urljoin(link)
            yield scrapy.http.Request(url=full_link, callback=self.print_this_link)


    def print_this_link(self, response):
        print('[print_this_link] url:', response.url)
        title = response.xpath('//title/text()').get() # get() will replace extract() in the future
        yield {'url': response.url, 'title': title}


# --- run without creating project and save in `output.csv` ---

from scrapy.crawler import CrawlerProcess

c = CrawlerProcess({
    'USER_AGENT': 'Mozilla/5.0',

    # save in file as CSV, JSON or XML
    'FEED_FORMAT': 'csv',     # csv, json, xml
    'FEED_URI': 'output.csv',
})
c.crawl(MySpider, allowed_domains=["myntra.com"], start_urls=["http://www.myntra.com/"])
c.start()
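To try it, save the whole script as e.g. myspider.py (any filename works) and run it with python myspider.py; after the crawl finishes, output.csv should contain one url,title row per visited page.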
