Crawling extracted links in Scrapy

Problem description

I am trying to build a broad, continuous crawler. I can extract links from a page, but I cannot crawl those links and extract links from them in turn. The end goal of the project is to crawl .au domains and add their root URLs to a database.

class Crawler (scrapy.Spider):
    name = "crawler"
    rules = (Rule(LinkExtractor(allow='.com'), callback='parse_item')) 
    #This will be changed to allow .au before deployment to only crawl .au sites.

    start_urls = [
        "http://quotes.toscrape.com/",
    ]

    def parse(self, response):
        urls = response.xpath("//a/@href")
        for u in urls:
            l = ItemLoader(item=Link(), response=response)
            l.add_xpath('url', './/a/@href')
            return l.load_item()

Another problem I ran into is that, for internal links, it stores the relative URL path instead of the absolute one. I tried to fix it with this part (a sketch of one way to handle it follows the snippet).

    urls = response.xpath("//a/@href")
    for u in urls:
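
One way to handle the relative-path issue (a sketch, not the original poster's code; it assumes the same Link item and that ItemLoader comes from scrapy.loader) is to resolve every href against the page URL with response.urljoin() before loading it:

from scrapy.loader import ItemLoader

    # Inside the spider class:
    def parse(self, response):
        for href in response.xpath("//a/@href").getall():
            loader = ItemLoader(item=Link(), response=response)
            # urljoin turns a relative path such as "/page/2/" into
            # an absolute URL like "http://quotes.toscrape.com/page/2/"
            loader.add_value('url', response.urljoin(href))
            yield loader.load_item()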

The items.py file:

class Link(scrapy.Item):
    url = scrapy.Field()
    pass

Tags: python, scrapy

Solution


I managed to figure it out. I am posting the basic code below to help anyone who runs into the same problem in the future.

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

#Create a list of sites not to crawl. 
#Best to read this from a file containing top 100 sites for example.
denylist = [
    'google.com',
    'yahoo.com',
    'youtube.com'
]

class Crawler(CrawlSpider):  # For a broad crawl you need to use "CrawlSpider"
    name = "crawler"
    rules = (
        Rule(LinkExtractor(allow=('.com',), deny=denylist),
             follow=True, callback='parse_item'),
    )

    start_urls = [
        "http://quotes.toscrape.com",
    ]


    def parse_item(self, response):
        # self.logger.info('LOGGER %s', response.url)  
        # use above to log and see info in the terminal

        yield {
            'link': response.url
        }
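
Since the stated goal is to store only the root URL of each site, the callback can reduce response.url to its scheme and host before yielding. The sketch below is an illustrative variant of parse_item, not part of the original answer:

from urllib.parse import urlparse

    def parse_item(self, response):
        # Keep only scheme + host, e.g.
        # "http://quotes.toscrape.com/page/2/" -> "http://quotes.toscrape.com"
        parsed = urlparse(response.url)
        yield {
            'link': f"{parsed.scheme}://{parsed.netloc}"
        }

The spider can then be run with something like scrapy crawl crawler -o links.json to inspect the collected links.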

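If the root URLs really do need to end up in a database, a small item pipeline is the usual place for that. The following is a minimal sketch assuming SQLite and a made-up pipeline name (LinkDbPipeline), not part of the original answer; it would be enabled via ITEM_PIPELINES in settings.py:

import sqlite3

class LinkDbPipeline:
    def open_spider(self, spider):
        # "links.db" and the table name are placeholders.
        self.conn = sqlite3.connect('links.db')
        self.conn.execute('CREATE TABLE IF NOT EXISTS links (url TEXT UNIQUE)')

    def process_item(self, item, spider):
        # INSERT OR IGNORE skips root URLs that are already stored.
        self.conn.execute('INSERT OR IGNORE INTO links (url) VALUES (?)',
                          (item['link'],))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.conn.close()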