Scrapy not producing output correctly for a different base URL?

Problem Description

I'm still a beginner, learning Scrapy.

I wrote a Scrapy script to scrape a large number of links from rumah123.com, specifically from https://www.rumah123.com/en/sale/surabaya/surabaya-kota/all-residential/, and it turned out to be a success! It produced a CSV of the links.

But when I change the link to https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/, my Scrapy script produces nothing.

When I run the script, the Scrapy log reads:

2019-10-18 13:02:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/> (referer: None)
2019-10-18 13:02:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/?page=2> (referer: None)
2019-10-18 13:02:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/?page=6> (referer: None)
2019-10-18 13:02:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/?page=5> (referer: None)
2019-10-18 13:02:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/?page=7> (referer: None)
2019-10-18 13:02:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/?page=4> (referer: None)
2019-10-18 13:02:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/?page=8> (referer: None)
2019-10-18 13:02:06 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/?page=9> (referer: None)
2019-10-18 13:02:06 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/?page=10> (referer: None)
2019-10-18 13:02:16 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/?page=3> (referer: None)
2019-10-18 13:02:16 [scrapy.core.engine] INFO: Closing spider (finished)

But when I check the resulting CSV, there is nothing in it!

Here is the entire code of the script:

import scrapy
import pandas as pd

class Rumah123_Spyder(scrapy.Spider):
    name = "Home_Rent"
    url_list = []
    page = 1
    def start_requests(self):
        headers = {
            'accept-encoding': 'gzip, deflate, sdch, br',
            'accept-language': 'en-US,en;q=0.8,zh-CN;q=0.6,zh;q=0.4',
            'upgrade-insecure-requests': '1',
            'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36',
            'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
            'cache-control': 'max-age=0',
        }
        #base = 'https://www.rumah123.com/en/sale/surabaya/surabaya-kota/all-residential/'
        base = 'https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/'
        for x in range(10):  # depends on the number of pages in the search results
            if x == 0:
                yield scrapy.Request(url=base, headers=headers, callback=self.parse)
                self.page += 1
            else:
                yield scrapy.Request(url=base + "?page=" + str(self.page), headers=headers, callback=self.parse)
                self.page += 1

        # Filter out URLs that are not property listings
        self.url_list = [rum for rum in self.url_list if "/property/" in rum]
        for x in range(len(self.url_list)):
            self.url_list[x] = "rumah123.com" + self.url_list[x]

        url_df = pd.DataFrame(self.url_list, columns=["Sub URL"])
        #url_df.to_csv("home_sale_link.csv", encoding="utf_8_sig")
        url_df.to_csv("home_rent_link.csv", encoding="utf_8_sig")

    def parse(self, response):
        # Collect every unique href on the page
        for rumah in response.xpath('//a/@href'):
            if rumah.get() not in self.url_list:
                self.url_list.append(rumah.get())

from scrapy import cmdline
cmdline.execute("scrapy runspider Rumah123_url.py".split())

The expected result is the same as on my first attempt with the "sale" URL; here is a screenshot of the links:

https://imgur.com/eynTo5W

The current result for the "rent" URL is empty; here is a screenshot:

https://imgur.com/a/iUdRUDt

Extra note: I tested it with scrapy shell https://www.rumah123.com/en/sale/surabaya/surabaya-kota/all-residential/, and if I run the code manually it does produce the CSV, but running the code line by line gets tiring :(
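
For reference, a minimal sketch of that manual shell check, reusing the same XPath and "/property/" filter as the script above:

scrapy shell "https://www.rumah123.com/en/sale/surabaya/surabaya-kota/all-residential/"
>>> links = response.xpath('//a/@href').getall()   # every href on the page, as in parse()
>>> [u for u in links if "/property/" in u][:5]    # the script's listing filter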

Can anyone point out why this is happening? Thanks :)

Tags: python, scrapy

Solution


The most likely cause is that your script writes the CSV at the end of start_requests(): Scrapy exhausts that generator while the requests are still being downloaded, so self.url_list is saved before parse() has filled it. Instead, extract the URLs in the spider and yield them as items (remember to import scrapy), letting Scrapy write the output when the crawl finishes:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    # One start URL per results page (pages 1 through 9)
    start_urls = ['https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/?page=' + str(i) for i in range(1, 10)]

    def parse(self, response):
        # Select each listing element by its CSS class, then yield its link as an item
        for quote in response.xpath('//*[@class="sc-bRbqnn iRnfmd"]'):
            yield {
                'url1': quote.xpath('a/@href').extract(),
            }
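
One caveat: sc-bRbqnn iRnfmd looks like a generated (styled-components) class name, so it may change whenever the site is redeployed. It is worth re-checking the selector in scrapy shell before relying on it; a quick check, assuming the class is still current:

scrapy shell "https://www.rumah123.com/en/rent/surabaya/surabaya-kota/all-residential/"
>>> response.xpath('//*[@class="sc-bRbqnn iRnfmd"]/a/@href').getall()[:5]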

The simplest way to store the scraped data is with a Feed export, using the following command:

scrapy crawl quotes -o 1.csv
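
If you prefer launching the spider from a script, as the question does with cmdline.execute, the same feed export works with runspider. A minimal sketch, assuming the spider above is saved as quotes_spider.py (a hypothetical filename):

# Run the standalone spider file and export every yielded item to 1.csv
from scrapy import cmdline
cmdline.execute("scrapy runspider quotes_spider.py -o 1.csv".split())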
