Scrapy Python script raises TypeError("Cannot mix str and non-str arguments")

Problem Description

Hi, I'm new to programming and have run into what seems to be an extremely common problem, but honestly none of the answers I've seen have helped me.

My code is:

import json
import scrapy

class MoreKeysSpider(scrapy.Spider):
    name = 'getoffers'

    def __init__(self):
        with open(r'C:\Users\magnu\brickset-scraper\postscrape\postscrape\prod.json', encoding='utf-8') as data_file:
            self.data = json.load(data_file)

    def start_requests(self):
        for item in self.data:
            request = scrapy.Request(item['url'], callback=self.parse)
            request.meta['item'] = item
            yield request

    def parse(self, response):
        item = response.meta['item']
        item['details'] = []


        item['details'].append({
            "Name" : response.css('span[itemprop=name]::text').extract_first(),
            "Release" : response.xpath('//*[@id="info"]/div[2]/div[1]/div[1]/div[2]/text()').extract_first(),
            "Website" : response.xpath('//*[@id="info"]/div[2]/div[1]/div[2]/div[2]/a/@href').extract_first(),
            "Entwickler" : response.xpath('//*[@id="info"]/div[2]/div[1]/div[3]/div[2]/text()').extract_first(),
            "Publisher" : response.xpath('//*[@id="info"]/div[2]/div[1]/div[4]/div[2]/text()').extract_first(),
            "Tags" : response.xpath('//*[@id="info"]/div[2]/div[2]/div[3]/div[2]/descendant').getall(),
            "Systemanforderungenmin" : response.xpath('//*[@id="config"]/ul[1]/descendant').getall(),
            "Systemanforderungenmax" : response.xpath('//*[@id="config"]/ul[2]/descendant').getall(),
            })
        yield item


        item['offer'] = []
        for div in response.css('#offers_table'):
            for offer_row in div.css('div.offers-table-row'):
                url = response.urljoin(offer_row.css('div.buy-btn-cell a::attr(href)')).get(),
                url_str = ''.join(map(str, url))     # converts list to str
                item['offer'].append({
                    "offer:"
                    "Shop": offer_row.css('div[itemprop ~= seller] div.offers-merchant::attr(title)').extract_first(),
                    "Typ": offer_row.css('div.offers-edition-region::text').extract_first(),
                    "Edition": offer_row.css("div[data-toggle=tooltip]::attr(data-content)"),
                    "Link": response.follow(url_str, self.parse_topics),
                    })
                yield item

In response, I get:

    DEBUG: Scraped from <200 https://www.keyforsteam.de/kaufen-crusader-kings-2-cd-key-preisvergleich/>
{'url': 'https://www.keyforsteam.de/kaufen-crusader-kings-2-cd-key-preisvergleich/', 'details': [{'Name': '\n\t\t\t\t\tCrusader Kings 2\n\t\t\t\t', 'Release': '\n                                                    14. Februar 2012\n                            ', 'Website': 'https://www.paradoxplaza.com/crusader-kings-ii/CKCK02GSK-MASTER.html', 'Entwickler': '\n                                                    Paradox Development Studio\n       ', 'Publisher': '\n                                                    Paradox Interactive\n            ', 'Tags': [], 'Systemanforderungenmin': [], 'Systemanforderungenmax': []}]}
2021-03-22 21:47:22 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.keyforsteam.de/kaufen-crusader-kings-2-cd-key-preisvergleich/> (referer: None)
Traceback (most recent call last):
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\utils\defer.py", line 120, in iter_errback
    yield next(it)
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\utils\python.py", line 353, in __next__
    return next(self.data)
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\utils\python.py", line 353, in __next__
    return next(self.data)
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\core\spidermw.py", line 62, in _evaluate_iterable
    for r in iterable:
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\core\spidermw.py", line 62, in _evaluate_iterable
    for r in iterable:
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 340, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\core\spidermw.py", line 62, in _evaluate_iterable
    for r in iterable:
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\core\spidermw.py", line 62, in _evaluate_iterable
    for r in iterable:
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\spidermiddlewares\depth.py", 
line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\core\spidermw.py", line 62, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\magnu\brickset-scraper\postscrape\postscrape\spiders\keysint.py", line 40, in parse
    url = response.urljoin(offer_row.css('div.buy-btn-cell a::attr(href)')).get(),
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\site-packages\scrapy\http\response\text.py", line 
102, in urljoin
    return urljoin(get_base_url(self), url)
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\urllib\parse.py", line 524, in urljoin
    base, url, _coerce_result = _coerce_args(base, url)
  File "c:\users\magnu\appdata\local\programs\python\python39\lib\urllib\parse.py", line 122, in _coerce_args       
    raise TypeError("Cannot mix str and non-str arguments")
TypeError: Cannot mix str and non-str arguments

So the first part seems to work, and I'm fairly sure the error is somewhere in the second part, but I can't seem to find it.

        item['offer'] = []
        for div in response.css('#offers_table'):
            for offer_row in div.css('div.offers-table-row'):
                url = response.urljoin(offer_row.css('div.buy-btn-cell a::attr(href)')).get(),
                url_str = ''.join(map(str, url))     # converts list to str
                item['offer'].append({
                    "offer:"
                    "Shop": offer_row.css('div[itemprop ~= seller] div.offers-merchant::attr(title)').extract_first(),
                    "Typ": offer_row.css('div.offers-edition-region::text').extract_first(),
                    "Edition": offer_row.css("div[data-toggle=tooltip]::attr(data-content)"),
                    "Link": response.follow(url_str, self.parse_topics),
                    })
                yield item

Tags: python, string, xpath, scrapy, scrape

Solution


There's a roundabout route to the answer here, but I think the debugging process will be instructive.

Without the json file the program loads it's hard to diagnose this for certain, but it looks like your problem is in this line:

url = response.urljoin(offer_row.css('div.buy-btn-cell a::attr(href)')).get(),
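You can reproduce the underlying error outside Scrapy entirely. Here is a minimal sketch with plain urllib, using a list to stand in for the SelectorList that the spider actually passes in:

from urllib.parse import urljoin

base = "https://www.keyforsteam.de/"
try:
    # a str base plus a non-str url (a list here) is exactly the mix urljoin rejects
    urljoin(base, ["https://example.com/offer"])
except TypeError as e:
    print(e)  # Cannot mix str and non-str arguments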

How do I fix "TypeError: Cannot mix str and non-str arguments"?

According to the Scrapy documentation, the .css(selector) method you're using returns a SelectorList instance. If you want the actual (unicode) string version of the url, call the extract() method:

So I tried:

url = response.urljoin(offer_row.css('div.buy-btn-cell a::attr(href)').extract()).get(),

But I still got the same error. Strange!

To diagnose it, I put a breakpoint() into the spider here:

        for div in response.css('#offers_table'):
            for offer_row in div.css('div.offers-table-row'):
                breakpoint()
                url = response.urljoin(offer_row.css('div.buy-btn-cell a::attr(href)').extract()).get(),

Running the spider again, I could test pieces of that line at the prompt:

(Pdb) offer_row.css('div.buy-btn-cell a::attr(href)').extract()
['https://www.keyforsteam.de/outgoinglink/keyforsteam/37370?merchant=1', 'https://www.keyforsteam.de/outgoinglink/keyforsteam/37370?merchant=1']

Ah, so extract() returns a list of strings rather than a single string. There must be two elements matching; but they're identical, so we don't care which one we get. Looking at the scrapy docs at https://docs.scrapy.org/en/latest/topics/selectors.html, we see there is also an extract_first() method:

url = response.urljoin(offer_row.css('div.buy-btn-cell a::attr(href)').extract_first()).get(),

Although, looking at the scrapy docs, you probably want to use get() instead of extract_first().
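To make the difference concrete, here is a small standalone sketch (the HTML is made up) of how extract(), extract_first(), and get() behave on a SelectorList:

from scrapy.selector import Selector

html = '<div class="buy-btn-cell"><a href="/offer/1"></a><a href="/offer/1"></a></div>'
links = Selector(text=html).css('div.buy-btn-cell a::attr(href)')

print(links.extract())        # ['/offer/1', '/offer/1'] -- always a list
print(links.extract_first())  # '/offer/1' -- first match, or None if there is none
print(links.get())            # '/offer/1' -- the newer spelling of extract_first()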

At which point I finally noticed that your only real mistake was putting the get() outside the wrong parenthesis:

url = response.urljoin(offer_row.css('div.buy-btn-cell a::attr(href)').get())
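For completeness, here is a sketch of how the whole offer loop could look with that fix applied. One assumption is flagged in a comment: the original "Link": response.follow(url_str, self.parse_topics) stores a Request object in the item rather than following the link, so this sketch stores the resolved URL string instead:

        item['offer'] = []
        for div in response.css('#offers_table'):
            for offer_row in div.css('div.offers-table-row'):
                # .get() returns the first matching href as a str, or None if nothing matches
                href = offer_row.css('div.buy-btn-cell a::attr(href)').get()
                item['offer'].append({
                    "Shop": offer_row.css('div[itemprop~=seller] div.offers-merchant::attr(title)').get(),
                    "Typ": offer_row.css('div.offers-edition-region::text').get(),
                    "Edition": offer_row.css('div[data-toggle=tooltip]::attr(data-content)').get(),
                    # assumption: store the absolute URL rather than a Request object
                    "Link": response.urljoin(href) if href else None,
                })
        yield item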

