python - Scrapy - Spider not found
Problem Description
I have been trying to run a bot, but I keep getting a "spider not found" error. I checked the directory and the spider is definitely there. The error is below. I also tried changing the spider name, but it didn't work. Any help would be appreciated. Thanks.
\prabh\Anaconda3\envs\py3_knime\py3_knime) C:\Users\prabh\Downloads\storage-mart\storage-mart>scrapy crawl storagemart
ROSSHAVEN
*************************
2021-02-08 15:12:58 [scrapy.utils.log] INFO: Scrapy 2.1.0 started (bot: public_app)
2021-02-08 15:12:58 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.4, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.6.12 |Anaconda, Inc.| (default, Sep 9 2020, 00:29:25) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 20.0.1 (OpenSSL 1.1.1i 8 Dec 2020), cryptography 3.3.1, Platform Windows-10-10.0.18362-SP0
2021-02-08 15:12:58 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
Traceback (most recent call last):
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\spiderloader.py", line 68, in load
    return self._spiders[spider_name]
KeyError: 'storagemart'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\prabh\Anaconda3\envs\py3_knime\py3_knime\Scripts\scrapy.exe\__main__.py", line 7, in <module>
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\cmdline.py", line 98, in _run_print_help
    func(*a, **kw)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\cmdline.py", line 151, in _run_command
    cmd.run(args, opts)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\commands\crawl.py", line 42, in run
    crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\crawler.py", line 191, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\crawler.py", line 224, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\crawler.py", line 228, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\spiderloader.py", line 70, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: storagemart'
Solution
Set the spider's name attribute. scrapy crawl looks the spider up by this string, not by the class name or the filename:
class MySpider(scrapy.Spider):
    name = "storagemart"
Then run it with:
scrapy crawl storagemart
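To see why setting name fixes this, note that the failing SpiderLoader.load call in the traceback is essentially a dictionary lookup keyed by each spider's name attribute. Below is a simplified, hypothetical sketch of that lookup; the class names and registry are illustrative, not Scrapy's actual implementation:

```python
# Simplified sketch (not Scrapy's real code): the spider loader keeps a
# dict mapping each spider class's `name` attribute to the class, and
# `scrapy crawl <name>` looks the name up in that dict.

class Spider:
    name = None  # stand-in for scrapy.Spider


class StorageMartSpider(Spider):
    # This string, not the class name or filename, is what gets matched.
    name = "storagemart"


# Registry built from discovered spider classes, keyed by `name`.
_spiders = {cls.name: cls for cls in Spider.__subclasses__()}


def load(spider_name):
    """Mimics the spiderloader.py `load` frame shown in the traceback."""
    try:
        return _spiders[spider_name]
    except KeyError:
        raise KeyError("Spider not found: {}".format(spider_name))
```

A spider class whose name is unset or misspelled never lands in the registry under the expected key, which produces exactly the KeyError: 'Spider not found: storagemart' shown above.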