from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
# To run a single spider, pass its name:
# process.crawl('spider_name')
# Run every spider registered in the project:
for spider_name in process.spider_loader.list():
    process.crawl(spider_name)
process.start()  # blocks until all scheduled crawls finish
With this main.py, a single run launches every spider in the Scrapy project.