python - Scrapy gets stuck when crawling a long list of URLs
Problem description
I am scraping a large number of URLs (around 1,000), and after some time the crawler gets stuck at 0 pages/min. The problem always appears at the same point in the crawl. The URL list is retrieved from a MySQL database. I am fairly new to Python and Scrapy, so I don't know where to start debugging, and I fear that due to my inexperience the code itself is also a bit of a mess. Any pointers to where the problem lies are appreciated.
I used to retrieve the entire URL list in one go, and the crawler worked fine. However, I had problems writing the results back into the database, and I didn't want to read the whole large URL list into memory, so I changed it to iterate through the database one URL at a time, which is when the problem appeared. I am fairly certain the URLs themselves are not the issue, because when I start the crawl from a problem URL it works normally and gets stuck further along, at a different but consistent spot.
The relevant part of the code is below. Note that the script is meant to run as a standalone script, which is why I define the necessary settings in the spider itself (a sketch of that kind of standalone runner follows the code).
from scrapy.spiders import CrawlSpider
from scrapy.http import Request


class MySpider(CrawlSpider):
    name = "mySpider"
    item = []

    # spider settings
    custom_settings = {
        'CONCURRENT_REQUESTS': 1,
        'DEPTH_LIMIT': 1,
        'DNS_TIMEOUT': 5,
        'DOWNLOAD_TIMEOUT': 5,
        'RETRY_ENABLED': False,
        'REDIRECT_MAX_TIMES': 1
    }

    def start_requests(self):
        # i, n_urls, db and cursor are globals defined elsewhere in the script
        while i < n_urls:
            urllist = "SELECT url FROM database WHERE id=" + str(i)
            cursor = db.cursor()
            cursor.execute(urllist)
            urls = cursor.fetchall()
            urls = [u[0] for u in urls]  # fetch url from inside list of tuples
            urls = str(urls[0])          # transform url into string from list
            yield Request(urls, callback=self.parse, errback=self.errback)

    def errback(self, failure):
        global i
        sql = "UPDATE db SET item = %s, scrape_time = now() WHERE id = %s"
        val = ('Error', str(i))
        cursor.execute(sql, val)
        db.commit()
        i += 1

    def parse(self, response):
        global i
        item = myItem()
        item["result"] = response.xpath("//item to search")
        if item["result"] is None or len(item["result"]) == 0:
            sql = "UPDATE db SET item = %s, scrape_time = now() WHERE id = %s"
            val = ('None', str(i))
            cursor.execute(sql, val)
            db.commit()
            i += 1
        else:
            sql = "UPDATE db SET item = %s, scrape_time = now() WHERE id = %s"
            val = ('Item', str(i))
            cursor.execute(sql, val)
            db.commit()
            i += 1
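For context, running a spider standalone with its settings in custom_settings as above typically comes down to a scrapy.crawler.CrawlerProcess call. This is a minimal sketch, assuming the MySpider class above and no project-level settings file:

from scrapy.crawler import CrawlerProcess

# CrawlerProcess starts the Twisted reactor itself, so this file is
# run directly with `python myscript.py` rather than via `scrapy crawl`.
process = CrawlerProcess()
process.crawl(MySpider)
process.start()  # blocks until the crawl finishes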
The scraper gets stuck and shows the following messages:
2019-01-14 15:10:43 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET someUrl> from <GET anotherUrl>
2019-01-14 15:11:08 [scrapy.extensions.logstats] INFO: Crawled 9 pages (at 9 pages/min), scraped 0 items (at 0 items/min)
2019-01-14 15:12:08 [scrapy.extensions.logstats] INFO: Crawled 9 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-14 15:13:08 [scrapy.extensions.logstats] INFO: Crawled 9 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-14 15:14:08 [scrapy.extensions.logstats] INFO: Crawled 9 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-14 15:15:08 [scrapy.extensions.logstats] INFO: Crawled 9 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-14 15:16:08 [scrapy.extensions.logstats] INFO: Crawled 9 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
Everything works fine up to that point. Thanks for any help you can give me!
Solution
Scrapy reports 0 items because it only counts data that your callbacks yield, and you never yield anything; you only insert into the database.
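A minimal sketch of that change, with a simplified MyItem class and a stand-in URL and XPath (the selector in the question is a placeholder and would not parse as real XPath):

import scrapy


class MyItem(scrapy.Item):
    result = scrapy.Field()


class MySpider(scrapy.Spider):
    name = "mySpider"
    start_urls = ["http://example.com"]  # stand-in for the DB-driven URLs

    def parse(self, response):
        item = MyItem()
        item["result"] = response.xpath("//title/text()").extract()
        # Yielding the item is what Scrapy counts in the
        # "scraped N items (at N items/min)" log line, and what it
        # hands to any configured item pipelines.
        yield item

Once the items are yielded, the database UPDATE can move into an item pipeline's process_item method, keeping the callbacks free of database code.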