Want to download the image and get its location before the spider is closed by scrapy

Problem description

import os
from ..items import McaItem  # adjust to the project's actual items module

def parse(self, response):
    item = McaItem()
    for elem in response.xpath('//*[@id="captcha"]'):
        img_url = elem.xpath("@src").extract_first()
        # Build the absolute image URL for the ImagesPipeline to download.
        item['image_urls'] = ['https://www.example.com/' + str(img_url)]
        yield item
        # Runs right after the yield, before the pipeline has downloaded the file.
        print(os.path.isfile("/Users/full/5d8f0a002157908912495a79fdae081e43f79e63.jpg"))

I want the print statement to run after the yield.
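With Scrapy's ImagesPipeline, the URLs in item['image_urls'] are downloaded after the item has left the spider, so a print placed directly after the yield runs before the file exists on disk. The place that does run once a download has finished is the pipeline's item_completed() hook. The following is only a minimal sketch, not the original poster's code: the class name McaImagesPipeline is invented, and it assumes this class is enabled in ITEM_PIPELINES and that IMAGES_STORE points at a local directory.

# pipelines.py -- a minimal sketch, assuming this class is enabled in
# ITEM_PIPELINES and IMAGES_STORE points at a local directory.
import os

from scrapy.pipelines.images import ImagesPipeline


class McaImagesPipeline(ImagesPipeline):
    def item_completed(self, results, item, info):
        # `results` holds one (success, detail) tuple per URL listed
        # in item['image_urls'].
        for ok, detail in results:
            if ok:
                # detail['path'] is relative to IMAGES_STORE; self.store.basedir
                # is the base directory of the default filesystem store.
                full_path = os.path.join(self.store.basedir, detail['path'])
                print(full_path, os.path.isfile(full_path))
        return item

If only the relative path is needed, the default pipeline also records the download results in item['images'] after it runs, provided the item defines that field.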

Tags: python-3.x, web-scraping, scrapy

Solution
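Since the title asks for the location before the spider is closed by scrapy, another option is the spider_closed signal; Scrapy also calls a spider's closed() method as a shortcut for that signal, by which point the pipeline's pending downloads have been processed. The sketch below is illustrative only: the spider name and the idea of listing the IMAGES_STORE directory are assumptions, not taken from the original answer.

# spider sketch -- assumes the default ImagesPipeline with a local IMAGES_STORE.
import os

import scrapy


class McaSpider(scrapy.Spider):
    name = 'mca'

    def parse(self, response):
        # ... yield items with item['image_urls'] as in the question ...
        pass

    def closed(self, reason):
        # Shortcut for the spider_closed signal: runs when the crawl ends,
        # after the image pipeline has finished its work.
        store = self.settings.get('IMAGES_STORE', '')
        for name in os.listdir(store):
            print(os.path.join(store, name))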

