python - Python web scraping: how to skip URL errors
Problem description
I am trying to scrape a webpage ("coinmarketcap"). I am scraping data from April 2013 to October 2019 (Open, High, Low, Close, Market Cap, Volume) for all cryptocurrencies.
import urllib.request
from bs4 import BeautifulSoup

for j in range(0, name_size):
    url = ("https://coinmarketcap.com/currencies/" + str(name[j])
           + "/historical-data/?start=20130429&end=20191016")
    page = urllib.request.urlopen(url)
    soup = BeautifulSoup(page, 'html.parser')
    priceDiv = soup.find('div', attrs={'class': 'table-responsive'})
    rows = priceDiv.find_all('tr')
The problem is that some of the URLs don't exist, and I don't know how to skip them. Can you please help me?
Solution
Use error handling (try/except).
try:
    # do the thing
    ...
except Exception as e:
    # here you can print the error
    print(e)
URLs that raise an error will be skipped with a printed message; otherwise the task continues.
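Applied to the loop from the question, this might look like the sketch below. It uses a stand-in `fetch` function (a hypothetical placeholder raising `urllib.error.HTTPError` for a missing page) so it runs without network access; in the real script you would call `urllib.request.urlopen(url)` instead. The slug list is also a made-up example standing in for the question's `name` array.

```python
import urllib.error

def fetch(url):
    # Stand-in for urllib.request.urlopen so the sketch runs offline.
    # In real scraping, replace this call with urllib.request.urlopen(url).
    if "no-such-coin" in url:
        raise urllib.error.HTTPError(url, 404, "Not Found", None, None)
    return "<div class='table-responsive'>...</div>"

# Hypothetical currency slugs standing in for the question's `name` list.
name = ["bitcoin", "no-such-coin", "ethereum"]

pages = []
for slug in name:
    url = ("https://coinmarketcap.com/currencies/" + slug
           + "/historical-data/?start=20130429&end=20191016")
    try:
        pages.append(fetch(url))
    except urllib.error.HTTPError as e:
        # A missing page raises HTTPError (e.g. 404): report and move on.
        print("skipping", url, "-", e.code)
        continue
```

Catching the specific `urllib.error.HTTPError` (or its parent `URLError`) rather than a bare `Exception` is usually preferable, so that unrelated bugs in the parsing code still surface instead of being silently skipped.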