pytorch - Loading objects from Google Drive takes a lot more RAM than creating the same objects at runtime in Colab
Problem description
I am using Colab Pro+ to create a population of GANs. I save this population using torch.save(). While the GANs exist in the runtime environment, they do not take up very much RAM, but when I reload them from Google Drive during a later session, they explode my RAM, taking up like 40x the memory they did before.
Does anyone have an idea why? Thanks!
Here is the code I use to save and load the GANs:
import torch

def saveEntirePopulation(keyPath, population):
    for ind, gan in enumerate(population):
        torch.save({'gan': gan}, keyPath + 'population_' + str(ind))

def saveOneGan(keyPath, gan, index):
    torch.save({'gan': gan}, keyPath + 'population_' + str(index))

def loadPopulation(keyPath, popSize):
    popDictArr = []
    popArr = []
    for ind in range(popSize):
        popDictArr.append(torch.load(keyPath + 'population_' + str(ind)))
        popArr.append(popDictArr[ind]['gan'])
        popArr[ind].generator.train()
        popArr[ind].discriminator.train()
    return popArr
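One commonly suggested mitigation (not from the original post): torch.save(gan) pickles the entire Python object graph, including anything the GAN happens to reference (optimizer state, gradients, cached tensors), and torch.load restores tensors to their original device by default. Saving only the state_dicts and reloading with map_location='cpu' usually keeps the footprint close to the parameter tensors themselves. A minimal sketch, where SimpleGAN is a hypothetical stand-in for the poster's GAN class (which is not shown in the question):

```python
import torch
import torch.nn as nn

class SimpleGAN:
    # Placeholder for the poster's GAN: tiny generator/discriminator nets.
    def __init__(self):
        self.generator = nn.Linear(4, 4)
        self.discriminator = nn.Linear(4, 1)

def save_gan(key_path, gan, index):
    # Save only the parameter tensors, not the pickled Python objects.
    torch.save(
        {
            'generator': gan.generator.state_dict(),
            'discriminator': gan.discriminator.state_dict(),
        },
        key_path + 'population_' + str(index),
    )

def load_gan(key_path, index):
    # map_location='cpu' keeps the load from re-allocating on the GPU
    # the checkpoint was saved from, a common source of memory surprises.
    ckpt = torch.load(key_path + 'population_' + str(index),
                      map_location='cpu')
    gan = SimpleGAN()
    gan.generator.load_state_dict(ckpt['generator'])
    gan.discriminator.load_state_dict(ckpt['discriminator'])
    return gan
```

This requires reconstructing the GAN object before loading, but the checkpoint then contains nothing beyond the weights.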
Solution