Colab Pro+: is it possible to keep RAM from increasing?

Problem description

I am using Colab Pro+ and training has been going well. But today I happened to notice a RAM error: as soon as training started, RAM climbed to the limit and crashed the run. So I reduced the batch size, and then I got:

  File "/content/drive/MyDrive/onoff_dnn/models/research/object_detection/legacy/train.py", line 186, in <module>
    tf.app.run()
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/content/drive/MyDrive/onoff_dnn/models/research/object_detection/legacy/train.py", line 182, in main
    graph_hook_fn=graph_rewriter_fn)
  File "/content/drive/MyDrive/onoff_dnn/models/research/object_detection/legacy/trainer.py", line 290, in train
    clones = model_deploy.create_clones(deploy_config, model_fn, [input_queue])
  File "/content/drive/MyDrive/onoff_dnn/models/research/slim/deployment/model_deploy.py", line 192, in create_clones
    outputs = model_fn(*args, **kwargs)
  File "/content/drive/MyDrive/onoff_dnn/models/research/object_detection/legacy/trainer.py", line 180, in _create_losses
    train_config.use_multiclass_scores)
ValueError: not enough values to unpack (expected 7, got 0)
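Since the original problem is RAM climbing to the Colab limit, it can help to log memory usage while training runs. A minimal sketch using `psutil` (preinstalled in Colab); the function name is my own, not part of any library:

```python
import psutil  # preinstalled in Colab runtimes


def ram_usage_gb():
    """Return (used, total) system RAM in gigabytes."""
    vm = psutil.virtual_memory()
    return vm.used / 1e9, vm.total / 1e9


used, total = ram_usage_gb()
print(f"RAM: {used:.2f} / {total:.2f} GB")
```

Calling this periodically (e.g. from a training hook or a background thread) shows whether memory grows steadily with each step, which points to a leak, or spikes at startup, which points to batch size or input-pipeline buffering.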

The command I use to start training:

!python /content/drive/MyDrive/onoff_dnn/models/research/object_detection/legacy/train.py \
    --logtostderr \
    --train_dir=/content/drive/MyDrive/onoff_dnn/models/newnormaltrain0924_11 \
    --pipeline_config_path=/content/drive/MyDrive/onoff_dnn/models/ssd_mobilenet_v1_0.75_depth_quantized_300x300_coco14_sync.config
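For reference, the batch size for this training script is set in the `train_config` block of the pipeline `.config` file passed via `--pipeline_config_path`, not on the command line. A sketch of the relevant fragment (the value shown is illustrative, not taken from the question):

```proto
train_config {
  # Lower this value if training hits the Colab RAM/GPU memory limit.
  batch_size: 8
}
```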

Tags: google-colaboratory
