
Problem description

I am working on computer vision in Google Colab. I have an array of video file names. In a loop, I iterate over this array; on each iteration I read the video with cv2 and release it before the iteration finishes. After processing only a few videos, my RAM in Google Colab is full. What should I do to keep my RAM from overflowing?

import cv2
import numpy as np
import tensorflow as tf
from tqdm import tqdm
from tensorflow.keras.utils import to_categorical

# make dataset only for one person
def make_dataset(df, type_, seq_len=6):
  uses_keypoints = [0, 11, 12, 13, 14, 15, 16, 23, 24, 25, 26, 27, 28]
  x = tf.Variable(tf.zeros(shape=(0, seq_len, 13, 3)), dtype='float32')
  y = []
  for k in tqdm(range(df.shape[0])):
    cap = cv2.VideoCapture(df.iloc[k].file_name)
    detector = poseDetector()  # custom pose-estimation helper (defined elsewhere)
    keypoints = tf.Variable(tf.zeros(shape=(0, 13, 3)), dtype='float32')
    # sample every 6th frame of the video
    for i in range(0, int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), 6):
      cap.set(cv2.CAP_PROP_POS_FRAMES, i)
      success, img = cap.read()
      if not success:  # guard against a failed read returning img=None
        continue
      lmList = detector.findPose(img)
      if lmList:
        lmList = np.array(lmList)
        lmList = lmList[uses_keypoints]
        lmList = tf.cast(lmList, dtype='float32')
        lmList = tf.expand_dims(lmList, 0)
        keypoints = tf.concat([keypoints, lmList], axis=0)
    cap.release()
    cv2.destroyAllWindows()
    # build overlapping sequences of seq_len consecutive keypoint frames
    sequences = [keypoints[i - seq_len : i] for i in range(seq_len, keypoints.shape[0])]
    label = df.iloc[k].person_id
    for sequence in sequences:
      x = tf.concat([x, tf.expand_dims(sequence, 0)], axis=0)
      y.append(label)
  y = to_categorical(y)
  path_to_load = f'/some_path'
  save_dataset(x, y, path_to_load)  # custom helper (defined elsewhere)

Tags: python tensorflow opencv ram

Solution


In the loop `for k in tqdm(range(df.shape[0])):`, you initialize your detector on every iteration. It would be better to initialize the detector once, before the loop starts. That way you avoid reloading the detector for every video.
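The restructuring can be sketched as follows. `PoseDetectorStub` is a hypothetical stand-in for the asker's `poseDetector` class (the real model loading is not shown); it only counts how often the expensive constructor runs, to show that hoisting the initialization out of the loop builds the detector a single time rather than once per video:

```python
class PoseDetectorStub:
    """Hypothetical stand-in for poseDetector; counts constructions."""
    instances = 0

    def __init__(self):
        # In the real class this is where a heavy model would be loaded.
        PoseDetectorStub.instances += 1

    def findPose(self, frame):
        return frame  # dummy result, stands in for keypoint detection


def process_videos(video_ids):
    # Initialize the detector ONCE, before the loop -- not inside it.
    detector = PoseDetectorStub()
    results = []
    for vid in video_ids:
        results.append(detector.findPose(vid))
    return results


process_videos(list(range(5)))
print(PoseDetectorStub.instances)  # prints 1: one detector for all five videos
```

In the original code the same hoisting applies: move `detector = poseDetector()` above `for k in tqdm(range(df.shape[0])):` and reuse it in every iteration.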

