ffmpeg piped to Python and displayed with cv2.imshow slides to the right and changes color

Problem description

Code:

import cv2
import time
import subprocess
import numpy as np

w,h = 1920, 1080
fps = 15

def ffmpegGrab():
    """Generator to read frames from ffmpeg subprocess"""
    cmd = f'.\\Resources\\ffmpeg.exe -f gdigrab -framerate {fps} -offset_x 0 -offset_y 0 -video_size {w}x{h} -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe -' 

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    while True:
        raw_frame = proc.stdout.read(w*h*3)
        frame = np.fromstring(raw_frame, np.uint8)
        frame = frame.reshape((h, w, 3))
        yield frame

# Get frame generator
gen = ffmpegGrab()

# Get start time
start = time.time()

# Read video frames from ffmpeg in loop
nFrames = 0
while True:
    # Read next frame from ffmpeg
    frame = next(gen)
    nFrames += 1

    frame = cv2.resize(frame, (w // 4, h // 4))

    cv2.imshow('screenshot', frame)

    if cv2.waitKey(1) == ord("q"):
        break

    fps = nFrames/(time.time()-start)
    print(f'FPS: {fps}')


cv2.destroyAllWindows()

The code does display the desktop capture, but the color format appears to be switched, and the video scrolls to the right as if it were repeating itself. Am I approaching this the right way?

Tags: python, ffmpeg

Solution


The cause of the problem is stderr=subprocess.STDOUT.

  • The argument stderr=subprocess.STDOUT redirects stderr to stdout.
  • stdout is used as the PIPE for reading the output video from the FFmpeg subprocess.
  • FFmpeg writes text messages to stderr, and (because of the redirection) that text gets "mixed" into the raw video on stdout. Every stray text byte shifts all the pixel bytes that follow it, so the image drifts sideways and the BGR channel order rotates — exactly the strange sliding and color change you see.

Replace proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True) with:

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
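If you still want to see FFmpeg's log output for debugging, one alternative (a minimal sketch; the file name ffmpeg.log is an arbitrary choice, not part of the original answer) is to send stderr to a file instead of discarding it:

# Hypothetical variant: keep FFmpeg's log in a file instead of discarding it.
log_file = open('ffmpeg.log', 'wb')  # arbitrary file name for illustration
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=log_file)

Either way, the important part is that stderr no longer shares the stdout pipe with the raw video.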

Minor corrections:

As Mark Setchell commented, use np.frombuffer() instead of np.fromstring(), and avoid shell=True.

Replace -f image2pipe with -f rawvideo:
the output format is raw video rather than images (the code also works with image2pipe, but rawvideo is more correct).
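Since the answer also recommends avoiding shell=True, here is a minimal sketch of the same FFmpeg command built as an argument list (paths and options copied from the original command; this variant is an illustration, not part of the original answer):

# Same capture command as an argument list, so no shell parsing is involved.
cmd = ['.\\Resources\\ffmpeg.exe',
       '-f', 'gdigrab', '-framerate', str(fps),
       '-offset_x', '0', '-offset_y', '0',
       '-video_size', f'{w}x{h}', '-i', 'desktop',
       '-pix_fmt', 'bgr24', '-vcodec', 'rawvideo',
       '-an', '-sn', '-f', 'rawvideo', '-']
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

Passing a list lets subprocess handle quoting itself, so there is no shell involved at all.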


Full updated code:

import cv2
import time
import subprocess
import numpy as np

w,h = 1920, 1080
fps = 15

def ffmpegGrab():
    """Generator to read frames from ffmpeg subprocess"""
    # Use "-f rawvideo" instead of "-f image2pipe" (command is working with image2pipe, but rawvideo is the correct format).
    cmd = f'.\\Resources\\ffmpeg.exe -f gdigrab -framerate {fps} -offset_x 0 -offset_y 0 -video_size {w}x{h} -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f rawvideo -'

    #proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)

    # Don't use stderr=subprocess.STDOUT, and don't use shell=True
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    while True:
        raw_frame = proc.stdout.read(w*h*3)
        frame = np.frombuffer(raw_frame, np.uint8)  # Use frombuffer instead of fromstring
        frame = frame.reshape((h, w, 3))
        yield frame

# Get frame generator
gen = ffmpegGrab()

# Get start time
start = time.time()

# Read video frames from ffmpeg in loop
nFrames = 0
while True:
    # Read next frame from ffmpeg
    frame = next(gen)
    nFrames += 1

    frame = cv2.resize(frame, (w // 4, h // 4))

    cv2.imshow('screenshot', frame)

    if cv2.waitKey(1) == ord("q"):
        break

    fps = nFrames/(time.time()-start)
    print(f'FPS: {fps}')


cv2.destroyAllWindows()
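Note that proc.stdout.read(w*h*3) can return fewer bytes than requested if FFmpeg exits (for example, when the capture fails), in which case reshape raises a ValueError. A minimal sketch of a guarded version of the read loop, assuming the same generator structure as the code above:

    while True:
        raw_frame = proc.stdout.read(w*h*3)
        if len(raw_frame) != w*h*3:
            # FFmpeg exited or the pipe was closed; stop the generator.
            break
        frame = np.frombuffer(raw_frame, np.uint8).reshape((h, w, 3))
        yield frame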
