D3D11 Best way to decode multiple videos in real-time

Problem description

I am developing a C++/Qt application that decodes multiple IP camera video streams in real time. Each stream is displayed simultaneously in its own QWidget.

For performance reasons, I use the D3D11 API to perform the decoding, post-processing (color-space conversion and scaling) and rendering.
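For context, this is roughly the setup involved: the D3D11 video pipeline hangs off a regular D3D11 device, from which the video interfaces are queried. A minimal sketch, assuming a single shared device as described below (function names and error handling are illustrative, not from the original code):

```cpp
#include <d3d11.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create one D3D11 device/context pair and query the video interfaces
// (ID3D11VideoDevice / ID3D11VideoContext) used for decoding and
// post-processing. All camera threads would share these objects.
bool createSharedDevice(ComPtr<ID3D11Device>&        device,
                        ComPtr<ID3D11DeviceContext>& context,
                        ComPtr<ID3D11VideoDevice>&   videoDevice,
                        ComPtr<ID3D11VideoContext>&  videoContext)
{
    // Video support flag is required to use the video decoding APIs.
    UINT flags = D3D11_CREATE_DEVICE_VIDEO_SUPPORT;
    D3D_FEATURE_LEVEL level;

    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                   flags, nullptr, 0, D3D11_SDK_VERSION,
                                   &device, &level, &context);
    if (FAILED(hr))
        return false;

    // The video interfaces are obtained from the same device/context,
    // so every camera thread decodes on the same GPU device.
    return SUCCEEDED(device.As(&videoDevice)) &&
           SUCCEEDED(context.As(&videoContext));
}
```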

I'm not sure what the best architecture for this is. At the moment I create only one ID3D11Device, ID3D11DeviceContext, ID3D11VideoDevice, ID3D11VideoContext and IDXGIFactory. For each IP camera stream, I launch a thread that handles the video processing. I enable the ID3D11Multithread layer to ensure thread safety, and I create a dedicated swap chain for each QWidget using its winId(). With this architecture I have to protect the rendering steps, because otherwise I run into issues with the viewport.
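To make the setup concrete, here is a sketch of the two per-architecture pieces mentioned above: turning on multithread protection on the shared immediate context, and creating one swap chain per QWidget from its native window handle. This assumes the shared device/context from the previous snippet; names are illustrative only.

```cpp
#include <d3d11_4.h>
#include <dxgi1_2.h>
#include <wrl/client.h>
#include <QWidget>

using Microsoft::WRL::ComPtr;

// Enable the multithread protection layer once on the shared immediate context.
bool enableMultithreadProtection(const ComPtr<ID3D11DeviceContext>& context)
{
    ComPtr<ID3D11Multithread> multithread;
    if (FAILED(context.As(&multithread)))
        return false;
    multithread->SetMultithreadProtected(TRUE);
    return true;
}

// Create a dedicated swap chain bound to one QWidget through its native HWND.
ComPtr<IDXGISwapChain1> createSwapChainForWidget(const ComPtr<ID3D11Device>& device,
                                                 QWidget* widget)
{
    ComPtr<IDXGIDevice>   dxgiDevice;
    ComPtr<IDXGIAdapter>  adapter;
    ComPtr<IDXGIFactory2> factory;
    if (FAILED(device.As(&dxgiDevice)) ||
        FAILED(dxgiDevice->GetAdapter(&adapter)) ||
        FAILED(adapter->GetParent(IID_PPV_ARGS(&factory))))
        return nullptr;

    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount      = 2;
    desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_DISCARD;
    // Width/Height left at 0 so DXGI uses the current size of the window.

    HWND hwnd = reinterpret_cast<HWND>(widget->winId());
    ComPtr<IDXGISwapChain1> swapChain;
    if (FAILED(factory->CreateSwapChainForHwnd(device.Get(), hwnd, &desc,
                                               nullptr, nullptr, &swapChain)))
        return nullptr;
    return swapChain;
}
```

With this layout, every per-camera thread renders into its own swap chain, but all of them share the one immediate context, which is why the render/viewport state has to be guarded externally.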

This architecture works fine, but I don't know whether it is the most efficient solution. Would it be preferable to create a separate ID3D11Device/ID3D11DeviceContext (and so on) per thread to avoid the thread-safety issues? In your opinion, what is the best way to decode multiple videos in real time?

Tags: c++, qt, directx, directx-11, direct3d

Solution

