Swift: Converting a grayscale image into a CVPixelBuffer containing disparity

Problem description

I have a grayscale image of depth data that has been upsampled from its original resolution. I don't know how to convert the pixel values of the upscaled depth image (r, g, b) into floats.

Is there a way to convert a pixel's whiteness level into a float value?

Is there any way I can convert the CVPixelBufferFormatTypes of the CVPixelBuffer associated with the image?

In other words, is there a way to convert the grayscale image's pixel buffer into a CVPixelBuffer containing disparity floats?
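A minimal sketch of what such a whiteness-to-float mapping could look like, assuming a simple linear scale (the disparity(fromGray:minDisparity:maxDisparity:) helper and its bounds are illustrative only, not from the original post):

// Illustrative only: maps an 8-bit gray level to a disparity value,
// assuming a linear scale. Real minDisparity/maxDisparity bounds would
// have to come from the capture metadata, not these placeholder defaults.
func disparity(fromGray gray: UInt8,
               minDisparity: Float = 0.0,
               maxDisparity: Float = 1.0) -> Float {
    let normalized = Float(gray) / 255.0   // 0...1
    return minDisparity + normalized * (maxDisparity - minDisparity)
}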

I use the following code to extract a CVPixelBuffer from the CGImage representation of the upsampled depth data:

import CoreGraphics
import CoreVideo

// Presumably lives in an extension on CGImage, since the method draws `self`
// into the context.
extension CGImage {

    /// Renders this CGImage into a newly created 32BGRA CVPixelBuffer.
    func pixelBuffer() -> CVPixelBuffer? {
        let frameSize = CGSize(width: self.width, height: self.height)

        // COLOR IS BGRA
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(frameSize.width),
                                         Int(frameSize.height),
                                         kCVPixelFormatType_32BGRA,
                                         nil,
                                         &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
            return nil
        }

        // Lock the buffer and draw the image directly into its backing memory.
        CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        let data = CVPixelBufferGetBaseAddress(buffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        // byteOrder32Little + premultipliedFirst lays the bytes out as B,G,R,A,
        // matching kCVPixelFormatType_32BGRA (byteOrder32Big would give A,R,G,B).
        let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue
                                               | CGImageAlphaInfo.premultipliedFirst.rawValue)
        let context = CGContext(data: data,
                                width: Int(frameSize.width),
                                height: Int(frameSize.height),
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                space: rgbColorSpace,
                                bitmapInfo: bitmapInfo.rawValue)

        context?.draw(self, in: CGRect(x: 0, y: 0, width: self.width, height: self.height))

        CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))

        return buffer
    }
}
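
One possible route from the 32BGRA buffer above to a buffer of disparity floats is to create a second CVPixelBuffer with format kCVPixelFormatType_DisparityFloat32 and copy the gray channel across as Float32 values. The sketch below is only an illustration under that assumption; the disparityPixelBuffer(fromGrayscale:) name and the linear 0...1 gray-to-disparity mapping are not from the original post.

import CoreVideo

// Sketch: builds a kCVPixelFormatType_DisparityFloat32 buffer from an
// 8-bit BGRA grayscale buffer (B == G == R for a gray image, so any one
// colour byte carries the gray level). The linear 0...1 mapping is an
// assumption; a real disparity scale would come from the depth capture.
func disparityPixelBuffer(fromGrayscale source: CVPixelBuffer) -> CVPixelBuffer? {
    let width = CVPixelBufferGetWidth(source)
    let height = CVPixelBufferGetHeight(source)

    var output: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_DisparityFloat32,
                                     nil, &output)
    guard status == kCVReturnSuccess, let disparityBuffer = output else { return nil }

    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(disparityBuffer, [])
    defer {
        CVPixelBufferUnlockBaseAddress(disparityBuffer, [])
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
    }

    guard let srcBase = CVPixelBufferGetBaseAddress(source),
          let dstBase = CVPixelBufferGetBaseAddress(disparityBuffer) else { return nil }

    let srcBytesPerRow = CVPixelBufferGetBytesPerRow(source)
    let dstBytesPerRow = CVPixelBufferGetBytesPerRow(disparityBuffer)

    for y in 0..<height {
        let srcRow = srcBase.advanced(by: y * srcBytesPerRow)
                            .assumingMemoryBound(to: UInt8.self)
        let dstRow = dstBase.advanced(by: y * dstBytesPerRow)
                            .assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            let gray = srcRow[x * 4]             // blue byte of the BGRA pixel
            dstRow[x] = Float32(gray) / 255.0    // assumed linear 0...1 mapping
        }
    }
    return disparityBuffer
}

Calling this on the buffer returned by pixelBuffer() above would give a buffer whose pixel format type is kCVPixelFormatType_DisparityFloat32; whether the values are physically meaningful as disparity still depends on knowing what scale the grayscale image encodes.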

Tags: swift, cvpixelbuffer

Solution

