
Two Video Decoding Fault Cases

Case 1: Green edge


Fault analysis:

A green, semi-transparent band is clearly visible across the top of the frame. This tells us the Y (luma) data is intact but the UV (chroma) data is wrong.

It is evidently related to the vertical-resolution setting of the video frame.

The UV data is stored contiguously after the Y data, so if the chroma at the top comes out scrambled, that region is actually still Y data being read as UV. In other words, if this is a 1920*1080p stream, its real line count must be greater than 1080.

Adding (number of green-band rows)/2 to the 1080-line height makes the fault disappear:
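As a sketch of that arithmetic (the helper name is mine, not from the original tooling): in 4:2:0, one stored chroma row covers two display rows, so a green band N display rows tall means the configured height is short by N/2 lines.

```python
def corrected_height(assumed_height, green_band_rows):
    # In 4:2:0, each chroma row covers two display rows, so N rows of
    # green band correspond to N/2 missing luma lines.
    return assumed_height + green_band_rows // 2

# e.g. a 16-row green band on a frame decoded as 1080 lines:
print(corrected_height(1080, 16))  # -> 1088 (a common 16-aligned coded height)
```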

Case 2: Raster pattern

Fault analysis

With the configuration above, the horizontal data is misaligned: an image that should display normally is split into three band-shaped strips. The signature of the fault is hard to read in YUV mode, so first neutralize the UV data and examine a plain grayscale image:

Generating the corresponding gray UV data:

uv_plane = np.full((height // 2, width), 128, dtype=np.uint8)
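Putting that line in context, a minimal sketch (the function name is mine) that pairs an arbitrary Y plane with the neutral 128-valued chroma to get a viewable grayscale I420 frame:

```python
import numpy as np

def gray_i420(y_plane):
    # U and V quarter planes, both neutral (128), stacked after Y:
    # the result has the standard I420 shape (height * 3 // 2, width).
    height, width = y_plane.shape
    uv_plane = np.full((height // 2, width), 128, dtype=np.uint8)
    return np.concatenate((y_plane, uv_plane))

frame = gray_i420(np.zeros((4, 4), dtype=np.uint8))
print(frame.shape)  # -> (6, 4)
```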

Finally, the true width was recovered from the relative positions of key points in the grayscale image. After the fix:
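The key-point trick can be sketched like this (a hypothetical helper, not the original code): with a wrong stride, each displayed row shifts the content by (true - assumed) columns, so a feature that should be a vertical edge shears into a diagonal whose per-row slope is exactly the width error.

```python
def true_width(assumed_width, p1, p2):
    # p1, p2: (row, col) of two key points that should be vertically
    # aligned; the per-row horizontal drift equals the stride error.
    (r1, c1), (r2, c2) = p1, p2
    shift_per_row = (c2 - c1) / (r2 - r1)
    return assumed_width + round(shift_per_row)

# an edge drifting 64 columns over 16 rows under an assumed width of 1920:
print(true_width(1920, (0, 100), (16, 164)))  # -> 1924
```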

 

Appendix A: Reassembling video data scattered as 16*16 macroblocks into standard YUV-420

This code ended up unused and is kept as a spare; it was originally written for Case 2.

    def merge_to_yuv420(self, macroblock_data, width, height):
        """Reassemble per-macroblock data (256 Y bytes, 64 U bytes, 64 V
        bytes per 16x16 block, with the planes grouped) into full Y, U, V
        planes. Requires numpy imported as np at module level."""
        Y_size = width * height
        U_size = Y_size // 4  # U and V are each a quarter of Y's size

        # Destination planes for Y, U, V
        Y = np.empty((height, width), dtype=np.uint8)
        U = np.empty((height // 2, width // 2), dtype=np.uint8)
        V = np.empty((height // 2, width // 2), dtype=np.uint8)

        # Copy each macroblock into its place in the planes
        for r in range(height // 16):
            for c in range(width // 16):
                macroblock_index = r * (width // 16) + c
                Y_block_start = macroblock_index * 256
                U_block_start = Y_size + macroblock_index * 64
                V_block_start = Y_size + U_size + macroblock_index * 64

                # 16x16 Y block
                Y[r*16:(r+1)*16, c*16:(c+1)*16] = np.frombuffer(
                    macroblock_data[Y_block_start:Y_block_start+256],
                    dtype=np.uint8).reshape(16, 16)

                # 8x8 U and V blocks
                U[r*8:(r+1)*8, c*8:(c+1)*8] = np.frombuffer(
                    macroblock_data[U_block_start:U_block_start+64],
                    dtype=np.uint8).reshape(8, 8)
                V[r*8:(r+1)*8, c*8:(c+1)*8] = np.frombuffer(
                    macroblock_data[V_block_start:V_block_start+64],
                    dtype=np.uint8).reshape(8, 8)

        return Y, U, V


    def nv12_to_rgb(self, nv12, width, height):
        """Convert macroblock-scattered 4:2:0 data to RGB. Note: despite
        the name, the layout handled here is planar (I420-like), not
        interleaved NV12. Requires cv2 and np at module level."""
        Y, U, V = self.merge_to_yuv420(nv12, width, height)

        # Stack the planes into a standard I420 buffer of shape
        # (height * 3 // 2, width), then let OpenCV do the conversion.
        yuv_image = np.concatenate(
            (Y.reshape(-1), U.reshape(-1), V.reshape(-1))
        ).reshape(height * 3 // 2, width)
        rgb_image = cv2.cvtColor(yuv_image, cv2.COLOR_YUV2RGB_I420)
        return rgb_image
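A standalone sanity check of the per-macroblock offsets the appendix relies on (toy sizes, independent of the class above; `offsets` is a name I made up): the V block of the last macroblock must end exactly at the end of the buffer.

```python
# Toy frame: 64x32 pixels = 4x2 macroblocks of 16x16.
width, height = 64, 32
y_size = width * height          # 2048 bytes of Y
u_size = y_size // 4             # 512 bytes each for U and V

def offsets(r, c):
    # Same formulas as merge_to_yuv420 above.
    k = r * (width // 16) + c
    return k * 256, y_size + k * 64, y_size + u_size + k * 64

y0, u0, v0 = offsets(1, 3)             # last macroblock
assert v0 + 64 == y_size + 2 * u_size  # ends exactly at buffer end
```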


Original article: https://blog.csdn.net/twicave/article/details/140325315
