Codec 63



Presentation Transcript

1. Codec 63
[Encoder block diagram: the current frame Fn and the reference frame Fn-1 are loaded (Load YUV); motion estimation and motion compensation against Fn-1 produce a prediction; the residual passes through DCT and Quant to Entropy coding, while iQuant, iDCT and reconstruction produce the reference frame for the next picture.]

2. Not in Codec 63
[The same diagram extended with stages a full codec has but Codec 63 omits: Prepare (RGB2YUV), Reorder before Entropy coding, and Intra prediction with a fail/ok decision alongside motion estimation and compensation.]

3. main
[Same encoder diagram; highlighted code: c63enc.c:193 main]

4. read_yuv
[Load YUV step highlighted; code: c63enc.c:30 read_yuv]

5. c63_encode_image
[Per-frame encoding highlighted; code: c63enc.c:80 c63_encode_image]

6. c63_encode_image
[DCT and Quant steps highlighted; code: common.c:83 dct_quantize, common.c:56 dct_quantize_row]

7. DCT
Each 8×8 block (Y, Cb, Cr) is level-shifted by subtracting 128 from every pixel and converted to a frequency-domain representation using a normalized, two-dimensional DCT:

G_{u,v} = \frac{1}{4}\,\alpha(u)\,\alpha(v) \sum_{x=0}^{7} \sum_{y=0}^{7} g_{x,y} \cos\!\left[\frac{(2x+1)u\pi}{16}\right] \cos\!\left[\frac{(2y+1)v\pi}{16}\right]

where G_{u,v} is the DCT output at coordinates (u,v), u and v are from {0, ..., 7}, g_{x,y} is the (level-shifted) pixel value at input coordinates (x,y), and α is a normalizing function:

\alpha(u) = \begin{cases} 1/\sqrt{2} & \text{if } u = 0 \\ 1 & \text{otherwise} \end{cases}

8. DCT
A 2D DCT can be replaced by applying a 1D DCT twice, because the transform is separable. The two-dimensional DCT

G_{u,v} = \frac{1}{4}\,\alpha(u)\,\alpha(v) \sum_{x=0}^{7} \sum_{y=0}^{7} g_{x,y} \cos\!\left[\frac{(2x+1)u\pi}{16}\right] \cos\!\left[\frac{(2y+1)v\pi}{16}\right]

can be replaced by a 1D DCT over every row,

T_{x,v} = \frac{1}{2}\,\alpha(v) \sum_{y=0}^{7} g_{x,y} \cos\!\left[\frac{(2y+1)v\pi}{16}\right],

followed by a 1D DCT over every column of the intermediate result,

G_{u,v} = \frac{1}{2}\,\alpha(u) \sum_{x=0}^{7} T_{x,v} \cos\!\left[\frac{(2x+1)u\pi}{16}\right].
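Below is a minimal C sketch of this separable approach: a 1D DCT on every row, a transpose, and a 1D DCT on the rows again. The names echo dct_1d and transpose_block from dsp.c (shown on the next slide), but the bodies here are illustrative only, not the actual Codec 63 implementation.

#include <math.h>

#define PI_F 3.14159265358979f

/* Orthonormal 1D DCT of 8 samples (hypothetical helper, not dsp.c's dct_1d). */
static void dct_1d_sketch(const float *in, float *out)
{
  for (int u = 0; u < 8; ++u)
  {
    float a = (u == 0) ? (1.0f / sqrtf(2.0f)) : 1.0f;
    float sum = 0.0f;

    for (int x = 0; x < 8; ++x)
      sum += in[x] * cosf((2 * x + 1) * u * PI_F / 16.0f);

    out[u] = 0.5f * a * sum;
  }
}

/* Transpose an 8x8 block so the column pass can reuse the row routine. */
static void transpose_block_sketch(const float *in, float *out)
{
  for (int y = 0; y < 8; ++y)
    for (int x = 0; x < 8; ++x)
      out[x * 8 + y] = in[y * 8 + x];
}

/* 2D DCT of one 8x8 block: rows, transpose, rows again, transpose back. */
static void dct_2d_sketch(const float *in, float *out)
{
  float tmp1[64], tmp2[64];

  for (int row = 0; row < 8; ++row)
    dct_1d_sketch(in + row * 8, tmp1 + row * 8);

  transpose_block_sketch(tmp1, tmp2);

  for (int row = 0; row < 8; ++row)
    dct_1d_sketch(tmp2 + row * 8, tmp1 + row * 8);

  transpose_block_sketch(tmp1, out);
}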

9. c63_encode_image
[DCT and Quant call chain; code: common.c:83 dct_quantize, common.c:56 dct_quantize_row, dsp.c:105 dct_quant_block_8x8, dsp.c:72 quantize_block, dsp.c:21 dct_1d, dsp.c:7 transpose_block, dsp.c:19 scale_block]

10. Quantization Example
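As a hedged sketch of what such an example computes: each DCT coefficient is divided by the corresponding entry of a quantization table and rounded, and dequantization multiplies it back. The real tables live in tables.c (yquanttbl_def, uvquanttbl_def); the code below is illustrative only and not the exact quantize_block/dequantize_block from dsp.c.

#include <math.h>
#include <stdint.h>

/* Divide each coefficient by its table entry and round; this rounding is
 * where the loss happens (hypothetical helper, not dsp.c's quantize_block). */
static void quantize_block_sketch(const float *dct, const uint8_t *quant_tbl,
                                  int16_t *out)
{
  for (int i = 0; i < 64; ++i)
    out[i] = (int16_t) roundf(dct[i] / quant_tbl[i]);
}

/* Multiply back by the same table entry; the rounding error cannot be
 * recovered (hypothetical helper, not dsp.c's dequantize_block). */
static void dequantize_block_sketch(const int16_t *in, const uint8_t *quant_tbl,
                                    float *out)
{
  for (int i = 0; i < 64; ++i)
    out[i] = (float) (in[i] * quant_tbl[i]);
}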

11. c63_encode_image
[DCT and Quant call chain with the quantization tables; code: common.c:83 dct_quantize, common.c:56 dct_quantize_row, dsp.c:105 dct_quant_block_8x8, dsp.c:72 quantize_block, dsp.c:21 dct_1d, dsp.c:7 transpose_block, dsp.c:19 scale_block, tables.c:7 yquanttbl_def, tables.c:19 uvquanttbl_def]

12. c63_encode_image
[DCT/Quant and iQuant/iDCT call chains; code: common.c:83 dct_quantize, common.c:56 dct_quantize_row, dsp.c:105 dct_quant_block_8x8, dsp.c:72 quantize_block, dsp.c:21 dct_1d, common.c:44 dequantize_idct, common.c:13 dequantize_idct_row, dsp.c:127 dequant_idct_block_8x8, dsp.c:88 dequantize_block, dsp.c:38 idct_1d, tables.c:7 yquanttbl_def, tables.c:19 uvquanttbl_def, dsp.c:7 transpose_block, dsp.c:19 scale_block]
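For the iQuant/iDCT path, the sketch below shows an inverse 1D DCT that matches the forward sketch given earlier; the real dsp.c:38 idct_1d may be organized differently (for example operating on whole blocks with a transpose, as on the forward path).

#include <math.h>

#define PI_F 3.14159265358979f

/* Inverse of the orthonormal 1D DCT sketch: rebuilds 8 samples from 8
 * coefficients (hypothetical helper, not dsp.c's idct_1d). */
static void idct_1d_sketch(const float *in, float *out)
{
  for (int x = 0; x < 8; ++x)
  {
    float sum = 0.0f;

    for (int u = 0; u < 8; ++u)
    {
      float a = (u == 0) ? (1.0f / sqrtf(2.0f)) : 1.0f;
      sum += 0.5f * a * in[u] * cosf((2 * x + 1) * u * PI_F / 16.0f);
    }

    out[x] = sum;
  }
}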

13. Lossless compression
The resulting data for all 8×8 blocks is further compressed with a lossless algorithm:
1. Organize the numbers in a zigzag pattern, e.g. -26, -3, 0, -3, -2, -6, 2, -4, 1, -4, 1, 1, 5, 1, 2, -1, 1, -1, 2, 0, 0, 0, 0, 0, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, ..., 0, 0
2. Run-length coding
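A minimal C sketch of this lossless stage: the quantized 8×8 block is read out in zigzag order, and runs of zeros are collapsed into (run, value) pairs. The (run, value) output format is illustrative only and not the exact bitstream layout written by c63write.c; the zigzag table is the standard JPEG scan order.

#include <stdint.h>
#include <stdio.h>

/* Zigzag scan order: position i in the scan maps to raster index zigzag[i]. */
static const uint8_t zigzag[64] = {
   0,  1,  8, 16,  9,  2,  3, 10,
  17, 24, 32, 25, 18, 11,  4,  5,
  12, 19, 26, 33, 40, 48, 41, 34,
  27, 20, 13,  6,  7, 14, 21, 28,
  35, 42, 49, 56, 57, 50, 43, 36,
  29, 22, 15, 23, 30, 37, 44, 51,
  58, 59, 52, 45, 38, 31, 39, 46,
  53, 60, 61, 54, 47, 55, 62, 63
};

/* Emit (zero-run, value) pairs for one quantized 8x8 block. */
static void run_length_sketch(const int16_t *block)
{
  int run = 0;

  for (int i = 0; i < 64; ++i)
  {
    int16_t v = block[zigzag[i]];

    if (v == 0) { ++run; continue; }       /* extend the current zero run */

    printf("(%d zeros, %d)\n", run, v);    /* emit one (run, value) pair */
    run = 0;
  }

  if (run > 0)
    printf("(end of block after %d trailing zeros)\n", run);
}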

14. c63_encode_image
[Entropy coding step highlighted; code: c63write.c:345 write_frame]

15. Full Search Motion Estimation
[Diagram: a block from Fn (current) is compared against every candidate position in a search window of Fn-1 (reference), "and so on ...", to find the best match.]
For comparing blocks: SAD (Sum of Absolute Differences). The search window W is a fixed set of candidate positions, but not only integer ones!
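The following C sketch shows integer-pel full search with SAD. It assumes an 8×8 block and a ±16 pixel search range; the actual block size, search range and data layout used by the Codec 63 motion estimator may differ, and sub-pixel refinement is not shown.

#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

/* Sum of Absolute Differences between two 8x8 blocks sharing one stride. */
static int sad_8x8(const uint8_t *cur, const uint8_t *ref, int stride)
{
  int sad = 0;

  for (int y = 0; y < 8; ++y)
    for (int x = 0; x < 8; ++x)
      sad += abs(cur[y * stride + x] - ref[y * stride + x]);

  return sad;
}

/* Exhaustively try every integer offset in a +/-16 window around the block
 * at (mb_x, mb_y) and keep the offset with the lowest SAD. */
static void full_search_sketch(const uint8_t *cur, const uint8_t *ref,
                               int stride, int w, int h, int mb_x, int mb_y,
                               int *best_mv_x, int *best_mv_y)
{
  const uint8_t *cur_blk = cur + mb_y * stride + mb_x;
  int best_sad = INT_MAX;

  *best_mv_x = 0;
  *best_mv_y = 0;

  for (int dy = -16; dy <= 16; ++dy)
  {
    for (int dx = -16; dx <= 16; ++dx)
    {
      int rx = mb_x + dx, ry = mb_y + dy;

      /* Skip candidates that reach outside the reference frame. */
      if (rx < 0 || ry < 0 || rx + 8 > w || ry + 8 > h)
        continue;

      int sad = sad_8x8(cur_blk, ref + ry * stride + rx, stride);

      if (sad < best_sad)
      {
        best_sad = sad;
        *best_mv_x = dx;
        *best_mv_y = dy;
      }
    }
  }
}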

16. c63_encode_image
[Same diagram; code: c63write.c:345 write_frame]

17. Motion Estimation
The estimators often use a two-step process, with an initial coarse evaluation followed by refinements. Refinements include trying every block in the area, and also using sub-pixel precision (interpolation); H.264 goes down to quarter-pixel precision.
Don't do this for every frame: you must sometimes encode macroblocks in a "safe" mode that doesn't rely on others. This is called "Intra" mode. When a complete frame is encoded in I-mode (as always in MPEG-1 and MPEG-2), it is called an I-frame. x264 calls I-frames "keyframes", but the word keyframe has many, many other meanings as well; avoid misunderstandings by writing I-frame.

18. Motion Compensation
When the best motion vector has been found and refined, a predicted image is generated using the motion vectors. The reference frame cannot be used directly as input to the motion compensator: the decoder never sees the original image. Instead, it sees a reconstructed image, i.e. an image that has been quantized (with loss). A reconstructed reference image must therefore be used as input to motion compensation.
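A minimal C sketch of motion compensation under these assumptions: the predicted block is copied from the reconstructed reference frame at the position the motion vector points to. Names, the 8×8 block size and the flat frame layout are illustrative, not the exact Codec 63 code.

#include <stdint.h>

/* Copy the 8x8 block at (mb_x + mv_x, mb_y + mv_y) in the reconstructed
 * reference into the predicted frame at (mb_x, mb_y). */
static void motion_compensate_block_sketch(const uint8_t *recon_ref,
                                           uint8_t *predicted, int stride,
                                           int mb_x, int mb_y,
                                           int mv_x, int mv_y)
{
  for (int y = 0; y < 8; ++y)
    for (int x = 0; x < 8; ++x)
      predicted[(mb_y + y) * stride + (mb_x + x)] =
          recon_ref[(mb_y + mv_y + y) * stride + (mb_x + mv_x + x)];
}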

19. Frame Reconstruction
The motion compensator requires as input the same reference frame as the decoder will see. De-quantize and inverse-transform the residuals and add them to our predicted frame. The result is (roughly) the same reconstructed frame as the decoder will receive.
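A minimal C sketch of that reconstruction step for one block: the dequantized, inverse-transformed residual is added to the predicted block and clamped to the 8-bit range, so the encoder keeps the same reference the decoder will rebuild. All names are illustrative.

#include <stdint.h>

/* recon = clamp(predicted + idct_residual), per pixel of an 8x8 block. */
static void reconstruct_block_sketch(const uint8_t *predicted,
                                     const float *idct_residual,
                                     uint8_t *recon, int stride)
{
  for (int y = 0; y < 8; ++y)
  {
    for (int x = 0; x < 8; ++x)
    {
      int v = predicted[y * stride + x] + (int) idct_residual[y * 8 + x];

      /* Clamp to the valid 8-bit pixel range. */
      if (v < 0)   v = 0;
      if (v > 255) v = 255;

      recon[y * stride + x] = (uint8_t) v;
    }
  }
}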

20. Residual Transformation
The pixel difference between the original frame and the predicted frame is called the residual. Since the residuals only express the difference from the prediction, they are much more compact than full pixel values such as in JPEG. Residuals are transformed using DCT and quantization. MPEG uses special quantization tables for residuals; in INF5063, we don't (so far).
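A minimal C sketch of forming the residual block that feeds the DCT: the motion-compensated prediction is subtracted from the original pixels, and the (typically small) differences are what get transformed and quantized. Names and layout are illustrative only.

#include <stdint.h>

/* residual = original - predicted, per pixel of an 8x8 block; the result
 * is signed, which is why it is stored as int16_t. */
static void compute_residual_block_sketch(const uint8_t *orig,
                                          const uint8_t *predicted,
                                          int16_t *residual, int stride)
{
  for (int y = 0; y < 8; ++y)
    for (int x = 0; x < 8; ++x)
      residual[y * 8 + x] =
          (int16_t) (orig[y * stride + x] - predicted[y * stride + x]);
}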