Please use this identifier to cite or link to this item: http://hdl.handle.net/11455/6389
Title: VLSI Architecture Design and Implementation for H.264/AVC Intra Frame Prediction
Author: Ken, Yen-Chi
Keywords: VLSI Architecture Design
H.264
Intra Frame Prediction
VLSI
Publisher: Department of Electrical Engineering
Abstract: H.264 is a new video compression standard. This thesis studies intra frame prediction, which performs spatial prediction within a frame using the boundary pixels of neighboring blocks. In the baseline profile, intra prediction offers nine 4×4 and four 16×16 spatial prediction modes for luma, and four 8×8 spatial prediction modes for chroma, so deciding on the best prediction mode is a central problem. The 4×4 prediction is special in that each 4×4 block must be reconstructed before the next block can be predicted; this reconstruction loop requires a 4×4 discrete cosine transform (DCT), quantization, inverse quantization, inverse 4×4 DCT, and reconstruction. The 16×16 and 8×8 predictions have no such constraint, but after the prediction residual passes through the 4×4 DCT, the DC value of each 4×4 block is gathered and must additionally undergo a Hadamard transform and quantization before it can be handed to the entropy coder.
On the algorithm side, we took the H.264 reference software as a basis, simulated it, and decided how to select the best prediction mode according to image quality and the likely hardware cost. Based on the simulation results we removed the prediction mode that consumes the most hardware resources, striking a balance between image quality and hardware cost.
On the hardware side, we propose an intra prediction generator that produces 16 pixels per cycle and supports all nine prediction modes by adjusting its input values. For the transform, we adapt an existing direct-form transform architecture to our system; it offers throughputs of 16 and 8 pixels per cycle, giving our design flexibility. We first implemented each block (intra prediction generator, (inverse) DCT, (inverse) quantization, (inverse) Hadamard transform) to obtain its approximate latency, planned the data flow of the whole system, and finally realized it in hardware.
For intra frame coding, our architecture supplies the quantized coefficients and the best prediction mode to the entropy coder, and meets the HDTV requirement (1280×720, 30 frames/s). Within a full H.264 encoder, it also provides reconstructed frames for motion estimation of the next frame.
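The baseline data path the abstract describes (boundary-pixel prediction, 4×4 integer transform, extra Hadamard transform on the gathered DC terms) can be sketched in a few lines. This is an illustrative NumPy model, not the thesis hardware: the function names are ours, only two of the nine 4×4 modes are shown, and the scaling/quantization factors that H.264 folds into the transform are omitted.

```python
import numpy as np

# Core matrix of the 4x4 forward integer transform defined by H.264/AVC.
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

# 4x4 Hadamard matrix applied to the gathered DC coefficients (16x16 luma).
H4 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1, -1,  1],
               [1, -1,  1, -1]])

def intra4x4_vertical(top):
    """Mode 0 (vertical): copy the four pixels above the block down each column."""
    return np.tile(np.asarray(top), (4, 1))

def intra4x4_dc(top, left):
    """Mode 2 (DC): every pixel is the rounded mean of the eight boundary neighbours."""
    return np.full((4, 4), (int(np.sum(top)) + int(np.sum(left)) + 4) // 8)

def forward_core_transform(residual):
    """W = CF . X . CF^T  -- post-scaling is folded into quantization, omitted here."""
    return CF @ residual @ CF.T

def hadamard_dc(dc_block):
    """Second-stage transform of the 4x4 array of DC terms in 16x16 prediction."""
    return H4 @ dc_block @ H4.T

# Usage: predict a 4x4 block from its neighbours, form the residual, transform it.
block = np.full((4, 4), 14)                            # flat source block
pred  = intra4x4_dc([8, 8, 8, 8], [16, 16, 16, 16])    # DC prediction = 12
coeff = forward_core_transform(block - pred)           # energy compacts into coeff[0, 0]
```

A flat residual illustrates the energy compaction the thesis relies on: after the core transform only the DC coefficient is nonzero, which is exactly the term the 16×16 path collects and runs through `hadamard_dc`.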
URI: http://hdl.handle.net/11455/6389
Other Identifier: U0005-1508200601262600
Article Link: http://www.airitilibrary.com/Publication/alDetailedMesh1?DocID=U0005-1508200601262600
Appears in Collections: Department of Electrical Engineering

Files in This Item:

To obtain the full text, please visit the Airiti Library.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.