LETTERS International Journal of Recent Trends in Engineering, Vol 2, No. 6, November 2009

FPGA Implementation of High Speed Architecture for Max Log Map Turbo SISO Decoder

J.M. Mathana, Dr. P. Rangarajan
R.M.D. Engineering College, Kavaraipettai – 601206
Email: jm.mathana@gmail.com, rangarajan_69@yahoo.co.in

Abstract -- This paper presents a turbo soft-in soft-out (SISO) decoder based on the Max-Log-MAP algorithm using sliding window techniques. The proposed architecture is based on branch metric normalization to improve the speed of operation of the decoder. The architecture is coded in a hardware description language, and the code is simulated and synthesized. From the synthesis report, it is observed that the path delay is reduced to 12.626 ns, compared with 23.207 ns for the conventional architecture.

Index terms -- SISO, FPGA, ML-MAP, LLR.

I. INTRODUCTION

For many digital communication services, bandwidth and transmission power are limited resources, and it is well known that the use of Forward Error-Correction (FEC) codes plays a fundamental role in increasing power and spectrum efficiency. However, Shannon demonstrated [1] that the development of error-correction techniques with increasing coding gain has a limit arising from the channel capacity. Since then, FEC code designers have been looking for new codes that approach the Shannon limit as closely as possible. However, each increase in coding gain comes at the expense of decoder complexity, and its practical feasibility must be evaluated for the available technologies [2]. A new class of binary parallel concatenated Recursive Systematic Convolutional (RSC) codes, called turbo codes [3], is capable of achieving power efficiency close to the Shannon limit.
Turbo codes have been adopted by the International Telecommunication Union (ITU) to effectively improve system capacity for Third-Generation (3G) wireless high-speed data services (CDMA2000 and W-CDMA). The goal of the ITU is to achieve a harmonized 3G wireless standard that would allow users to roam anywhere in the world. Despite being a small part of the overall system, the turbo code specifications in the CDMA2000 and W-CDMA systems are designed to have as much commonality as possible toward achieving this goal [4]. Communication system designers now have a large spectrum of turbo-code decoders at their disposal.

The availability of wireless technology has revolutionized the way communication is done in our world today. Cellular and satellite technology make it possible for people to be connected to the rest of the world from anywhere. With this increased availability comes increased dependence on the underlying systems to transmit information both quickly and accurately. Because the communication channels in wireless systems can be much more hostile than in wired systems, voice and data must use forward error correction coding to reduce the probability of channel effects corrupting the information being transmitted. Turbo coding can achieve a level of performance that comes closer to theoretical bounds than more conventional coding systems. Although turbo codes are a relatively new form of error correction, their foundation is rooted in coding theory. The term "turbo" generally refers to iterative decoders intended for parallel concatenated convolutional codes as well as for serial concatenated convolutional codes [5].
Although Shannon proved the theoretical limit at which error-free communication can take place using error-correcting codes, all previous coding schemes have fallen far short of this limit. Turbo codes mimic the performance of random codes using an iterative decoding algorithm based on simple decoders individually matched to the simple constituent codes. Figure 1 shows a communication system with a turbo decoder.

Figure 1. Communication system analysis: turbo codes.

© 2009 ACADEMY PUBLISHER
Each constituent decoder sends a posteriori likelihood estimates of the decoded bits to the other decoder, and uses the corresponding estimates from the other decoder as a priori likelihoods. The uncoded information bits (corrupted by the noisy channel) are available to each decoder to initialize the a priori likelihoods. The decoders use the MAP (maximum a posteriori) bit-wise decoding algorithm, which requires the same number of states as the well-known Viterbi algorithm. The turbo decoder iterates between the outputs of the two constituent decoders until reaching satisfactory convergence. The final output is a hard-quantized version of the likelihood estimates of either of the decoders.

In this paper, we introduce a novel technique that can reduce the critical path of a turbo decoder. This is achieved by normalizing the branch metric values instead of normalizing the state metric values, as is the case in conventional implementations. The ML-MAP SISO decoder architecture with the proposed technique has been implemented to investigate its performance in terms of timing delay.

II. ENCODING

Turbo encoding employs two or more constituent recursive systematic convolutional (RSC) encoders separated by a pseudo-random interleaver. The data bits d are fed into the first encoder, which generates a set of systematic and parity bits. The data bits are passed to the second encoder after being permuted by the pseudo-random interleaver. The second encoder also generates a set of systematic and parity bits. Because sending two sets of systematic bits is redundant, the overall code is punctured by deleting the second set of systematic bits. The resulting bit stream consists of a systematic bit from the first encoder followed by the parity bits from the first and second encoders, respectively. This technique results in an overall code rate of 1/3.
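The rate-1/3 parallel concatenation described above can be sketched in Python as a software reference model. The memory-2 constituent code with octal generators (7, 5) and the toy interleaver permutation used here are illustrative assumptions, not the codes specified for CDMA2000 or W-CDMA.

```python
def rsc_parity(bits):
    """Memory-2 recursive systematic convolutional encoder with
    feedback polynomial 1+D+D^2 and feedforward 1+D^2 (octal 7, 5).
    Returns the parity stream; the systematic stream is the input."""
    s1 = s2 = 0
    out = []
    for d in bits:
        fb = d ^ s1 ^ s2     # recursive feedback bit
        out.append(fb ^ s2)  # feedforward parity output
        s1, s2 = fb, s1      # shift the register
    return out

def turbo_encode(bits, perm):
    """Rate-1/3 parallel concatenation: systematic bit, parity from
    encoder 1, parity from encoder 2 (fed the interleaved bits).
    The second encoder's systematic output is punctured away."""
    p1 = rsc_parity(bits)
    p2 = rsc_parity([bits[i] for i in perm])
    stream = []
    for k in range(len(bits)):
        stream += [bits[k], p1[k], p2[k]]
    return stream

bits = [1, 0, 1, 1, 0, 0, 1, 0]
perm = [3, 7, 0, 5, 2, 6, 1, 4]   # toy pseudo-random interleaver
coded = turbo_encode(bits, perm)
```

Alternately dropping `p1`/`p2` entries from the multiplexed stream would give the punctured rate-1/2 variant.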
The code rate can be increased to 1/2 by alternately puncturing the parity bits from each of the constituent encoders before transmission. As the code rate increases, bandwidth efficiency improves; performance, however, is degraded since the decoder has less information to use in making a decision.

III. DECODING

A turbo decoder consists of two soft-input soft-output (SISO) decoders with an interleaver/deinterleaver between them. The decoding process in a turbo decoder is performed iteratively through the two SISO decoders via the interleaver and the deinterleaver. The input symbols x and y, together with the a priori value Le1 (initially zero), are used by SISO decoder 1, which produces the log-likelihood ratio lr1 and the extrinsic value e2. Then the input symbol data x (via the interleaver) and the a priori value Le2 (the interleaved value of e2) are used by SISO decoder 2, which produces e1 for SISO decoder 1 and the soft-output value lr2.

Figure 2. Turbo decoder structure.

IV. DECODING ALGORITHMS

This section highlights two classes of trellis-based algorithms which are typically used to decode turbo codes, with emphasis on the Log-MAP algorithm. The Viterbi algorithm (VA) accepts soft inputs but produces hard outputs. Although the maximum a posteriori (MAP) algorithm accepts soft inputs and produces soft outputs, it suffers from numerical instability. Their derivatives, shown below the dotted line in Figure 3, accept soft inputs and produce soft outputs; these algorithms, therefore, are suitable for use in turbo decoding applications. SISO algorithms are necessary for turbo decoding because the decoders are required to share their extrinsic information with each other. Although SISO decoding algorithms are more computationally complex, they allow iterative sharing of results between decoders, which permits the use of powerful concatenated coding structures.

Figure 3. Trellis-based decoding algorithms.

V. ML-MAP ALGORITHM

In this paper, the ML-MAP algorithm is used to implement the decoder architecture [6]. This algorithm can be used to reduce the implementation complexity of the decoder. The decoding process in the MAP algorithm performs calculations of the forward and backward state metric values to obtain the log-likelihood ratio (LLR)
values, which carry the decoded bit information and reliability values. The LLR value is given by

    lr_k = ln Σ_{(S',S): d_k=1} α_{k-1}(S') γ_k(S',S) β_k(S)  −  ln Σ_{(S',S): d_k=0} α_{k-1}(S') γ_k(S',S) β_k(S)

where γ, α, and β represent the branch, forward, and backward state metric values, respectively; the subscripts k and S denote time and state. The LLR value lr_k is calculated from the metric values at all states S of times k and k−1. The recursions for α and β can be expressed in logarithmic form as

    ln α_k(S)    = ln Σ_{S'} exp( ln α_{k-1}(S') + ln γ_k(S',S) )
    ln β_{k-1}(S') = ln Σ_{S}  exp( ln β_k(S) + ln γ_k(S',S) )

where the branch metric γ is calculated from the a priori information, the channel reliability value, the input symbols x and y, the systematic bit, and the parity bit. The a priori information is obtained from the LLR value computed in the previous decoding pass after subtracting the input symbol data and the a priori value from that LLR value. The MAP algorithm, which uses the above equations, is not suitable for hardware implementation due to the logarithm function. The Jacobian logarithm approximation is therefore used:

    ln( e^a + e^b ) = max(a, b) + ln( 1 + e^{−|a−b|} )

This approximation is used to implement the state metric unit (SMU) and the LLR computation unit (LCU) in the Log-MAP and ML-MAP SISO decoders. The second term on the right-hand side is a correction term which can be implemented through a simple look-up table [7]; the Log-MAP algorithm includes this correction term. In this paper, however, the ML-MAP SISO decoder is implemented without the correction term, i.e., ln(e^a + e^b) ≈ max(a, b).

VI. DECODING PROCESS USING SLIDING WINDOW

The conventional MAP decoding process has very high latency due to the processing of forward and backward calculations over all trellis stages. Computing the LLR values requires the state metric values generated by the forward and backward processes. Therefore, a large memory is required to store the state metric values, and its size depends on the input data block size.
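The log-domain recursion and the max-log approximation of Section V can be sketched in Python. The 2-state toy trellis, the branch metric values, and the function names here are illustrative assumptions for a software model, not the paper's 8-state RTL.

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm: ln(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log approximation: drops the correction term (the ML-MAP choice)."""
    return max(a, b)

# Toy 2-state trellis: trellis[s] lists (previous_state, input_bit) pairs
# that lead into state s.
trellis = {0: [(0, 0), (1, 1)], 1: [(0, 1), (1, 0)]}

def forward(gammas, star=max_log):
    """Forward recursion: gammas[k][(s_prev, s)] holds ln(gamma_k).
    Returns the history of ln(alpha) vectors, one per trellis stage."""
    alpha = [0.0, -math.inf]   # trellis starts in state 0
    history = [alpha]
    for g in gammas:
        alpha = [
            star(*(alpha[sp] + g[(sp, s)] for sp, _ in trellis[s]))
            for s in trellis
        ]
        history.append(alpha)
    return history

g = {(0, 0): 0.5, (1, 0): -0.5, (0, 1): -0.5, (1, 1): 0.5}
alphas = forward([g, g])
```

Swapping `star=max_star` back in recovers the Log-MAP recursion with the look-up-table correction term.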
The sliding window (SW) method is used to reduce the memory size by dividing the input data into sub-blocks. In this architecture the sub-block size is fixed, although variable block sizes are also possible. In the proposed design, the smallest UMTS standard specification is taken to fix the sub-block size at 40. Figure 4 shows the graph of data flow for the decoding process on the time and block axes.

Figure 4. A graph of data flow with the sliding window method.

VII. ML-MAP SISO DECODER ARCHITECTURE

Figure 5 shows the SISO decoder architecture, which consists of the forward and backward state metric, LLR computation, and memory (LIFO and FIFO) blocks. FIFOs 1 and 2 are used to buffer the input data symbols, and LIFOs 3 and 4 store the forward state metric and the LLR values, respectively. The SISO decoder is built with two backward state metric units, together with forward state and branch metric units.

Figure 5. SISO architecture.

A. Branch and State Metric Unit

The branch and state metric units (BMU and SMU) are implemented using the ML-MAP algorithm. The conventional BMU and SMU consist of branch metric calculation, add, compare, select, and normalization processes, as shown in Figure 6. The general SMU in a turbo SISO decoder must include the normalization process to avoid overflow of the state metric values. The branch metric values are obtained from the input data symbols. The new state metric values are calculated recursively in a single clock cycle using add, compare (C), select, and normalization (N) processes applied to the branch metric and state metric values. The critical path delay is determined by these processes.

Figure 6. Conventional structures of the branch and state metric units.

In the proposed architecture, shown in Figure 7, normalization is performed on the branch metric values themselves. This normalization method leads to a simplified SMU at the cost of a more complex BMU. The novel architecture reduces the critical path delay significantly by eliminating the state metric normalization process used in the conventional SMU.

Figure 7. BMU unit.

Figure 8. SMU unit.

B. LLR Computation Unit

The LLR values are calculated using the forward (α0–7) and backward (β0–7) state metric values and the branch metric (γ0–1) values of all states. The LLR computation unit (LCU) is similar to the SMU; its 3-stage compare-and-select process results in a long critical path delay. In order to reduce the critical path delay, the LCU is pipelined.

Figure 9. Conventional LCU unit.

Figure 10. Proposed LCU structure.

VIII. SIMULATION AND SYNTHESIS REPORT

The proposed turbo decoder architecture is implemented using Verilog HDL. The functionality is checked through simulation, and the result is shown in Figure 11. The code is synthesized, and the RTL view is shown in Figure 12.
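The branch-metric normalization of Section VII, which produces the speed-up reported below, can be modeled in software as follows. The 2-state connectivity, metric values, and function names are illustrative assumptions; the point of the sketch is that shifting all branch metrics of a trellis step by a common constant shifts every state metric equally, so metric differences (and hence the LLRs) are preserved while the recursive loop loses its normalization stage.

```python
# Conventional SMU: the normalization (max search + subtract) sits inside
# the recursive add-compare-select loop, i.e. on the critical path.
def acs_conventional(alpha, gamma, preds):
    new = [max(alpha[sp] + gamma[(sp, s)] for sp in preds[s])
           for s in sorted(preds)]
    m = max(new)                       # extra compare tree every cycle
    return [a - m for a in new]

# Proposed scheme: subtract a per-step constant from the *branch* metrics
# before the recursion; the ACS loop then needs no normalization step.
def normalize_branch(gamma):
    m = max(gamma.values())
    return {k: v - m for k, v in gamma.items()}

def acs_proposed(alpha, gamma_norm, preds):
    return [max(alpha[sp] + gamma_norm[(sp, s)] for sp in preds[s])
            for s in sorted(preds)]

preds = {0: [0, 1], 1: [0, 1]}         # toy 2-state trellis connectivity
alpha = [0.0, 0.0]
gamma = {(0, 0): 1.0, (1, 0): 0.2, (0, 1): 0.3, (1, 1): 0.8}

a_conv = acs_conventional(alpha, gamma, preds)
a_prop = acs_proposed(alpha, normalize_branch(gamma), preds)
```

Because both schemes only shift the state metrics by a constant, the pairwise metric differences that feed the LCU are identical.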
Figure 11. SISO decoder simulation result.

Figure 12. RTL schematic of the turbo decoder.

From the synthesis result, the proposed architecture has a critical path delay of 12.626 ns, a 54.4% speed-up compared to the conventional architecture. In both the conventional and the proposed method the critical path depends on the SMU. In the conventional method the state metric values are normalized; since the proposed method normalizes the branch metric values instead, the critical path of the SMU is reduced compared with the conventional SMU.

CONCLUSION AND FUTURE WORK

This paper has presented a turbo soft-in soft-out (SISO) decoder based on the Max-Log maximum a posteriori (ML-MAP) algorithm. A novel technique based on branch metric normalization is introduced to improve the speed performance of the decoder. This is achieved by normalizing the branch metric values instead of normalizing the state metric values, as is done in the conventional method. This normalization leads to a simplified SMU but a more complex BMU. The proposed architecture has a critical path delay of only 12.626 ns, compared with 23.207 ns for the conventional architecture. For the conventional architecture the maximum delay is caused by the LCU, whereas for the proposed architecture it is due to the SMU. The ML-MAP SISO decoder architecture with the proposed technique has been simulated to investigate its performance in terms of area usage and timing delay.

REFERENCES

[1] Al-Mohandes and M. I. Elmasry, "Iteration reduction of turbo decoders using an efficient stopping/cancellation technique," in Proc. ISCAS 2002, Scottsdale, AZ, May 2002, vol. 1, pp. 609-612.
[2] Al-Mohandes and M. I. Elmasry, "A new efficient dynamic-iterative technique for turbo decoders," in Proc. MWSCAS 2002, Tulsa, OK, Aug. 2002, vol. 3, pp. 180-183.
[3] Robertson, E. Villebrun, and P. Hoeher, "A comparison of optimal and sub-optimal decoding algorithms," in Proc. IEEE Int. Conf. Commun., 1995, pp. 1009-1013.
[4] Al-Mohandes and M. I. Elmasry, "Design of an energy-efficient turbo decoder for 3rd generation wireless applications," in Proc. ICM 2003, Cairo, Egypt, Dec. 2003, pp. 127-130.
[5] E. Boutillon, C. Douillard, and G. Montorsi, "Iterative decoding of concatenated convolutional codes: implementation issues," Proceedings of the IEEE, vol. 95, no. 6, June 2007, pp. 1201-1227.
[6] 3GPP TS 25.212, "Multiplexing and Channel Coding (FDD)," v. 4.6.0, Sept. 2002.
[7] A. Mettas, "Reliability allocation and optimization for complex systems," in Proc. Annual Reliability and Maintainability Symposium, Los Angeles, CA, Jan. 24-27, 2000.
[8] Y. Zhang and J. Jiang, "Bibliographical review on reconfigurable fault-tolerant control systems," in Proc. 5th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes, Washington, D.C., June 9-11, 2003, pp. 265-276.
[9] R. Salazar Moreno and A. Rojano Aguilar, "Failure distribution at component level," American Society of Agricultural and Biological Engineers, Paper No. 033003, 2003 ASAE Annual Meeting, 2003.
[10] E. Yeo, B. Nikolic, and V. Anantharam, "Iterative decoder architectures," IEEE Commun. Magazine, Aug. 2003, pp. 132-140.
[11] Masera, M. Mazza, G. Piccinini, F. Viglione, and M. Zamboni, "Architecture strategies for low-power VLSI turbo decoders," IEEE Trans. on VLSI Systems, vol. 10, no. 3, June 2002, pp. 279-285.
