EBookClubs

Read Books & Download eBooks Full Online

Book Optimisation conjointe de codes LDPC (Low Density Parity Check) et de leurs architectures de décodage et mise en oeuvre sur FPGA (Field Programmable Gate Array)

Download or read book Optimisation conjointe de codes LDPC Low Density Parity Check et de leurs architectures de décodage et mise en oeuvre sur FPGA Field Programmable Gate Array written by Jean-Baptiste Doré and published by . This book was released on 2007 with total page 214 pages. Available in PDF, EPUB and Kindle. Book excerpt: The discovery of Turbo codes in the 1990s, and more generally of the iterative principle applied to signal processing, revolutionized the way digital communication systems are approached. This notable advance led to the rediscovery of the error-correcting codes invented by R. Gallager in 1963, known as Low Density Parity Check (LDPC) codes. The integration of so-called advanced coding techniques such as Turbo codes and LDPC codes is becoming widespread in communication standards. In this context, the objective of this thesis is to study new LDPC coding structures together with decoder architectures that combine performance and flexibility. First, a broad presentation of LDPC codes is given, including the notation and the algorithmic tools needed to understand them. This introduction to LDPC codes highlights the benefit of jointly designing the coding/decoding system and the hardware architectures. With this in mind, a particularly interesting family of LDPC codes is described. In particular, we propose code construction rules that constrain the Hamming distance spectrum of the code. These constraints are incorporated into a new code-design algorithm that works on a compressed graph representation of the code. The structural properties of the code are then exploited to define the decoding algorithm. This algorithm, characterized by the fact that it treats part of the code as a convolutional code, converges faster than the algorithms usually encountered while allowing great flexibility in terms of code rates. Various decoder architectures are then described and discussed. Constraints on the codes are then presented so as to fully exploit the properties of the architectures. Finally, one of the proposed architectures is evaluated by implementing a decoder on a programmable device. In several contexts, performance and complexity measurements demonstrate the value of the proposed architecture.
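The Hamming distance spectrum constrained by the construction rules above can be illustrated with a brute-force enumeration; the short Python sketch below is generic (not code from the thesis) and uses a toy (7,4) Hamming parity-check matrix as its example.

    # A minimal sketch: brute-force Hamming distance spectrum of a small binary
    # code from its parity-check matrix H, enumerating every length-n word and
    # keeping those with zero syndrome. Only practical for toy codes (n <= ~20).
    import itertools
    import numpy as np

    def distance_spectrum(H):
        m, n = H.shape
        spectrum = [0] * (n + 1)
        for bits in itertools.product([0, 1], repeat=n):
            x = np.array(bits, dtype=int)
            if not np.any(H.dot(x) % 2):      # H * x = 0 (mod 2) -> codeword
                spectrum[int(x.sum())] += 1
        return spectrum                        # spectrum[w] = codewords of weight w

    # Example: a (7,4) Hamming code; its minimum distance is 3.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    print(distance_spectrum(H))   # -> [1, 0, 0, 7, 7, 0, 0, 1]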

Book Universal Decoder for Low Density Parity Check, Turbo and Convolutional Codes

Download or read book Universal Decoder for Low Density Parity Check Turbo and Convolutional Codes written by Ahmed Refaey Ahmed Hussein and published by . This book was released on 2011 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: Many wireless communication systems have adopted turbo codes and convolutional codes as the forward error correction (FEC) scheme for the data and general channels. However, some versions of these systems propose LDPC codes for error correction, because of the implementation complexity of turbo decoders and the success of certain irregular LDPC codes in matching the performance of turbo codes, and in some cases exceeding it with lower decoding complexity. In practice, the new versions of these standards operate in real devices side by side with the older ones, which are based on turbo and convolutional codes. Indeed, both families of codes offer excellent performance in terms of bit error rate (BER). Consequently, it seems a good idea to try to connect them in a way that improves technology transfer and hybridization between the two methods. Thus, the efficient design of universal decoders for convolutional, turbo and LDPC codes is critical to the future implementation of wireless systems. Moreover, an efficient decoder for turbo and convolutional codes is mandatory for the implementation of these wireless systems. This could be achieved by developing a unified decoding algorithm for convolutional, turbo and LDPC codes through simulations and analytical studies, followed by an implementation phase. To introduce such a universal decoder, two approaches exist, based either on the maximum a posteriori (MAP) algorithm or on the belief propagation (BP) algorithm. On the one hand, we study a new approach for decoding convolutional and turbo codes by means of the belief propagation (BP) decoder used for low-density parity-check (LDPC) codes. We introduce a general scheme for representing convolutional codes by parity-check matrices, and the parity-check matrices of turbo codes are obtained by treating parallel turbo codes as concatenated convolutional codes. The BP algorithm provides a very efficient general methodology for designing low-complexity iterative decoding algorithms for all classes of convolutional codes as well as turbo codes. While a small performance loss is observed when decoding turbo codes with BP instead of MAP, this is compensated by the lower complexity of the BP algorithm and the inherent advantages of a unified decoding architecture. Furthermore, this work exploits the tail-biting representation of the parity-check matrix of convolutional and turbo codes, which enables decoding with a unified belief propagation (BP) algorithm for new wireless communication systems such as WiMAX (Worldwide Interoperability for Microwave Access) and LTE (Long Term Evolution). On the other hand, as an alternative solution, we investigate how to build a combined decoder for these two families of codes based on the MAP algorithm. Unfortunately, this second solution requires a large amount of computation and storage capacity for its implementation. In addition, its forward and backward recursions result in long decoding delays. Meanwhile, the MAP algorithm is trellis based, and the trellis structure of an LDPC code is rather complicated because of its large parity-check matrix. Consequently, this approach can be difficult to implement efficiently, since it requires a large amount of computation and a large storage capacity. Finally, to predict the convergence threshold of turbo codes, we applied the extrinsic information transfer (EXIT) method to the corresponding decoder, treating it as a concatenation of variable and check nodes.
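The unifying idea above, that a convolutional or turbo code expressed through a parity-check matrix can be handled by the same message-passing engine as an LDPC code, can be sketched with a generic min-sum belief propagation decoder. The Python below is an illustrative sketch (not the author's decoder) and uses a (7,4) Hamming matrix as a stand-in for "any code given by H".

    # Generic flooding min-sum decoder working from any binary parity-check matrix.
    import numpy as np

    def minsum_decode(H, llr, max_iter=50):
        m, n = H.shape
        rows = [np.nonzero(H[i])[0] for i in range(m)]   # variables in each check
        msg = np.zeros((m, n))                           # check-to-variable messages
        for _ in range(max_iter):
            total = llr + msg.sum(axis=0)                # posterior LLR per variable
            hard = (total < 0).astype(int)
            if not np.any(H.dot(hard) % 2):              # every parity check satisfied
                return hard, True
            for i in range(m):
                idx = rows[i]
                v2c = total[idx] - msg[i, idx]           # extrinsic variable-to-check
                mag = np.abs(v2c)
                sgn = np.where(v2c >= 0, 1.0, -1.0)
                for k, j in enumerate(idx):
                    others = [x for x in range(len(idx)) if x != k]
                    msg[i, j] = np.prod(sgn[others]) * np.min(mag[others])
        total = llr + msg.sum(axis=0)
        return (total < 0).astype(int), False

    # A (7,4) Hamming code used as a toy "any code with a parity-check matrix".
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    llr = np.array([2.5, -0.5, 3.0, 1.0, 2.0, 1.5, 2.2])  # all-zero codeword, one weak bit
    print(minsum_decode(H, llr))                          # recovers the all-zero codeword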

Book Decoder Architectures and Implementations for Quasi-cyclic Low-density Parity-check Codes

Download or read book Decoder Architectures and Implementations for Quasi cyclic Low density Parity check Codes written by Xiaoheng Chen and published by . This book was released on 2011 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Since the rediscovery of low-density parity-check (LDPC) codes in the late 1990s, tremendous progress has been made in code construction and design, decoding algorithms, and decoder implementation of these capacity-approaching codes. Recently, LDPC codes have been considered for applications such as high-speed satellite and optical communications, hard disk drives, and high-density flash-memory-based storage systems, which require that the codes be free of error floors down to bit error rates (BER) as low as 10^-12 to 10^-15. FPGAs are usually used to evaluate the error performance of codes, since one can exploit the finite word length and extremely high internal memory bandwidth of an FPGA. Existing FPGA-based LDPC decoders fail to utilize the configurability and read-first mode of embedded memory in FPGAs, and thus result in limited throughput and code sizes. Four optimization techniques, i.e., vectorization, folding, message relocation, and circulant permutation matrix (CPM) sharing, are proposed to improve the throughput, scalability, and efficiency of FPGA-based decoders. Also, a semi-automatic CAD tool called QCSYN (Quasi-Cyclic LDPC decoder SYNthesis) is designed to shorten the implementation time of decoders. Using the above techniques, a high-rate (16129,15372) code is shown to have no error floor down to a BER of 10^-14. It is also very difficult to construct codes that do not exhibit an error floor down to 10^-15 or so. Without detailed knowledge of the dominant trapping sets, a backtracking-based reconfigurable decoder is designed to lower the error floor of a family of structurally compatible quasi-cyclic LDPC codes by one to two orders of magnitude. Hardware reconfigurability is another significant feature of LDPC decoders. A tri-mode decoder for the (4095,3367) Euclidean geometry code is designed to work with three compatible binary message-passing decoding algorithms. Note that this code contains 262080 edges (21.3 times that of the (2048,1723) 10GBASE-T code) in its Tanner graph and is the largest code ever implemented. In addition, an efficient QC-LDPC Shift Network (QSN) is proposed to reduce the interconnect delay and control logic of the circular shift network, a core component in the reconfigurable decoder that supports a family of structurally compatible codes. The interconnect delay and the control logic area are reduced by factors of 2.12 and 8, respectively. Non-binary LDPC codes are effective in combating burst errors. Using the power representation of the elements of the Galois field to organize both intrinsic and extrinsic messages, we present an efficient decoder architecture for non-binary QC-LDPC codes. The proposed decoder is reconfigurable and can be used to decode any code of a given field size. The decoder supports both regular and irregular non-binary QC-LDPC codes. Using the practical metric of throughput per unit area, the proposed implementation outperforms the best implementations published in the research literature to date.
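As a point of reference for the circulant permutation matrix (CPM) structure mentioned above, the following Python sketch (generic, not taken from the dissertation or from QCSYN) expands a small base matrix of shift values into a full quasi-cyclic parity-check matrix.

    # Each entry s >= 0 becomes a Z x Z identity cyclically shifted by s columns
    # (a circulant permutation matrix); -1 marks an all-zero block.
    import numpy as np

    def expand_qc(base, Z):
        m_b, n_b = base.shape
        H = np.zeros((m_b * Z, n_b * Z), dtype=int)
        for i in range(m_b):
            for j in range(n_b):
                s = base[i, j]
                if s >= 0:
                    H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(np.eye(Z, dtype=int), s, axis=1)
        return H

    # Toy example: a 2 x 4 base matrix with lifting factor Z = 4 gives an 8 x 16 H.
    base = np.array([[0, 1, -1, 2],
                     [3, -1, 0, 1]])
    H = expand_qc(base, Z=4)
    print(H.shape)          # (8, 16)
    print(H.sum(axis=0))    # column weights follow the base-matrix structure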

Book Towards Optimized Flexible Multi-ASIP Architectures for LDPC/Turbo Decoding

Download or read book Towards Optimized Flexible Multi ASIP Architectures for LDPC Turbo Decoding written by Purushotham Murugappa Velayuthan and published by . This book was released on 2012 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Numerous channel coding techniques are specified in the new digital communication standards, each suited to specific application requirements (frame size, type of transmission channel, signal-to-noise ratio, bandwidth, etc.). Considering emerging multi-mode and multi-standard applications, as well as the growing interest in software-defined radio and cognitive radio, combining several error correction techniques becomes unavoidable. Nevertheless, solutions that are optimal in terms of performance, power consumption and area have yet to be devised, and these objectives must not be neglected in favor of flexibility. In this context, this thesis explored the multi-ASIP architecture model with the aim of unifying the flexibility-oriented and the optimality-oriented approaches in the design of flexible channel decoders. Focusing mainly on the demanding applications of iterative decoding of turbo codes and LDPC codes, multi-ASIP channel decoder architectures are proposed that target high flexibility combined with high architectural efficiency in terms of bits/cycle/iteration/mm2. Different architectural solutions and design approaches are explored, leading to three original contributions. The first contribution concerns the design of a scalable, flexible, high-throughput multi-ASIP LDPC/turbo decoder. Several design objectives are achieved in terms of scalability, resource sharing and configuration speed. The proposed decoder, named DecASIP, supports the decoding of the LDPC and turbo codes specified in the WiFi, WiMAX and LTE standards. The scalability provided by the network-on-chip (NoC) based multi-ASIP approach makes it possible to meet the high-throughput requirements of current and future standards. The second contribution concerns the design of a parameterized ASIP for turbo decoding (TDecASIP). The objective is to study the maximum efficiency achievable by a turbo decoder based on the ASIP concept by maximizing the exploitation of sub-block parallelism. Moreover, with this architecture we demonstrated the possibility of designing parameterizable, application-dedicated processing cores using the existing ASIP design flow. The third contribution corresponds to the design of an ASIP optimized for the decoding of LDPC codes (LDecASIP). As with TDecASIP, the objective is to study the maximum efficiency achievable by an LDPC decoder based on the ASIP concept by increasing the degree of parallelism and the memory bandwidth. A fourth main contribution of this thesis concerns hardware prototyping. A complete communication platform integrating four DecASIPs for channel decoding was prototyped on an FPGA-based board. To our knowledge, this is the first published FPGA prototype of a flexible channel decoder supporting turbo and LDPC decoding with a NoC-based multi-ASIP architecture. In addition, an ASIC integration of this decoder was carried out by CEA-LETI in the MAG3D chip targeting 4G communication applications. These results demonstrate the fast design cycle and the efficiency offered by the ASIP-based design approach in this application domain, making it possible to refine the design trade-offs with respect to the various targeted objectives.

Book LDPC Code Designs, Constructions, and Unification

Download or read book LDPC Code Designs Constructions and Unification written by Juane Li and published by Cambridge University Press. This book was released on 2016-12-01 with total page 259 pages. Available in PDF, EPUB and Kindle. Book excerpt: Written by leading experts, this self-contained text provides systematic coverage of LDPC codes and their construction techniques, unifying both algebraic- and graph-based approaches into a single theoretical framework (the superposition construction). An algebraic method for constructing protograph LDPC codes is described, and entirely new codes and techniques are presented. These include a new class of LDPC codes with doubly quasi-cyclic structure, as well as algebraic methods for constructing spatially and globally coupled LDPC codes. Authoritative, yet written using accessible language, this text is essential reading for electrical engineers, computer scientists and mathematicians working in communications and information theory.

Book High-Speed Decoding of Convolutional Turbo Codes

Download or read book High Speed decoding of convolutional Turbo Codes written by David Gnaedig and published by . This book was released on 2005 with total page 261 pages. Available in PDF, EPUB and Kindle. Book excerpt: Turbo codes are obtained by concatenating several convolutional codes separated by interleavers. In 1993 they revolutionized the field of error-correcting coding by coming within a few tenths of a decibel of the theoretical Shannon limit. This performance is all the more remarkable as the iterative principle allows decoding to be carried out with limited hardware complexity. The success of turbo codes has led to their introduction in several communication standards. The growing needs in the field of broadband networks call for high-throughput implementations, which raise new challenges. The objective of this thesis is to study high-throughput decoding architectures offering the best trade-off between throughput and complexity. First, we proposed a simple model for expressing the throughput and the efficiency of an architecture. Applied to turbo decoding, this model highlights three characteristic parameters that impact the throughput and efficiency of the decoder: the degree of parallelism, the utilization rate (activity) of the processing units, and the clock frequency. We addressed each of these points by exploring a wide range of possibilities in the design space, from the joint construction of the code and the decoder to the direct optimization of decoding architectures for a predefined code or set of codes. We first proposed a new coding scheme, called turbocodes à roulettes, which minimizes the decoder memory by decoding a received codeword in parallel with several soft-input soft-output processors. To solve the resulting problem of concurrent memory accesses, we designed a new hierarchical interleaver. We then explored several solutions for improving the activity of the processors: the use of a hybrid serial/parallel architecture, and new schedules both within the processors and at the global level, in combination with the construction of suitable constrained interleavers. Finally, thanks to an original method for reducing the critical path of the recursive computation of the node metrics, we doubled the clock frequency of the decoder on an FPGA device at no additional hardware cost. Most of the techniques developed in this thesis were validated by the implementation of a turbo decoder for the WiMAX (IEEE 802.16) broadband wireless access standard, which achieves excellent error correction performance at a throughput of up to 100 Mbit/s on a single FPGA device.
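The concurrent-memory-access problem addressed by the hierarchical interleaver can be made concrete with a small check. The Python below is a hedged illustration with a generic bank mapping (bank = interleaved index // sub-block width), not the thesis' actual interleaver or memory mapping.

    # With P soft-input soft-output processors working on sub-blocks of width
    # W = N // P, a collision occurs when two processors need the same memory
    # bank during the same clock step.
    import numpy as np

    def collision_steps(perm, P):
        N = len(perm)
        W = N // P
        bad = []
        for t in range(W):
            banks = [perm[p * W + t] // W for p in range(P)]
            if len(set(banks)) < P:          # two processors hit the same bank
                bad.append(t)
        return bad

    rng = np.random.default_rng(0)
    perm = rng.permutation(48)               # a random interleaver, N = 48
    print(collision_steps(perm, P=4))        # a random interleaver usually collides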

Book On Constructing Low-density Parity-check Codes

Download or read book On Constructing Low density Parity check Codes written by Xudong Ma and published by . This book was released on 2007 with total page 125 pages. Available in PDF, EPUB and Kindle. Book excerpt: This thesis focuses on designing Low-Density Parity-Check (LDPC) codes for forward error correction. The target application is real-time multimedia communications over packet networks. We investigate two code design issues that are important in the target application scenarios: designing LDPC codes with low decoding latency, and constructing capacity-approaching LDPC codes with very low error probabilities. On designing LDPC codes with low decoding latency, we present a framework for optimizing the code parameters so that decoding can be completed after only a small number of iterations. The brute-force approach to such optimization is numerically intractable, because it involves a difficult discrete optimization program. In this thesis, we show an asymptotic approximation to the number of decoding iterations. Based on this asymptotic approximation, we propose an approximate optimization framework for finding near-optimal code parameters, so that the number of decoding iterations is minimized. The approximate optimization approach is numerically tractable. Numerical results confirm that the proposed optimization approach has excellent numerical properties, and codes with excellent performance in terms of the number of decoding iterations can be obtained. Our results show that the number of decoding iterations of codes obtained by the proposed design approach can be as small as one-fifth of that of some previously well-known codes. The numerical results also show that the proposed asymptotic approximation is generally tight even in cases far from the limiting regime. On constructing capacity-approaching LDPC codes with very low error probabilities, we propose a new LDPC code construction scheme based on 2-lifts. Based on stopping set distribution analysis, we propose design criteria for the resulting codes to have very low error floors. High error floors are the main problem with previously constructed capacity-approaching codes, preventing them from achieving very low error probabilities. Numerical results confirm that codes with very low error floors can be obtained by the proposed code construction scheme and design criteria. Compared with codes from previous standard construction schemes, which have error floors at levels of 10^-3 to 10^-4, the codes from the proposed approach have no observable error floors at levels above 10^-7. The error floors of the codes obtained by the proposed approach are also significantly lower than those of codes from previous low-error-floor construction approaches.
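The 2-lift construction mentioned above can be sketched generically: each edge of the base Tanner graph receives a +/-1 label, and each 1 in the base parity-check matrix becomes either a 2x2 identity or a 2x2 swap block, doubling the code length while preserving the local degree structure. The Python below is an illustrative sketch with a random labelling; the thesis instead selects labellings according to stopping-set design criteria.

    import numpy as np

    I2   = np.eye(2, dtype=int)
    SWAP = np.array([[0, 1], [1, 0]])

    def two_lift(H_base, signs):
        m, n = H_base.shape
        H = np.zeros((2 * m, 2 * n), dtype=int)
        for i in range(m):
            for j in range(n):
                if H_base[i, j]:
                    block = I2 if signs[i, j] > 0 else SWAP
                    H[2*i:2*i+2, 2*j:2*j+2] = block
        return H

    H_base = np.array([[1, 1, 0, 1],
                       [0, 1, 1, 1]])
    rng = np.random.default_rng(1)
    signs = rng.choice([-1, 1], size=H_base.shape)   # random labelling; the thesis
                                                     # chooses labellings that meet
                                                     # its stopping-set criteria
    print(two_lift(H_base, signs))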

Book Resource Efficient LDPC Decoders

Download or read book Resource Efficient LDPC Decoders written by Vikram Arkalgud Chandrasetty and published by Academic Press. This book was released on 2017-12-05 with total page 192 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book takes a practical hands-on approach to developing low-complexity algorithms and transforming them into working hardware. It follows a complete design approach, from algorithms to hardware architectures, and addresses some of the challenges associated with their design, providing insight into implementing innovative architectures based on low-complexity algorithms. The reader will learn:
- Modern techniques to design, model and analyze low-complexity LDPC algorithms as well as their hardware implementation
- How to reduce computational complexity and power consumption using computer-aided design techniques
- All aspects of the design spectrum, from algorithms to hardware implementation and performance trade-offs
The book also:
- Provides extensive treatment of LDPC decoding algorithms and hardware implementations
- Gives systematic guidance, from a basic understanding of LDPC codes and decoding algorithms to practical skills in implementing efficient LDPC decoders in hardware
- Includes a companion website containing C programs and MATLAB models for simulating the algorithms, and Verilog HDL code for hardware modeling and synthesis

Book High Performance Decoder Architectures For Low Density Parity Check Codes

Download or read book High Performance Decoder Architectures For Low Density Parity Check Codes written by Kai Zhang and published by . This book was released on 2012 with total page 244 pages. Available in PDF, EPUB and Kindle. Book excerpt: Abstract: Low-Density Parity-Check (LDPC) codes, which were invented by Gallager back in the 1960s, have recently attracted considerable attention. Compared with other error correction codes, LDPC codes are well suited for wireless, optical, and magnetic recording systems due to their near-Shannon-limit error-correcting capacity, high intrinsic parallelism and high throughput potential. With these remarkable characteristics, LDPC codes have been adopted in several recent communication standards such as 802.11n (Wi-Fi), 802.16e (WiMax), 802.15.3c (WPAN), DVB-S2 and CMMB. This dissertation is devoted to exploring efficient VLSI architectures for high-performance LDPC decoders and LDPC-like detectors in sparse inter-symbol interference (ISI) channels. The performance of an LDPC decoder is mainly evaluated by area efficiency, error-correcting capability, throughput and rate flexibility. In this work we investigate trade-offs between these four performance aspects and develop several decoder architectures that improve one or several aspects while maintaining acceptable values for the others ... The layered decoding algorithm, which is popular in LDPC decoding, is also adopted in this work. Simulation results show that layered decoding doubles the convergence speed of the iterative belief propagation process. Exploring the special structure of the connections between the check nodes and the variable nodes on the factor graph, we propose an effective detector architecture for generic sparse ISI channels to facilitate the practical application of the proposed detection algorithm. The proposed architecture is also reconfigurable, so that it can flexibly switch the connections on the factor graph in time-varying ISI channels.
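The layered scheduling credited above with doubling convergence speed can be sketched in a few lines; the Python below is a generic row-layered min-sum decoder (not the dissertation's VLSI architecture) in which the posterior LLRs are refreshed immediately after each check-row layer is processed.

    import numpy as np

    def layered_minsum(H, llr, max_iter=20):
        m, n = H.shape
        rows = [np.nonzero(H[i])[0] for i in range(m)]
        c2v = np.zeros((m, n))
        post = llr.astype(float)
        for _ in range(max_iter):
            for i in range(m):                      # one layer = one check row
                idx = rows[i]
                v2c = post[idx] - c2v[i, idx]       # remove this row's old messages
                mag, sgn = np.abs(v2c), np.where(v2c >= 0, 1.0, -1.0)
                for k, j in enumerate(idx):
                    others = [x for x in range(len(idx)) if x != k]
                    c2v[i, j] = np.prod(sgn[others]) * np.min(mag[others])
                post[idx] = v2c + c2v[i, idx]       # immediate posterior update
            hard = (post < 0).astype(int)
            if not np.any(H.dot(hard) % 2):
                return hard, True
        return (post < 0).astype(int), False

    # Toy usage with a (7,4) Hamming parity-check matrix.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    print(layered_minsum(H, np.array([2.5, -0.5, 3.0, 1.0, 2.0, 1.5, 2.2])))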

Book Algorithms and Architectures for Efficient Low Density Parity Check (LDPC) Decoder Hardware

Download or read book Algorithms and Architectures for Efficient Low Density Parity Check LDPC Decoder Hardware written by Tinoosh Mohsenin and published by . This book was released on 2010 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Many emerging and future communication applications require a significant amount of high-throughput data processing and operate with decreasing power budgets. This need for greater energy efficiency and improved performance of electronic devices demands a joint optimization of algorithms, architectures, and implementations. Low Density Parity Check (LDPC) decoding has received significant attention due to its superior error correction performance, and has been adopted by recent communication standards such as 10GBASE-T 10 Gigabit Ethernet. Currently, high-performance LDPC decoders are designed as dedicated blocks within a System-on-Chip (SoC) and require many processing nodes. These nodes require a large set of interconnect circuitry whose delay and power are dominated by wires. Therefore, low clock rates and increased area are a common result of the codes' inherently irregular and global communication patterns. As the delay and energy costs caused by wires are likely to increase in future fabrication technologies, new solutions dealing with future VLSI challenges must be considered. Three novel message-passing decoding algorithms, Split-Row, Multi-Split, and Split-Row Threshold, are introduced, which significantly reduce processor logic complexity and local and global interconnection. One conventional and four Split-Row Threshold LDPC decoders compatible with the 10GBASE-T standard are implemented in 65 nm CMOS and presented along with their trade-offs in error correction performance, wire interconnect complexity, decoder area, power dissipation, and speed. For additional power saving, an adaptive wordwidth decoding algorithm is proposed which switches between a 6-bit Normal Mode and a reduced 3-bit Low Power Mode depending on the SNR and the decoding iteration. A 16-way Split-Row Threshold implementation with adaptive wordwidth achieves improvements in area, throughput and energy efficiency of 3.9x, 2.6x, and 3.6x respectively, compared to a normalized MinSum implementation, with an SNR loss of 0.25 dB at BER = 10^-7. The decoder occupies a die area of 5.10 mm2, operates up to 185 MHz at 1.3 V, and attains an average throughput of 85.7 Gbps with early termination. Low-power operation at 0.6 V gives a worst-case throughput of 9.3 Gbps, above the 6.4 Gbps 10GBASE-T requirement, and an average power of 31 mW.
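A simplified reading of the Split-Row Threshold check-node update is sketched below in Python; the partitioning, the threshold T and the normalization factor are illustrative choices, and the published architecture may differ in detail. The point of the idea is that only a sign bit and a one-bit below-threshold flag cross the partition boundary.

    import numpy as np

    def split_row_threshold_check(v2c, parts, T=0.2, norm=0.75):
        # v2c: incoming variable-to-check values; parts: index partitions of the row.
        v2c = np.asarray(v2c, dtype=float)
        sgn = np.where(v2c >= 0, 1.0, -1.0)
        global_sign = np.prod(sgn)                           # sign bit shared globally
        mins = [np.min(np.abs(v2c[p])) for p in parts]       # local minima
        flags = [m < T for m in mins]                        # 1-bit threshold flags
        out = np.zeros_like(v2c)
        for pi, p in enumerate(parts):
            other_flag = any(f for qi, f in enumerate(flags) if qi != pi)
            mag = np.abs(v2c[p])
            for k, j in enumerate(p):
                others = np.delete(mag, k)
                local = np.min(others) if len(others) else T
                if local > T and other_flag:
                    local = T                                # borrow threshold information
                out[j] = norm * global_sign * sgn[j] * local
        return out

    # Example: an 8-input check node split into two partitions of four.
    v2c = [0.9, -1.4, 0.15, 2.0, -0.6, 1.1, 0.8, -0.3]
    parts = [list(range(0, 4)), list(range(4, 8))]
    print(split_row_threshold_check(v2c, parts))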

Book Algorithms and Architectures for Low-density Parity-check Codecs

Download or read book Algorithms and Architectures for Low density Parity check Codecs written by Christopher John Howland and published by . This book was released on 2001 with total page 185 pages. Available in PDF, EPUB and Kindle. Book excerpt: Looks at algorithms and architectures for implementing low-density parity-check codes to achieve reliable communication of digital data over an unreliable channel. Shows that published methods of finding LDPC codes do not result in good codes. Derives a cost metric for measuring the short cycles in a graph due to an edge and proposes an algorithm for constructing codes through minimisation of this cost metric. An encoding algorithm is derived by considering the parity-check matrix as a set of linear simultaneous equations. A parallel architecture for implementing LDPC decoders is proposed, and the advantages of this architecture in terms of throughput and power reduction are demonstrated through the implementation of two LDPC decoders in a 1.5 V 0.16 µm CMOS process.
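The encoding idea of treating the parity-check matrix as a set of simultaneous equations can be sketched with generic GF(2) elimination; the Python below is an illustration (not the thesis' algorithm) that assumes the last m columns can be reduced to an identity without column swaps.

    import numpy as np

    def systematic_form(H):
        # Row-reduce H over GF(2) so the last m columns become an identity block.
        H = H.copy() % 2
        m, n = H.shape
        col = n - m
        for r in range(m):
            pivot = np.nonzero(H[r:, col + r])[0]
            if len(pivot) == 0:
                raise ValueError("column swap needed; sketch assumes this layout is full rank")
            H[[r, r + pivot[0]]] = H[[r + pivot[0], r]]
            for rr in range(m):
                if rr != r and H[rr, col + r]:
                    H[rr] ^= H[r]
        return H                                   # [P | I_m]

    def encode(H_sys, info_bits):
        m, n = H_sys.shape
        P = H_sys[:, : n - m]
        parity = P.dot(info_bits) % 2              # each parity bit solves one equation
        return np.concatenate([info_bits, parity])

    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    cw = encode(systematic_form(H), np.array([1, 0, 1, 1]))
    print(cw, H.dot(cw) % 2)                       # syndrome is all zero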

Book From LDPC Block to LDPC Convolutional Codes

Download or read book From LDPC Block to LDPC Convolutional Codes written by Wei Liu and published by . This book was released on 2019 with total page 168 pages. Available in PDF, EPUB and Kindle. Book excerpt: Author's keywords: Belief propagation; capacity; capacity-achieving codes; low-density parity-check block codes and low-density parity-check convolutional codes; iterative message-passing decoding algorithms; maximum a posteriori decoding; maximum likelihood decoding; stability condition; threshold saturation; universality.

Book Power Characterization of a Digit-online FPGA Implementation of a Low-density Parity-check Decoder for WiMAX Applications

Download or read book Power Characterization of a Digit online FPGA Implementation of a Low density Parity check Decoder for WiMAX Applications written by Manpreet Singh and published by . This book was released on 2014 with total page 73 pages. Available in PDF, EPUB and Kindle. Book excerpt: Low-density parity-check (LDPC) codes are a class of easily decodable error-correcting codes. Published parallel LDPC decoders demonstrate high throughput and low energy per bit, but require a large silicon area. Decoders based on digit-online arithmetic (processing several bits per fundamental operation) process messages in a digit-serial fashion, reducing the area requirements, and can process multiple frames in frame-interlaced fashion. Implementations on field-programmable gate arrays (FPGAs) are usually power- and area-hungry, but provide flexibility compared with application-specific integrated circuit implementations. With the penetration of mobile devices in the electronics industry, power considerations have become increasingly important. The power consumption of a digit-online decoder depends on various factors, such as the input log-likelihood ratio (LLR) bit precision, the signal-to-noise ratio (SNR) and the maximum number of iterations. The design is implemented on an Altera Stratix IV GX EP4SGX230 FPGA, which comes on an Altera DE4 Development and Education Board. In this work, both parallel and digit-online block LDPC decoder implementations on FPGAs for WiMAX 576-bit, rate-3/4 codes are studied, and power measurements from the DE4 board are reported. The components of the system include a random-data generator, WiMAX encoder, shift-out register, additive white Gaussian noise (AWGN) generator, channel LLR buffer, WiMAX decoder and bit-error-rate (BER) calculator. The random-data generator outputs pseudo-random bit patterns through an implemented linear-feedback shift register (LFSR). Digit-online decoders with input LLR precisions ranging from 6 to 13 bits and parallel decoders with input LLR precisions ranging from 3 to 6 bits are synthesized on a Stratix IV FPGA. The digit-online decoders can be clocked at higher frequencies for higher LLR precisions. A digit-online decoder can be used to decode two frames simultaneously in frame-interlaced mode. For the 6-bit implementation of the digit-online decoder in single-frame mode, the minimum throughput achieved is 740 Mb/s at low SNRs. For the 11-bit LLR digit-online decoder in frame-interlaced mode, the minimum throughput achieved is 1363 Mb/s. A detailed analysis of the effects of SNR and LLR precision on decoder power is presented. The effect of changing the LLR precision on the maximum clock frequency and logic utilization of the parallel and digit-online decoders is also studied. In addition, the power per iteration for a 6-bit-LLR-input digit-online decoder is reported.
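The input LLR precision studied above can be illustrated with a generic BPSK/AWGN front end; in the Python sketch below, the SNR convention (signal power over noise power), the clipping range and the uniform quantizer are assumptions for illustration, not the thesis' fixed-point format.

    import numpy as np

    def channel_llrs(codeword_bits, snr_db, rng):
        # BPSK over AWGN: y = x + n, x in {+1, -1}, LLR = 2*y / sigma^2.
        sigma2 = 10 ** (-snr_db / 10)                # SNR taken as signal/noise power
        tx = 1.0 - 2.0 * codeword_bits               # bit 0 -> +1, bit 1 -> -1
        rx = tx + rng.normal(scale=np.sqrt(sigma2), size=tx.shape)
        return 2.0 * rx / sigma2

    def quantize(llr, bits, clip=8.0):
        # Uniform symmetric quantizer with 'bits' total bits and range [-clip, clip].
        levels = 2 ** (bits - 1) - 1
        step = clip / levels
        q = np.clip(np.round(llr / step), -levels, levels)
        return q * step                              # value the decoder actually sees

    rng = np.random.default_rng(7)
    llr = channel_llrs(np.zeros(8), snr_db=2.0, rng=rng)
    print(quantize(llr, bits=6))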

Book Low-complexity High-speed VLSI Design of Low-density Parity-check Decoders

Download or read book Low complexity High speed VLSI Design of Low density Parity check Decoders written by Zhiqiang Cui and published by . This book was released on 2008 with total page 218 pages. Available in PDF, EPUB and Kindle. Book excerpt: Low-Density Parity-Check (LDPC) codes have attracted considerable attention due to their capacity-approaching performance over the AWGN channel and their highly parallelizable decoding schemes. They have been considered in a variety of industry standards for next-generation communication systems. In general, LDPC codes achieve outstanding performance with large codeword lengths (e.g., N > 1000 bits), which leads to a linear increase in the size of the memory for storing all the soft messages during LDPC decoding. In next-generation communication systems, the target data rates range from a few hundred Mbit/s to several Gbit/s. To achieve such high decoding throughputs, a large number of computation units are required, which significantly increases the hardware cost and power consumption of LDPC decoders. LDPC codes are decoded using iterative decoding algorithms. The decoding latency and power consumption are linearly proportional to the number of decoding iterations, so a decoding approach with fast convergence is highly desired in practice. This thesis considers various VLSI design issues of LDPC decoders and develops efficient approaches for reducing the memory requirement, for low-complexity implementation, and for high-speed decoding of LDPC codes. We propose a memory-efficient partially parallel decoder architecture suited for quasi-cyclic LDPC (QC-LDPC) codes using the Min-Sum decoding algorithm. We develop an efficient architecture for general permutation-matrix-based LDPC codes. We have explored various approaches to linearly increase the decoding throughput with a small amount of hardware overhead. We develop a multi-Gbit/s LDPC decoder architecture for QC-LDPC codes and prototype an enhanced partially parallel decoder architecture for a Euclidean-geometry-based LDPC code on an FPGA. We propose an early stopping scheme and an extended layered decoding method to reduce the number of decoding iterations for undecodable and decodable sequences received from the channel. We also propose a low-complexity optimized 2-bit decoding approach which requires implementation complexity comparable to weighted-bit-flipping-based algorithms but has much better decoding performance and faster convergence speed.
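A generic early-stopping rule in the spirit of the one proposed above is sketched below; the syndrome test and the "no improvement for a few iterations" criterion are illustrative assumptions, not the thesis' specific scheme.

    import numpy as np

    def should_stop(H, hard_bits, history, patience=3):
        # Stop when all parity checks pass, or when the number of unsatisfied
        # checks has not improved for 'patience' iterations (likely undecodable).
        unsatisfied = int(np.sum(H.dot(hard_bits) % 2))
        history.append(unsatisfied)
        if unsatisfied == 0:
            return True, "decoded"
        if len(history) > patience and min(history[-patience:]) >= min(history[:-patience]):
            return True, "give up early"
        return False, "continue"

    # Intended use inside an iterative decoder loop:
    #   history = []
    #   for it in range(max_iter):
    #       ...one decoding iteration, producing hard_bits...
    #       stop, reason = should_stop(H, hard_bits, history)
    #       if stop: break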

Book Low-density Parity-check Codes

Download or read book Low density Parity check Codes written by Gabofetswe Alafang Malema and published by . This book was released on 2007 with total page 160 pages. Available in PDF, EPUB and Kindle. Book excerpt: The main contribution of this thesis is the development of LDPC code construction methods for some classes of structured LDPC codes and techniques for reducing decoding time. Two main methods for constructing structured codes are introduced. In the first method, column-weight-two LDPC codes are derived from distance graphs. A wide range of girths, rates and lengths is obtained compared to existing methods. The performance and implementation complexity of the obtained codes depend on the structure of their corresponding distance graphs. In the second method, a search algorithm based on bit-filling and progressive edge-growth algorithms is introduced for constructing quasi-cyclic LDPC codes. The algorithm can be used to form the distance or Tanner graph of a code. This method can also obtain codes over a wide range of parameters. The outcome of this study is a simple, programmable and high-throughput decoder architecture based on matrix permutation and space restriction techniques.
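A small structural check used when building such codes can be sketched directly from the parity-check matrix: the Tanner graph contains a length-4 cycle exactly when two columns of H share more than one check row. The Python below is a generic helper (not the thesis' construction algorithm); the second example is the column-weight-two code of a triangle distance graph, whose Tanner graph has girth 6.

    import numpy as np

    def has_four_cycle(H):
        overlap = H.T.dot(H)                 # overlap[j, k] = rows shared by columns j, k
        np.fill_diagonal(overlap, 0)
        return bool((overlap > 1).any())

    H_bad = np.array([[1, 1, 0],
                      [1, 1, 1],
                      [0, 0, 1]])
    H_ok = np.array([[1, 1, 0],              # incidence matrix of a triangle:
                     [1, 0, 1],              # a column-weight-2 code, girth 6
                     [0, 1, 1]])
    print(has_four_cycle(H_bad), has_four_cycle(H_ok))   # True False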

Book Low Power Low-density Parity-checking (LDPC) Codes Decoder Design Using Dynamic Voltage and Frequency Scaling

Download or read book Low Power Low density Parity checking ldpc Codes Decoder Design Using Dynamic Voltage and Frequency Scaling written by Weihuang Wang and published by . This book was released on 2010 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: This thesis presents a low-power LDPC decoder design based on speculative scheduling of the energy necessary to decode dynamically varying data frames in both block-fading channels and general AWGN channels. A model of a memory-efficient, low-power, high-throughput multi-rate array LDPC decoder, as well as its FPGA implementation results, is first presented. Then, I propose a decoding scheme that provides constant-time decoding and thus facilitates real-time applications where a guaranteed data rate is required. It pre-analyzes each received data frame to estimate the maximum number of iterations necessary for frame convergence. The results are then used to dynamically adjust the decoder frequency and switch between multiple voltage levels, thereby minimizing energy use. This is in contrast to conventional fixed-iteration decoding schemes that operate at a fixed voltage level regardless of the quality of the received data. Analysis shows that the proposed decoding scheme is widely applicable to both the two-phase message-passing (TPMP) and the turbo decoding message passing (TDMP) decoding algorithms in block-fading channels, and that it is independent of the specific LDPC decoder architecture. A decoder architecture utilizing our recently published multi-rate decoding architecture for general AWGN channels is also presented. The result of this thesis is a decoder design scheme that provides a judicious trade-off between power consumption and coding gain.
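The speculative scheduling idea can be sketched as "estimate the iterations a frame needs, then pick the slowest voltage/frequency pair that still meets the frame deadline"; in the Python below, the iteration estimator, the operating-point table and the cycles-per-iteration figure are all illustrative assumptions, not the thesis' models.

    import numpy as np

    # Hypothetical operating points: (voltage in V, clock in MHz).
    VF_LEVELS = [(0.8, 100.0), (1.0, 200.0), (1.2, 300.0)]
    CYCLES_PER_ITERATION = 2000.0

    def estimate_iterations(H, hard_bits, max_iter=15):
        # Illustrative rule: more unsatisfied checks -> more iterations expected.
        unsatisfied = int(np.sum(H.dot(hard_bits) % 2))
        return min(max_iter, 2 + unsatisfied // 2)

    def pick_level(est_iters, deadline_us):
        needed_cycles = est_iters * CYCLES_PER_ITERATION
        for volt, mhz in VF_LEVELS:                  # try the slowest (lowest V) first
            if needed_cycles / mhz <= deadline_us:   # MHz = cycles per microsecond
                return volt, mhz
        return VF_LEVELS[-1]                         # fall back to the fastest level

    H = np.array([[1, 1, 0, 1], [0, 1, 1, 1]])
    hard = np.array([1, 0, 0, 1])
    est = estimate_iterations(H, hard)
    print(est, pick_level(est, deadline_us=60.0))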

Book Efficient VLSI Architectures for Non-binary Low Density Parity Check Decoding

Download or read book Efficient VLSI Architectures for Non binary Low Density Parity Check Decoding written by Fang Cai and published by . This book was released on 2011 with total page 95 pages. Available in PDF, EPUB and Kindle. Book excerpt: Non-binary low-density parity-check (NB-LDPC) codes can achieve better error-correcting performance than binary LDPC codes when the code length is moderate, at the cost of higher decoding complexity. The high complexity is mainly caused by the complicated computations in the check node processing and the large memory requirement. In this thesis, two VLSI designs for NB-LDPC decoders based on two novel check node processing schemes are proposed. The first design is based on forward-backward check node processing. A novel scheme and corresponding architecture are developed to implement the elementary step of the check node processing. In our design, layered decoding is applied and only n_m < q messages are kept on each edge of the associated Tanner graph. The computation units and the scheduling of the computations are optimized in the context of layered decoding to reduce the area requirement and increase the speed. This thesis also introduces an overlapped method for the check node processing among different layers to further speed up the decoding. Complexity and latency analysis shows that our design is much more efficient than any previous design. Our proposed decoder for a (744, 653) code over GF(32) has also been synthesized on a Xilinx Virtex-2 Pro FPGA device. It can achieve a throughput of 9.30 Mbps when 15 decoding iterations are carried out. The second design is based on a proposed trellis-based check node processing scheme. The proposed scheme first sorts out a limited number of the most reliable variable-to-check (v-to-c) messages; the check-to-variable (c-to-v) messages to all connected variable nodes are then derived independently from the sorted messages without noticeable performance loss. Compared to the previous iterative forward-backward check node processing, the proposed scheme not only significantly reduces the computational complexity but also eliminates the memory required for storing the intermediate messages generated by the forward and backward processes. Inspired by this novel c-to-v message computation method, we propose to store the most reliable v-to-c messages as 'compressed' c-to-v messages. The c-to-v messages are recovered from the compressed format when needed. Accordingly, the memory requirement of the overall decoder can be substantially reduced. Compared to the previous Min-max decoder architecture, the proposed design for an (837, 726) code over GF(32) can achieve the same throughput with only 46% of the area.
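One elementary step of forward-backward check-node processing under the Min-max rule can be sketched as follows; the Python below is the generic textbook form over GF(2^p) (field addition as XOR, edge coefficients assumed already absorbed into the inputs), not this thesis' trellis-based or compressed-message scheme.

    import numpy as np

    def minmax_elementary(A, B):
        # Combine two message vectors indexed by GF(2^p) symbols; messages are
        # nonnegative reliabilities with 0 for the most likely symbol.
        q = len(A)                              # q = 2**p symbols
        C = np.full(q, np.inf)
        for a in range(q):
            for b in range(q):
                d = a ^ b                       # GF(2^p) addition
                C[d] = min(C[d], max(A[a], B[b]))
        return C - C.min()                      # renormalize so the best symbol is 0

    # Example over GF(4): combine two incoming variable-to-check message vectors.
    A = np.array([0.0, 1.2, 3.0, 2.1])
    B = np.array([0.4, 0.0, 1.8, 2.5])
    print(minmax_elementary(A, B))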