Forward error correction
In telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels.
The central idea is that the sender encodes the message in a redundant way by using an error-correcting code (ECC).
The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code.
The redundancy allows the receiver to detect a limited number of errors that may occur anywhere in the message, and often to correct these errors without retransmission. FEC gives the receiver the ability to correct errors without needing a reverse channel to request retransmission of data, but at the cost of a fixed, higher forward channel bandwidth. FEC is therefore applied in situations where retransmissions are costly or impossible, such as when broadcasting to multiple receivers in multicast. FEC information is usually added to mass storage devices to enable recovery of corrupted data.
FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many FEC coders can also generate a bit-error rate (BER) signal which can be used as feedback to fine-tune the analog receiving electronics.
The maximum fraction of errors or missing bits that can be corrected is determined by the design of the FEC code, so different forward error correcting codes are suitable for different conditions.
How it works
FEC is accomplished by adding redundancy to the transmitted information using a predetermined algorithm. A redundant bit may be a complex function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are non-systematic.
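To make the distinction concrete, here is a minimal sketch of a systematic code: a single parity-check code whose output embeds the data bits verbatim and appends one redundant bit. The function names are illustrative, not from any library.

```python
def spc_encode(data_bits):
    """Append an even-parity bit; the input appears unchanged (systematic)."""
    parity = 0
    for b in data_bits:
        parity ^= b
    return data_bits + [parity]

def spc_check(codeword):
    """True if the received word has even parity (no odd number of bit errors)."""
    parity = 0
    for b in codeword:
        parity ^= b
    return parity == 0

assert spc_encode([1, 0, 1]) == [1, 0, 1, 0]
assert spc_check([1, 0, 1, 0])
assert not spc_check([1, 1, 1, 0])  # a single flipped bit is detected (not correctable)
```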
A simplistic example of FEC is to transmit each data bit 3 times, which is known as a (3,1) repetition code. Through a noisy channel, a receiver might see 8 versions of the output; see the table below.
| Triplet received | Interpreted as |
| --- | --- |
| 000 | 0 (error free) |
| 001 | 0 |
| 010 | 0 |
| 100 | 0 |
| 111 | 1 (error free) |
| 110 | 1 |
| 101 | 1 |
| 011 | 1 |
This allows an error in any one of the three samples to be corrected by "majority vote", as the decoder sketch after the list below illustrates. The correcting ability of this FEC is:
- Up to 1 bit of a triplet in error, or
- up to 2 bits of a triplet omitted (cases not shown in the table).
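A minimal sketch of this repetition scheme, assuming bits are plain 0/1 integers (function names are illustrative):

```python
def rep3_encode(bits):
    """(3,1) repetition code: send every data bit three times."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(received):
    """Decode each triplet by majority vote, matching the table above."""
    decoded = []
    for i in range(0, len(received), 3):
        triplet = received[i:i + 3]
        decoded.append(1 if sum(triplet) >= 2 else 0)  # majority vote
    return decoded

# One corrupted sample per triplet is corrected; two or more are not.
assert rep3_decode([0, 0, 1,  1, 0, 1]) == [0, 1]
```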
Though simple to implement and widely used, this triple modular redundancy is a relatively inefficient FEC. Better FEC codes typically examine the last several dozen, or even the last several hundred, previously received bits to determine how to decode the current small handful of bits (typically in groups of 2 to 8 bits).
Averaging noise to reduce errors
FEC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data.

- Because of this "risk-pooling" effect, digital communication systems that use FEC tend to work well above a certain minimum signal-to-noise ratio and not at all below it.
- This all-or-nothing tendency (the cliff effect) becomes more pronounced as stronger codes are used that more closely approach the theoretical Shannon limit.
- Interleaving FEC-coded data can reduce the all-or-nothing properties of transmitted FEC codes when the channel errors tend to occur in bursts; a sketch of a block interleaver follows this list. However, this method has limits; it is best used on narrowband data.
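To see why interleaving helps with bursts, consider a rectangular block interleaver (the dimensions here are arbitrary): a burst of consecutive channel errors is spread across many codewords, each of which then sees few enough errors to correct.

```python
def interleave(symbols, rows, cols):
    """Write row by row into a rows x cols grid, read out column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse operation: write column by column, read row by row."""
    assert len(symbols) == rows * cols
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = symbols[i]
            i += 1
    return out

data = list(range(12))
sent = interleave(data, rows=3, cols=4)
# A burst hitting 3 consecutive transmitted symbols lands in 3 different
# rows after deinterleaving, so each row's codeword sees at most one error.
assert deinterleave(sent, rows=3, cols=4) == data
```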
Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse.
However, some systems adapt to the given channel error conditions:
- Hybrid automatic repeat-request uses a fixed FEC method as long as the FEC can handle the error rate, then switches to ARQ when the error rate gets too high.
- Adaptive modulation and coding uses a variety of FEC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed; a rate-selection sketch follows this list.
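As an illustration of the adaptive idea only (not any particular standard's algorithm), a transmitter might pick its code rate from a channel-quality estimate. The thresholds below are invented for the sketch.

```python
def select_code_rate(snr_db):
    """Pick a FEC code rate from the measured SNR (illustrative thresholds)."""
    if snr_db > 20:
        return "5/6"   # good channel: little redundancy needed
    if snr_db > 10:
        return "2/3"
    return "1/2"       # poor channel: spend more bits on error correction

assert select_code_rate(25) == "5/6"
assert select_code_rate(5) == "1/2"
```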
Types of FEC
The two main categories of FEC codes are block codes and convolutional codes.
- Block codes work on fixed-size blocks (packets) of bits or symbols of predetermined size. Practical block codes can generally be decoded in time polynomial in their block length.
- Convolutional codes work on bit or symbol streams of arbitrary length. They are most often decoded with the Viterbi algorithm, though other algorithms are sometimes used; see the encoder sketch after this list. Viterbi decoding allows asymptotically optimal decoding efficiency with increasing constraint length of the convolutional code, but at the expense of exponentially increasing complexity. A convolutional code can be turned into a block code, if desired, by "tail-biting".
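To make the convolutional side concrete, here is a sketch of the classic rate-1/2, constraint-length-3 encoder with generator polynomials 7 and 5 in octal (a textbook example; a matching Viterbi decoder is omitted for brevity).

```python
G1, G2 = 0b111, 0b101  # generator taps applied to the 3-bit window

def parity(x):
    """Parity (XOR) of the set bits of x."""
    return bin(x).count("1") % 2

def conv_encode(bits):
    state = 0                            # two most recent input bits
    out = []
    for b in bits:
        window = (b << 2) | state        # newest bit in the high position
        out += [parity(window & G1), parity(window & G2)]
        state = (window >> 1) & 0b11     # slide the window
    return out

# The impulse response is 11 10 11, i.e. the generators themselves.
assert conv_encode([1, 0, 0]) == [1, 1, 1, 0, 1, 1]
```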
There are many types of block codes, but among the classical ones the most notable is Reed–Solomon coding because of its widespread use on the Compact Disc, the DVD, and in hard disk drives. Other examples of classical block codes include Golay, BCH, multidimensional parity, and Hamming codes.
Hamming ECC is commonly used to correct NAND flash memory errors. This provides single-bit error correction and 2-bit error detection. Hamming codes are only suitable for more reliable single-level cell (SLC) NAND. Denser multi-level cell (MLC) NAND requires stronger multi-bit correcting ECC such as BCH or Reed–Solomon.
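The lead above mentions the Hamming (7,4) code; a minimal sketch of its encoder and syndrome decoder follows. Note this is plain single-error correction: the 2-bit error detection mentioned above requires an extended Hamming code with an extra overall parity bit, which is omitted here.

```python
def hamming74_encode(d):
    """Encode 4 data bits; positions 1..7, parity at the power-of-two positions."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = no error, else 1-based error position
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # flip one bit in transit
assert hamming74_decode(word) == [1, 0, 1, 1]
```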
Classical block codes are usually implemented using hard-decision algorithms, which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, soft-decision algorithms like the Viterbi decoder process (discretized) analog signals, which allows for much higher error-correction performance than hard-decision decoding.
Nearly all classical block codes apply the algebraic properties of finite fields.
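As a small illustration of the finite-field arithmetic these codes rely on, here is a sketch of GF(2^8) addition and multiplication using the reduction polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1), a common, though not universal, choice in Reed–Solomon implementations.

```python
def gf256_add(a, b):
    """Addition in GF(2^8) is bitwise XOR (no carries)."""
    return a ^ b

def gf256_mul(a, b, poly=0x11D):
    """Carry-less multiply, reduced modulo the irreducible polynomial."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:        # reduce whenever the degree reaches 8
            a ^= poly
    return result

# Every nonzero element has a multiplicative inverse (shown by brute force).
assert any(gf256_mul(0x53, x) == 1 for x in range(1, 256))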
Concatenated FEC codes for improved performance
Classical (algebraic) block codes and convolutional codes are frequently combined in concatenated coding schemes in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed–Solomon) with larger symbol size and block length "mops up" any errors made by the convolutional decoder. Single-pass decoding with this family of error correction codes can yield very low error rates, but for long-range transmission conditions (like deep space) iterative decoding is recommended.

Concatenated codes have been standard practice in satellite and deep space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. The Galileo craft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna.
Low-density parity-check (LDPC)
Low-density parity-check (LDPC) codes are a class of recently re-discovered, highly efficient linear block codes. They can provide performance very close to the channel capacity (the theoretical maximum) using an iterated soft-decision decoding approach, at linear time complexity in terms of their block length. Practical implementations can draw heavily from the use of parallelism.
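The following toy sketch illustrates the iterate-against-the-parity-checks idea in its simplest hard-decision form (Gallager's bit-flipping). The sparse parity-check matrix here is just the row and column checks of a 3x3 two-dimensional parity arrangement, chosen so the example is verifiable by hand; it is not a real LDPC construction, and production decoders use soft-decision belief propagation instead.

```python
H = [  # 6 checks on 9 bits; bit r*3+c is in row-check r and column-check 3+c
    [1, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1, 0, 0, 1],
]

def bit_flip_decode(word, max_iters=10):
    word = word.copy()
    for _ in range(max_iters):
        unsat = [sum(h[i] * word[i] for i in range(9)) % 2 == 1 for h in H]
        if not any(unsat):
            return word                  # all parity checks satisfied
        # flip the bit involved in the most unsatisfied checks
        votes = [sum(h[i] for h, u in zip(H, unsat) if u) for i in range(9)]
        word[max(range(9), key=lambda i: votes[i])] ^= 1
    return word

codeword = [0] * 9                       # the all-zero word is a codeword
received = codeword.copy()
received[4] ^= 1                         # single bit error
assert bit_flip_decode(received) == codeword
```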
LDPC codes were first introduced by Robert G. Gallager in his PhD thesis in 1960, but, owing to the computational effort of implementing encoders and decoders and to the introduction of Reed–Solomon codes, they were mostly ignored until recently.
LDPC codes are now used in many recent high-speed communication standards, such as DVB-S2 (digital video broadcasting), WiMAX (IEEE 802.16e standard for microwave communications), High-Speed Wireless LAN (IEEE 802.11n), 10GBase-T Ethernet (802.3an), and G.hn/G.9960 (ITU-T standard for networking over power lines, phone lines and coaxial cable). Other LDPC codes are standardized for wireless communication standards within 3GPP MBMS (see fountain codes).
Turbo codes
Turbo coding is an iterated soft-decoding scheme that combines two or more relatively simple convolutional codes and an interleaver to produce a block code that can perform to within a fraction of a decibel of the Shannon limit. Predating LDPC codes in terms of practical application, turbo codes now provide similar performance.
One of the earliest commercial applications of turbo coding was the CDMA2000 1x (TIA IS-2000) digital cellular technology developed by Qualcomm and sold by Verizon Wireless, Sprint, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access, 1xEV-DO (TIA IS-856). Like 1x, EV-DO was developed by Qualcomm and is sold by Verizon Wireless, Sprint, and other carriers (Verizon's marketing name for 1xEV-DO is Broadband Access; Sprint's consumer and business marketing names for 1xEV-DO are Power Vision and Mobile Broadband, respectively).
Local decoding and testing of codes
Sometimes it is only necessary to decode single bits of the message, or to check whether a given signal is a codeword, and to do so without looking at the entire signal. This can make sense in a streaming setting, where codewords are too large to be classically decoded fast enough and where only a few bits of the message are of interest for now. Such codes have also become an important tool in computational complexity theory, e.g., for the design of probabilistically checkable proofs.
Locally decodable codes are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. Locally testable codes are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal.
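As a concrete instance, the Hadamard code is locally decodable: the codeword of a message x lists the parity of the inner product of x with every binary vector r, so any single message bit can be recovered from just two codeword positions. Each queried position is individually uniformly random, so if a fraction d of positions is corrupted, the answer is still correct with probability at least 1 - 2d. A minimal sketch:

```python
import random

def hadamard_encode(x):
    """Codeword entry at index r is the inner product <x, r> mod 2."""
    k = len(x)
    return [sum(x[j] * ((r >> j) & 1) for j in range(k)) % 2
            for r in range(2 ** k)]

def local_decode_bit(codeword, k, i):
    """Recover x[i] from two positions: <x, r> XOR <x, r + e_i> = x[i]."""
    r = random.randrange(2 ** k)
    return codeword[r] ^ codeword[r ^ (1 << i)]

x = [1, 0, 1, 1]
c = hadamard_encode(x)
assert all(local_decode_bit(c, 4, i) == x[i] for i in range(4))
```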
List of error-correcting codes
- AN codes
- BCH code
- Constant-weight code
- Convolutional code
- Group codes
- Golay codes, of which the binary Golay code is of practical interest
- Goppa code, used in the McEliece cryptosystem
- Hadamard code
- Hagelbarger code
- Hamming code
- Latin square based code for non-white noise (prevalent for example in broadband over power lines)
- Lexicographic code
- Long code
- Low-density parity-check code, also known as Gallager code, the archetype for sparse graph codes
- LT code, a near-optimal rateless erasure correcting code (fountain code)
- m of n codes
- Online code, a near-optimal rateless erasure correcting code
- Raptor code, a near-optimal rateless erasure correcting code
- Reed–Solomon error correction
- Reed–Muller code
- Repeat-accumulate code
- Repetition codes, such as triple modular redundancy
- Tornado code, a near-optimal erasure correcting code and the precursor to fountain codes
- Turbo code
- Walsh–Hadamard code
See also
- Code rate
- Erasure codes
- Soft-decision decoder
- Error detection and correction
Further reading
- "Error Correction Code in Single Level Cell NAND Flash memories" 16 February 2007