The Berlekamp–Welch algorithm, also known as the Welch–Berlekamp algorithm, is named for Elwyn R. Berlekamp and Lloyd R. Welch. It is a decoder algorithm that efficiently corrects errors in Reed–Solomon codes for an RS(n, k) code, based on the Reed–Solomon original view in which a message \( {\displaystyle m_{1},\cdots ,m_{k}} \) is used as the coefficients of a polynomial \( {\displaystyle F(a_{i})} \), or is used with Lagrange interpolation to generate the polynomial \( {\displaystyle F(a_{i})} \) of degree < k for inputs \( {\displaystyle a_{1},\cdots ,a_{k}} \); \( {\displaystyle F(a_{i})} \) is then applied to \( {\displaystyle a_{k+1},\cdots ,a_{n}} \) to create an encoded codeword \( {\displaystyle c_{1},\cdots ,c_{n}} \).
The goal of the decoder is to recover the original encoding polynomial \( {\displaystyle F(a_{i})} \), using the known inputs \( {\displaystyle a_{1},\cdots ,a_{n}} \) and the received codeword \( {\displaystyle b_{1},\cdots ,b_{n}} \), which may contain errors. It also computes an error polynomial \( {\displaystyle E(a_{i})} \), where \( {\displaystyle E(a_{i})=0} \) at the locations corresponding to errors in the received codeword.
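As an illustration only, a minimal Python sketch of this original-view encoding over a prime field GF(p) is given below. The helper names (poly_mul, poly_eval, lagrange_interpolate, rs_encode) are illustrative rather than part of any standard library, and the sketch assumes a prime field so that inverses can be computed with Fermat's little theorem.

def poly_mul(a, b, p):
    # Multiply two polynomials (coefficients stored low degree first) mod p.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_eval(coeffs, x, p):
    # Horner's rule, coefficients stored low degree first.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def lagrange_interpolate(xs, ys, p):
    # Coefficients (low degree first) of the unique polynomial of degree < len(xs)
    # passing through the points (xs[i], ys[i]), all arithmetic mod the prime p.
    k = len(xs)
    result = [0] * k
    for i in range(k):
        num, denom = [1], 1
        for j in range(k):
            if j != i:
                num = poly_mul(num, [(-xs[j]) % p, 1], p)   # multiply by (x - xs[j])
                denom = (denom * (xs[i] - xs[j])) % p
        scale = (ys[i] * pow(denom, p - 2, p)) % p           # Fermat inverse of denom
        for d, c in enumerate(num):
            result[d] = (result[d] + scale * c) % p
    return result

def rs_encode(message, xs, p):
    # Systematic original-view encoding: interpolate F from the first k symbols,
    # then evaluate F at every evaluation point to form the codeword.
    F = lagrange_interpolate(xs[:len(message)], message, p)
    return [poly_eval(F, x, p) for x in xs]

# Matches the worked example below: RS(7,3) over GF(7), message {1,6,3}.
print(rs_encode([1, 6, 3], list(range(7)), 7))   # -> [1, 6, 3, 6, 1, 2, 2]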
The key equations
Defining e = number of errors, the key set of n equations is
\( {\displaystyle b_{i}E(a_{i})=E(a_{i})F(a_{i})} \)
where E(ai) = 0 for the e cases in which bi ≠ F(ai), and E(ai) ≠ 0 for the n − e non-error cases in which bi = F(ai). These equations cannot be solved directly, but by defining Q() as the product of E() and F():
\( {\displaystyle Q(a_{i})=E(a_{i})F(a_{i})} \)
and adding the constraint that the most significant coefficient of E(ai) is ee = 1, the result is a set of equations that can be solved with linear algebra.
\( {\displaystyle b_{i}E(a_{i})=Q(a_{i})} \)
\( {\displaystyle b_{i}E(a_{i})-Q(a_{i})=0} \)
\( {\displaystyle b_{i}(e_{0}+e_{1}a_{i}+e_{2}a_{i}^{2}+\cdots +e_{e}a_{i}^{e})-(q_{0}+q_{1}a_{i}+q_{2}a_{i}^{2}+\cdots +q_{q}a_{i}^{q})=0} \)
where q = n - e - 1. Since ee is constrained to be 1, the equations become:
\( {\displaystyle b_{i}(e_{0}+e_{1}a_{i}+e_{2}a_{i}^{2}+\cdots +e_{e-1}a_{i}^{e-1})-(q_{0}+q_{1}a_{i}+q_{2}a_{i}^{2}+\cdots +q_{q}a_{i}^{q})=-b_{i}a_{i}^{e}} \)
resulting in a set of equations which can be solved using linear algebra, with time complexity O(n^3).
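As a rough sketch of this step, assuming a prime field GF(p) and illustrative helper names (build_system, solve_mod_p), the system can be assembled row by row and solved with Gauss–Jordan elimination:

def build_system(xs, bs, e, p):
    # One row per received symbol:
    #   bi*(e0 + e1*ai + ... + e_(e-1)*ai^(e-1)) - (q0 + q1*ai + ... + qq*ai^q) = -bi*ai^e
    # with q = n - e - 1 and the leading coefficient of E() fixed to 1.
    n = len(xs)
    q = n - e - 1
    A, rhs = [], []
    for a, b in zip(xs, bs):
        row = [(b * pow(a, j, p)) % p for j in range(e)]        # unknowns e0 .. e_(e-1)
        row += [(-pow(a, j, p)) % p for j in range(q + 1)]      # unknowns q0 .. qq
        A.append(row)
        rhs.append((-b * pow(a, e, p)) % p)
    return A, rhs

def solve_mod_p(A, rhs, p):
    # Gauss-Jordan elimination mod the prime p; returns None if the system has
    # no unique solution, which is the signal to retry with a smaller e.
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], p - 2, p)
        M[col] = [(v * inv) % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [(vr - f * vc) % p for vr, vc in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]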
The algorithm begins by assuming the maximum number of errors e = ⌊ (n−k)/2 ⌋. If the equations cannot be solved (due to redundancy), e is reduced by 1 and the process is repeated until the equations can be solved or e is reduced to 0, indicating no errors. If Q()/E() has remainder = 0, then F() = Q()/E() and the codeword values F(ai) are calculated for the locations where E(ai) = 0 to recover the original codeword. If the remainder ≠ 0, an uncorrectable error has been detected.
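A minimal sketch of this outer loop, reusing build_system, solve_mod_p and lagrange_interpolate from the sketches above and adding naive polynomial long division over GF(p), might look as follows; it is illustrative rather than an optimized decoder.

def poly_divmod(num, den, p):
    # Long division of polynomials over GF(p), coefficients stored low degree first.
    num = num[:]
    quot = [0] * (len(num) - len(den) + 1)
    inv_lead = pow(den[-1], p - 2, p)
    for i in range(len(quot) - 1, -1, -1):
        quot[i] = (num[i + len(den) - 1] * inv_lead) % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - quot[i] * d) % p
    return quot, num[:len(den) - 1]              # quotient, remainder

def bw_decode(xs, bs, k, p):
    # Try e = (n-k)//2 down to 1; on success return the coefficients of F().
    n = len(xs)
    for e in range((n - k) // 2, 0, -1):
        A, rhs = build_system(xs, bs, e, p)
        sol = solve_mod_p(A, rhs, p)
        if sol is None:
            continue                             # not solvable: reduce e and retry
        E = sol[:e] + [1]                        # leading coefficient constrained to 1
        Q = sol[e:]
        F, rem = poly_divmod(Q, E, p)
        if any(rem):
            return None                          # nonzero remainder: uncorrectable error
        return F[:k]                             # F has degree < k
    # e reduced to 0: assume no errors and recover F() by interpolation.
    return lagrange_interpolate(xs[:k], list(bs[:k]), p)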
Example
Consider RS(7,3) (n = 7, k = 3) defined in GF(7) with α = 3 and input values ai = i − 1: {0,1,2,3,4,5,6}. The message to be systematically encoded is {1,6,3}. Using Lagrange interpolation, F(ai) = 3x² + 2x + 1, and applying F(ai) for a4 = 3 to a7 = 6 results in the codeword {1,6,3,6,1,2,2}. Assume errors occur at c2 and c5, resulting in the received codeword {1,5,3,6,3,2,2}. Start with e = 2 and solve the linear equations:
\( {\displaystyle {\begin{bmatrix}b_{1}&b_{1}a_{1}&-1&-a_{1}&-a_{1}^{2}&-a_{1}^{3}&-a_{1}^{4}\\b_{2}&b_{2}a_{2}&-1&-a_{2}&-a_{2}^{2}&-a_{2}^{3}&-a_{2}^{4}\\b_{3}&b_{3}a_{3}&-1&-a_{3}&-a_{3}^{2}&-a_{3}^{3}&-a_{3}^{4}\\b_{4}&b_{4}a_{4}&-1&-a_{4}&-a_{4}^{2}&-a_{4}^{3}&-a_{4}^{4}\\b_{5}&b_{5}a_{5}&-1&-a_{5}&-a_{5}^{2}&-a_{5}^{3}&-a_{5}^{4}\\b_{6}&b_{6}a_{6}&-1&-a_{6}&-a_{6}^{2}&-a_{6}^{3}&-a_{6}^{4}\\b_{7}&b_{7}a_{7}&-1&-a_{7}&-a_{7}^{2}&-a_{7}^{3}&-a_{7}^{4}\\\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\\\end{bmatrix}}={\begin{bmatrix}-b_{1}a_{1}^{2}\\-b_{2}a_{2}^{2}\\-b_{3}a_{3}^{2}\\-b_{4}a_{4}^{2}\\-b_{5}a_{5}^{2}\\-b_{6}a_{6}^{2}\\-b_{7}a_{7}^{2}\\\end{bmatrix}}} \)
\( {\displaystyle {\begin{bmatrix}1&0&6&0&0&0&0\\5&5&6&6&6&6&6\\3&6&6&5&3&6&5\\6&4&6&4&5&1&3\\3&5&6&3&5&6&3\\2&3&6&2&3&1&5\\2&5&6&1&6&1&6\\\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\\\end{bmatrix}}={\begin{bmatrix}0\\2\\2\\2\\1\\6\\5\\\end{bmatrix}}} \)
\( {\displaystyle {\begin{bmatrix}1&0&0&0&0&0&0\\0&1&0&0&0&0&0\\0&0&1&0&0&0&0\\0&0&0&1&0&0&0\\0&0&0&0&1&0&0\\0&0&0&0&0&1&0\\0&0&0&0&0&0&1\\\end{bmatrix}}{\begin{bmatrix}e_{0}\\e_{1}\\q_{0}\\q_{1}\\q_{2}\\q_{3}\\q_{4}\\\end{bmatrix}}={\begin{bmatrix}4\\2\\4\\3\\3\\1\\3\\\end{bmatrix}}} \)
Reading the solution vector from the bottom up (highest-order coefficients first), and applying the constraint e2 = 1:
\( {\displaystyle Q(a_{i})=3x^{4}+1x^{3}+3x^{2}+3x+4} \)
\( {\displaystyle E(a_{i})=1x^{2}+2x+4} \)
\( {\displaystyle F(a_{i})=Q(a_{i})/E(a_{i})=3x^{2}+2x+1} \) with remainder = 0.
E(ai) = 0 at a2 = 1 and a5 = 4. Calculating F(a2 = 1) = 6 and F(a5 = 4) = 1 produces the corrected codeword {1,6,3,6,1,2,2}.
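For reference, the sketches given earlier (assuming the helper functions defined there) reproduce the numbers in this example:

xs = list(range(7))                        # ai = i - 1 over GF(7)
received = [1, 5, 3, 6, 3, 2, 2]           # codeword with errors at c2 and c5
A, rhs = build_system(xs, received, 2, 7)
print(solve_mod_p(A, rhs, 7))              # -> [4, 2, 4, 3, 3, 1, 3]
F = bw_decode(xs, received, 3, 7)
print(F)                                   # -> [1, 2, 3], i.e. F(ai) = 3x^2 + 2x + 1
print([poly_eval(F, x, 7) for x in xs])    # -> [1, 6, 3, 6, 1, 2, 2]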
See also
Reed–Solomon error correction
External links
MIT Lecture Notes on Essential Coding Theory – Dr. Madhu Sudan
University at Buffalo Lecture Notes on Coding Theory – Dr. Atri Rudra
Algebraic Codes on Lines, Planes and Curves, An Engineering Approach – Richard E. Blahut
Welch Berlekamp Decoding of Reed–Solomon Codes – L. R. Welch
US 4,633,470, Welch, Lloyd R. & Elwyn R. Berlekamp, "Error Correction for Algebraic Block Codes", published September 27, 1983, issued December 30, 1986 – The patent by Lloyd R. Welch and Elwyn R. Berlekamp