Linear Codes February 22, 2006

In nearest neighbor decoding, a received word is decoded as the codeword at closest Hamming distance.
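The rule above can be sketched in a few lines. The codebook below (a [3, 1] repetition code) is a toy example chosen for illustration, not one from these notes.

```python
# Sketch of nearest neighbor decoding over a toy binary code.

def hamming_distance(u, v):
    """Number of positions in which u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def nearest_neighbor_decode(received, codewords):
    """Return the codeword at minimum Hamming distance from `received`."""
    return min(codewords, key=lambda c: hamming_distance(received, c))

# Toy [3, 1] repetition code: {000, 111}.
code = [(0, 0, 0), (1, 1, 1)]
print(nearest_neighbor_decode((1, 0, 1), code))  # -> (1, 1, 1): one error corrected
```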

[Theorem]
All linear codes contain the zero word.

[Theorem]
For a linear code using nearest neighbor decoding, whether or not a received word r is uniquely correctly decodable depends only on the error vector. (Also, standard array decoding is nearest neighbor decoding.)

The Generator Matrix
A generator matrix G for an [n, k] code C is any k x n matrix whose rows form a basis for C.

[Theorem]
If G = [I_k | A] is a generator matrix for the [n, k] code C in standard form, then H = [-A^T | I_{n-k}] is a parity check matrix for C.
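A quick numerical check of this theorem, using a hypothetical [5, 2] code (the matrix A below is an arbitrary example):

```python
import numpy as np

# Hypothetical [5, 2] code in standard form: G = [I_k | A].
A = np.array([[1, 1, 0],
              [0, 1, 1]])
k, n_minus_k = A.shape
G = np.hstack([np.eye(k, dtype=int), A])
# Over Z_2, -A^T = A^T, so H = [A^T | I_{n-k}].
H = np.hstack([A.T, np.eye(n_minus_k, dtype=int)])
# Every row of G is a codeword, so G H^T = 0 (mod 2).
print((G @ H.T) % 2)  # all-zero 2 x 3 matrix
```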

Syndrome Decoding
Let H be a parity check matrix for a linear code, and let v be any word. Then the syndrome of v is syn(v) = vH^T.

For a codeword x, syn(x) = 0.

[Theorem]
syn(u) = syn(v) if and only if u and v belong to the same coset.

Syndrome decoding
The idea behind syndrome decoding is that, given a received word, you can compute its syndrome to find which coset it belongs to (provided you have stored the syndromes of the coset leaders). The benefit is the savings in storage, since you don't have to keep the whole Slepian (standard) array.

(i) Calculate the syndrome rH^T of the received word r
(ii) Scan the stored list of coset leaders to find the coset leader, e, with the same syndrome
(iii) Decode to the codeword c = r – e
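Steps (i)-(iii) can be sketched as follows. The parity check matrix H is the hypothetical [5, 2] example used above; for such a small code the coset-leader table can be built by brute force, taking a minimum-weight word for each syndrome.

```python
import numpy as np
from itertools import product

# Parity check matrix of a hypothetical [5, 2] code (H = [A^T | I_{n-k}]).
H = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 0, 0, 1]])
n = H.shape[1]

def syn(v):
    """Syndrome syn(v) = vH^T over Z_2."""
    return tuple((np.array(v) @ H.T) % 2)

# Coset-leader table: for each syndrome, keep a word of minimum weight.
leaders = {}
for weight in range(n + 1):
    for e in product([0, 1], repeat=n):
        if sum(e) == weight and syn(e) not in leaders:
            leaders[syn(e)] = np.array(e)

def decode(r):
    """Steps (i)-(iii): syndrome, look up coset leader e, return c = r - e."""
    e = leaders[syn(r)]
    return (np.array(r) - e) % 2

# Codeword 10110 with its first bit flipped decodes back to 10110.
print(decode([0, 0, 1, 1, 0]))
```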

Puncturing Codes
Deleting the same coordinate i in each codeword of a code is termed "puncturing" the code. The resulting code is still linear.
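A minimal sketch of puncturing, again on the toy [3, 1] repetition code (deleting a coordinate is a linear projection, which is why linearity is preserved):

```python
# Puncture a code by deleting coordinate i from every codeword.

def puncture(codewords, i):
    return [c[:i] + c[i + 1:] for c in codewords]

code = [(0, 0, 0), (1, 1, 1)]
print(puncture(code, 2))  # -> [(0, 0), (1, 1)]
```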

Decoding and Shannon's Theorem
Assume that c ∈ Z_2^n is sent and y ∈ Z_2^n is received and decoded as ĉ ∈ Z_2^n.

Maximum a posteriori (MAP) decoding
Choose ĉ = c for the codeword c with Pr(c | y) maximum. By Bayes's rule, Pr(c | y) = Pr(y | c) Pr(c) / Pr(y), so when all codewords are equally likely, MAP decoding coincides with maximum likelihood decoding.

Maximum Likelihood decoding
Pick ĉ = c for the codeword c with Pr(y | c) maximum.
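For a binary symmetric channel with crossover probability p, Pr(y | c) = p^d (1 - p)^(n - d), where d = d(y, c); for p < 1/2 this is maximized by the codeword nearest to y, so ML decoding reduces to nearest neighbor decoding. A small check on the toy repetition code (p = 0.1 is an arbitrary choice):

```python
# ML decoding over a binary symmetric channel with crossover probability p.

def likelihood(y, c, p):
    """Pr(y | c) = p^d * (1 - p)^(n - d), with d the Hamming distance."""
    d = sum(a != b for a, b in zip(y, c))
    return p ** d * (1 - p) ** (len(y) - d)

code = [(0, 0, 0), (1, 1, 1)]
y = (1, 0, 1)
ml = max(code, key=lambda c: likelihood(y, c, p=0.1))
print(ml)  # -> (1, 1, 1), the nearest codeword
```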
