I have entered the grades on the electronic web form and I think you should be able to access them soon.

If you have not done so already, please fill in the course evaluation at my.cmu.edu under Academics. You probably also got an email with instructions on where to fill this out.

There were some requests for notes on the 3-query LDCs we covered in the final lecture. I may not be able to write full-blown notes, but I’ll try to expand the lecture summary for that post with some details of the construction and analysis when I find time.

We focused on local (unique) decoding of codes for the lecture. We saw how Hadamard codes can be locally decoded using just two queries. However, their encoding length for a message of length $k$ is $2^k$. We then saw the higher degree generalization of Hadamard codes, where the message is interpreted as a degree $t$ homogeneous multilinear polynomial (i.e., all terms have degree exactly $t$). This gave us codes of encoding length $2^{O(k^{1/t})}$, and we discussed a $(t+1)$-query local decoding algorithm. This was based on interpolating the restriction of the multilinear polynomial on a line in a random direction. Thus for any constant $t$, we got codes of encoding length $2^{O(k^{1/t})}$ that are locally decodable using $t+1$ queries.
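The two-query Hadamard decoder is simple enough to sketch concretely. Below is a minimal illustrative Python snippet (the function names are my own): each codeword position $x$ holds the parity $\langle m, x\rangle$, so querying a random $x$ and $x \oplus e_i$ and XORing the answers recovers $m_i$ whenever both queries land on uncorrupted positions, and a majority vote over independent trials tolerates a small fraction of corruptions.

```python
import random

def hadamard_encode(msg_bits):
    """Hadamard codeword: the parity <m, x> for every x in {0,1}^k."""
    m_int = sum(b << j for j, b in enumerate(msg_bits))
    return [bin(m_int & x).count("1") % 2 for x in range(2 ** len(msg_bits))]

def local_decode_bit(oracle, k, i, trials=25):
    """2-query local decoding of message bit i with majority voting.
    Each query point is individually uniform, so if a delta fraction of
    positions is corrupted, a single vote errs with probability <= 2*delta."""
    votes = 0
    for _ in range(trials):
        x = random.randrange(2 ** k)
        # equals <m, e_i> = m_i when both queried positions are clean
        votes += oracle(x) ^ oracle(x ^ (1 << i))
    return int(2 * votes > trials)

# demo: corrupt one of the 16 positions of the encoding of a 4-bit message
random.seed(0)
msg = [1, 0, 1, 1]
word = hadamard_encode(msg)
word[5] ^= 1  # adversarial corruption
decoded = [local_decode_bit(lambda x: word[x], 4, i) for i in range(4)]
```

Note that each individual query is uniformly distributed even though the pair is highly correlated; this is exactly what makes the decoder robust to a constant fraction of adversarial errors.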

We then turned to the ingenious 3-query locally decodable code (LDC) construction due to Yekhanin. In keeping with the theme of our initial constructions, we presented a polynomial view of these codes, where the messages are again interpreted as homogeneous multilinear polynomials of a certain degree (say $d$), but only a carefully chosen subset of all possible monomials are allowed. (This actually reduces the rate compared to our earlier construction, but the big gain is that one is able to locally decode using only *three* queries instead of about $d$ queries!) Our description is based on a variant of Yekhanin’s construction that was discovered by Raghavendra and subsequently presented by Gopalan as polynomial based codes.

For every $t$ such that $2^t - 1$ is prime (such a prime is called a Mersenne prime), we gave a construction of 3-query LDCs of encoding length $\exp(O(k^{1/t}))$. Since very large Mersenne primes are known, we get 3-query LDCs of encoding length less than $\exp(k^{10^{-7}})$. We presented a 3-query algorithm and proved its correctness assuming the stated properties of the “matching sets” used in the construction, and then explained how to construct sufficiently large families of such subsets of $\mathbb{Z}_{2^t-1}$.

Also, if $\alpha\,\lambda(1-\rho(1-x)) \le (1-\epsilon)x$ for every $x \in (0,\alpha]$ and some constant $\epsilon > 0$, we argued why the message passing algorithm succeeds with high probability on $\mathrm{BEC}_\alpha$ for any constant $\alpha$.

We then argued how the distributions

$\lambda(x) = \frac{1}{H(D)} \sum_{i=1}^{D} \frac{x^i}{i}$, where $H(D) = \sum_{i=1}^{D} \frac{1}{i}$,

and

$\rho(x) = e^{\alpha(x-1)}$

(perhaps truncated to a finite series) enable achieving the capacity of $\mathrm{BEC}_\alpha$ — we can achieve a rate of $1 - \alpha(1+\epsilon)$ with decoding complexity $O(n \log(1/\epsilon))$ (since the average variable node degree is $O(\log(1/\epsilon))$).
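The density evolution analysis behind these claims is easy to sanity-check numerically. The sketch below (not the capacity-achieving pair, for which see the papers below) iterates the erasure-channel recursion $x_{\ell+1} = \alpha\,\lambda(1-\rho(1-x_\ell))$ for the simple $(3,6)$-regular ensemble, whose erasure threshold is known to be $\approx 0.4294$:

```python
def density_evolution(alpha, lam, rho, iters=200):
    """Iterate the BEC density evolution recursion
    x_{l+1} = alpha * lam(1 - rho(1 - x_l)), starting from x_0 = alpha."""
    x = alpha
    for _ in range(iters):
        x = alpha * lam(1 - rho(1 - x))
    return x

lam = lambda x: x ** 2   # (3,6)-regular ensemble: variable nodes of degree 3
rho = lambda x: x ** 5   # check nodes of degree 6

below = density_evolution(0.40, lam, rho)  # below threshold: erasures die out
above = density_evolution(0.45, lam, rho)  # above: stuck at a nonzero fixed point
```

Below the threshold the residual erasure fraction plummets to (numerically) zero, while above it the iteration stalls at a constant fixed point — exactly the dichotomy captured by the condition $\alpha\,\lambda(1-\rho(1-x)) < x$.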

This result is from the paper Efficient erasure correcting codes. Further details, including extensions to BSC and AWGN channels, and the martingale argument for the concentration of the performance around that of the average code in the ensemble, can be found in the paper *The capacity of low-density parity-check codes under message-passing decoding*.

The last quarter of the lecture was devoted to a recap of the main topics covered in the course.

During lecture, the question of the speed of convergence of the bit error probability (BER) to zero was asked. The answer I guessed turns out to be correct: if we run the algorithm for $s$ iterations, where $s$ is smaller than the girth of the graph, for Algorithm A the BER is at most $2^{-cs}$ for some constant $c > 0$, and for Algorithm B with an optimized cut-off for flipping, the BER is at most $2^{-c 2^s}$ for some constant $c > 0$.

We do not plan to have notes for this segment of the course. I can, however, point you to an introductory survey I wrote (upon which the lectures are loosely based), or Gallager’s remarkable Ph.D. thesis which can be downloaded here (the decoding algorithms we covered are discussed in Chapter 4). A thorough treatment of the latest developments in the subject of iterative and belief propagation decoding algorithms can be found in Richardson and Urbanke’s comprehensive book Modern Coding Theory.

I’ll speak about this result in the ACO seminar today. The paper can be downloaded here.

- Binary codes which are list-decodable up to the Zyablov radius (earlier we saw how to *unique* decode up to *half* the Zyablov radius using GMD decoding)
- Construction of codes of rate $R$ over an alphabet of size $2^{O(1/\epsilon^4)}$ that are list-decodable up to a fraction $1 - R - \epsilon$ of errors. The alphabet size is not far from the optimal bound of $2^{\Omega(1/\epsilon)}$, and the construction nicely combines ideas from the algebraic coding and expander decoding parts of the course.

We then wrapped up our discussion of list decoding by mentioning some of the big questions that still remain open, especially in constructing binary codes with near-optimal (or even better than currently known) trade-offs.

We discussed the framework of message-passing algorithms for LDPC codes, which will be the subject of the next lecture or two. We will mostly follow the description in this survey, but will not get too deep into the material.

We presented an algorithm for list decoding folded Reed-Solomon codes (with folding parameter $2$) when the agreement fraction is more than $\frac{1}{3} + \frac{4R}{3}$. This was based on the extension of the Welch-Berlekamp algorithm to higher order interpolation (in $3$ variables). Unfortunately, this result falls well short of our desired target, and in particular is meaningless for $R \ge 1/2$.

We then saw how to run the $(s+1)$-variate algorithm on a folded RS code with folding parameter $m \ge s$, to list decode when the agreement fraction is more than $\frac{1}{s+1} + \frac{s}{s+1}\cdot\frac{mR}{m-s+1}$. Picking $s$ large and $m$ much larger, say $s \approx 1/\epsilon$ and $m \approx 1/\epsilon^2$, then enables list decoding from agreement fraction $R + \epsilon$. We will revisit this final statement briefly at the beginning of the next lecture, and also comment on the complexity of the algorithm, bound on list-size, and alphabet size of the codes.
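The trade-off in this style of agreement bound can be sanity-checked numerically. In the sketch below I assume the bound for the $(s+1)$-variate algorithm on an $m$-folded RS code of rate $R$ has the form $\frac{1}{s+1} + \frac{s}{s+1}\cdot\frac{mR}{m-s+1}$ (with the interpolation polynomial linear in the $Y_i$'s; the function name is made up):

```python
def agreement_needed(s, m, R):
    """Agreement fraction sufficient for (s+1)-variate interpolation
    decoding of an m-folded RS code of rate R (Q linear in the Y_i's)."""
    return 1 / (s + 1) + (s / (s + 1)) * (m * R) / (m - s + 1)

# s = m = 1 recovers the Welch-Berlekamp bound (1 + R)/2:
wb = agreement_needed(1, 1, 0.5)         # 0.75 at rate R = 1/2
# s ~ 1/eps, m ~ 1/eps^2 approaches the rate itself:
big = agreement_needed(100, 10000, 0.5)  # about 0.51 at rate R = 1/2
```

So the bound interpolates between classical unique/list decoding of plain RS codes and list decoding capacity: at $s = m = 1$ it gives $0.75 = (1+R)/2$, and with $s = 100$, $m = 10000$ the agreement requirement drops to roughly $0.51$.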

Notes for this lecture may not be immediately available, but you can refer to the original paper Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy or Chapter 6 of the survey Algorithmic results for list decoding. Both of these are tailored to list decode even from an (in general) smaller agreement fraction and use higher degrees for the $Y_i$’s in the interpolation polynomial $Q$ as well as multiple zeroes at the interpolation points. In the lecture, however, we were content, for sake of simplicity and because it suffices to approach agreement fraction $R$, with restricting $Q$ to be linear in the $Y_i$’s.

A reminder that we will have **NO** lecture this Friday (April 16) due to Spring Carnival.

We will have notes for this week’s lecture available soon, but the material covered this week has also been written about in several surveys on list decoding (some of which are listed on the course webpage). Here are a couple of pointers, which also discuss the details of list decoding folded RS codes which we will cover next week (though we will use a somewhat simpler presentation with weaker bounds in our lectures):

- Algorithmic results for list decoding (Chapter 4 covers RS list decoding)
- Bridging Shannon and Hamming: List Error-Correction with Optimal Rate (again Section 4 briefly discusses list decoding RS codes)