Scribe Note 2-Feb


A digression

Claude Shannon is the father of information theory. He also made fundamental contributions to digital circuit design and computing. Check out this video to learn more about him.

 

Review

Last time, we proved the converse of the source coding theorem by an ingenious use of several information-theoretic inequalities. The result is as follows. For any i.i.d. source $X$ with alphabet $\mathcal{X}$ and any code with rate $R$, we have

$H(X) \le R + \frac{1}{n} + P_e^{(n)} \log|\mathcal{X}|$.

If $R < H(X)$, the block error probability is bounded below by

$P_e^{(n)} \ge \dfrac{H(X) - R - \frac{1}{n}}{\log|\mathcal{X}|}$,

which is positive for $n$ sufficiently large. Hence, information loss is inevitable when $R < H(X)$. What about $R > H(X)$? The next result shows that data compression with negligible risk of information loss is possible at any rate above $H(X)$.

 

Achievability for the source coding theorem

We first give an elementary proof of the theorem. This is a simplified version of Section 3.2 of the textbook and does not give an explicit bound on the error probability. Then we give another proof using the method of random coding.

 

Elementary proof

Let $R > H(X)$ and pick $\epsilon > 0$ such that $H(X) + \epsilon < R$. By Theorem 3.1.2(3)¹, $|A_\epsilon^{(n)}| \le 2^{n(H(X)+\epsilon)}$. Since $2^{n(H(X)+\epsilon)} \le 2^{nR}$, there exists an injective map $f$ from $A_\epsilon^{(n)}$ to $\{0,1\}^{nR}$. The code is defined as follows:

(1) Encoding: If $x^n \in A_\epsilon^{(n)}$, we encode it as $f(x^n)$. If $x^n \notin A_\epsilon^{(n)}$, we send an error message².

(2) Decoding: Given $y \in \{0,1\}^{nR}$, we decode it as $f^{-1}(y)$ whenever this is defined. Otherwise we declare a decoding error.

 

It is obvious that the probability of error is just $P(X^n \notin A_\epsilon^{(n)})$. By Theorem 3.1.2(2), $P(X^n \notin A_\epsilon^{(n)}) \to 0$ as $n \to \infty$. Hence the rate $R$ is achievable. $\square$
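To make the construction concrete, here is a minimal numerical sketch in Python, assuming a Bernoulli($p$) source; the names `prob`, `typical`, and `codebook` are ours, not the textbook's. It enumerates the typical set $A_\epsilon^{(n)}$ for a small block length, checks the size bound of Theorem 3.1.2(3), and builds the injective map $f$ as a dictionary:

```python
import math
from itertools import product

# Minimal sketch of the elementary proof for an assumed Bernoulli(p) source.
# We enumerate the typical set A_eps and index its elements with
# ceil(n*(H+eps)) bits, mirroring the injective map f in the proof.
p, n, eps = 0.2, 10, 0.1
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # entropy of the source

def prob(x):
    """Probability of the sequence x under the i.i.d. Bernoulli(p) source."""
    k = sum(x)
    return p**k * (1 - p)**(n - k)

# A_eps = sequences whose per-symbol log-likelihood is within eps of H
typical = [x for x in product((0, 1), repeat=n)
           if abs(-math.log2(prob(x)) / n - H) <= eps]

bits = math.ceil(n * (H + eps))          # codeword length; below nR for any R > H + eps
assert len(typical) <= 2**bits           # Theorem 3.1.2(3): |A_eps| <= 2^{n(H+eps)}

codebook = {x: i for i, x in enumerate(typical)}     # the injective map f
p_error = 1 - sum(prob(x) for x in typical)          # P(X^n not in A_eps)
```

At such a small $n$ the error probability $P(X^n \notin A_\epsilon^{(n)})$ is still large; the theorem only promises that it vanishes as $n \to \infty$.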

 

Proof using random coding

Random coding is a fundamental tool of information theory. Although not computationally feasible, the random code has a relatively simple description that makes mathematical analysis possible. The same idea is used in the proof of the channel coding theorem. We begin by describing the code. Let $R > H(X)$ and pick $\epsilon > 0$ such that $H(X) + \epsilon < R$.

(1) Encoding: A map $f: \mathcal{X}^n \to \{0,1\}^{nR}$ is chosen randomly³ from the set of all functions from $\mathcal{X}^n$ to $\{0,1\}^{nR}$. That is, to each source sequence $x^n$, a binary sequence $f(x^n)$ is assigned. The assignment $x^n \mapsto f(x^n)$ is called the codebook, and is agreed a priori by both Alice (encoder) and Bob (decoder). Note that $f$, once chosen, is a deterministic function.

(2) Decoding: Given a string $y \in \{0,1\}^{nR}$, Bob examines the codebook to see if there exists a unique strongly typical⁴ $x^n \in T$ such that $f(x^n) = y$. If so, he outputs $x^n$. Otherwise he declares a decoding error.

 

For a fixed codebook $f$ and source output $x^n$, a decoding error occurs if and only if one of the following happens: (i) $x^n \notin T$, so that the unique typical preimage of $f(x^n)$, if any, cannot be $x^n$. (ii) There are two or more typical sequences that are mapped to $f(x^n)$. (If neither happens, $x^n$ is the unique typical preimage of $f(x^n)$ and Bob decodes correctly.) We shall show that the probability of error is asymptotically negligible. In the following analysis, it is important to distinguish between two sources of randomness: the randomness of the source sequence $X^n$ and the randomness of the codebook $F$. The notation is somewhat confusing because capital letters serve double duty. See footnote 5 for further explanation of my notation.
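The encoding and decoding procedure above can be simulated directly. The following Monte Carlo sketch assumes a Bernoulli($p$) source, with strong typicality judged by the empirical frequency of ones; all names and parameter values are ours. It draws one random codebook and counts decoding errors over repeated source draws:

```python
import math
import random
from itertools import product

# Monte Carlo sketch of the random code for an assumed Bernoulli(p) source.
# f maps every source sequence to an independent uniform nR-bit codeword;
# Bob decodes y to the unique typical x with f(x) = y, if one exists.
random.seed(0)
p, n, R, delta = 0.2, 12, 1.0, 0.15
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
assert R > H                              # rate above entropy, as the theorem requires

def typical(x):
    """Strong typicality: empirical frequency of ones within delta of p."""
    return abs(sum(x) / n - p) <= delta

seqs = list(product((0, 1), repeat=n))
f = {x: random.getrandbits(round(n * R)) for x in seqs}   # random codebook

def decode(y):
    """Return the unique typical preimage of y, or None (decoding error)."""
    cands = [x for x in seqs if typical(x) and f[x] == y]
    return cands[0] if len(cands) == 1 else None          # error events (i)/(ii)

trials, errors = 200, 0
for _ in range(trials):
    x = tuple(1 if random.random() < p else 0 for _ in range(n))
    errors += decode(f[x]) != x           # wrong or declared-error output
```

With $n$ this small the empirical error rate is still noticeable; the proof below shows it decays to zero as $n$ grows.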

 

Our first result bounds the probability of error (i). 

Result 1 (b)(i): $P(X^n \notin T) \to 0$ as $n \to \infty$. More precisely, we have the bound $P(X^n \notin T) \le (n+1)^{|\mathcal{X}|} 2^{-nc}$, where $c = \frac{\delta^2}{2\ln 2}$.

Proof: The first statement follows directly from the law of large numbers. From the definition of $T$, $x^n \notin T$ implies that

$\|P_{x^n} - P_X\|_1 > \delta$,

where $P_{x^n}$ is the type of $x^n$. By Pinsker's inequality, $D(P_{x^n} \| P_X) \ge \frac{1}{2\ln 2}\|P_{x^n} - P_X\|_1^2 > \frac{\delta^2}{2\ln 2} = c$. Hence, by Theorem 11.2.1, and since there are at most $(n+1)^{|\mathcal{X}|}$ types,

$P(X^n \notin T) \le \displaystyle\sum_{\text{types } Q:\, \|Q - P_X\|_1 > \delta} 2^{-nD(Q \| P_X)} \le (n+1)^{|\mathcal{X}|} 2^{-nc}$. $\square$
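As a sanity check on the exponential bound, the sketch below assumes a binary alphabet (so there are at most $n+1$ types) and compares the exact probability of atypicality with the type-counting bound; the constant in the exponent comes from Pinsker's inequality specialized to the binary case, where $\|P_{x^n} - P_X\|_1 = 2|k/n - p|$. All names and parameters are ours:

```python
import math

# Numeric check of the exponential bound in Result 1 for an assumed
# Bernoulli(p) source.  With T = {x^n : |k/n - p| <= delta} (k = number of
# ones), the method of types plus Pinsker's inequality gives
#     P(X^n not in T) <= (n+1) * 2^{-2 n delta^2 / ln 2},
# since a binary alphabet has at most n+1 types.
p, delta = 0.3, 0.2

def p_atypical(n):
    """Exact P(|k/n - p| > delta) for a Binomial(n, p) count k."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if abs(k / n - p) > delta)

for n in (20, 100, 300):
    bound = (n + 1) * 2 ** (-2 * n * delta**2 / math.log(2))
    assert p_atypical(n) <= bound        # bound holds, and decays exponentially
```

The bound is loose (it exceeds 1 for small $n$) but its exponential decay eventually overwhelms the polynomial factor $(n+1)$, exactly as footnote 6 argues.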

 

Result 2 (b)(ii): For each fixed $x^n$, we have the bound $P(\exists\, \tilde{x}^n \in T,\ \tilde{x}^n \ne x^n,\ F(\tilde{x}^n) = F(x^n)) \le 2^{-n(R - H(X) - \epsilon)}$.

Proof: For each $\tilde{x}^n \ne x^n$, $P(F(\tilde{x}^n) = F(x^n)) = 2^{-nR}$. From (10.168), $|T| \le 2^{n(H(X)+\epsilon)}$. Using the union bound, we have

$P(\exists\, \tilde{x}^n \in T,\ \tilde{x}^n \ne x^n,\ F(\tilde{x}^n) = F(x^n)) \le \displaystyle\sum_{\tilde{x}^n \in T \setminus \{x^n\}} P(F(\tilde{x}^n) = F(x^n)) \le |T| \cdot 2^{-nR}.$

Hence $P(\exists\, \tilde{x}^n \in T,\ \tilde{x}^n \ne x^n,\ F(\tilde{x}^n) = F(x^n)) \le 2^{n(H(X)+\epsilon)} 2^{-nR} = 2^{-n(R - H(X) - \epsilon)}$. $\square$

 

Note that $x^n$ is fixed in Result 2, but our real interest is in estimating the probability of error (ii). Our next result is a "randomized" version of Result 2. Note that the randomness here is due to both $X^n$ and $F$.

 

Result 3 (b)(iii): Let $E$ be the event $\{\exists\, \tilde{x}^n \in T,\ \tilde{x}^n \ne X^n,\ F(\tilde{x}^n) = F(X^n)\}$. Then $P(E) \le 2^{-n(R - H(X) - \epsilon)}$.

Proof: Using Result 2 and the independence of $X^n$ and $F$, we have

$P(E) = \displaystyle\sum_{x^n} P(X^n = x^n)\, P(E \mid X^n = x^n) \le \sum_{x^n} P(X^n = x^n)\, 2^{-n(R - H(X) - \epsilon)}.$

Hence $P(E) \le 2^{-n(R - H(X) - \epsilon)}$. $\square$
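The collision bound of Result 3 can also be checked empirically. In the sketch below (again an assumed Bernoulli($p$) source, with parameters chosen so that $H(X) + \epsilon < R$; all names are ours), each trial draws a source sequence and fresh random codewords on the typical set, and the observed frequency of the event $E$ is compared with $2^{-n(R - H(X) - \epsilon)}$:

```python
import math
import random
from itertools import product

# Monte Carlo check of Result 3 for an assumed Bernoulli(p) source: the
# event E = {some other typical sequence shares X^n's codeword} should
# occur with frequency at most 2^{-n(R - H - eps)}.
random.seed(2)
p, n, R, delta, eps = 0.2, 12, 1.0, 0.15, 0.2
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
assert H + eps < R                      # rate above entropy, as in the proof

T = [x for x in product((0, 1), repeat=n) if abs(sum(x) / n - p) <= delta]

trials, hits = 400, 0
for _ in range(trials):
    x = tuple(1 if random.random() < p else 0 for _ in range(n))
    fx = random.getrandbits(n)          # codeword of X^n under a fresh codebook
    # event E: some typical z != x receives the same codeword
    hits += any(z != x and random.getrandbits(n) == fx for z in T)

assert hits / trials <= 2 ** (-n * (R - H - eps))
```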

 

We are ready to prove that the random code has a small probability of error.

 

Result 4 (b)(iv): $P(\text{decoding error}) \to 0$ as $n \to \infty$.

Proof: Since $\{\text{decoding error}\} \subseteq \{X^n \notin T\} \cup E$, we have

$P(\text{decoding error}) \le P(X^n \notin T) + P(E) \le (n+1)^{|\mathcal{X}|} 2^{-nc} + 2^{-n(R - H(X) - \epsilon)} \to 0.$

In the last step, we used the fact that if $c > 0$ and $k \ge 0$, then $(n+1)^k 2^{-nc} \to 0$ (see footnote 6). $\square$

 

We have NOT finished, because we still allow $F$ to be random. To finish off, we need to show that a particular $f$ has a small probability of error. Indeed, we can show that at least half of all possible $f$ are "good".

Result 5 (b)(v): With probability at least $\frac{1}{2}$, a randomly chosen $f$ (and the associated code) has probability of error at most $2\beta_n$, where $\beta_n = (n+1)^{|\mathcal{X}|} 2^{-nc} + 2^{-n(R - H(X) - \epsilon)}$.

Proof: Define $P_e(f) = P(\text{decoding error} \mid F = f)$. Then

$E[P_e(F)] = \displaystyle\sum_f P(F = f)\, P_e(f) = P(\text{decoding error})$

$\le (n+1)^{|\mathcal{X}|} 2^{-nc} + 2^{-n(R - H(X) - \epsilon)} = \beta_n.$

By Markov's inequality, for any $t > 0$,

$P(P_e(F) > t) \le \dfrac{E[P_e(F)]}{t} \le \dfrac{\beta_n}{t}$.

Pick $t = 2\beta_n$. Then

$P(P_e(F) > 2\beta_n) \le \dfrac{1}{2}$.

Hence, with probability at least $\frac{1}{2}$, a randomly chosen $f$ has probability of error at most $2\beta_n$. $\square$
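The averaging argument in Result 5 is easy to demonstrate: draw many codebooks, compute each one's exact error probability, and verify that at least half are within twice the average. The sketch below does this exhaustively for a tiny assumed Bernoulli($p$) source; all names and parameter choices are ours:

```python
import math
import random
from itertools import product

# Exhaustive sketch of Result 5 for a tiny assumed Bernoulli(p) source:
# draw many codebooks, compute each one's exact error probability, and
# check that at least half satisfy P_e(f) <= 2 * average (Markov).
random.seed(1)
p, n, R, delta = 0.3, 8, 1.5, 0.2
seqs = list(product((0, 1), repeat=n))

def prob(x):
    """Probability of x under the i.i.d. Bernoulli(p) source."""
    k = sum(x)
    return p**k * (1 - p)**(n - k)

T = [z for z in seqs if abs(sum(z) / n - p) <= delta]     # strongly typical set

def p_error(f):
    """Exact error probability of the code with codebook f."""
    err = 0.0
    for x in seqs:
        cands = [z for z in T if f[z] == f[x]]
        if cands != [x]:                  # x is not decoded back correctly
            err += prob(x)
    return err

pes = [p_error({x: random.getrandbits(round(n * R)) for x in seqs})
       for _ in range(50)]

avg = sum(pes) / len(pes)                 # empirical stand-in for E[P_e(F)]
good = sum(pe <= 2 * avg for pe in pes)   # "good" codebooks, as in Result 5
```

Note that `good >= len(pes) / 2` is forced by the same Markov-type counting used in the proof: the codebooks with $P_e > 2\,\overline{P_e}$ cannot make up half the sample without pushing the average above itself.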

 

Since each $f$ has the same probability of being chosen, we have proved that at least half of all coding schemes (according to our definition) have small probabilities of error. This completes our proof of the source coding theorem. We will consider the more general problem of channel coding next time.

 

Footnotes

1. Recall that $|A_\epsilon^{(n)}| \le 2^{n(H(X)+\epsilon)}$ (see Page 59 of the textbook).

2. The case $A_\epsilon^{(n)} = \mathcal{X}^n$ is trivial. Otherwise, from the proof of Theorem 3.1.2(3), we have $|A_\epsilon^{(n)}| < 2^{nR}$. We can just pick any $y \in \{0,1\}^{nR}$ not in the range of $f$.

3. Following standard usage, we assume the uniform distribution. Probabilistically, this is equivalent to choosing $|\mathcal{X}|^n$ elements from $\{0,1\}^{nR}$ randomly with replacement (one codeword per source sequence, chosen independently and uniformly).

4. To get the exponential bound of Result 1 using Pinsker's inequality (Lemma 11.6.1), I have to use strong typicality. $T = T_\delta^{(n)}$ is defined by

$T_\delta^{(n)} = \{x^n \in \mathcal{X}^n : \|P_{x^n} - P_X\|_1 \le \delta\}$,

where $P_{x^n}$ is the type of $x^n$. (See Page 357 of the textbook.) Without loss of generality, we can assume $\delta$ is small enough that (10.168) gives $|T_\delta^{(n)}| \le 2^{n(H(X)+\epsilon)}$.

5. I use capital letters ($X^n$, $F$) to denote random variables and lowercase letters ($x^n$, $f$) to denote particular values. If I need to emphasize the randomness of the codebook, I use the notation $F$. For example, in Result 1, the randomness is due to $X^n$. In Result 2, the randomness is due to $F$. We also assume that $X^n$ and $F$ are independent.

6. Consider the expression $(n+1)^k 2^{-nc}$, where both $k$ and $c$ are positive constants. Note that

$(n+1)^k 2^{-nc} = 2^{k\log(n+1) - nc} = 2^{-n\left(c - \frac{k\log(n+1)}{n}\right)}$.

Clearly $\frac{\log(n+1)}{n} \to 0$ and $c$ is a fixed positive constant. Hence $c - \frac{k\log(n+1)}{n} \ge \frac{c}{2}$ for $n$ sufficiently large. Hence

$(n+1)^k 2^{-nc} \le 2^{-nc/2} \to 0$ as $n \to \infty$.