Scribe Notes 6_2_1

1. Source-channel separation

We have a source $V$ that generates symbols from an alphabet $\mathcal{V}$.

Assumption: the source has a finite alphabet and satisfies the AEP. Examples: a sequence of i.i.d. random variables, or the sequence of states of a stationary irreducible Markov chain.
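
As a quick illustration (an addition to these notes, not part of the original lecture), here is a minimal Python sketch of the AEP for an i.i.d. Bernoulli source: the normalized log-likelihood $-\frac{1}{n}\log_2 p(V^n)$ concentrates around the entropy $H(V)$ as $n$ grows. The parameter $p = 0.3$ and the sample sizes are arbitrary choices for the demo.

    import math
    import random

    def binary_entropy(p):
        """Binary entropy H(p) in bits."""
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def normalized_log_likelihood(p, n):
        """Draw an i.i.d. Bernoulli(p) sequence and return -(1/n) log2 p(v^n)."""
        seq = [1 if random.random() < p else 0 for _ in range(n)]
        log_prob = sum(math.log2(p if v == 1 else 1 - p) for v in seq)
        return -log_prob / n

    p = 0.3  # arbitrary source parameter for the demo
    print(f"H(V) = {binary_entropy(p):.4f} bits")
    for n in (100, 10000):
        print(f"n = {n}: -(1/n) log2 p(v^n) = {normalized_log_likelihood(p, n):.4f}")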

 

We want to send the sequence of symbols $V^n = V_1, V_2, \ldots, V_n$ over the channel so that the receiver can reconstruct the sequence. To do this, we map the sequence onto a codeword $X^n(V^n)$ and send the codeword over the channel. The receiver looks at its received sequence $Y^n$ and makes an estimate $\hat{V}^n$ of the sequence $V^n$ that was sent. The receiver makes an error if $\hat{V}^n \neq V^n$.

Define the probability of error $P_e^{(n)} = \Pr(\hat{V}^n \neq V^n)$.

 

$P_e^{(n)} = \Pr(\hat{V}^n \neq V^n) = \sum_{y^n} \sum_{v^n} p(v^n)\, p(y^n \mid x^n(v^n))\, I\big(g(y^n) \neq v^n\big)$, where $g(y^n)$ is the decoding function and $I(\cdot)$ is the indicator function.
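
To make the setup concrete, here is a small Monte Carlo sketch (an addition to the notes) of the chain $V^n \to X^n \to Y^n \to \hat{V}^n$ in Python: an i.i.d. binary source, a 3-repetition encoder standing in for $x^n(v^n)$, a binary symmetric channel, and a majority-vote decoder standing in for $g(y^n)$. The repetition code and the crossover probability 0.1 are arbitrary stand-ins for illustration, not the scheme used in the proof below.

    import random

    def bsc(bits, eps):
        """Binary symmetric channel: flip each bit independently with prob eps."""
        return [b ^ (random.random() < eps) for b in bits]

    def encode(v):
        """Toy encoder x^n(v^n): repeat each source bit three times."""
        return [b for b in v for _ in range(3)]

    def decode(y):
        """Toy decoder g(y^n): majority vote over each block of three."""
        return [int(sum(y[i:i + 3]) >= 2) for i in range(0, len(y), 3)]

    def block_error_rate(n=20, eps=0.1, trials=10000):
        """Monte Carlo estimate of P_e^(n) = Pr(V-hat^n != V^n)."""
        errors = 0
        for _ in range(trials):
            v = [random.randint(0, 1) for _ in range(n)]
            errors += decode(bsc(encode(v), eps)) != v
        return errors / trials

    print(f"Estimated P_e^(n): {block_error_rate():.4f}")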

 

Theorem (source-channel coding theorem): If $V_1, V_2, \ldots, V_n$ is a finite-alphabet stochastic process that satisfies the AEP, then there exists a source-channel code with $P_e^{(n)} \to 0$ if $H(\mathcal{V}) < C$.

Conversely, for any stationary stochastic process, if $H(\mathcal{V}) > C$, the probability of error is bounded away from zero, and it is not possible to send the process over the channel with arbitrarily low probability of error.

 

Proof:

Achievability:

Since we have assumed that the stochastic process satisfies the AEP, there exists a typical set $A_\epsilon^{(n)}$ of size at most $2^{n(H(\mathcal{V})+\epsilon)}$ which contains most of the probability. We will encode only the source sequences belonging to the typical set; all other sequences will result in an error. This will contribute at most $\epsilon$ to the probability of error.

 

We index all the sequences belonging to $A_\epsilon^{(n)}$. Since there are at most $2^{n(H+\epsilon)}$ such sequences, $n(H+\epsilon)$ bits suffice to index them. We can transmit the desired index to the receiver with probability of error less than $\epsilon$ if $n(H+\epsilon) < nC$, i.e., if $H(\mathcal{V}) + \epsilon < C$.

To be precise,

$P_e^{(n)} \leq \Pr\left(V^n \notin A_\epsilon^{(n)}\right) + \Pr\left(g(Y^n) \neq V^n \mid V^n \in A_\epsilon^{(n)}\right)$

          $\leq \epsilon + \epsilon = 2\epsilon$ for $n$ sufficiently large

Hence, we can reconstruct the sequence with low probability of error for $n$ sufficiently large if $H(\mathcal{V}) < C$.
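
To see the size bound concretely, here is a short Python sketch (an addition, again assuming a Bernoulli(0.3) source purely for illustration) that enumerates $A_\epsilon^{(n)}$ for a small $n$ and checks both the bound $|A_\epsilon^{(n)}| \leq 2^{n(H+\epsilon)}$ and the probability mass of the set:

    import itertools
    import math

    def binary_entropy(p):
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def typical_set(p, n, eps):
        """Enumerate A_eps^(n) for an i.i.d. Bernoulli(p) source (small n only)."""
        H = binary_entropy(p)
        size, mass = 0, 0.0
        for seq in itertools.product((0, 1), repeat=n):
            k = sum(seq)
            prob = p ** k * (1 - p) ** (n - k)
            if abs(-math.log2(prob) / n - H) <= eps:
                size += 1
                mass += prob
        return size, mass, H

    p, n, eps = 0.3, 16, 0.1
    size, mass, H = typical_set(p, n, eps)
    print(f"|A| = {size}, bound 2^(n(H+eps)) = {2 ** (n * (H + eps)):.0f}")
    print(f"Pr(V^n in A) = {mass:.4f}")  # tends to 1 as n grows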

 

Converse: we will show that $P_e^{(n)} \to 0$ implies $H(\mathcal{V}) \leq C$ for any sequence of source-channel codes.

By Fano's inequality, $H(V^n \mid \hat{V}^n) \leq 1 + P_e^{(n)} n \log |\mathcal{V}|$. Hence

$H(\mathcal{V}) \leq \frac{H(V_1, \ldots, V_n)}{n}$     by definition of the entropy rate of a stationary process

             $= \frac{1}{n} H(V^n \mid \hat{V}^n) + \frac{1}{n} I(V^n; \hat{V}^n)$

             $\leq \frac{1}{n}\left(1 + P_e^{(n)} n \log |\mathcal{V}|\right) + \frac{1}{n} I(V^n; \hat{V}^n)$     by Fano's inequality

             $\leq \frac{1}{n}\left(1 + P_e^{(n)} n \log |\mathcal{V}|\right) + \frac{1}{n} I(X^n; Y^n)$     by the data processing inequality

             $\leq \frac{1}{n} + P_e^{(n)} \log |\mathcal{V}| + C$     by the memorylessness of the channel

Hence $H(\mathcal{V}) \leq \frac{1}{n} + P_e^{(n)} \log |\mathcal{V}| + C$. Letting $n$ tend to infinity, the term $\frac{1}{n} + P_e^{(n)} \log |\mathcal{V}|$ tends to 0, and we conclude $H(\mathcal{V}) \leq C$.
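
As a numerical companion (an addition to the notes, with hypothetical numbers), the final inequality of the chain can be evaluated directly in Python to see how the entropy-rate bound collapses to $C$ as $P_e^{(n)} \to 0$ and $n \to \infty$:

    import math

    def converse_bound(pe, n, alphabet_size, capacity):
        """Converse chain bound: H(V) <= 1/n + pe * log2|V| + C."""
        return 1 / n + pe * math.log2(alphabet_size) + capacity

    # Hypothetical values for illustration: binary alphabet, C = 0.5 bits/use.
    for pe in (0.1, 0.01, 0.001):
        bound = converse_bound(pe, n=10000, alphabet_size=2, capacity=0.5)
        print(f"P_e = {pe}: H(V) <= {bound:.4f} bits/symbol")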

 

Note: The data compression theorem is a consequence of the AEP, which shows that there exists a small subset (of size $2^{nH}$) of all possible source sequences that contains most of the probability, and that we can therefore represent the source with a small probability of error using $H$ bits per symbol. The transmission theorem is based on the joint AEP; it uses the fact that for long block lengths, the output sequence of the channel is very likely to be jointly typical with the input codeword, while any other codeword is jointly typical with probability about $2^{-nI}$. Hence we can use about $2^{nI}$ codewords and still have a negligible probability of error.
