Data Processing Inequality

A very intuitive yet powerful inequality in information theory is the data processing inequality.

Lemma: If random variables $latex X$, $latex Y$, and $latex Z$ form a Markov chain
$latex X \rightarrow Y \rightarrow Z$, then $latex I(X;Y) \ge I(X;Z)$.

The great thing about the inequality is that, unlike some results in information theory, it holds for both discrete and continuous random variables. (In fact, it even holds for mixed collections, with some variables continuous and some discrete.) Let’s prove it assuming the variables are continuous.

Proof: $latex I(X;Z)=h(X)-h(X|Z)\overset{(a)}{\le} h(X)-h(X|Y,Z)$
$latex \overset{(b)}{=}h(X)-h(X|Y)=I(X;Y)$, where (a) holds because $latex h(X|Z)-h(X|Y,Z)=I(X;Y|Z)\ge 0$ (conditioning reduces entropy), and (b) follows from $latex X\rightarrow Y\rightarrow Z$, which gives $latex I(X;Z|Y)=0$ and hence $latex h(X|Y,Z)=h(X|Y)$. $latex \Box$
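
As a quick numerical sanity check, here is a minimal Python sketch (the particular alphabet sizes and random channels are my own choices, not anything canonical) that draws a random discrete chain $latex X \rightarrow Y \rightarrow Z$ and computes both mutual informations exactly from the joint pmfs; $latex I(X;Z)$ never exceeds $latex I(X;Y)$.

```python
import numpy as np

def mutual_information(p_xy):
    """Exact mutual information (in nats) from a joint pmf given as a 2-D array."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

rng = np.random.default_rng(0)

# A random Markov chain X -> Y -> Z: a prior p(x) and two stochastic channels.
p_x = rng.dirichlet(np.ones(4))            # p(x), 4 states
W_yx = rng.dirichlet(np.ones(5), size=4)   # p(y|x), each row sums to 1
W_zy = rng.dirichlet(np.ones(3), size=5)   # p(z|y), each row sums to 1

p_xy = p_x[:, None] * W_yx                 # p(x, y)
p_xz = p_xy @ W_zy                         # p(x, z) = sum_y p(x, y) p(z|y)

print(mutual_information(p_xy))            # I(X;Y)
print(mutual_information(p_xz))            # I(X;Z) <= I(X;Y), per the lemma
```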

Note that we can use this simple inequality to show a rather nontrivial result. Consider a continuous random variable $latex X$ and a continuous invertible function $latex f(\cdot)$, and let $latex Y=f(X)$. Note that in general $latex h(Y)\neq h(X)$ (for example, $latex h(2X)=h(X)+\log 2$), even though we would have $latex H(Y)=H(X)$ if $latex X$ were discrete. However, for any other continuous random variable $latex Z$, $latex h(Z|X)=h(Z|Y)$ always holds. This is easily seen by noting that both $latex Y\rightarrow X\rightarrow Z$ and $latex X\rightarrow Y\rightarrow Z$ hold, since $latex f(\cdot)$ is invertible. Thus we have both $latex I(X;Z) \ge I(Y;Z)$ and $latex I(Y;Z) \ge I(X;Z)$, and therefore $latex I(Y;Z)=I(X;Z)\Rightarrow h(Z)-h(Z|Y)=h(Z)-h(Z|X)$
$latex \Rightarrow h(Z|Y)=h(Z|X)$. One could also prove this from first principles, but that would be quite a bit harder.
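
The invariance $latex I(Y;Z)=I(X;Z)$ can also be checked numerically. Below is a rough sketch assuming scikit-learn is available: it draws a jointly Gaussian pair $latex (X,Z)$ (so the true mutual information is $latex -\tfrac{1}{2}\log(1-\rho^2)$), applies the invertible map $latex f(x)=x^3$, and compares k-NN mutual information estimates; the two estimates agree only up to estimation error, since `mutual_info_regression` is a nonparametric estimator.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 5000

# Jointly Gaussian (X, Z) with correlation rho; true I(X;Z) = -0.5*log(1 - rho^2) nats.
rho = 0.8
x = rng.standard_normal(n)
z = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

y = x**3  # a continuous, strictly monotone (hence invertible) function of X

# k-NN (Kraskov-type) mutual information estimates, in nats.
print(mutual_info_regression(x.reshape(-1, 1), z)[0])  # estimate of I(X;Z)
print(mutual_info_regression(y.reshape(-1, 1), z)[0])  # estimate of I(Y;Z), should match
print(-0.5 * np.log(1 - rho**2))                       # analytic value of I(X;Z)
```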
