Rate Distortion Theory

Codebook points for a Gaussian univariate source

To derive the mean of a Gaussian random variable $ X $ conditioned on $ X > 0 $ (i.e., the mean of the truncated Gaussian), follow these steps:

1. Start with the Gaussian PDF:

The probability density function (PDF) of a Gaussian random variable $ X $ with mean $ \mu $ and standard deviation $ \sigma $ is:

\[ f_X(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) \]

This is valid for $ x \in (-\infty, \infty) $.

2. Truncate the Distribution at $ X > 0 $:

When conditioning on $ X > 0 $, we truncate the Gaussian distribution, and the new PDF becomes:

\[ f_{X|X>0}(x) = \frac{f_X(x)}{P(X > 0)} = \frac{f_X(x)}{1 - \Phi\left(\frac{0 - \mu}{\sigma}\right)} \]

where $ \Phi(\cdot) $ is the cumulative distribution function (CDF) of the standard normal distribution.
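As a quick sanity check, the sketch below (with arbitrary illustrative values $ \mu = 1 $, $ \sigma = 2 $) builds this truncated PDF with `scipy.stats.norm` and confirms it integrates to 1 over $ (0, \infty) $:

```python
# Minimal sketch: the truncated density f_X(x) / P(X > 0) should integrate to 1.
# mu and sigma are arbitrary illustrative values, not from the derivation.
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu, sigma = 1.0, 2.0
norm = stats.norm(loc=mu, scale=sigma)

# P(X > 0) = 1 - Phi((0 - mu) / sigma)
p_positive = 1.0 - stats.norm.cdf((0.0 - mu) / sigma)

def f_truncated(x):
    """Truncated PDF on x > 0: the Gaussian PDF renormalized by P(X > 0)."""
    return norm.pdf(x) / p_positive

total, _ = quad(f_truncated, 0.0, np.inf)
print(f"integral of truncated PDF over (0, inf): {total:.6f}")  # ~1.0
```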

3. Expectation of the Truncated Distribution:

The mean of the truncated Gaussian distribution is:

\[ E[X \mid X > 0] = \frac{\int_0^\infty x f_X(x) dx}{P(X > 0)} \]

4. Substitute the Gaussian PDF:

\[ E[X \mid X > 0] = \frac{\int_0^\infty x \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) dx}{1 - \Phi\left(\frac{0 - \mu}{\sigma}\right)} \]

5. Standardize the Variable:

Introduce a standard normal variable $ Z = \frac{X - \mu}{\sigma} $, so that:

\[ E[X \mid X > 0] = \mu + \frac{\sigma \int_{-\frac{\mu}{\sigma}}^\infty z \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{z^2}{2}\right) dz}{1 - \Phi\left(\frac{-\mu}{\sigma}\right)} \]

6. Solve the Integral:

The integral $ \int_a^\infty z \, \phi(z) \, dz $ (where $ \phi(z) $ is the standard normal PDF) is known to be:

\[ \int_a^\infty z \, \phi(z) \, dz = \phi(a) \]

Substituting $ a = -\frac{\mu}{\sigma} $, we get:

\[ E[X \mid X > 0] = \mu + \frac{\sigma \phi\left(\frac{-\mu}{\sigma}\right)}{1 - \Phi\left(\frac{-\mu}{\sigma}\right)} \]

Conclusion:

The mean of the Gaussian random variable $ X $ conditioned on $ X > 0 $ is:

\[ E[X \mid X > 0] = \mu + \frac{\sigma \phi\left(\frac{-\mu}{\sigma}\right)}{1 - \Phi\left(\frac{-\mu}{\sigma}\right)} \]
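The closed form can be checked numerically. The sketch below (again with illustrative values $ \mu = 1 $, $ \sigma = 2 $) compares it against a Monte Carlo estimate of the conditional mean:

```python
# Sketch: compare E[X | X > 0] = mu + sigma * phi(-mu/sigma) / (1 - Phi(-mu/sigma))
# with an empirical estimate from samples. mu and sigma are illustrative values.
import numpy as np
from scipy import stats

mu, sigma = 1.0, 2.0
a = -mu / sigma

closed_form = mu + sigma * stats.norm.pdf(a) / (1.0 - stats.norm.cdf(a))

rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=2_000_000)
mc_estimate = samples[samples > 0].mean()  # empirical E[X | X > 0]

print(f"closed form:  {closed_form:.4f}")
print(f"Monte Carlo:  {mc_estimate:.4f}")
```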

If the mean of $ X $ is 0 (i.e., $ \mu = 0 $), the formula for the conditional mean simplifies significantly. Using the previous result:

\[ E[X \mid X > 0] = 0 + \frac{\sigma \phi(0)}{1 - \Phi(0)} \]

Since $ \phi(0) = \frac{1}{\sqrt{2\pi}} $ and $ \Phi(0) = 0.5 $, we get:

\[ E[X \mid X > 0] = \frac{\sigma}{\sqrt{2\pi}} \times 2 = \frac{\sigma \sqrt{2}}{\sqrt{\pi}} \]

Thus, the result is:

\[ E[X \mid X > 0] = \sqrt{\frac{2}{\pi}} \sigma \]
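A short sampling check of this special case (with an arbitrary $ \sigma = 1.5 $):

```python
# Quick numeric check of E[X | X > 0] = sqrt(2/pi) * sigma for a zero-mean Gaussian.
import numpy as np

sigma = 1.5
rng = np.random.default_rng(1)
samples = rng.normal(0.0, sigma, size=2_000_000)

print(f"theory:   {np.sqrt(2.0 / np.pi) * sigma:.4f}")
print(f"sampled:  {samples[samples > 0].mean():.4f}")
```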

Optimal 1-bit quantization of a Gaussian source

Since the conditional mean minimizes the MSE of the quantization error within each region,

\[\mathbb{E}\left[(X - \hat{X})^2 \,\middle|\, X > 0\right] = \mathbb{E}\left[\left(X - \sqrt{\tfrac{2}{\pi}}\, \sigma\right)^2 \,\middle|\, X > 0\right]\]

we can define two codebook points on the $x$-axis of the zero-mean Gaussian source, mapping the positive region of $ X $ to $ +\sqrt{2/\pi}\, \sigma $ and the negative region to $ -\sqrt{2/\pi}\, \sigma $.
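The sketch below implements this one-bit quantizer, $ \hat{X} = \operatorname{sign}(X)\, \sqrt{2/\pi}\, \sigma $, and compares its empirical MSE with the analytical distortion $ \sigma^2 (1 - 2/\pi) $, which follows from expanding $ \mathbb{E}[(X - \hat{X})^2] = \sigma^2 - 2\sqrt{2/\pi}\, \sigma\, \mathbb{E}[|X|] + (2/\pi)\sigma^2 $ and using $ \mathbb{E}[|X|] = \sqrt{2/\pi}\, \sigma $:

```python
# Minimal sketch of the optimal 1-bit quantizer for a zero-mean Gaussian source:
# each sample maps to sign(x) * sqrt(2/pi) * sigma (the conditional mean of its half).
import numpy as np

sigma = 1.0
codepoint = np.sqrt(2.0 / np.pi) * sigma

def quantize_1bit(x):
    """Map positive samples to +codepoint and negative samples to -codepoint."""
    return np.where(x > 0, codepoint, -codepoint)

rng = np.random.default_rng(2)
x = rng.normal(0.0, sigma, size=2_000_000)
x_hat = quantize_1bit(x)

mse_empirical = np.mean((x - x_hat) ** 2)
mse_theory = sigma**2 * (1.0 - 2.0 / np.pi)
print(f"empirical MSE: {mse_empirical:.4f}")
print(f"analytical:    {mse_theory:.4f}")  # ~0.3634 for sigma = 1
```

For $ \sigma = 1 $ both values come out near $ 0.3634 $: the one-bit quantizer removes a fraction $ 2/\pi \approx 0.64 $ of the source variance.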