# Recovery of Compressively Sensed Normally-distributed Sparse Signals in Noise

Previously, I experimentally measured the noiseless phase transitions of several families of recovery algorithms (greedy, convex optimization, thresholding, and cyclic) for sparse vectors of varying lengths, sparsities, and distributions. For vectors with Normally and Laplacian distributed non-zero elements, I found that greedy approaches like orthogonal matching pursuit (OMP) and cyclic matching pursuit (CMP) substantially outperform convex optimization approaches like $$\ell_1$$ minimization (BP). For sparse vectors distributed bimodally, the performance of these greedy approaches decreases significantly. The performance of $$\ell_1$$ minimization, by contrast, is extremely robust to the distribution underlying the non-zero elements of a sparse vector. Building on this work, I am now interested in how the performance of these algorithms is affected by additive noise in the measurements.

I have rerun my simulations as before, but now with additive noise in the measurements. To be clear: I sense some $$s$$-sparse vector $$\vx \in \mathcal{R}^{400}$$ with a fat matrix $$\MA \in \mathcal{R}^{m \times 400}$$ sampled from the uniform spherical ensemble (i.e., each column has unit norm). Thus, for each sparse vector I have $$m$$ measurements given by $$\MA\vx$$.
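A minimal numpy sketch of this measurement setup, assuming the uniform spherical ensemble is realized by normalizing i.i.d. Gaussian columns (the values of $$m$$ and $$s$$ here are illustrative; the post fixes only the ambient dimension at 400):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, s = 400, 200, 20  # ambient dimension, measurements, sparsity (m, s illustrative)

# Sensing matrix from the uniform spherical ensemble:
# i.i.d. Gaussian columns, each normalized to unit norm.
A = rng.standard_normal((m, N))
A /= np.linalg.norm(A, axis=0)

# s-sparse vector with Normally distributed non-zero elements.
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)

y_clean = A @ x  # m noiseless measurements
```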
Now, what happens when I attempt to recover $$\vx$$ from the measurements corrupted by additive noise: $$\vy = \MA\vx + \vn$$?
In preliminary experiments, I saw embarrassingly poor results for all algorithms when the signal-to-noise ratio (SNR), $$20 \log_{10} \left ( || \MA\vx ||_2/||\vn||_2 \right )$$, was 30 dB.
So in the simulations below, I am only looking at the case of 60 dB SNR.
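With this SNR definition, the noise can be scaled to hit any target level. A small sketch (the helper name `add_noise` is mine, not from the post):

```python
import numpy as np

def add_noise(y_clean, snr_db, rng=None):
    """Add white Gaussian noise scaled so that
    20*log10(||y_clean||_2 / ||n||_2) equals snr_db."""
    rng = rng or np.random.default_rng()
    n = rng.standard_normal(y_clean.shape)
    n *= np.linalg.norm(y_clean) * 10 ** (-snr_db / 20) / np.linalg.norm(n)
    return y_clean + n
```

For the simulations below, `add_noise(A @ x, 60.0)` would produce the 60 dB SNR measurements.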

This time, the algorithms I test are: OMP, CMP, and probabilistic OMP (PrOMP); the recommended versions of iterative hard and soft thresholding (IHT, IST), and two-stage thresholding (TST), which is essentially subspace pursuit; smoothed $$\ell_0$$ (SL0) with $$\sigma_J = 4 ||\MA\vx||_2 10^{-\textrm{SNR}/20}$$ and a decrease factor of 0.95 (recommended by Igor); basis pursuit denoising (BPDN), i.e., $$\min || \vx ||_1 \; \textrm{subject to} \; || \vy - \MA\vx ||_2 \le ||\vy||_2 10^{-\textrm{SNR}/20}$$;
and iterative re-weighted $$\ell_1$$ minimization (IRl1) with up to four reweightings, and the initialization $$\epsilon = 0.1$$ and $$\MW = \MI$$. Unlike BPDN, in IRl1 I solve the convex optimization problem $$\min || \MW \vx ||_1 \; \textrm{subject to} \; || \vy - \MA\vx ||_2 \le || \MA\vx ||_2 10^{-\textrm{SNR}/20}$$, which might not be correct. (The right-hand side describes the standard deviation of $$\vn$$.)
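Of the greedy methods above, OMP is the simplest to state. A generic textbook sketch in numpy (not the post's actual implementation, which presumably also uses a residual-based stopping rule):

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily pick the atom most correlated
    with the residual, then re-solve least squares over the grown support."""
    m, N = A.shape
    support = []
    r = y.copy()
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r)))      # best-matching atom
        if j not in support:
            support.append(j)
        # Orthogonal projection of y onto the span of the selected atoms.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat
```

In the noisy setting one would stop once $$||\vr||_2$$ falls below the noise level rather than running for exactly $$s$$ steps.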

In the animated plot below, we see how the phase transitions change for Normally distributed sparse vectors. The only major differences I see are at low problem indeterminacies, where the phase transitions are much lower at 60 dB SNR. Everywhere else things stay the same.
The dashed line is that of the “ensemble” (no voting, the best possible performance).

In the animated plot below, we see the same thing but for vectors distributed Bernoulli (constant-amplitude random signs). Here, the only major difference I see is that at higher problem indeterminacies the phase transitions are a bit lower at 60 dB SNR. These effects could be due to the low number of trials (100 at each sparsity and indeterminacy pair). Here we see that IRl1 performs very well. (I am currently testing it in the non-noisy case.)

Tomorrow I will look at this IRl1 issue in more detail, and of course turn down the SNR to see where things deteriorate.

## 3 thoughts on “Recovery of Compressively Sensed Normally-distributed Sparse Signals in Noise”

1. Hi Igor,
That is because the simulation data I used did not include SL0 among the algorithms. I am running it all now and should have some results by tomorrow.


2. IgorCarron says:

Bob,
In the two picture movies you have, I cannot seem to see SL0 on the second image. Does it fail?
Igor.


3. IgorCarron says:

Thanks!
Igor.
