I have run some experiments to compare the performance of CMP to MP and OMP in recovering compressively sensed sparse signals.

Given a real length-\(N\) signal \(\vx\) with sparsity \(K\), I sense it using \(\mathbf{\Phi} \in \mathbb{R}^{M\times N}\), which in my case has i.i.d. zero-mean Gaussian entries, with each column normalized to unit norm. The \(K\)-sparse signal has \(K\) of its elements (selected uniformly at random) drawn from a standard normal distribution. I sense this signal and obtain the observation

\(\vy = \mathbf{\Phi} \vx\).
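The setup above can be sketched in a few lines of NumPy. This is my reading of the description, not the author's actual code (which is promised later in the post); the function name and interface are mine.

```python
import numpy as np

def make_problem(N=250, M=50, K=5, rng=None):
    """Generate a K-sparse signal and a Gaussian sensing matrix
    with unit-norm columns, then sense the signal: y = Phi @ x."""
    rng = np.random.default_rng() if rng is None else rng
    Phi = rng.standard_normal((M, N))
    Phi /= np.linalg.norm(Phi, axis=0)              # unit-norm columns
    x = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)  # support chosen uniformly
    x[support] = rng.standard_normal(K)             # standard normal amplitudes
    y = Phi @ x
    return Phi, x, y
```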

Using a greedy algorithm (MP, OMP, or CMP), I attempt to recover \(\vx\) by selecting columns of \(\mathbf{\Phi}\) to “explain” \(\vy\). I stop this process when either \(M\) atoms have been selected, or \(||\vy - \mathbf{\Phi} \hat \vx||_2^2/||\vy||_2^2 < 10^{-10}\), where \(\hat\vx\) is the reconstruction. In my experiments, \(N = 250, M = 50\), and the sparsity \(K \in [1, 30]\). I test each algorithm for 1000 different realizations of \(\vx\) and \(\mathbf{\Phi}\).
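For concreteness, here is a minimal OMP sketch with the two stopping rules just described (at most \(M\) atoms, or relative residual energy below \(10^{-10}\)). This is a standard textbook implementation, not necessarily the one used in these experiments.

```python
import numpy as np

def omp(y, Phi, max_atoms=None, tol=1e-10):
    """Orthogonal Matching Pursuit. Stops after max_atoms selections
    (default: M) or when ||y - Phi x_hat||^2 / ||y||^2 < tol."""
    M, N = Phi.shape
    max_atoms = M if max_atoms is None else max_atoms
    residual = y.copy()
    support = []
    x_hat = np.zeros(N)
    y_energy = y @ y
    while len(support) < max_atoms and (residual @ residual) / y_energy >= tol:
        # Select the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(Phi.T @ residual)))
        if k not in support:
            support.append(k)
        # Re-fit all selected coefficients by least squares (the "O" in OMP).
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coeffs
        residual = y - Phi @ x_hat
    return x_hat
```

Plain MP differs only in skipping the least-squares re-fit: it subtracts the single best-matching atom's projection from the residual and keeps all previous coefficients fixed.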

For CMP, I cycle until either the difference between successive SRRs falls below 0.001, or I have cycled through all the atoms 10 times.
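The cycling step could look like the sketch below: revisit each selected atom in turn, return its contribution to the residual, and re-pick the best atom for that slot. I am assuming SRR here means the ratio of signal energy to residual energy; the exact definition used (e.g., in dB) may differ, and the function name and interface are mine, not the author's.

```python
import numpy as np

def cmp_refine(y, Phi, support, coeffs, max_cycles=10, srr_tol=1e-3):
    """Cyclic refinement of an MP decomposition. Stops when the change
    in SRR (here taken as ||y||^2 / ||residual||^2) between successive
    cycles is below srr_tol, or after max_cycles full cycles."""
    support, coeffs = list(support), list(coeffs)
    residual = y - Phi[:, support] @ np.asarray(coeffs)
    y_energy = y @ y
    prev_srr = y_energy / max(residual @ residual, 1e-300)
    for _ in range(max_cycles):
        for j in range(len(support)):
            # Put atom j's contribution back into the residual ...
            residual = residual + coeffs[j] * Phi[:, support[j]]
            # ... and replace it with the best single-atom fit.
            k = int(np.argmax(np.abs(Phi.T @ residual)))
            c = Phi[:, k] @ residual          # columns are unit norm
            support[j], coeffs[j] = k, c
            residual = residual - c * Phi[:, k]
        srr = y_energy / max(residual @ residual, 1e-300)
        if abs(srr - prev_srr) < srr_tol:
            break
        prev_srr = srr
    return support, coeffs
```

Each refinement step can only shrink (or preserve) the residual energy, since re-projecting onto the best atom is at least as good as restoring the old atom with its old coefficient.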

Below we see the probability of “exact recovery” as a function of sparsity. I define “exact recovery” as \(||\vx - \hat\vx||_2^2 \le 10^{-10}\). As expected, MP is terrible. OMP does well, and CMP based on MP does stunningly well. (Stunning perhaps because I am not impartial.)

Though CMP slightly under-performs OMP in exact recovery, we see below that it slightly out-performs OMP in mean \(\ell_2\) reconstruction error. Why MP beats both of them past a sparsity of 0.45 is discomfiting, but maybe not entirely surprising, since its weights are exponentially decreasing.

By the way, once I am finished running some more tests I will post my code. :)
