Cyclic Matching Pursuit for Compressed Sensing Recovery, pt. 2

The other day, I posted some initial results of cyclic matching pursuit (CMP) for compressed sensing recovery. This morning, I came to an office full of new test results! If you click on the image below, you will see some of these!


[Figure: CMPrecov.jpg — exact recovery of CMP vs. OMP across problem indeterminacies and sparse vector distributions]
Here you see how signal recovery compares between CMP and orthogonal matching pursuit (OMP) for a variety of problem indeterminacies (undersampling ratios \(m/N\), where \(m\) is the number of measurements and \(N\) is the sparse vector length) and sparse vector distributions. The particulars of CMP are as follows: it runs the refinement procedure a maximum of five times,
or until \(||\vr_k'||_2^2/||\vr_k||_2^2 > 0.999\), where \(\vr_k'\) is the residual after refinement. I make CMP (and OMP) stop once the model order exceeds twice the signal sparsity \(k\).
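To make the procedure concrete, here is a rough NumPy sketch of this selection-and-refinement loop. The function name and all implementation details (least-squares refitting on the support, how an atom is swapped during a refinement pass) are my paraphrase of the description above, not the exact implementation:

```python
import numpy as np

def _fit(A, y, support):
    """Least-squares coefficients on the support, and the resulting residual."""
    x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x_s, y - A[:, support] @ x_s

def cmp_recover(A, y, k, n_refine=5, ratio_stop=0.999):
    """OMP-style atom selection plus cyclic refinement (a sketch of CMP)."""
    N = A.shape[1]
    support, r, x_s = [], y.copy(), np.zeros(0)
    for _ in range(2 * k):                    # stop at model order 2k
        corr = np.abs(A.T @ r)
        if support:
            corr[support] = -1.0              # do not reselect current atoms
        support.append(int(np.argmax(corr)))
        x_s, r = _fit(A, y, support)
        for _ in range(n_refine):             # at most five refinement passes
            r_prev_sq = float(r @ r)
            for j in range(len(support)):
                others = support[:j] + support[j + 1:]
                # residual with atom j removed
                _, r_j = _fit(A, y, others) if others else (None, y)
                corr = np.abs(A.T @ r_j)
                if others:
                    corr[others] = -1.0       # avoid duplicate atoms
                support[j] = int(np.argmax(corr))  # replace atom j
                x_s, r = _fit(A, y, support)
            # stop refining once the residual stops shrinking appreciably
            if float(r @ r) >= ratio_stop * max(r_prev_sq, 1e-30):
                break
        if np.linalg.norm(r) < 1e-10:         # exact fit; nothing left to do
            break
    x_hat = np.zeros(N)
    x_hat[support] = x_s
    return x_hat
```

The key difference from plain OMP is the inner loop: each atom already in the model is revisited in turn and possibly replaced, given the contribution of all the others.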
The definition of exact recovery is that used by Maleki and Donoho: \(||\vx - \hat\vx||_2/||\vx||_2 < 0.01\).
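That criterion is nothing more than a relative \(\ell_2\) error test, e.g.:

```python
import numpy as np

def exact_recovery(x, x_hat, tol=0.01):
    """Maleki-Donoho success criterion: relative l2 error below 1%."""
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x) < tol
```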
For these experiments, I set \(N=400\),
sample the sensing matrix from the uniform spherical ensemble,
and average the results over 100 independent trials
for each sparsity and problem indeterminacy.
I use the following distributions for the nonzero entries:

  1. standard Normal;
  2. Laplacian with \(\lambda = 10\);
  3. uniform over \([-1,1]\);
  4. Bernoulli: \(\{-1,1\}\) equiprobable;
  5. bimodal Gaussian: two unit-variance modes with means \(\pm 3\);
  6. bimodal uniform: uniform over \([-4,-2] \cup [2,4]\);
  7. bimodal Rayleigh: \(|x|\) is Rayleigh distributed with mean 3.7.
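To be explicit about how such test vectors can be generated, here is a sketch of a sampler for these distributions. Two parameterizations are my assumptions: I read \(\lambda = 10\) as the Laplacian rate (so the scale is \(1/\lambda\)), and I derive the Rayleigh scale from the stated mean via \( \mathrm{mean} = \sigma\sqrt{\pi/2} \):

```python
import numpy as np

rng = np.random.default_rng(0)

def nonzero_amplitudes(name, k):
    """Draw k nonzero amplitudes from one of the test distributions."""
    signs = rng.choice([-1.0, 1.0], size=k)        # for the bimodal cases
    if name == "normal":
        return rng.standard_normal(k)
    if name == "laplacian":
        return rng.laplace(scale=1.0 / 10, size=k)  # lambda = 10 as a rate (assumption)
    if name == "uniform":
        return rng.uniform(-1.0, 1.0, size=k)
    if name == "bernoulli":
        return signs                                # {-1, +1} equiprobable
    if name == "bimodal_gaussian":
        return 3.0 * signs + rng.standard_normal(k)  # unit-variance modes at +-3
    if name == "bimodal_uniform":
        return signs * rng.uniform(2.0, 4.0, size=k)
    if name == "bimodal_rayleigh":
        sigma = 3.7 / np.sqrt(np.pi / 2)            # Rayleigh mean = sigma*sqrt(pi/2)
        return signs * rng.rayleigh(scale=sigma, size=k)
    raise ValueError(f"unknown distribution: {name}")

def sparse_vector(N, k, name):
    """A k-sparse length-N vector with amplitudes from the named distribution."""
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x[support] = nonzero_amplitudes(name, k)
    return x
```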

It is fantastically clear that, over all sparse vector distributions, CMP performs as well as, and often better than, OMP at this task — and it does so without projections or thresholding. These two greedy methods perform best for Normal and Laplacian distributed sparse vectors, and worst for Bernoulli (constant amplitude random signs) vectors, which is not really news.
It is conspicuous, however, that CMP and OMP follow similar paths. This tells me that they are failing for the same signals.

Next up: comparisons with iterative thresholding and convex relaxation for these distributions.
