Continuing my experiments from the previous few days, I have now coded and experimented with Probabilistic OMP (PrOMP), and with the convex relaxation of the strict sparsity measure to an \(\ell_1\)-norm of the solution, a.k.a. the principle known as Basis Pursuit (BP for short).

The problem set-up is the same as before. I am not including any noise. PrOMP turned out to be a real hassle to tune, but I finally settled on \(m=2\), \(\epsilon = 0.001\), and stopped the search when either \(||\vr||^2 \le 10^{-22}\) or 10,000 iterations have elapsed. (An explanation of these values is here.) For BP I used CVX, with the following code:

```matlab
cvx_begin
    variable xhatBP(dimensions);
    minimize( norm(xhatBP,1) );
    subject to
        y == Phi*xhatBP;
cvx_end
```

where Phi is the same kind of sensing matrix used before, and y is the vector of measurements.
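For anyone without CVX, the same BP program can be posed as a standard linear program by splitting the solution into its positive and negative parts. Here is a minimal sketch in Python with NumPy/SciPy (an illustrative translation, not my actual MATLAB code; the problem sizes and the Gaussian sensing matrix here are made-up):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy problem: a fat Gaussian sensing matrix and a 2-sparse vector.
M, N, K = 20, 50, 2
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x_true

# BP: min ||x||_1 s.t. Phi x = y, as an LP over x = u - v with u, v >= 0:
#   min 1'u + 1'v  s.t.  [Phi, -Phi][u; v] = y,  u >= 0, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

# The LP solution is feasible (zero residual, up to solver tolerance)
# and has an l1 norm no larger than that of the true sparse vector.
print(np.linalg.norm(Phi @ x_hat - y) < 1e-6)
print(np.abs(x_hat).sum() <= np.abs(x_true).sum() + 1e-6)
```

This is the same equivalence I can use to check the CVX results against an ordinary LP solver.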

This time, under time constraints, I only looked at sparsity values up to 0.4, and I only performed 500 runs of each algorithm (each run using the same sensing matrix and sparse vector for every algorithm).

Below we see a comparison of the probability of “exact recovery” as a function of sparsity for these methods alongside MP and OMP.

First, it is immediately clear that MP is now making a fool of itself. Why does it keep showing up to this party? BP, however, is surprisingly poor! (Unless I have made some error in the code.) I don’t know why this is, but there was little change when I forced the sensing matrix to have a coherence of less than 0.5. OMP does well, and PrOMP does a little better.
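For completeness, the coherence I checked above is just the mutual coherence: the largest absolute inner product between distinct \(\ell_2\)-normalized columns of the sensing matrix. A quick sketch (in Python for illustration; the matrices here are made-up):

```python
import numpy as np

def mutual_coherence(Phi):
    """Largest absolute inner product between distinct l2-normalized columns."""
    G = Phi / np.linalg.norm(Phi, axis=0)  # normalize each column
    gram = np.abs(G.T @ G)                 # absolute Gram matrix
    np.fill_diagonal(gram, 0.0)            # ignore each column with itself
    return gram.max()

# An orthonormal basis has coherence 0; repeating a column drives it to 1.
I = np.eye(4)
print(mutual_coherence(I))                          # -> 0.0
print(mutual_coherence(np.hstack([I, I[:, :1]])))   # -> 1.0
```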

However, in coding I realized something bad about PrOMP.

PrOMP selects atoms at random based on a distribution that places most probability mass on the atoms most coherent with the residual, and less on all the others (and zero on atoms already selected, or with zero-length projections). PrOMP terminates when the residual norm is exactly zero, at which time it serves up the solution. But even though the residual norm is zero, we are not guaranteed the solution is correct! There are infinitely many solutions \(\vx\) such that \(||\vy - \mathbf{\Phi}\vx|| = 0\), by virtue of \(\mathbf{\Phi}\) being fat and full rank. It has a null space, and we can add any null-space vector to the sparsest solution and still have a zero-norm residual. Here is where the spark of \(\mathbf{\Phi}\) enters: a zero-residual solution is only guaranteed to be the sparsest one when its sparsity is less than half the spark. (In my experiments, the null spaces of the dictionaries have dimension at least 200, so the spark could be low.) Thus, there is no guarantee PrOMP serves up the correct solution. Yet still! PrOMP does a fine job of outperforming OMP and BP.
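To make the null-space point concrete, here is a tiny made-up example: a fat, full-rank \(\mathbf{\Phi}\), a 1-sparse solution, and a denser vector that achieves exactly the same zero residual:

```python
import numpy as np

# A fat (2x3), full-rank matrix: its null space has dimension 1.
Phi = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])

x_sparse = np.array([2.0, 0.0, 0.0])  # the 1-sparse solution
y = Phi @ x_sparse

# Adding any null-space vector leaves the residual unchanged.
n = np.array([1.0, 1.0, -1.0])        # Phi @ n == 0
x_dense = x_sparse + n                # 3-sparse, same measurements

print(np.allclose(Phi @ n, 0))                   # -> True
print(np.linalg.norm(y - Phi @ x_dense) == 0.0)  # -> True
```

So a zero-residual stopping rule alone cannot certify that PrOMP has found the sparsest solution.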

My code is attached. Next I shall experiment with re-weighted \(\ell_1\) and \(\ell_2\) methods, iterative hard thresholding, CoSaMP, StOMP, regularized OMP, A*OMP, two-stage thresholding, etc.


“BP, however, is surprisingly poor!”

That’s strange. Michael Elad, in his book “Sparse and Redundant Representations”, does a comparison between BP and OMP in which BP outperforms OMP. Quoting page 52:

“It is clear that the relaxation methods are performing much better”

The code he used to generate these results is available here:

http://www.cs.technion.ac.il/~elad/Various/Matlab-Package-Book.rar

Look at the file Chapter_03_Demo_Relaxation.m


I was under the impression that BP would make OMP look like OMP makes MP look. So I am wondering whether my constraint is set too tight for numerical precision, whether my sensing matrix is not properly built, whether I need to specify a different solver for CVX, or something else. (I also think reweighted \(\ell_1\) minimization will help.) I see that Elad is not using CVX in his code, so I will rewrite my code as a linear program and test that just to make sure. Thank you for your comment, Alejandro!
