Following the advice of a commenter yesterday, I replaced my CVX code with code by Michael Elad, which is just the linear program presented in S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Sci. Comput., vol. 20, pp. 33-61, Aug. 1998, i.e.,

xhatBP = linprog( ones(2*N,1), [], [], [Phi, -Phi], y, zeros(2*N,1), 100*ones(2*N,1) );

xhatBP = xhatBP(1:N) - xhatBP(N+1:end);

where N is the length of x before sensing by Phi, which creates the measurements y = Phi*x.

The last two inputs constrain the solution to have nonnegative elements between 0 and 100. With this change I ran OMP, PrOMP, and BP for 1000 trials, and found the same behavior as yesterday! (Code here.)
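For readers without MATLAB, the same linear program can be sketched in Python with `scipy.optimize.linprog`. This is not my experiment code; the dimensions, seed, and signal model below are illustrative choices of mine, and only the LP formulation (split x into nonnegative parts u and v, with the same 0-to-100 box bounds as above) mirrors the call in the post:

```python
import numpy as np
from scipy.optimize import linprog

# A small illustrative instance (N, M, K and the seed are my choices, not the post's).
rng = np.random.default_rng(0)
N, M, K = 64, 32, 4

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian sensing matrix
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)              # K-sparse signal
y = Phi @ x                                      # measurements

# Basis pursuit as an LP: write x = u - v with u, v >= 0 and minimize
# sum(u) + sum(v) subject to [Phi, -Phi] @ [u; v] = y, mirroring the
# MATLAB linprog call above (including its 0..100 box bounds).
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, 100))
xhat = res.x[:N] - res.x[N:]

print(res.status)  # 0 means the solver reports an optimal solution
```

Note that SciPy's `linprog` takes equality constraints and bounds as keyword arguments, unlike MATLAB's positional interface.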

Why is this relaxed sparsity constraint performing so poorly with respect to the probability of "exact recovery" (which is when \(\|\vx - \hat\vx\|_2^2 \le 10^{-10}\))? I remember reading somewhere that "a good rule of thumb is \(M \approx 5K\)," where \(K\) is the sparsity and \(M\) is the number of measurements; but that seems a bit low looking at these results. I am still not convinced that this underperformance of BP is not due to some numerical error, or to my naïveté in selecting a proper sensing matrix.


It might be related to the amplitudes (on the support) of x. Note that Elad generates x as

x(pos(1:S))=sign(randn(S,1)).*(1+rand(S,1));

I just ran your code using an x with these amplitudes, and BP is better than OMP. See the results here: http://imagebin.ca/view/hDPh1L.html (I only ran OMP and BP 200 times).
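The commenter's signal model can be sketched in Python as follows (the length N, sparsity S, and seed here are illustrative assumptions of mine; only the amplitude formula comes from the MATLAB one-liner above):

```python
import numpy as np

# Illustrative sizes and seed (not from the comment).
rng = np.random.default_rng(1)
N, S = 64, 4

# Matches x(pos(1:S)) = sign(randn(S,1)).*(1+rand(S,1)): a random sign
# times a magnitude drawn uniformly from [1, 2).
x = np.zeros(N)
pos = rng.choice(N, size=S, replace=False)
x[pos] = np.sign(rng.standard_normal(S)) * (1 + rng.random(S))

# Every nonzero coefficient has magnitude at least 1, i.e. the support is
# bounded away from zero -- unlike, say, Gaussian amplitudes.
print(np.min(np.abs(x[pos])))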

So maybe the statement "BP is better than OMP" must be stated more carefully. (P.S.: Why are paragraphs not allowed in the comments?)
