At SPARS 2011, I had a conversation that confused me. I was told that MP has no phase transition, or, more accurately, that its phase transition sits at a sparsity of zero: as the ambient dimension \(N\) grows, the sparsity at which MP successfully recovers a compressively sensed sparse vector goes to zero. That is not what I find empirically, however. In the image below you can see the phase transitions of “MP+” (MP with a debiasing step) for several sparse vector distributions, at \(N=800\). (B = Bernoulli, U = Uniform, L = Laplacian, N = Normal, BR = Bimodal Rayleigh.) The criterion for successful recovery is that the support of the solution exactly matches that of the original sparse vector. For each pair of sparsity and indeterminacy I run 100 trials.
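To make the setup concrete, here is a minimal sketch of what I mean by "MP+": plain matching pursuit selects atoms by correlation with the residual, and a final least-squares debiasing step re-fits the coefficients on the selected support. The function name, stopping rule, and parameters are illustrative, not my exact experimental code.

```python
import numpy as np

def mp_plus(Phi, y, k_max=None, tol=1e-6):
    """Matching pursuit with a least-squares debiasing step (a sketch).

    Assumes the columns (atoms) of Phi are unit norm.
    """
    m, n = Phi.shape
    if k_max is None:
        k_max = m
    x = np.zeros(n)
    r = y.astype(float).copy()
    support = set()
    for _ in range(k_max):
        c = Phi.T @ r                    # correlations with the residual
        j = int(np.argmax(np.abs(c)))    # best-matching atom
        x[j] += c[j]                     # MP coefficient update
        r -= c[j] * Phi[:, j]            # deflate the residual
        support.add(j)
        if np.linalg.norm(r) < tol:
            break
    # debiasing: least squares restricted to the selected support
    S = sorted(support)
    x_db = np.zeros(n)
    x_db[S], *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
    return x_db, S
```

Success is then declared only when the returned support `S` equals the support of the original sparse vector, which is a stricter criterion than a small reconstruction error.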

(Notice the vertical scale.)

Just to be sure my sinc interpolation scheme is not causing problems, I also implemented the approach of Maleki and Donoho, which estimates the phase transition by fitting a linear model in \(\rho\) to the logit of the estimated success probability. These phase transitions are the dotted lines. For the most part, the pairs line up, though at times the interpolation approach appears too optimistic for the bimodal Rayleigh distribution.
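The logit-fitting idea can be sketched in a few lines: take the empirical success probabilities at each \(\rho\), map them through the logit, fit a line, and read off where the line crosses zero (i.e., where the success probability is 50%). This is my own minimal rendering of the procedure, not the exact finite-\(N\) recipe of Maleki and Donoho; the clipping constant is an assumption to keep the logit finite.

```python
import numpy as np

def phase_transition_logit(rho, p_hat, eps=1e-3):
    """Estimate the 50% success point by fitting a line in rho to the
    logit of the empirical success probability (a sketch)."""
    p = np.clip(p_hat, eps, 1 - eps)   # avoid infinite logits at p = 0 or 1
    z = np.log(p / (1 - p))            # logit transform
    b, a = np.polyfit(rho, z, 1)       # fit z ≈ b*rho + a
    return -a / b                      # rho where z = 0, i.e. p = 0.5
```

Applied to the 100-trial success estimates at each grid point, this gives one phase-transition value per indeterminacy level.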

Of course, why would one use the pure greedy algorithm when its cousin PrOMP does so much better? Or IHT? I remember that during his talk at SPARS 2011, Jeffrey Blanchard said that if one knows where in the phase plane a problem lies, one can find the least-cost algorithm to solve it. And MP is a pretty low-cost algorithm.

Anyhow, I find it extremely strange that MP, the king of pure greedy algorithms, does so poorly for sparse vectors distributed in ways usually favored by greedy approaches. Stranger still, its performance in these cases worsens as the number of measurements increases. With more measurements, the coherence of the dictionary should decrease, so I would expect MP to be at more of an advantage.
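That intuition about coherence is easy to check numerically. The sketch below (my own, with illustrative sizes) computes the mutual coherence of random Gaussian dictionaries at fixed \(N=800\) as the number of measurements \(m\) grows; it should shrink roughly like \(\sqrt{\log N / m}\).

```python
import numpy as np

def coherence(Phi):
    """Mutual coherence: largest |inner product| between distinct
    unit-norm columns of Phi."""
    Phi = Phi / np.linalg.norm(Phi, axis=0)  # normalize atoms
    G = np.abs(Phi.T @ Phi)
    np.fill_diagonal(G, 0.0)                 # ignore self-correlations
    return G.max()

rng = np.random.default_rng(0)
N = 800
for m in (100, 200, 400):
    Phi = rng.standard_normal((m, N))
    print(m, round(coherence(Phi), 3))
```

Lower coherence should make it easier for a greedy selection rule to pick the correct atom at each step, which is exactly why the observed degradation with more measurements is puzzling.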

I am now running the same experiments for \(N=2000\) to see how things change.

*Update 20111004: Not much changes at the higher dimension.*
