Continuing from my experiments with recovering compressively sensed vectors from measurements corrupted by normally distributed noise, I now have results at SNRs of 40, 50, 60, and Inf dB. In the image directly below, we see the phase transitions of these algorithms for sparse vectors distributed Bernoulli, i.e., constant amplitude with random sign. Not much changes down to 50 dB SNR (BP takes a little dive at 50 dB); but somewhere between there and 40 dB SNR, all the algorithms begin to fail. IRl1 appears much more resilient than the others. It is also strange that adding more measurements makes TST, IHT, and SL0 (I am using the robust version) freak out. The greedy methods and BP appear to care less about the extra measurements. And we see IST finally rise above the other algorithms for the first time. No doubt there are some tuning issues with SL0, as well as with those created by Maleki and Donoho.
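For concreteness, here is a minimal sketch of how the problem instances described above could be generated: a k-sparse Bernoulli (constant amplitude, random sign) vector, a Gaussian sensing matrix, and additive Gaussian noise scaled to a target SNR. The function names and the dimensions are my own for illustration; they are not taken from the actual experiment code.

```python
import numpy as np

def sparse_bernoulli(n, k, rng):
    """k-sparse vector with constant-amplitude, random-sign nonzeros."""
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.choice([-1.0, 1.0], size=k)
    return x

def noisy_measurements(A, x, snr_db, rng):
    """y = Ax + e, with e scaled so 10*log10(||Ax||^2 / ||e||^2) = snr_db."""
    y_clean = A @ x
    if np.isinf(snr_db):
        return y_clean
    e = rng.standard_normal(y_clean.shape)
    e *= np.linalg.norm(y_clean) / (np.linalg.norm(e) * 10 ** (snr_db / 20))
    return y_clean + e

# Example instance (dimensions are assumptions, not the experiment's values)
rng = np.random.default_rng(0)
n, m, k = 400, 200, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
x = sparse_bernoulli(n, k, rng)
y = noisy_measurements(A, x, 40.0, rng)
```

Scaling the noise vector against the clean measurement norm, rather than fixing its variance, pins the realized SNR of each instance exactly at the target.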
Also, selecting from the ensemble by the criterion stated here really seems to like the solutions found by SL0, because they have nearly zero error. I find that if I change 10 to 1, things perform much better. (So it is time to stop this ad hoc business and find the best selection criterion given an expected SNR. :)
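To illustrate why a residual-based selection can be magnetized to near-zero-error solutions (I am not reproducing the linked criterion here; this is a hypothetical, purely residual-driven stand-in with made-up names and dimensions): when measurements are noisy and m < n, a candidate that fits the noise exactly will always beat the true sparse vector, whose residual sits at the noise level.

```python
import numpy as np

def select_by_residual(A, y, candidates):
    """Return the index of the candidate minimizing ||y - A x||_2."""
    residuals = [np.linalg.norm(y - A @ x) for x in candidates]
    return int(np.argmin(residuals)), residuals

rng = np.random.default_rng(1)
m, n, k = 200, 400, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y_clean = A @ x_true
e = rng.standard_normal(m)
e *= np.linalg.norm(y_clean) / (np.linalg.norm(e) * 10 ** (40 / 20))  # 40 dB SNR
y = y_clean + e

# A dense minimum-norm least-squares "solution" fits the noise exactly (m < n),
# so a residual-only criterion prefers it over the true sparse vector.
x_dense = np.linalg.lstsq(A, y, rcond=None)[0]
best, residuals = select_by_residual(A, y, [x_true, x_dense])
```

Here `best` is 1: the dense noise-fitting candidate wins, even though it is far from the true solution. Any sensible criterion at a known SNR should instead prefer residuals near the expected noise level, not below it.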
Below we see the phase transitions for the same experiments, but this time with sparse vectors distributed normally. Very little changes between 50 and Inf dB SNR; but again, between there and 40 dB something dramatic happens. The greedy methods and BP care little about the extra measurements, but the performance of all the others appears hindered. The ensemble selection approach is still magnetized to the solutions found by SL0.
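The only change to the setup in this second batch is the nonzero amplitudes; a minimal sketch (dimensions again my own, not the experiment's):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 400, 40
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)  # amplitudes drawn N(0,1) instead of +/-1
```

With Gaussian amplitudes the smallest nonzeros can be arbitrarily close to zero, which is one reason support recovery behaves differently here than in the constant-amplitude Bernoulli case.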