Among the many things I learned at SPARS 2011 is that the world of algorithms for the solution of underdetermined systems is growing and growing.
My feeble study of nine algorithms with seven sparse vector distributions appears entirely insufficient for providing a good review of state-of-the-art work.
Now I am working to fix this state of affairs and to submit my extended and revised work (rejected from Signal Processing Letters) to this special issue of the EURASIP Journal on Advances in Signal Processing, where it is a much better fit and can fill more than four pages. Gone is CMP (it does essentially the same thing as OMP), and in comes the following parade of characters:
- Approximate message passing (AMP)
- Regularized OMP (ROMP)
- Stagewise OMP (StOMP)
- Compressive Sampling MP (CoSaMP)
- Algebraic Pursuit (ALPS)
- Gradient Projection Sparse Reconstruction (GPSR)
Even though CoSaMP is an extension of ROMP, and Maleki and Donoho’s Two-Stage Thresholding is a generalization of CoSaMP, I am including them in my simulations.
Here are some preliminary results!
I am using the same experimental setup as in my previous work, with \(N=400\), and averaging the results of 50 realizations.
Below we see the empirical “phase transition” (which I learned at the workshop is the correct term here, since I am not taking the dimensions and everything else to infinity) for sparse vectors with normally distributed nonzero entries, and with no measurement noise.
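To make the setup concrete, here is a minimal sketch of the kind of experiment behind such an empirical phase transition, using a plain NumPy implementation of OMP. This is my own illustrative sketch, not the actual code behind the figures: the helper names `omp` and `success_rate`, the Gaussian sensing matrix, and the exact-recovery tolerance are all assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A x.
    (Illustrative sketch; real toolboxes differ in stopping rules and details.)"""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit on the selected columns, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

def success_rate(m, k, N=400, trials=50, seed=0):
    """Fraction of random trials in which OMP recovers the true sparse vector."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, N)) / np.sqrt(m)  # Gaussian sensing matrix
        x = np.zeros(N)
        idx = rng.choice(N, size=k, replace=False)
        x[idx] = rng.standard_normal(k)               # normally distributed nonzeros
        y = A @ x                                      # noiseless measurements
        xhat = omp(A, y, k)
        if np.linalg.norm(xhat - x) <= 1e-6 * np.linalg.norm(x):
            hits += 1
    return hits / trials
```

Sweeping `m` (number of measurements) and `k` (sparsity) over a grid and plotting `success_rate` produces the empirical phase-transition diagram: a sharp boundary between the region where recovery almost always succeeds and the region where it almost always fails.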
What is going on with CoSaMP? And why is StOMP not even able to retrieve 1-sparse signals?
Over the next few days I will look into the details of these algorithms and make sure I am using the correct settings for the various implementations I found online.