A Study on Sparse Vector Distributions and Recovery from Compressed Sensing

So maybe I was a little more than five days away from a paper submission, but I have finally finished it.
Below is the complete summary of my findings from the past few months on this interesting subject, a version of which I have just submitted to IEEE Signal Processing Letters. Here is the abstract:

I empirically investigate how the performance of several recovery algorithms depends on the distribution underlying the sparse vector sensed by a random matrix, a dependence that has been noted before but, to my knowledge, not thoroughly investigated. I find that $\ell_1$-minimization \cite{Chen1998} and tuned two-stage thresholding \cite{Maleki2010} (subspace pursuit \cite{Dai2009} without the use of a sparsity oracle) are the most robust to changes in the sparse vector distribution, but that they are outperformed to a large degree by greedy methods, such as orthogonal matching pursuit \cite{Pati1993}, for sparse vectors with Normal and Laplacian distributions. I also find that selecting the best solution from those produced by several recovery algorithms can significantly increase the probability of exact recovery.
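
To make the experimental setup concrete, here is a minimal sketch in Python with NumPy of one trial: draw a $k$-sparse vector whose nonzero entries follow a chosen distribution, sense it with a random Gaussian matrix, and attempt recovery with a plain orthogonal matching pursuit that is given the true sparsity. The dimensions and helper names (sparse_vector, omp) are illustrative assumptions, not the settings used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def sparse_vector(N, k, dist):
    # Draw a k-sparse length-N vector with nonzero entries from the chosen distribution.
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    if dist == "normal":
        x[support] = rng.standard_normal(k)
    elif dist == "laplacian":
        x[support] = rng.laplace(size=k)
    elif dist == "rademacher":
        x[support] = rng.choice([-1.0, 1.0], size=k)   # constant-amplitude entries
    return x

def omp(Phi, y, k):
    # Plain orthogonal matching pursuit with k greedy atom selections (sparsity given).
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# One illustrative trial; the dimensions here are made up, not those in the paper.
N, m, k = 400, 100, 20
Phi = rng.standard_normal((m, N)) / np.sqrt(m)   # random Gaussian sensing matrix
x = sparse_vector(N, k, "normal")
x_hat = omp(Phi, Phi @ x, k)
print("exact recovery:", np.linalg.norm(x - x_hat) <= 1e-6 * np.linalg.norm(x))

Recovery probabilities like those reported in the abstract would then be estimated by repeating such trials over many sparse vectors, sparsity levels, and distributions for each algorithm.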

Now, with more time to kill, I am running the same experiments but with dimensionality $N=800$, which is what Maleki and Donoho use. I am interested to see how the recovery performance of iterative hard thresholding changes.
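
For reference on what that algorithm does, here is a bare-bones iterative hard thresholding sketch in NumPy: a gradient step on the least-squares objective followed by keeping only the $k$ largest-magnitude entries. The step size, iteration count, and dimensions below are placeholder assumptions, not Maleki and Donoho's tuned settings.

import numpy as np

def iht(Phi, y, k, iters=300, mu=None):
    # Iterative hard thresholding: gradient step, then keep only the k largest entries.
    if mu is None:
        mu = 1.0 / np.linalg.norm(Phi, 2) ** 2    # conservative step size from the spectral norm
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x + mu * Phi.T @ (y - Phi @ x)        # gradient step on (1/2)||y - Phi x||^2
        x[np.argsort(np.abs(x))[:-k]] = 0.0       # zero out all but the k largest magnitudes
    return x

# Illustrative trial at N = 800; the other dimensions are assumptions.
rng = np.random.default_rng(1)
N, m, k = 800, 200, 30
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x_hat = iht(Phi, Phi @ x, k)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))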
