A Smattering of Papers from EUSIPCO 2012, pt. 2

Continuing with some papers selected from EUSIPCO 2012.

“How to Use Real-Valued Sparse Recovery Algorithms for Complex-Valued Sparse Recovery?” by A. Sharif-Nassab, M. Kharratzadeh, M. Babaie-Zadeh and C. Jutten

This appears to be a very nice practical paper.
It shows that, as long as the sparsity is less than a quarter of the spark of the dictionary,
one need not solve a second-order cone program for complex sparse recovery by error-constrained \(\ell_1\)-minimization, but can instead pose it as a linear program.
The paper also corrects a few misstatements in the literature.
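For context, the standard way to pose a complex problem \(y = Ax\) in real arithmetic stacks real and imaginary parts. The subtlety is that the complex \(\ell_1\) norm couples each real-imaginary pair, which is what ordinarily forces the second-order cone program; the paper's contribution concerns when the plain \(\ell_1\) norm of the stacked vector suffices. A minimal sketch of the embedding (my code, not the paper's):

```python
import numpy as np

def realify(A, y):
    """Map the complex system y = A x to an equivalent real-valued one.

    The unknown becomes [Re(x); Im(x)]. Note that the complex l1 norm,
    sum_i sqrt(Re(x_i)^2 + Im(x_i)^2), couples each pair, whereas the
    plain l1 norm of the stacked vector is separable and yields an LP.
    """
    A_r = np.block([[A.real, -A.imag],
                    [A.imag,  A.real]])
    y_r = np.concatenate([y.real, y.imag])
    return A_r, y_r
```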

“A Greedy Algorithm to Extract Sparsity Degree for l1/l0-equivalence in a Deterministic Context” by N. Pustelnik, C. Dossal, F. Turcu, Y. Berthoumieu and P. Ricoux

This paper recalls the polytope interpretation underlying the work of Donoho and Tanner to characterize the class of signals that are not recoverable by error-constrained \(\ell_1\)-minimization from compressive sampling with a deterministic sensing matrix. This paper definitely deserves a deeper read, and reminds me to return to some work from: M. D. Plumbley, “On polar polytopes and the recovery of sparse representations”, IEEE Trans. Info. Theory, vol. 53, no. 9, pp. 3188-3195, Sep. 2007.

“Choosing Analysis or Synthesis Recovery for Sparse Reconstruction” by N. Cleju, M. Jafari and M. Plumbley

This paper explores where the analysis and synthesis approaches to sparse recovery differ.
We see that with more measurements, the synthesis formulation becomes better for sparser signals, and the analysis formulation better for cosparser signals.
Furthermore, the analysis formulation is more sensitive to signals that are only approximately sparse.
This is a nice paper with a strong empirical component.
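For reference, the two formulations being compared, with sensing matrix \(\Phi\), synthesis dictionary \(D\), and analysis operator \(\Omega\):

\[
\min_{z} \|z\|_1 \ \text{subject to}\ \|y - \Phi D z\|_2 \le \epsilon \quad \text{(synthesis)}, \qquad
\min_{x} \|\Omega x\|_1 \ \text{subject to}\ \|y - \Phi x\|_2 \le \epsilon \quad \text{(analysis)}.
\]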

“CoSaMP and SP for the Cosparse Analysis Model” by R. Giryes and M. Elad

On the heels of the last paper, this one adapts CoSaMP and SP to the analysis formulation.
The paper also presents a nice table summarizing the synthesis and analysis formulations.
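For reference, here is a minimal sketch of the standard synthesis-model CoSaMP that the paper adapts; variable names and the iteration cap are mine, and I make no attempt at efficiency:

```python
import numpy as np

def cosamp(Phi, y, k, n_iter=30):
    """Minimal synthesis-model CoSaMP: Phi is (m, n), y the
    measurements, k the target sparsity."""
    n = Phi.shape[1]
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        # Identify the 2k atoms most correlated with the residual.
        proxy = Phi.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * k:]
        # Merge with the current support and solve least squares.
        T = np.union1d(omega, np.flatnonzero(x))
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]
        # Prune to the k largest entries and update the residual.
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-k:]
        x[keep] = b[keep]
        r = y - Phi @ x
    return x
```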

“Matching Pursuit with Stochastic Selection” by T. Peel, V. Emiya, L. Ralaivola and S. Anthoine

In order to accelerate matching pursuit, this work considers at each iteration only a random subset of the dictionary, and moreover only a random subset of the signal dimensions.
Thus, it need not compute full inner products.
They show good approximation ability with a smaller computational price.
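A sketch of how I read the idea (the sampling fractions and names are mine): each iteration scores a sampled subset of atoms on a sampled subset of coordinates, then does the usual full update with the winning atom.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_mp(D, y, n_iter=100, atom_frac=0.25, dim_frac=0.25):
    """Matching pursuit over a unit-norm dictionary D (m, n), scoring
    only a random subset of atoms on a random subset of dimensions."""
    m, n = D.shape
    r = y.copy()
    x = np.zeros(n)
    for _ in range(n_iter):
        atoms = rng.choice(n, size=max(1, int(atom_frac * n)), replace=False)
        dims = rng.choice(m, size=max(1, int(dim_frac * m)), replace=False)
        # Partial inner products: much cheaper than the full D.T @ r.
        scores = D[np.ix_(dims, atoms)].T @ r[dims]
        j = atoms[np.argmax(np.abs(scores))]
        # The coefficient and residual update use the whole atom.
        c = D[:, j] @ r
        x[j] += c
        r -= c * D[:, j]
    return x
```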

“Robust Greedy Algorithms for Compressed Sensing” by S. A. Razavi, E. Ollila and V. Koivunen

This paper presents modifications to OMP and CoSaMP wherein M-estimates are used to help guard against the effect of possibly impulsive noise.
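As a sketch of how an M-estimate could slot in (the authors' exact estimator and selection rule may differ), here is OMP with the least-squares coefficient update replaced by a Huber fit computed by iteratively reweighted least squares:

```python
import numpy as np

def huber_irls(A, y, delta=1.0, n_iter=20):
    """Huber M-estimate of x in y = A x + noise, via IRLS."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        e = y - A @ x
        w = np.where(np.abs(e) <= delta, 1.0, delta / np.abs(e))
        # Weighted normal equations: (A^T W A) x = A^T W y.
        x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return x

def robust_omp(A, y, k, delta=1.0):
    """OMP whose per-support coefficient fit is a Huber M-estimate."""
    support = []
    r = y.copy()
    for _ in range(k):
        c = np.abs(A.T @ r)
        c[support] = -np.inf          # do not reselect chosen atoms
        support.append(int(np.argmax(c)))
        x_s = huber_irls(A[:, support], y, delta)
        r = y - A[:, support] @ x_s
    return support, x_s
```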

“A Fast Algorithm for the Bayesian Adaptive Lasso” by A. Rontogiannis, K. Themelis and K. Koutroumbas

This paper takes the adaptive lasso and makes it faster to apply to recovery from compressive sampling.
It appears to do well with noisy measurements,
for both Bernoulli-Gaussian and Bernoulli-Rademacher signals.
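For concreteness, these two signal classes are typically generated as follows (a standard construction, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_signal(n, k, kind="gaussian"):
    """k-sparse signal with support uniform at random; nonzeros are
    standard normal (Bernoulli-Gaussian) or +/-1 with equal
    probability (Bernoulli-Rademacher)."""
    x = np.zeros(n)
    idx = rng.choice(n, size=k, replace=False)
    if kind == "gaussian":
        x[idx] = rng.standard_normal(k)
    else:
        x[idx] = rng.choice([-1.0, 1.0], size=k)
    return x
```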

“Audio Forensics Meets Music Information Retrieval – A Toolbox for Inspection of Music Plagiarism” by C. Dittmar, K. Hildebrand, D. Gaertner, M. Winges, F. Müller and P. Aichroth

A toolbox! For detecting music plagiarism! The authors have assembled a Batman belt of procedures in the context of the REWIND project. It tackles three types of plagiarism: sample, rhythm, and melody.

“Detection and Clustering of Musical Audio Parts Using Fisher Linear Semi-Discriminant Analysis” by T. Giannakopoulos and S. Petridis

This paper presents an approach to segmenting a musical signal using bags of frames of features (BFFs), and then Fisher linear discriminant analysis and clustering to find sections that are highly contrasting in relevant subspaces.
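I have not seen the semi-discriminant analysis they use, but the flavor of the pipeline might look something like this crude stand-in: summarize frames into bags, then alternate clustering with a Fisher projection fit on the current labels, so that clusters separate in the most contrasting subspace. Everything here is my guess at the structure, not the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cluster_bags(F, n_parts, n_iter=10, seed=0):
    """F: (n_bags, n_features) bag-of-frames statistics. Alternates
    k-means with an LDA projection fit on the current cluster labels."""
    labels = KMeans(n_clusters=n_parts, n_init=10, random_state=seed).fit_predict(F)
    for _ in range(n_iter):
        lda = LinearDiscriminantAnalysis(n_components=min(n_parts - 1, F.shape[1]))
        Z = lda.fit_transform(F, labels)
        new = KMeans(n_clusters=n_parts, n_init=10, random_state=seed).fit_predict(Z)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```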

“Forward-Backward Search for Compressed Sensing Signal Recovery” by N. B. Karahanoglu and H. Erdogan

The idea here is nice: expand the support set by some number of elements, and then shrink it by that number less one.
The expansion selects the elements with the largest correlations with the residual; the shrinkage removes the elements that contribute least to the new approximation.
The experiments are run at a small problem size with increasing sparsity,
and we see this algorithm performs favorably compared with OMP and SP, though “exact reconstruction rate” is never defined. It would be interesting to see simulations using Bernoulli-Rademacher signals.
One relevant publication missing from the references is: M. Andrle and L. Rebollo-Neira, “Improvement of Orthogonal Matching Pursuit Strategies by Backward and Forward Movements”, Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 313-316, Toulouse, France, Apr. 2006. In that work, however, they apply this forward-backward business at the end of the pursuit.
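My reading of the scheme, as a sketch (the expansion size \(\alpha\) and the stopping rule here are illustrative, not the authors' choices):

```python
import numpy as np

def forward_backward_pursuit(Phi, y, k, alpha=3, max_iter=200):
    """Grow the support by alpha atoms, project, then drop the
    alpha - 1 weakest atoms, for a net gain of one atom per
    iteration. A sketch of my reading, not the authors' code."""
    n = Phi.shape[1]
    support = np.array([], dtype=int)
    r = y.copy()
    for _ in range(max_iter):
        # Forward: add the alpha atoms most correlated with the residual.
        proxy = np.abs(Phi.T @ r)
        proxy[support] = -np.inf
        support = np.union1d(support, np.argsort(proxy)[-alpha:])
        coef = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        # Backward: drop the alpha - 1 atoms contributing least.
        keep = np.argsort(np.abs(coef))[alpha - 1:]
        support = support[keep]
        coef = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        r = y - Phi[:, support] @ coef
        if len(support) >= k or np.linalg.norm(r) < 1e-9 * np.linalg.norm(y):
            break
    x = np.zeros(n)
    x[support] = coef
    return x
```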

“Fusion of Greedy Pursuits for Compressed Sensing Signal Reconstruction” by S. K. Ambat, S. Chatterjee and K. Hari

The idea is simple yet effective: take two greedy pursuits, run them both, and combine their results into one that is better than either.
The experiments show favorable robustness to the distribution underlying the sparse signals, as well as to noise.
The cost is, of course, increased computation; but if it works, it works.
I had a similar idea a while ago, but reviewers didn’t like it. I remember one review said it was “too obvious.” This fusion framework provides a nice way to get around the problem of selecting the best support of a single solver.
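A sketch of the fusion step as I understand it (details may differ from the paper): take the union of the supports found by the individual pursuits, solve least squares over the union, and keep the k strongest atoms.

```python
import numpy as np

def fuse_supports(Phi, y, supports, k):
    """supports: list of index arrays, e.g. from OMP and SP runs.
    Returns a k-sparse estimate fit on the best k atoms of the union."""
    T = np.unique(np.concatenate(supports))
    coef = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]
    keep = T[np.argsort(np.abs(coef))[-k:]]
    x = np.zeros(Phi.shape[1])
    x[keep] = np.linalg.lstsq(Phi[:, keep], y, rcond=None)[0]
    return x
```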

“Use of Tight Frames for Optimized Compressed Sensing” by E. Tsiligianni, L. Kondi and A. Katsaggelos

This paper adapts an approach by Elad for building Grassmannian sensing matrices for compressed sensing, and shows that they perform better than random sensing matrices with respect to mean squared reconstruction error.
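From memory, Elad's iteration alternates shrinking the large off-diagonal (coherence-causing) entries of the Gram matrix with projecting back to the feasible rank; roughly, and with all thresholds and factors here being illustrative rather than the paper's values:

```python
import numpy as np

def low_coherence_frame(m, n, t=0.3, gamma=0.6, n_iter=100, seed=0):
    """Rough sketch: shrink Gram off-diagonals above threshold t by
    factor gamma, then re-impose rank m via the SVD."""
    rng = np.random.default_rng(seed)
    E = rng.standard_normal((m, n))
    for _ in range(n_iter):
        E = E / np.linalg.norm(E, axis=0)       # unit-norm columns
        G = E.T @ E
        mask = (np.abs(G) > t) & ~np.eye(n, dtype=bool)
        G[mask] *= gamma
        # Project the Gram matrix back to rank m and re-extract E.
        U, s, Vt = np.linalg.svd(G)
        E = np.diag(np.sqrt(s[:m])) @ Vt[:m]
    return E / np.linalg.norm(E, axis=0)
```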

“A Comparison of Termination Criteria for A*OMP” by N. B. Karahanoglu and H. Erdogan

I need to read about this A*OMP; it has been on my to-do list for a long time.

“Classification From Compressive Representations of Data” by B. Coppa, R. Héliot, D. David and O. Michel

To what extent does compressive sampling hurt discriminability?
This paper investigates the question with fundamental experiments that clearly show
more measurements lead to fewer errors.
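The question is easy to poke at on toy data; for instance (my illustration with scikit-learn, not the paper's setup), classifying digits from \(m\) random projections shows the error falling as \(m\) grows:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)       # 64-dimensional digit images
rng = np.random.default_rng(0)
for m in (4, 8, 16, 32, 64):
    # Random Gaussian measurement matrix: m compressive samples per image.
    Phi = rng.standard_normal((X.shape[1], m)) / np.sqrt(m)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X @ Phi, y, cv=5).mean()
    print(f"m = {m:2d}: accuracy = {acc:.3f}")
```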
