Paper of the Day (Po’D): Matching Pursuits with Random Sequential Subdictionaries Edition

Hello, and welcome to Paper of the Day (Po’D): Matching Pursuits with Random Sequential Subdictionaries Edition. I saw today’s paper at the recent SPARS 2011 workshop:
M. Moussallam, L. Daudet, and G. Richard, “Matching Pursuits with Random Sequential Subdictionaries”, which is posted to arXiv while under review. The approach proposed in the paper is quite similar to ones I have seen before, e.g., in

  1. P. J. Durka, D. Ircha, and K. J. Blinowska, “Stochastic time-frequency dictionaries for matching pursuit,” IEEE Trans. Signal Process., vol. 49, pp. 507-510, Mar. 2001 (Po’D here);
  2. M. Elad and I. Yavneh, “A plurality of sparse representations is better than the sparsest one alone,” IEEE Trans. Info. Theory, vol. 55, pp. 4701-4714, Oct. 2009 (Po’D here);
  3. S. E. Ferrando, E. J. Doolittle, A. J. Bernal, and L. J. Bernal, “Probabilistic matching pursuit with Gabor dictionaries”, Signal Processing, vol. 80, no. 10, pp. 2099-2120, Oct. 2000 (Po’D here);
  4. A. Divekar and O. K. Ersoy, “Probabilistic Matching Pursuit for Compressive Sensing,” Electrical and Computer Engineering Technical Report, Purdue University, May 2010 (Po’D here)

Not to forget Bayesian approaches:

  1. H. Zayyani, M. Babaie-Zadeh, and C. Jutten, “Bayesian pursuit algorithm for sparse representation,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., (Taipei, Taiwan), pp. 1549-1552, Apr. 2009 (Po’D here);
  2. P. Schniter, L. C. Potter, and J. Ziniel, “Fast Bayesian Matching Pursuit,” in Proc. Workshop on Information Theory and Applications (ITA), (La Jolla, CA), Jan. 2008 (Po’D here)


The authors propose and study an approach to greedy sparse approximation by matching pursuit (MP) and orthogonal MP (OMP) wherein at specified instances, perhaps even every iteration, the dictionary is randomly constructed by selecting a small subset of a larger predefined dictionary. This differs from the work of Durka et al., Elad et al., Ferrando et al., and Divekar et al., which produce several signal models and then process those: Moussallam et al. produce a single signal model, but vary the dictionary from which atoms are selected.
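To make the idea concrete, here is a minimal sketch in Python/NumPy of MP where every iteration searches only a random subdictionary. This is my own illustration of the idea under stated assumptions (unit-norm atoms, a fresh random subset each iteration), not the authors' implementation; the function name and parameters are mine.

```python
import numpy as np

def mp_random_subdict(x, D, n_iter, sub_size, rng=None):
    """Matching pursuit where each iteration searches only a random
    subset of the columns of D (a sketch of the paper's idea, not the
    authors' code). D is (dim, n_atoms) with unit-norm columns."""
    rng = np.random.default_rng(rng)
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # Draw a random subdictionary for this iteration.
        idx = rng.choice(D.shape[1], size=sub_size, replace=False)
        # Select the sub-atom most correlated with the residual.
        corr = D[:, idx].T @ residual
        k = idx[np.argmax(np.abs(corr))]
        # Standard MP update: subtract the atom's contribution.
        a = D[:, k] @ residual
        coeffs[k] += a
        residual -= a * D[:, k]
    return coeffs, residual
```

With sub_size equal to the number of atoms this reduces to plain MP, so the only change from the textbook algorithm is which columns are searched at each step.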
There are two motivations for doing this. First, the authors argue that projecting onto and searching over large dictionaries is computationally expensive, so randomly selecting small subsets of a larger dictionary can reduce computation while preserving the diversity of the larger dictionary. Second, they argue that since the cost of encoding a representation (amplitudes and indices into the dictionary) over a smaller dictionary is lower than over a larger one, signal models built this way will be more compressible than those built over the full dictionary.
The important question to answer now is whether MP using randomly selected subsets of a larger dictionary is guaranteed to produce signal models that are superior to MP using the larger dictionary.

The authors formally analyze this problem with the help of order statistics, which provide a way to reason about the distribution of the projection magnitudes involved in atom selection by MP. This analysis concludes that we should expect a faster energy decay when using randomly selected subsets of the dictionary. In their evaluation on synthetic and real signals, experimental results show that OMP with randomly selected dictionaries can still recover the original sparse signal; that randomized dictionary subset selection (of which they test three different approaches) results in only a small increase in the number of iterations; and that their method can produce higher signal-to-residual energy ratios than MP over the full dictionary for real audio signals at comparable bitrates.
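As a quick (and entirely toy) check of the energy-decay behavior, one can run the sketch above on a synthetic sparse signal with the full dictionary and with small random subsets, and compare signal-to-residual ratios; the dimensions and subset size here are my own choices, not the paper's experimental setup.

```python
# Toy comparison: plain MP (sub_size = n_atoms) vs. random subdictionaries.
rng = np.random.default_rng(0)
dim, n_atoms, sparsity = 64, 512, 8
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms
x = D[:, rng.choice(n_atoms, sparsity, replace=False)] @ rng.standard_normal(sparsity)

for sub in (n_atoms, 64):  # full dictionary vs. 1/8-size random subsets
    _, r = mp_random_subdict(x, D, n_iter=50, sub_size=sub, rng=1)
    srr = 10 * np.log10(np.sum(x**2) / np.sum(r**2))
    print(f"sub_size={sub:4d}: SRR = {srr:.1f} dB")
```

Results will of course vary with the draw of the dictionary and subsets; the paper's order-statistics analysis is what makes the expected behavior precise.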

This is a very interesting approach, and one that is not intuitive at all! I feel generally uncomfortable making dictionary selection random at any step in the process, but I can see the tradeoff it offers: lower computational cost, and perhaps a limit on the damage done by MP over the larger dictionary.
In the end, it looks like taking such a strategy actually pays off.
Now I want to get a grasp on this idea of order statistics, which I think is a cool way to analyze MP!
