Paper of the Day (Po’D): Matching Pursuit in Parallel

Hello, and welcome to the Paper of the Day (Po’D): Sparse Approximation
in Parallel Edition. Today’s paper comes from the March 2010 issue of IEEE Signal Processing Magazine: Laurent Daudet, “Audio Sparse Decompositions in Parallel: Let the Greed be Shared!”, vol. 27, no. 2, pp. 90-96, Mar. 2010. This issue of the magazine deals with the use of multicore computing platforms for parallelized applications.

This particular article is of interest because, even though MPTK offers amazing decomposition speed for certain dictionaries, the audio sparse approximation community is in need of generalizable and near-real-time decomposition algorithms. For instance, Ravelli’s audio codec using matching pursuit (MP) with an overcomplete dictionary of MDCT bases at eight scales requires about 100 times the duration of the audio signal to decompose it, and that is using bases with fast transforms. It is difficult to create fast algorithms because of the sheer size of the problems, not to mention the rich complexity of audio signals. Sampled audio signals typically have dimensions in the millions, and so dictionaries must grow in size to accommodate such signals, depending of course on one’s application.
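To make concrete why plain MP scales badly, here is a minimal sketch of one sequential MP over a generic dictionary matrix D whose columns are unit-norm atoms. The NumPy names and the fixed atom budget are my own illustrative choices, not code from Daudet or Ravelli:

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedily pick n_atoms columns of D (assumed unit-norm) to approximate x."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        # Every iteration correlates the residual with EVERY atom,
        # which is what makes MP so costly for long signals and big dictionaries.
        correlations = D.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * D[:, k]
    return coeffs, residual
```

Each pass over the dictionary depends on the residual left by the previous one, which is exactly the sequential bottleneck the paper tries to break.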

In this article, Daudet demonstrates the obvious. Even though an audio signal lives in a high-dimensional space, there is nothing stopping us from segmenting it when we are more interested in describing it locally than globally. Daudet presents three methods: 1) each core performs MP on disjoint segments; 2) each core performs a modified MP on overlapping segments, with message passing between adjacent segments; 3) each core performs a modified MP in a stationary frame through which the signal moves. This last approach can be likened to an assembly line, but for atomic decomposition: as the signal rolls on by, each person at his station is responsible for removing a certain number of atoms from it, and the likelihood of his removing an atom shrinks the farther he has to stretch his arms. Daudet compares simulations of all of these algorithms in different configurations against standard sequential MP (presumably using the same atoms), and finds that one of his parallel variants, PLoMP, performs nearly as well as sequential MP in terms of signal-to-residual energy ratio.
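As a rough illustration of the first strategy (disjoint segments, one segment per core), here is a sketch reusing the matching_pursuit function above. The multiprocessing setup, segment length, and worker count are my own assumptions; the paper does not prescribe them:

```python
from multiprocessing import Pool
import numpy as np

def decompose_segment(args):
    segment, D, n_atoms = args
    return matching_pursuit(segment, D, n_atoms)  # plain MP from the sketch above

def parallel_mp_disjoint(x, D, n_atoms_per_segment, segment_len, n_workers=4):
    # Assumes len(x) is a multiple of segment_len and D has segment_len rows.
    segments = [x[i:i + segment_len] for i in range(0, len(x), segment_len)]
    with Pool(n_workers) as pool:
        results = pool.map(decompose_segment,
                           [(s, D, n_atoms_per_segment) for s in segments])
    return results  # per-segment (coefficients, residual) pairs
```

Because the segments are disjoint, the cores never have to talk to each other, at the cost of atoms that cannot straddle a segment boundary; the second and third methods trade some of that independence back for better modelling across boundaries.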

Unfortunately, several important details are missing from this paper, e.g., frame size, frame shift, weighting window shape, not to mention computation time for the different approaches. However, it works well as a proof of concept, showing that we can take the serial decomposition algorithm of MP (which is not a “divide and conquer” algorithm) and make it parallel, the ease of which is limited by the support of the largest atom in the dictionary. Now, if we can just as easily make orthogonal MP parallel …
