Paper of the Day (Po’D): Performance Limits of Matching Pursuit Algorithms Edition

Hello, and welcome to Paper of the Day (Po’D): Performance Limits of Matching Pursuit Algorithms Edition. Today’s paper addresses the very problem I was mystified by before the break: Y. Jin and B. D. Rao, “Performance Limits of Matching Pursuit Algorithms,” IEEE Int. Symp. Info. Theory, Toronto, Canada, pp. 2444-2448, July 2008. For example, see my results here, and here, and here, and here, and here, and here. In particular, the last link there shows an image of how horribly OMP recovers sparse signals with an equiprobable Bernoulli distribution.


Well, commenter Phil Schniter alerted me to the method of “successive interference cancellation” (SIC), which essentially amounts to decoding by OMP a received signal in which multiple clients are transmitting to various receivers at the same time (hence the application of the technique to code division multiple access (CDMA)). This has received a lot of attention and analysis in the digital communication literature; and indeed, in this paper Jin and Rao make the link between the two and successfully show why the distribution of the sparse signal has a great effect on the decoding results. The take-home message is this: if the magnitudes of the non-zero coefficients are nearly the same, then the sources are being received at the same power (considering the dictionary is unit norm). When the code words (atoms) are not orthogonal (or, alternatively, when their length is much smaller than the size of the codebook), it will be tough to successfully decode any one of them.
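To make the OMP–SIC analogy concrete, here is a minimal Python sketch (my own illustration, not code from the paper): each greedy selection is a detection step, and each residual update is an interference-cancellation step.

```python
import numpy as np

def omp(D, y, K):
    """Recover a K-sparse coefficient vector from y ~= D @ x.

    D : (M, N) dictionary with unit-norm columns
    y : (M,) observed signal
    """
    residual = y.copy()
    support = []
    for _ in range(K):
        # Detection step: pick the atom most correlated with the residual,
        # i.e., decode the strongest remaining "user".
        k = int(np.argmax(np.abs(D.T @ residual)))
        support.append(k)
        # Cancellation step: project y onto the selected atoms and
        # subtract that estimate (the "interference") from the residual.
        Ds = D[:, support]
        coeffs, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        residual = y - Ds @ coeffs
    x_hat = np.zeros(D.shape[1])
    x_hat[support] = coeffs
    return x_hat
```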

Jin and Rao go further to show performance limits for OMP in terms of “capacity regions” adopted from the SIC literature. If we have compressively sensed a \(K\)-sparse signal with a dictionary of \(N\) atoms of length \(M\), then OMP can “asymptotically reconstruct” the signal if
$$
\frac{\log N}{M} \le \frac{1}{2} \log \left [1 + \frac{s_i^2}{\sigma_n^2 + \sum_{j=i+1}^K s_j^2} \right ] \quad \forall i \in \{1, 2, \ldots, K\}
$$
where \(\sigma_n^2\) is the noise power, \(s_i\) is the weight of the \(i\)th of the \(K\) atoms with non-zero weight in the original sparse vector, and the weights are indexed in order of decreasing magnitude (the order in which SIC would decode them). If the inequalities above hold only for a subset of the weights, then only those weights (or at least their locations) are recoverable. This problem does not exist for “joint decoding methods” like \(\ell_1\)-minimization, which explains why recovery by BP is so consistent across sparse vector distributions. Because of the forward-selection nature of OMP, and the information-theoretic derivation of the condition above, the authors argue that this limit is a fundamental bound on performance not just for OMP, but also for other forward-selection methods, such as OLS and Recursive OMP.
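As a quick numeric illustration of these inequalities (the function name and all parameter values below are mine, not the paper’s), one can check which indices of a given weight profile satisfy the condition:

```python
import numpy as np

def recoverable_weights(s, M, N, sigma2):
    """Return, for each weight sorted by decreasing magnitude, whether
    log(N)/M <= 0.5 * log(1 + s_i^2 / (sigma2 + sum_{j>i} s_j^2)).
    (The inequality is invariant to the base of the logarithm.)"""
    s = np.sort(np.abs(np.asarray(s, float)))[::-1]
    rate = np.log(N) / M
    ok = []
    for i in range(len(s)):
        interference = sigma2 + np.sum(s[i+1:] ** 2)
        ok.append(rate <= 0.5 * np.log(1.0 + s[i] ** 2 / interference))
    return ok

# Equal-magnitude weights vs. geometrically decaying weights:
print(recoverable_weights(np.ones(8), M=50, N=256, sigma2=0.01))
print(recoverable_weights(2.0 ** np.arange(8), M=50, N=256, sigma2=0.01))
```

With these illustrative numbers, the equal-magnitude profile violates the condition for its few largest-interference steps, while the decaying profile satisfies it at every step — exactly the distribution dependence discussed above.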

The authors perform multiple experiments validating their conclusions. They show that increasing \(M\) leads to better performance for several values of \(\sigma_n^2\) when a sparse vector satisfies the conditions above; and when it does not, increasing \(M\) does not help. Furthermore, they show that when OMP recovers the correct sparse signal, it most often finds the weights in order (“[OMP] actually prefers a sequence according to [the conditions above]”).
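I have not rerun their exact setup, but a rough simulation along these lines (all parameter choices are mine) is easy to put together to probe the same qualitative behavior:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, sigma = 256, 8, 0.01
profiles = {"equal": np.ones(K), "decaying": 2.0 ** -np.arange(K)}

for M in (32, 64, 128):
    for name, s in profiles.items():
        hits, trials = 0, 200
        for _ in range(trials):
            # Random unit-norm dictionary and a K-sparse signal plus noise.
            D = rng.standard_normal((M, N))
            D /= np.linalg.norm(D, axis=0)
            support = rng.choice(N, K, replace=False)
            y = D[:, support] @ s + sigma * rng.standard_normal(M)
            # OMP: greedy detection with full interference cancellation.
            residual, found = y.copy(), []
            for _ in range(K):
                found.append(int(np.argmax(np.abs(D.T @ residual))))
                Ds = D[:, found]
                coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
                residual = y - Ds @ coef
            hits += set(found) == set(support)
        print(f"M={M:4d}  {name:8s}  support recovery: {hits / trials:.2f}")
```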

In summary, this is an excellent paper demonstrating the real fruits to be had in reading outside of one’s discipline. To learn more about SIC and the derivation of the inequalities above, see D. Tse and P. Viswanath, Fundamentals of Wireless Communication, Cambridge University Press, 2005, available online!
