Phase transitions in higher dimension

The Asilomar conference was great! I met several new people, and reconnected with several others. In particular, I had an informative chat with Jeremy Vila, who, together with Phil Schniter, has designed one of the most promising inverse-problem solvers: expectation-maximization Bernoulli-Gaussian approximate message passing (EMBGAMP).
Before the conference I had run some simulations with the algorithm at an ambient dimensionality of 400, and was not seeing appreciable increases in the phase transitions across problem indeterminacies. Jeremy took a look at my simulations and suggested performing them in a higher dimension. What was going on was that, with only a few components active, the algorithm was having a hard time estimating the parameters of the distribution underlying the sparse signal, which is to be expected. So upon returning home, I reran my simulations with an ambient dimension of 1000, and now see improved performance over all indeterminacies. (I am also running experiments this week at 2000.)
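To make the setup concrete, here is a minimal sketch (in Python with NumPy, not the actual code I use) of how one problem instance on the phase-transition grid can be generated: given ambient dimension N, undersampling (indeterminacy) δ = m/N, and sparsity ρ = k/m, draw a Gaussian sensing matrix and a k-sparse Bernoulli-Gaussian signal. The function name and conventions here are illustrative.

```python
import numpy as np

def problem_instance(N, delta, rho, rng):
    """Generate one noiseless compressed sensing instance.

    N     : ambient dimension
    delta : undersampling ratio m/N (problem indeterminacy)
    rho   : sparsity ratio k/m
    """
    m = int(round(delta * N))                      # number of measurements
    k = int(round(rho * m))                        # number of active components
    A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian sensing matrix
    x = np.zeros(N)                                # Bernoulli-Gaussian signal:
    support = rng.choice(N, size=k, replace=False) # random support of size k
    x[support] = rng.standard_normal(k)            # active entries ~ N(0, 1)
    y = A @ x                                      # noiseless measurements
    return A, x, y

rng = np.random.default_rng(0)
A, x, y = problem_instance(N=1000, delta=0.5, rho=0.2, rng=rng)
```

With only k active components available to fit the signal prior, small N means small k at low ρ, which is exactly where the parameter estimation struggled at N = 400.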


Below we see the empirical phase transitions of 14 different algorithms for recovery from compressively sampled, noiseless sparse signals whose non-zero elements are distributed Normally. I define a recovery as successful when the support is returned exactly. EMBGAMP is designed precisely for this distribution, and because of that we see a large margin of improvement. As an added benefit, it runs impressively fast!
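For reference, the success criterion amounts to an exact comparison of supports. A small sketch (Python/NumPy; the tolerance for numerically-zero entries is an illustrative choice of mine):

```python
import numpy as np

def support(x, tol=1e-8):
    """Indices of entries that are not numerically zero."""
    return np.flatnonzero(np.abs(x) > tol)

def recovery_successful(x_true, x_hat, tol=1e-8):
    """Success iff the estimated support matches the true support exactly."""
    return np.array_equal(support(x_true, tol), support(x_hat, tol))

# A perfect support estimate succeeds; zeroing one active component fails.
x_true = np.array([0.0, 1.3, 0.0, -0.7, 0.0])
assert recovery_successful(x_true, x_true.copy())
x_bad = x_true.copy(); x_bad[3] = 0.0
assert not recovery_successful(x_true, x_bad)
```

Note this criterion ignores the amplitudes: once the support is exact, the noiseless amplitudes are recoverable by least squares.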

[Figure: Normal1000.png, empirical phase transitions with Normally distributed non-zeros, N = 1000]
When the EMBGAMP algorithm attempts to recover sparse signals that are not distributed Bernoulli (single) Gaussian, its performance degrades. Below are the phase transitions of the same algorithms, but for sparse signals distributed Rademacher (now that I have learned the proper name :), meaning their non-zero elements are equiprobable in \(\{-1,1\}\). (I was calling this distribution “Bernoulli” before. Maleki and Donoho called it CARS, for “constant amplitude random sign.”) But look at this! EMBGAMP still outperforms regular AMP, and even the theoretical transition of \(\ell_1\) minimization.
What a graceful way to “fail”. :)
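For completeness, here is how a Rademacher (CARS) sparse signal can be drawn, mirroring the sketch above (again Python/NumPy, and the function name is mine):

```python
import numpy as np

def rademacher_sparse(N, k, rng):
    """k-sparse signal whose non-zeros are equiprobable in {-1, +1}."""
    x = np.zeros(N)
    idx = rng.choice(N, size=k, replace=False)   # random support of size k
    x[idx] = rng.choice([-1.0, 1.0], size=k)     # constant amplitude, random sign
    return x

rng = np.random.default_rng(0)
x = rademacher_sparse(N=1000, k=100, rng=rng)
```

This is the mismatched case for EMBGAMP: its Bernoulli-Gaussian prior no longer matches the signal, yet the transitions above show it degrades gently rather than collapsing.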

[Figure: Bernoulli1000.png, empirical phase transitions with Rademacher-distributed non-zeros, N = 1000]
