In my recent article (submitted), I briefly look at the differences between exact recovery conditions based on the normalized Euclidean distance between the true and recovered solutions, and conditions based on recovering the support of the solution.
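To make the two conditions concrete, here is a minimal sketch in Python/NumPy. The tolerance values `tol` and `thresh` are my own illustrative choices, not necessarily those used in the article.

```python
import numpy as np

def recovered_euclidean(x_true, x_hat, tol=1e-2):
    """Exact recovery by normalized Euclidean distance:
    ||x_hat - x_true||_2 / ||x_true||_2 <= tol."""
    return np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true) <= tol

def recovered_support(x_true, x_hat, thresh=1e-4):
    """Exact recovery of the support: the entries of x_hat with magnitude
    above thresh must sit exactly on the support of x_true."""
    supp_true = np.flatnonzero(np.abs(x_true) > thresh)
    supp_hat = np.flatnonzero(np.abs(x_hat) > thresh)
    return supp_true.size == supp_hat.size and np.all(supp_true == supp_hat)
```

An empirical phase transition can then be estimated by counting, over many random problem instances, how often each predicate returns `True`.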

At a dimension of \(N=400\), using sensing matrices sampled from the uniform spherical ensemble, I found the following differences in the empirical phase transitions between the two recovery conditions for Laplacian-distributed sparse vectors (with no noise in the measurements):
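A sketch of how one such problem instance might be generated. I am reading "uniform spherical ensemble" as i.i.d. Gaussian columns normalized to unit \(\ell_2\) norm; the article may use a different construction, and the sizes below are just examples.

```python
import numpy as np

def spherical_ensemble(m, n, rng):
    # Sensing matrix with columns drawn uniformly from the unit sphere:
    # i.i.d. Gaussian columns, each normalized to unit l2 norm.
    A = rng.standard_normal((m, n))
    return A / np.linalg.norm(A, axis=0)

def laplacian_sparse(n, k, rng):
    # k-sparse vector with Laplacian-distributed nonzero entries
    # on a uniformly random support.
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.laplace(size=k)
    return x

rng = np.random.default_rng(0)
A = spherical_ensemble(200, 400, rng)   # m = 200 measurements, N = 400
x = laplacian_sparse(400, 40, rng)      # k = 40 nonzeros
y = A @ x                               # noiseless measurements
```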

These lines show the separation between the empirical phase transition based on support recovery and that based on the Euclidean distance. Since the latter condition is more lenient than the former (indeed, support recovery implies recovery in the Euclidean sense), this difference should always be less than or equal to zero.
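A toy illustration of this asymmetry, with hypothetical tolerances of my own choosing: an estimate carrying one tiny spurious entry can pass a 1% normalized-Euclidean test while failing support recovery.

```python
import numpy as np

x_true = np.zeros(100)
x_true[:3] = [5.0, -4.0, 3.0]

x_hat = x_true.copy()
x_hat[50] = 0.05            # tiny spurious entry off the true support

# Euclidean condition: relative error is about 0.007, under a 1% tolerance.
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err <= 1e-2)      # True

# Support condition: the spurious entry makes the supports differ.
supp_match = np.array_equal(np.flatnonzero(np.abs(x_hat) > 1e-4),
                            np.flatnonzero(np.abs(x_true) > 1e-4))
print(supp_match)           # False
```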

The other algorithms I tested (the recommended version of two-stage thresholding, subspace pursuit, SL0, and ROMP) show negligible differences between the two phase transitions. Of all the distributions I tested (Normal, bimodal Gaussian, Laplacian, uniform, bimodal uniform, Bernoulli, and bimodal Rayleigh), Laplacian-distributed vectors show the greatest difference. As expected, the empirical phase transitions for Bernoulli signals show no differences at all.
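For reference, plausible NumPy samplers for the nonzero coefficients under each of these distributions. The exact parameterizations here are my guesses (the "bimodal" variants place mass symmetrically on both signs, away from zero), not necessarily the ones used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each sampler returns k nonzero coefficients; signs for the bimodal
# variants are drawn independently of the magnitudes.
dists = {
    "normal":           lambda k: rng.standard_normal(k),
    "laplacian":        lambda k: rng.laplace(size=k),
    "uniform":          lambda k: rng.uniform(-1, 1, size=k),
    "bernoulli":        lambda k: rng.choice([-1.0, 1.0], size=k),
    "bimodal_uniform":  lambda k: rng.choice([-1.0, 1.0], size=k) * rng.uniform(1, 2, size=k),
    "bimodal_rayleigh": lambda k: rng.choice([-1.0, 1.0], size=k) * rng.rayleigh(size=k),
    "bimodal_gauss":    lambda k: rng.choice([-1.0, 1.0], size=k) * (2 + 0.5 * rng.standard_normal(k)),
}
```

If the Bernoulli nonzeros are constant-magnitude \(\pm 1\), as sketched here, it seems plausible that any estimate close enough in normalized Euclidean distance must already have the correct support, which would be consistent with the zero difference observed.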

Now, when we move to a space of dimension \(N=800\) and look at the same differences for Laplacian-distributed sparse vectors (still keeping the same stopping criterion for all algorithms), we get a different picture:

It seems that the difference between the two exact recovery conditions becomes less important as the problem dimensionality increases, especially for GPSR. It is not intuitive to me that the lax Euclidean condition should become as strict as the support condition as the dimension grows.
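For completeness, here is one way the empirical transition could be read off a success-rate curve. This is a sketch under the assumption that the transition is defined as the 50% success crossing, linearly interpolated in the sparsity parameter \(\rho\); the article may define it differently.

```python
import numpy as np

def empirical_transition(rhos, success, level=0.5):
    """Return the rho at which the success fraction first crosses `level`,
    linearly interpolating between grid points. Assumes `rhos` is increasing
    and `success` falls from near 1 to near 0."""
    for i in range(len(rhos) - 1):
        if success[i] >= level > success[i + 1]:
            t = (success[i] - level) / (success[i] - success[i + 1])
            return rhos[i] + t * (rhos[i + 1] - rhos[i])
    return np.nan  # no crossing on this grid

# Example: success drops from 0.6 to 0.2 between rho = 0.3 and 0.4,
# so the interpolated 50% crossing lands at rho = 0.325.
rho_star = empirical_transition([0.1, 0.2, 0.3, 0.4, 0.5],
                                [1.0, 0.9, 0.6, 0.2, 0.0])
```

The difference plotted above would then be the support-based crossing minus the Euclidean-based crossing at each undersampling ratio.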

It could instead be something to do with GPSR, e.g., it begins to behave better as the dimensions grow.

In this case, though, I see the opposite. Below are the empirical phase transitions at dimensionality \(N=400\), based on the support recovery condition, for all the sparse vector distributions I tested with GPSR.

And below we see the same for dimensionality \(N=800\) for GPSR:

Thus, there is no doubt that GPSR (or at least the code I am using) performs worse at the higher dimension, but it is consistently worse across the two recovery conditions. The other algorithms, at least, stay within the same neighborhood at each dimensionality. So my intuition seems right for the time being.
