# Improved solution selection from an ensemble of compressed sensing recovery algorithms

The other day, I discussed selecting the best solution from a set of solutions produced by several sparse recovery algorithms. This idea seems quite natural to me, since it is unreasonable to assume that any one recovery algorithm will be better than all the rest under all conditions.
We have seen several times that different algorithms behave quite differently when, e.g., we change the distribution underlying the sensed sparse signals.
When we add noise to the measurements, the performance of these algorithms changes as well.

We see that my hard-and-fast definition of sparsity for the ensemble selection criterion
$$\vx = \arg \min_{\vx_i \in \mathcal{X}} \frac{|| \vy - \MA \vx_i ||_2}{|| \vy ||_2} + \frac{|| \vx_i ||_0}{N}$$
i.e.,
$$|| \vx_i ||_0 := \#\{ n \in \{1, 2, \ldots, N\} : | [\vx_i]_n | \ge 10^{-4} \}$$
(which I hand selected in the noiseless case)
is absurd, and leads to extremely poor performance when we introduce noise.
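To make the selection rule concrete, here is a minimal NumPy sketch of the criterion above. The function name `select_solution`, the candidate list, and the `thresh` argument are my own hypothetical choices for illustration, not taken from any released code:

```python
import numpy as np

def select_solution(y, A, candidates, thresh=1e-4):
    """Pick the candidate x_i minimizing the relative residual
    ||y - A x_i||_2 / ||y||_2 plus the normalized sparsity
    ||x_i||_0 / N, where entries below `thresh` count as zero."""
    N = A.shape[1]
    best, best_cost = None, np.inf
    for x in candidates:
        residual = np.linalg.norm(y - A @ x) / np.linalg.norm(y)
        sparsity = np.count_nonzero(np.abs(x) >= thresh) / N
        cost = residual + sparsity
        if cost < best_cost:
            best, best_cost = x, cost
    return best
```

The sparsity term penalizes dense solutions: a least-squares solution with near-zero residual but many active coefficients loses to a sparse solution with the same residual.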
Just look below at how that black dashed line flips around like a garden snake thrown in the air by a thoughtless child.

I took some time to think about it a bit more, and came up with a more reasonable, condition-adapted definition:
$$|| \vx_i ||_0 := \#\{ n \in \{1, 2, \ldots, N\} : | [\vx_i]_n | \ge 10 \sigma_\vn/\sqrt{N} \},$$
where $$\sigma_\vn^2$$ is the power of the additive noise in the measurements,
$$N$$ is the dimension of the signal, i.e., the number of atoms in the dictionary,
and the factor $$10$$ is hand-picked with the help of some hacking.
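As a sketch of this noise-adaptive count (again in NumPy; the name `adaptive_l0` and the constant argument `c` are my own labels for the hand-picked factor):

```python
import numpy as np

def adaptive_l0(x, sigma_n, N, c=10.0):
    """Count coefficients whose magnitude exceeds the
    noise-adaptive threshold c * sigma_n / sqrt(N)."""
    thresh = c * sigma_n / np.sqrt(N)
    return int(np.count_nonzero(np.abs(x) >= thresh))
```

In the noiseless limit, $$\sigma_\vn \to 0$$, the threshold vanishes and every nonzero coefficient is counted, so some fixed floor (like the earlier $$10^{-4}$$) would still be needed there.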

Now I have rerun the simulations at an SNR of 60 dB, and I see a great improvement in the selected solutions!

No more defenestrated snake.
Methinks I have the beginnings of a submission for ICASSP.