This algorithm power hour is devoted to Compressive Sampling Matching Pursuit (CoSaMP), presented in the paper: D. Needell and J. A. Tropp, “CoSaMP: iterative signal recovery from incomplete and inaccurate samples,” Appl. Comput. Harmonic Anal., vol. 26, no. 3, pp. 301-321, 2009.

Given an \(s\)-sparse vector \(\vx \in \mathbb{R}^N\),

we sense it by a matrix \(\mathbf{\Phi} \in \mathbb{R}^{m\times N}\) with \(N > m\),

which produces the measurement vector \(\vu = \mathbf{\Phi}\vx\).

Define the index set of the columns of \(\mathbf{\Phi}\) as \(\Omega = \{1, 2, \ldots, N\}\).

CoSaMP proceeds in the following way.

Given the index set \(\Omega_k \subset \Omega\) at iteration \(k\),

and defining \(\mathbf{\Phi}_{\Omega_k}\) as the columns from \(\mathbf{\Phi}\) indexed by \(\Omega_k\),

CoSaMP redefines this set \(\Omega_{k+1} = J_s^*\)

where \(J_s^* \subset \Omega\) is a set of at most \(s\) indices found in the following way.

First, CoSaMP finds

$$

J := \textrm{Supp}\{T_1 \left ( |\mathbf{\Phi}^*\vr_k| ; \tau_{k,1} \right ) \}

$$

where \(T_1(\vy; \tau_{k,1})\) is a thresholding function applied to each element of \(\vy\)

with threshold \(\tau_{k,1} \ge 0\)

(wherever an element of \(\vy\) falls below \(\tau_{k,1}\), it is set to zero),

and \(\vr_k\) is the residual at iteration \(k\), initialized as \(\vr_0 = \vu\).

CoSaMP sets \(\tau_{k,1}\) such that \(|J| \le 2s\).
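As a concrete illustration, this identification step can be sketched in Python/NumPy (a choice of this sketch; the author's own code is MATLAB). Rather than picking \(\tau_{k,1}\) explicitly, selecting the \(2s\) largest-magnitude entries of the proxy \(|\mathbf{\Phi}^*\vr_k|\) is one way to realize a threshold with \(|J| \le 2s\):

```python
import numpy as np

def identify_support(Phi, r, s):
    """Form the proxy |Phi^* r| and return the indices of its 2s largest
    entries. Keeping the 2s largest magnitudes realizes the threshold
    tau_{k,1} so that |J| <= 2s."""
    proxy = np.abs(Phi.conj().T @ r)        # |Phi^* r_k|
    J = np.argsort(proxy)[::-1][:2 * s]     # indices of the 2s largest entries
    return set(J.tolist())
```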

Then with \(\Omega_{k}\cup J\),

CoSaMP computes the weights \(\vb\) such that \(||\vu - \mathbf{\Phi}\MI_{\Omega_{k}\cup J}\vb||_2\) is minimized,

where \(\MI_{\Omega_{k}\cup J}\) is an \(N\times N\) diagonal matrix

with ones at \((j,j)\), \(j \in \Omega_{k}\cup J\), and zeros elsewhere.
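In a sketch (again Python/NumPy, an assumption of this illustration), there is no need to form the \(N\times N\) selector matrix explicitly: one can solve an ordinary least-squares problem on the selected columns of \(\mathbf{\Phi}\) and embed the solution back into \(\mathbb{R}^N\):

```python
import numpy as np

def least_squares_on_support(Phi, u, support):
    """Minimize ||u - Phi b||_2 over vectors b supported on `support`.

    Equivalent to the formulation with the diagonal selector matrix
    I_{Omega_k ∪ J}: restrict Phi to the selected columns, solve, and
    zero-pad the solution back to length N."""
    idx = sorted(support)
    b = np.zeros(Phi.shape[1])
    b[idx] = np.linalg.lstsq(Phi[:, idx], u, rcond=None)[0]
    return b
```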

CoSaMP thresholds \(\vb\) to find the new optimal support

$$

J_s^* = \textrm{Supp}\{T_2\left ( |\vb|; \tau_{k,2} \right )\}

$$

where \(T_2(\vy)\) is another thresholding function,

and \(\tau_{k,2} \ge 0\) is set such that a maximum of \(s\) elements of \(\vb\) are retained.
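The pruning step admits the same shortcut as identification: instead of choosing \(\tau_{k,2}\) directly, keep the \(s\) largest-magnitude entries of \(\vb\). A minimal Python/NumPy sketch:

```python
import numpy as np

def prune(b, s):
    """Keep the s largest-magnitude entries of b and zero the rest,
    i.e. choose tau_{k,2} so that at most s elements survive.
    The surviving indices form the new support J_s^* = Omega_{k+1}."""
    keep = np.argsort(np.abs(b))[::-1][:s]
    pruned = np.zeros_like(b)
    pruned[keep] = b[keep]
    return pruned, set(keep.tolist())
```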

*(Ed: the following was added 20110814 to make it more complete.)*

CoSaMP then computes the new residual \(\vr_{k+1} = \vu - \mathbf{\Phi}\MI_{\Omega_{k+1}}\vb\), and repeats the procedure above until some stopping condition is met.
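Putting the steps above together, the full iteration can be sketched as follows (Python/NumPy; the author's code is MATLAB). The stopping rule here compares \(||\vr_k||_2\) to \(||\vu||_2\), since \(\vx\) is unknown in practice; the defaults of 300 iterations and a \(10^{-5}\) tolerance are assumptions of this sketch:

```python
import numpy as np

def cosamp(Phi, u, s, max_iter=300, tol=1e-5):
    """A minimal sketch of one run of CoSaMP, following the steps above."""
    m, N = Phi.shape
    b = np.zeros(N)
    r = u.astype(float).copy()      # r_0 = u
    support = set()                 # Omega_0 is empty
    for _ in range(max_iter):
        # Identification: 2s largest entries of the proxy |Phi^* r_k|.
        proxy = np.abs(Phi.conj().T @ r)
        J = set(np.argsort(proxy)[::-1][:2 * s].tolist())
        # Least squares over the merged support Omega_k ∪ J.
        merged = sorted(support | J)
        b_full = np.zeros(N)
        b_full[merged] = np.linalg.lstsq(Phi[:, merged], u, rcond=None)[0]
        # Pruning: keep the s largest entries; their support is Omega_{k+1}.
        keep = np.argsort(np.abs(b_full))[::-1][:s]
        b = np.zeros(N)
        b[keep] = b_full[keep]
        support = set(keep.tolist())
        # Residual update and stopping condition.
        r = u - Phi @ b
        if np.linalg.norm(r) < tol * np.linalg.norm(u):
            break
    return b
```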

CoSaMP is generalized by the two-stage thresholding technique of Maleki and Donoho.

In the implementation I use (available at Nuit Blanche), I set the maximum number of iterations to 300, terminating early once \(||\vr_k||_2/||\vx||_2 < 10^{-5}\). Just now, however, I spotted serious errors in this code.

Below you can see the probabilities of exact recovery as a function of sparsity for several problem indeterminacies \(\delta = m/N\).

The black lines are results from using the code at Nuit Blanche.

The red lines are results from using my corrected code, available here: cosamp.m. *(Note: this code has been corrected here. 20110816: I have further corrected it here.)*