143 days

That is how long I must wait for my 5400 simulations to finish running. I started this process more than 50 hours ago, thinking it would be done Tuesday. Maleki and Donoho are not kidding when they write,

"It would have required several years to complete our study on a single modern desktop computer."

Granted, that was way back in March 2009; but I got my 24-inch iMac (3.33 GHz Intel Core 2 Duo, 8 GB RAM, 2 TB HD) only one year later.

I am not doing any parameter tuning here. In fact, I am using the recommended algorithms from their study (iterative hard/soft thresholding, two-stage thresholding), along with orthogonal matching pursuit and \(\ell_1\) minimization, to investigate how the distribution of the non-zero elements of sparse signals affects their recoverability from compressive sampling. My problem size is the same as theirs: a sensing matrix sampled from the uniform spherical ensemble, \(\mathbf{\Phi} \in \mathbb{R}^{m \times 800}\), and 900 different pairs of problem indeterminacies and sparsities. I am looking at 100 realizations from each of 6 different sparse vector distributions. That makes 540,000 runs of each recovery algorithm, or 2,700,000 total reconstructions across the five algorithms. (Maleki and Donoho ran 90,000,000 total reconstructions on 38 servers for one month.)
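For the curious, here is a minimal sketch of how one cell of this experiment gets generated. It is an illustration in Python/NumPy, not my actual experiment code; the function names and the particular \((\delta, \rho)\) pair are my own choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_use_matrix(m, N, rng):
    """Sample from the uniform spherical ensemble: columns iid uniform on
    the unit sphere in R^m (a normalized Gaussian vector is uniform on the sphere)."""
    Phi = rng.standard_normal((m, N))
    return Phi / np.linalg.norm(Phi, axis=0)  # normalize each column

def sparse_signal(N, k, draw_nonzeros, rng):
    """A k-sparse vector: support chosen uniformly at random, non-zero
    values drawn from a given distribution."""
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x[support] = draw_nonzeros(k)
    return x

# One cell of the phase plane: indeterminacy delta = m/N, sparsity rho = k/m.
N = 800
delta, rho = 0.5, 0.2   # one example of the 900 (delta, rho) pairs
m = int(delta * N)      # number of measurements
k = int(rho * m)        # number of non-zeros

Phi = sample_use_matrix(m, N, rng)
x = sparse_signal(N, k, lambda k: rng.standard_normal(k), rng)  # e.g., Gaussian non-zeros
y = Phi @ x             # measurements handed to each recovery algorithm
```

Each recovery algorithm then sees only \(\mathbf{\Phi}\) and \(\mathbf{y}\), and the experiment swaps in a different `draw_nonzeros` for each of the 6 distributions.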

Since I cannot wait 143 days to write a Signal Processing Letter, I am going to have to get creative. I had plans to include two other distributions (Cauchy and bimodal Rayleigh), as well as reweighted \(\ell_1\), \(M\)-term IHT, cyclic MP and OLS, complementary MP, and PrOMP.
Maybe I can “observe” that \(\ell_1\) minimization remains quite consistent across all distributions, and thereby focus only on greedy methods and thresholding.

By the way, Tropp and Wright (“Computational methods for sparse solution of linear inverse problems,” Proc. IEEE, vol. 98, pp. 948-958, June 2010), Blanchard et al. (J. D. Blanchard, C. Cartis, J. Tanner, and A. Thompson, “Phase transitions for greedy sparse approximation algorithms,” (submitted somewhere), Apr. 2010), and Blumensath and Davies (“Normalized iterative hard thresholding: guaranteed stability and performance,” IEEE J. Selected Topics Signal Process., vol. 4, pp. 298-309, Apr. 2010)
classify thresholding with the greedy pursuits; but Qiu and Dogandzic (“Variance-component based sparse signal reconstruction and model selection,” IEEE Trans. Signal Process., vol. 58, pp. 2935-2952, June 2010) say thresholding cannot be classified as a greedy, convex relaxation, or probabilistic method.
Of these possibilities, I would classify IHT as a greedy approach, since MP is a special case of IHT where the thresholding function adds to the previous solution only the largest-magnitude coefficient at each step; OMP does the same but defines a new residual; and the relaxed greedy algorithm (A. Barron, A. Cohen, W. Dahmen, and R. DeVore, “Approximation and learning by greedy algorithms,” Ann. Statist., vol. 36, no. 1, pp. 64-94, 2008) is a special case of IST. Maybe then, greedy algorithms are a subclass of thresholding algorithms.
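To make that relationship concrete, here is a bare-bones sketch of IHT next to MP, again in Python/NumPy purely as an illustration; the step size, iteration counts, and function names are my own assumptions, not anyone's tuned implementation. Restricting the hard threshold to a single coefficient added atop the previous solution is exactly the MP update.

```python
import numpy as np

def iht(y, Phi, k, iters=300, step=1.0):
    """Iterative hard thresholding: take a gradient step on ||y - Phi x||^2,
    then keep only the k largest-magnitude entries. (For guaranteed descent
    the step should be scaled below 1/||Phi||^2; this is just a sketch.)"""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = x + step * Phi.T @ (y - Phi @ x)  # gradient (Landweber) step
        x = np.zeros_like(g)
        keep = np.argsort(np.abs(g))[-k:]     # support of the k largest magnitudes
        x[keep] = g[keep]
    return x

def mp(y, Phi, iters=300):
    """Matching pursuit: each step adds to the previous solution only the
    single largest-magnitude correlation with the residual
    (unit-norm columns assumed)."""
    x = np.zeros(Phi.shape[1])
    r = y.copy()
    for _ in range(iters):
        c = Phi.T @ r             # correlations with the current residual
        i = np.argmax(np.abs(c))  # the one coefficient a greedy step keeps
        x[i] += c[i]
        r -= c[i] * Phi[:, i]     # update the residual
    return x
```

OMP differs from `mp` only in re-solving a least-squares problem over the accumulated support to define the new residual, which is what makes the "greedy methods as constrained thresholding" reading tempting.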

Update: It appears now that two-stage thresholding is taking nearly four times as long as the other methods! \(\ell_1\)-minimization is not the culprit. Again, I repeat: \(\ell_1\)-minimization is not the culprit. Stay tuned for further developments on this 24-hour sparse recovery news channel.
