Speedups in OMP Implementations

I am now compiling results for a comparative study of four implementations of orthogonal matching pursuit (OMP): the naive way, the Cholesky way, the QR way, and the matrix inversion lemma (MIL) way.
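To make the comparison concrete, here is a minimal sketch (not the code used in the study, and with NumPy/SciPy assumed) of two of these variants: the naive way, which re-solves the full least-squares problem at every iteration, and the Cholesky way, which instead grows a Cholesky factor of the Gram matrix of the selected atoms by one row per iteration.

```python
import numpy as np
from scipy.linalg import solve_triangular

def omp_naive(D, y, k):
    """Naive OMP: re-solve the full least-squares problem every iteration."""
    residual, support = y.copy(), []
    for _ in range(k):
        # select the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        x, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ x
    return support, x

def omp_cholesky(D, y, k):
    """Cholesky OMP: grow the factor L (with L @ L.T = D_S.T @ D_S)
    one row per iteration instead of refactoring from scratch."""
    residual, support = y.copy(), []
    L = np.zeros((k, k))
    for i in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        d = D[:, j]
        if i == 0:
            L[0, 0] = np.linalg.norm(d)
        else:
            # rank-one extension of the Cholesky factor with the new atom
            w = solve_triangular(L[:i, :i], D[:, support].T @ d, lower=True)
            L[i, :i] = w
            L[i, i] = np.sqrt(d @ d - w @ w)
        support.append(j)
        # coefficients via two triangular solves of (D_S.T D_S) x = D_S.T y
        b = D[:, support].T @ y
        x = solve_triangular(L[:i + 1, :i + 1].T,
                             solve_triangular(L[:i + 1, :i + 1], b, lower=True),
                             lower=False)
        residual = y - D[:, support] @ x
    return support, x
```

Both versions select the same atoms and return the same coefficients (up to floating-point noise); the Cholesky version simply avoids redoing the factorization work each iteration.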

So far, every implementation shows some error accumulation from its recursive updates, but at entirely acceptable levels (at least on my 64-bit machine).
Furthermore, I have yet to observe a case where one fails while the others succeed.
Where they differ significantly, however, is in their computation times.
Below we see in the phase plane how much faster the three other implementations are than the naive way for a particular ambient dimension \(N\). The z-axis is the mean time of the naive implementation (averaged over 100 independent trials at each problem sparsity and indeterminacy) divided by the mean time of the other implementation. (This is the mean time of the entire pursuit, not per iteration.) It is interesting to see how large the benefit of QR over Cholesky becomes for large problem sizes and more iterations.
[Figure: meantimeofpursuit.gif — mean pursuit-time speedup over the naive implementation across the phase plane.]
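Measured this way, each z-value is just a ratio of mean wall-clock times over whole pursuits. A sketch of such a timing harness (the callables and parameter names here are hypothetical, not from the study):

```python
import time

def mean_time(f, trials=100):
    """Mean wall-clock time of f() over `trials` independent runs."""
    t0 = time.perf_counter()
    for _ in range(trials):
        f()
    return (time.perf_counter() - t0) / trials

def speedup(baseline, candidate, trials=100):
    """The ratio on the z-axis: mean time of the baseline (e.g. naive) pursuit
    divided by mean time of the candidate — whole pursuit, not per iteration."""
    return mean_time(baseline, trials) / mean_time(candidate, trials)
```

In practice one would call `speedup` at each (sparsity, indeterminacy) grid point of the phase plane, with `baseline` and `candidate` closing over the problem instance.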


One thought on “Speedups in OMP Implementations”

  1. Have you compared against the version in the SPAMS toolbox? It’s open source C++, so you could convert to Matlab if you want a fair test. I believe they use a single Cholesky factorization, and it’s extremely rapid. I tried to break it with gigantic dictionaries and failed; it’s always fast.
    (Note: if you do compare with their version, the output may be slightly different, because they use a slightly different variant. In my tests, their variant gives better results, too.)
