Continuing from my experiments last week, I have decided to test to what degree the problems of MOD are inherited from OMP, its choice of sparse approximation step. So, as before, I run MOD for 1000 iterations to learn dictionaries of cardinality 128 from 256 measurements, each generated from 8 length-64 atoms (sampled from the uniform spherical ensemble). The weights of the linear combinations are iid Normal.
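For reference, my reading of this setup and of the MOD dictionary update can be sketched as follows (a minimal numpy sketch; all names here are mine and the code is illustrative, not the code I actually ran):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions as in the experiment: length-64 atoms, 128-atom dictionary,
# 256 measurements, each a combination of 8 atoms.
m, K, N, s = 64, 128, 256, 8

# True dictionary: atoms from the uniform spherical ensemble
# (Gaussian vectors normalized to unit length).
D_true = rng.standard_normal((m, K))
D_true /= np.linalg.norm(D_true, axis=0)

# Each measurement combines s atoms with iid Normal weights.
X_true = np.zeros((K, N))
for l in range(N):
    support = rng.choice(K, size=s, replace=False)
    X_true[support, l] = rng.standard_normal(s)
Y = D_true @ X_true

def mod_update(Y, X):
    """MOD dictionary update: least-squares fit D = Y X^+, then
    renormalize the atoms to unit length."""
    D = Y @ np.linalg.pinv(X)
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)
```

In a full MOD run this update alternates with a sparse coding step (OMP, or the oracle projection described below).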
In these experiments, however, I substitute the oracle support and find the weights by least squares. So, given that the \(l\)th measurement is composed of the atoms indexed by \(I_l\), after each dictionary update I find the new weights by orthogonally projecting the measurement onto the span of the atoms indexed by \(I_l\). In this way, we remove OMP and see the behavior of MOD in the best-case scenario: known support. Finding the support is the hardest part anyway.
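This oracle-support coefficient step might look like the following sketch (hypothetical names; `supports` holds the index sets \(I_l\)):

```python
import numpy as np

def oracle_weights(Y, D, supports):
    """Given the true support I_l of each measurement, find the weights by
    least squares, i.e. project y_l orthogonally onto span{d_i : i in I_l}."""
    K, N = D.shape[1], Y.shape[1]
    X = np.zeros((K, N))
    for l, I_l in enumerate(supports):
        # Least-squares fit restricted to the known atoms of measurement l.
        w, *_ = np.linalg.lstsq(D[:, I_l], Y[:, l], rcond=None)
        X[I_l, l] = w
    return X
```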
Below are mean coding errors over all 1000 iterations for 100 independent trials.
We see that MOD found the correct dictionary in fewer than 10% of the trials.
A close-up of where the majority of trials end shows a mean error of about -20 dB.
Using OMP instead, run to select 3 times the number of atoms and with debiasing,
we see a mean error of about -9 dB in the same experimental setup.
So, clearly, knowing the support does help.
These results appear better than those reported in B. Mailhé and M. D. Plumbley, “Dictionary Learning with Large Step Gradient Descent for Sparse Representations,” Proc. 10th International Conference on Latent Variable Analysis and Source Separation (LVA/ICA 2012), Tel-Aviv, Israel, LNCS 7191, pp. 231–238, March 12–15, 2012.
My further experimentation reveals that OMP often finds the support of every measurement when using the true dictionary; and when it is run to select twice or three times the expected sparsity, it always finds it (at least in the 100 independent trials I ran).
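For concreteness, here is a bare-bones OMP sketch (my own minimal implementation, not the exact code behind these experiments). Running it to twice or three times the true sparsity just means asking for more atoms, and the least-squares refit over the selected atoms at each step is the debiasing:

```python
import numpy as np

def omp(y, D, n_atoms):
    """Plain OMP: greedily pick the atom most correlated with the residual,
    then refit ALL selected atoms by least squares (the refit debiases the
    weights and updates the residual)."""
    residual = y.copy()
    support = []
    for _ in range(n_atoms):
        corr = np.abs(D.T @ residual)
        corr[support] = -np.inf  # never re-select an atom
        support.append(int(np.argmax(corr)))
        w, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ w
    return set(support)

# Trivially verifiable example with an orthonormal dictionary:
D = np.eye(8)
y = np.zeros(8)
y[0], y[3] = 1.0, 0.5
print(omp(y, D, 2))  # → {0, 3}
```

Running to a multiple of the expected sparsity only enlarges the returned set, so the question is whether the true support is contained in it.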
This is not so surprising since this dictionary, sampled from the uniform spherical ensemble, likely has very low coherence.
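The coherence claim is easy to check numerically; a quick sketch (mutual coherence being the largest absolute inner product between distinct unit-norm atoms):

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm atoms."""
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

# A 64 x 128 dictionary from the uniform spherical ensemble.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
print(mutual_coherence(D))  # well below 1 for these dimensions
```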
So, this tells me that the failures of MOD in this case are due mostly to the dictionary update, which creates a feedback loop: a poor dictionary update hinders good coding by OMP, which in turn hinders the next dictionary update, and so on.
Next up is looking at MOD with shift-invariant dictionaries.