Continuing my experiments with LoCOMP yesterday, here are my comparisons of the time-domain distributions of atoms in the order-\(n=25\) models of each signal, constructed by five different greedy methods. (Here is the code I used to produce the plots below.)
First, we clearly see the problems that come with MP, OMP, and LoCOMP making poor atom selections, e.g., the atoms preceding the transient onset, where there is nothing to model. This behavior wastes atoms and obfuscates the relationships between the model and the signal contents. We see that interference adaptation ameliorates this situation, to the tune of nearly a 10 dB difference in the signal-to-residual energy ratio (SRR). In essence, atoms are placed where they are needed to model the signal, not to correct the model. I can see no major differences between the OMPIA and LoCOMPIA models, though there is a 1 dB difference in favor of the latter.
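For concreteness, the SRR I quote throughout is just the ratio of signal energy to residual energy in dB. Here is a minimal sketch in numpy; the signal and residual below are toy placeholders, not the data from these experiments:

```python
import numpy as np

def srr_db(signal, residual):
    """Signal-to-residual energy ratio (SRR) in dB."""
    return 10 * np.log10(np.sum(signal**2) / np.sum(residual**2))

# Toy example: a model whose residual retains 1% of the signal energy
x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256))
r = 0.1 * x  # pretend residual with 1/100 the energy
print(srr_db(x, r))  # -> 20.0 dB
```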
Here again we see the problem of greedy pursuits combining these two modes into one, with a large happy atom that is later corrected by other, destructively interfering atoms. I cannot see any differences between the models built by OMP and LoCOMP, with the exception of a small atom on the left in the latter. With interference adaptation, a more constructive model is built, with smaller-scale atoms concentrated in each mode. There are significantly different atoms present in the models built with OMPIA and LoCOMPIA. I posit that this comes from the fact that LoCOMP (at least in my version) has a very strict definition of the neighbors of an atom, i.e., a neighbor must share support with the atom. Perhaps if this were relaxed to neighbors of neighbors, e.g., an atom that shares support with an atom that shares support with the selected atom, the differences between OMPIA and LoCOMPIA would be less significant — but then the computational cost would grow. Also, there is no guarantee that the best interference weighting for OMPIA is also best for LoCOMPIA, so my results cannot support any claim that LoCOMP is less amenable to interference adaptation. (That will be the subject of another experiment soon.)
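To be concrete about what I mean by "shares support" and the neighbors-of-neighbors relaxation, here is a sketch. The (start, length) atom representation is a simplification for illustration, not my actual data structures, and `order` controls how many hops of shared support count as neighborhood:

```python
def shares_support(a, b):
    """True if the time supports of two (start, length) atoms overlap."""
    a0, a1 = a[0], a[0] + a[1]
    b0, b1 = b[0], b[0] + b[1]
    return a0 < b1 and b0 < a1

def neighbors(atom, book, order=1):
    """Atoms in `book` within `order` hops of shared support from `atom`.

    order=1 is the strict rule (direct support overlap);
    order=2 is the neighbors-of-neighbors relaxation.
    """
    frontier = {atom}
    found = set()
    for _ in range(order):
        new = {b for b in book
               if b != atom and b not in found
               and any(shares_support(b, f) for f in frontier)}
        found |= new
        frontier = new  # expand outward one hop per pass
    return found
```

With `book = [(8, 10), (16, 10), (30, 5)]` and `atom = (0, 10)`, the strict rule finds only `(8, 10)`, while `order=2` also pulls in `(16, 10)` — which is exactly where the extra cost comes from: each pass rescans the book against a growing frontier.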
Finally, here we see even more problems with greedy pursuits and dictionaries of smooth functions: overshoot. It is completely silly that for a stationary random process, MP and OMP produce models with a diversity of components, i.e., the model of the stationary process is not stationary. These pursuits, with their greed-infested good-for-everything attitudes, barge in with their big infinite dictionaries, acting all high and mighty while monopolizing the gigaflops of my computer’s processor, and yet they miss the most sparse and efficient way of modeling such signals from their very first iteration. (And still, I have to wait 199 more iterations for this crap?) With OMPIA we obtain a model that is exactly what we want: atoms that are uniformly overlapped. The foresight of interference adaptation, providing reins on the greed, also makes a difference of over 4 dB in the SRR. When integrated with LoCOMP, we see similar results, but there is that strange atom centered at sample 512.
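For reference, the greedy loop being criticized here is only a few lines. This is a textbook MP sketch over a finite dictionary of unit-norm atoms — not my LoCOMP implementation, and with none of the interference adaptation discussed above:

```python
import numpy as np

def mp(x, D, n_iter):
    """Plain matching pursuit.

    D holds unit-norm atoms as columns; returns the coefficient
    vector and the residual after n_iter greedy iterations.
    """
    r = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        c = D.T @ r                 # correlate residual with every atom
        k = np.argmax(np.abs(c))    # greedy selection: best single atom
        coeffs[k] += c[k]
        r -= c[k] * D[:, k]         # subtract the chosen atom
    return coeffs, r
```

The greed is all in that `argmax`: each iteration commits to the single best atom for the current residual, with no regard for how the choice interferes with atoms selected later — which is precisely what lets destructive corrections and overshoot creep into the model.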
It is clear from my brief experiments that the signal models built by OMP and LoCOMP are similar not only in the resulting signal-to-residual energy ratios, but also in the distribution of atoms selected by each pursuit — at least for these rather simple signals. Thus, for these signals, the tradeoff of using a locally optimized subspace projection instead of the projection onto the entire basis, as done in OMP, is well justified: it appears to make little difference, positive or negative, in the resulting signal models. Now it is time to unleash LoCOMP on the real world, and to incorporate interference adaptation, to truly see whether all these problems with greedy iterative descent pursuits are really that bad after all.