I leave for home in only a few short days.
And then it will be several weeks’ worth of Po’Ds on which to catch up during my time-zone-adjustment period.
Among the scattered papers in my office is one that appears to fuse orthogonal matching pursuit (OMP) with the A* search algorithm. This paper caught my eye not only because of the OMP bit, but also because just prior I had taught some path-finding, which is where and when I first heard of A*. Well, now that in only a few weeks’ time I will be teaching a 10-lecture course on artificial intelligence, I have been delving into this topic and programming my own fun MATLAB demonstrations (to be reproduced by the students, so no demo code here yet):
Above we see paths from a start to a finish found by Dijkstra’s algorithm, A*, and greedy best-first (left to right). The last two are instances of what is called “informed search,” which just means that the algorithm chooses which nodes to explore further based on some guess about the distance yet to travel, a.k.a. a “heuristic.” A* uses both the cost to reach the current node and the heuristic. The greedy best-first approach uses just the heuristic. Dijkstra’s algorithm is an instance of “uninformed search”: it explores first the nodes with the cheapest cost thus far, without guessing about the distance to the finish. Both Dijkstra’s algorithm and A* are guaranteed to find a least cost path when one exists. We have no such guarantee with greedy best-first search. However, we can see above that greedy best-first search has the smallest “fill” of all: it explores far fewer nodes than the others, and thus gives a reasonable path with a small amount of computation. A* has much less fill than Dijkstra’s, but finds only one least cost path from start to finish, while Dijkstra’s finds least cost paths from the start to several nodes, including the goal. (Dijkstra’s algorithm, or a variant, is used in the IP layer of the 5-layer Internet protocol stack for packet routing, e.g., in the link-state routing protocol OSPF.)
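To make the distinction concrete without spoiling the MATLAB demos for my students, here is a rough Python sketch of all three searches as one best-first loop over a made-up toy grid. The only thing that changes between the algorithms is the priority used to order the frontier: cost-so-far plus heuristic (A*), cost-so-far alone (Dijkstra’s), or heuristic alone (greedy best-first). The grid, the function name, and the Manhattan heuristic are all my own choices for illustration.

```python
import heapq

def search(grid, start, goal, use_cost=True, use_heuristic=True):
    """Best-first search on a 4-connected grid (0 = free, 1 = wall).
    cost + heuristic -> A*; cost only -> Dijkstra's; heuristic only -> greedy."""
    rows, cols = len(grid), len(grid[0])

    def h(n):
        # Manhattan distance: an admissible heuristic on a unit-cost grid
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    def priority(node, g_node):
        return (g_node if use_cost else 0) + (h(node) if use_heuristic else 0)

    g = {start: 0}         # cheapest known cost-so-far to each node
    came = {start: None}   # predecessor map, for reconstructing the path
    frontier = [(priority(start, 0), start)]
    fill = 0               # how many nodes we pop off the frontier (the "fill")
    while frontier:
        _, node = heapq.heappop(frontier)
        fill += 1
        if node == goal:   # reconstruct the path by walking predecessors
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1], fill
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                g_new = g[node] + 1
                if nb not in g or g_new < g[nb]:
                    g[nb] = g_new
                    came[nb] = node
                    heapq.heappush(frontier, (priority(nb, g_new), nb))
    return None, fill      # no path exists

# a toy 5x5 map with a few walls
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0]]
path_d, fill_d = search(grid, (0, 0), (4, 4), use_cost=True,  use_heuristic=False)  # Dijkstra's
path_a, fill_a = search(grid, (0, 0), (4, 4), use_cost=True,  use_heuristic=True)   # A*
path_g, fill_g = search(grid, (0, 0), (4, 4), use_cost=False, use_heuristic=True)   # greedy
```

On this little map, Dijkstra’s and A* both return a least cost path (8 moves), and the fill ordering comes out as in the figure: greedy smallest, Dijkstra’s largest.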
I can see very clearly how finding a sparse representation of a signal in some dictionary can be formulated as finding a least cost path through a set of nodes (atoms in a dictionary), starting with the signal and ending at a zero-norm residual, or within some radius of it to accommodate noise. Some members of the family of matching pursuits (MP, OMP, OLS) are just greedy best-first search, where the heuristic is the magnitude of the projection of each atom (node) onto the residual. So I wonder: is keeping track of the cost, as done in Dijkstra’s and A*, just regularization, or an instance of a relaxed greedy algorithm?
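As a sanity check on that view, here is a tiny pure-Python sketch of plain matching pursuit written in exactly those greedy best-first terms: at each step the “heuristic” is the magnitude of each unit-norm atom’s projection on the residual, and we expand the node with the largest one. The toy dictionary and signal are made up for illustration; this is not OMP or OLS, just plain MP.

```python
import math

def matching_pursuit(signal, dictionary, n_iter=10, tol=1e-6):
    """Plain MP as greedy best-first search: at each step expand the atom
    whose projection magnitude on the residual (the heuristic) is largest."""
    residual = list(signal)
    coefs = {}
    for _ in range(n_iter):
        # heuristic: |<atom, residual>| for each unit-norm atom
        scores = [abs(sum(a * r for a, r in zip(atom, residual)))
                  for atom in dictionary]
        best = max(range(len(dictionary)), key=lambda k: scores[k])
        c = sum(a * r for a, r in zip(dictionary[best], residual))
        coefs[best] = coefs.get(best, 0.0) + c
        # subtract the chosen atom's contribution from the residual
        residual = [r - c * a for r, a in zip(residual, dictionary[best])]
        if math.sqrt(sum(r * r for r in residual)) < tol:
            break  # close enough to the zero-norm "goal" node
    return coefs, residual

# made-up overcomplete dictionary in R^2: two basis atoms plus a diagonal one
s = math.sqrt(0.5)
dictionary = [(1.0, 0.0), (0.0, 1.0), (s, s)]
coefs, residual = matching_pursuit((3.0, 4.0), dictionary)
```

Notice there is no cost term anywhere: nothing penalizes how many atoms we have already spent, which is exactly the quantity Dijkstra’s and A* would carry along.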