Orthogonal Matching Pursuit the Cholesky Way

Continuing from my previous post, we now look at how maintaining a Cholesky decomposition of the subdictionary Gramian makes each OMP iteration much cheaper.

Consider that in its \(k\)th step, OMP has the two sets of projections
\(\mathcal{I}_{k-1} = \{\langle \vr_{k-1}, \vphi_n \rangle/\|\vphi_n\| \}_{n \in \Omega}\)
and \(\mathcal{I}_{0} = \{\langle \vu, \vphi_n \rangle/\|\vphi_n\|\}_{n \in \Omega}\),
the set of dictionary inner products
\(\Gamma_l = \{\langle \vphi_l, \vphi_n \rangle/\|\vphi_n\|\}_{n \in \Omega}\)
and norms \(\{\|\vphi_n\|\}_{n \in \Omega}\),
and the Cholesky decomposition of the Gramian of the
subdictionary \(\MPhi_{\Omega_{k-1}}^H \MPhi_{\Omega_{k-1}} =
\ML_{k-1}\ML_{k-1}^H\).
Such a decomposition always exists because the Gramian is Hermitian positive definite whenever the selected atoms are linearly independent.
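To make the bookkeeping concrete, here is a minimal initialization sketch in MATLAB, assuming a dictionary stored column-wise in Phi (M x N) and a signal u (M x 1); the variable names here are my own, not those of ompdChol.m.

```matlab
% Illustrative setup with a random dictionary and signal
M = 64; N = 256;
Phi = randn(M, N);                          % dictionary, one atom per column
u   = randn(M, 1);                          % signal to decompose

atomNorms = sqrt(sum(abs(Phi).^2, 1)).';    % {||phi_n||}_{n in Omega}
I0    = (Phi' * u) ./ atomNorms;            % I_0 = {<u, phi_n>/||phi_n||}
G     = Phi' * Phi;                         % Gram matrix; Gamma_l is G(:,l)./atomNorms
L     = [];                                 % Cholesky factor of the subdictionary Gramian
Omega = [];                                 % indices of the selected atoms
Ik    = I0;                                 % current projections I_{k-1}
```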


OMP selects the new atom index \(n_k\) by searching \(\mathcal{I}_{k-1}\) for the largest-magnitude projection,
and updates the Cholesky factorization by
first solving for \(\vv\) in the triangular system
\(\ML_{k-1}\vv = \MPhi_{\Omega_{k-1}}^H\vphi_{n_k}\)
(the right-hand side coming from \(\Gamma_{n_k}\) rescaled by the atom norms),
and then updating the triangular matrix by
$$
\ML_{k} = \left [ \begin{array}{cc}
\ML_{k-1} & \zerob \\
\vv^H & \sqrt{\|\vphi_{n_k}\|^2 - \|\vv\|^2}
\end{array} \right ].
$$
The complexity of the search and update is \(\mathcal{O}(N + k^2 + k)\).
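Under the same assumptions and variable names as the sketch above, the selection and Cholesky update at step \(k\) might look like the following (again a sketch, not the code in ompdChol.m):

```matlab
% Select the atom with the largest-magnitude projection in I_{k-1};
% in exact arithmetic, already-selected atoms have zero projection
% onto the residual and so are not reselected.
[~, nk] = max(abs(Ik));
gk = G(Omega, nk);                          % Phi_{Omega_{k-1}}^H phi_{nk}: Gamma_{nk} rescaled by the atom norms
if isempty(L)
    L = atomNorms(nk);                      % first step: L_1 = ||phi_{n1}||
else
    v = L \ gk;                             % forward substitution: L_{k-1} v = Phi_{Omega_{k-1}}^H phi_{nk}
    L = [L, zeros(size(L,1), 1); ...
         v', sqrt(atomNorms(nk)^2 - norm(v)^2)];
end
Omega = [Omega; nk];                        % Omega_k = Omega_{k-1} plus the new index
```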
The least-squares solution \(\vx_k\), which minimizes the \(\ell_2\)-norm of the approximation error, satisfies the normal equations
$$
\MPhi_{\Omega_{k}}^H\MPhi_{\Omega_{k}} \vx_k = \ML_{k}\ML_{k}^H \vx_k = \MPhi_{\Omega_{k}}^H \vu,
$$
where the right-hand side comes from \(\mathcal{I}_{0}\) rescaled by the atom norms.
OMP thus only needs to solve two triangular systems, by forward and back substitution,
$$
\begin{align}
\ML_k \vy & = \MPhi_{\Omega_k}^H \vu \\
\ML_k^H \vx_k & = \vy
\end{align}
$$
which has complexity \(\mathcal{O}(k^2)\).
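Continuing the sketch, both systems can be solved with MATLAB's backslash operator, which detects and exploits the triangular structure of L:

```matlab
b  = I0(Omega) .* atomNorms(Omega);         % Phi_{Omega_k}^H u, recovered from I_0
y  = L \ b;                                 % L_k y = Phi_{Omega_k}^H u   (forward substitution)
xk = L' \ y;                                % L_k^H x_k = y               (back substitution)
```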
Using \(\ML_k^H \vx_k = \vy\) and \(\ML_k \vy = \MPhi_{\Omega_k}^H \vu\), the new residual energy is just
$$
\begin{align}
\|\vr_k\|^2 & = \|\vu - \MPhi_{\Omega_k} \vx_k\|^2 \\
& = \|\vu\|^2 + \|\MPhi_{\Omega_k} \vx_k\|^2 - 2\text{Re}\{\vx_k^H \MPhi_{\Omega_k}^H\vu\} \\
& = \|\vu\|^2 + \vx_k^H \ML_{k}\ML_{k}^H \vx_k - 2\text{Re}\{\vx_k^H \ML_{k} \vy\} \\
& = \|\vu\|^2 - \|\vy\|^2.
\end{align}
$$
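In the sketch, this means a stopping criterion can be evaluated without ever forming the residual explicitly (uEnergy and resEnergy are names of my own choosing):

```matlab
uEnergy   = norm(u)^2;                      % ||u||^2, computed once before the iterations
resEnergy = uEnergy - norm(y)^2;            % ||r_k||^2 = ||u||^2 - ||y||^2
```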
Finally, OMP updates the inner products in one of two ways.
For the first,
OMP explicitly computes the residual signal
\(\vr_k = \vu – \MPhi_{\Omega_k} \vx_k\)
with a cost of \(\mathcal{O}(Mk)\),
and then directly computes the inner products
\(\mathcal{I}_{k} = \{\langle \vr_k, \vphi_n\rangle/\|\vphi_n\|\}_{n \in \Omega}\)
with a cost of \(\mathcal{O}(MN)\).
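In the running sketch, this first approach would be:

```matlab
rk = u - Phi(:, Omega) * xk;                % explicit residual, O(Mk)
Ik = (Phi' * rk) ./ atomNorms;              % all N inner products directly, O(MN)
```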
In the second case, OMP computes
$$
\begin{multline}
\mathcal{I}_{k} = \left \{\frac{\langle \vr_{k}, \vphi_n \rangle}{\|\vphi_n\|} \right\}_{n \in \Omega} =
\left \{\frac{\langle \vu,\vphi_n \rangle}{\|\vphi_n\|} - \frac{\langle\MPhi_{\Omega_k} \vx_k, \vphi_n \rangle}{\|\vphi_n\|} \right\}_{n \in \Omega} \\
= \left \{\mathcal{I}_{0}(n) - \sum_{l=1}^k x_{l} \frac{\langle\vphi_{n_l}, \vphi_n \rangle}{\|\vphi_n\|} \right\}_{n \in \Omega} \\
= \left \{\mathcal{I}_{0}(n) - \sum_{l=1}^k x_{l} \Gamma_{n_l}(n) \right \}_{n \in \Omega}
\end{multline}
$$
where \(x_l\) is the \(l\)th element of \(\vx_k\).
This entire update has a complexity of \(\mathcal{O}(Nk)\).
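In the running sketch, the second approach reuses I0 and the Gram matrix and never touches u or Phi again:

```matlab
Ik = I0 - (G(:, Omega) * xk) ./ atomNorms;  % I_k(n) = I_0(n) - sum_l x_l Gamma_{n_l}(n), O(Nk)
```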
Thus, the total complexity of the \(k\)th iteration is either
\(\mathcal{O}(MN + Mk + k^2)\) with the first approach,
or \(\mathcal{O}(Nk + k^2)\) with the second.

Here is my MATLAB implementation of the Cholesky OMP: ompdChol.m
