Papers of the Day (Po’D): My EUSIPCO Papers Day 1 Edition

Tomorrow at EUSIPCO 2012 I present two papers. And to remind myself of what I did, I present some Po’D.

I present the first paper as an oral presentation: B. L. Sturm and M. G. Christensen, “Comparison of Orthogonal Matching Pursuit Implementations,” Proc. European Signal Processing Conference, Bucharest, Romania, Aug. 2012. And lucky me, I have already posted a Po’D here. The paper and presentation slides are available at the link above. (Today I decided that in the presentation I will skip the implementation details and go from the big picture to the results.)

I present the second paper as a poster: P. Noorzad and B. L. Sturm, “Regression with sparse approximations of data,” Proc. European Signal Processing Conference, Bucharest, Romania, Aug. 2012.

In this paper, we adapt sparse representation classification for use in regression, and specifically, local polynomial regression.
Local regression is a non-parametric approach that fits a surface to the regression function locally rather than globally.
The Taylor expansion facilitates local polynomial regression, but it requires estimating several parameters, the number of which depends on the polynomial order (e.g., for a locally linear fit we must estimate the gradient, and for a locally quadratic fit the gradient and Hessian).
This estimation problem is quite easy when using least squares optimization.
However, we must define the contribution of each regressand to the local fit; these contributions are called “observation weights.”
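For concreteness, here is the locally linear case in my own notation (a sketch, not copied verbatim from the paper): given the observation weights, the local parameters solve a weighted least squares problem.

```latex
% First-order Taylor model of the regression function m around x_0:
%   m(x) \approx \beta_0 + \beta_1^T (x - x_0)
% Weighted least squares estimate of the local parameters:
\hat{\beta}(\mathbf{x}_0) = \arg\min_{\beta_0,\, \boldsymbol{\beta}_1}
  \sum_{i=1}^{N} w_i(\mathbf{x}_0)
  \left[ y_i - \beta_0 - \boldsymbol{\beta}_1^\top (\mathbf{x}_i - \mathbf{x}_0) \right]^2
% Prediction: \hat{m}(x_0) = \hat{\beta}_0. The locally constant case drops
% the linear term, leaving a weighted average of the y_i.
```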
One approach for defining observation weights is to use the Euclidean distances between the k closest regressors and the point of interest.
That gives “weighted k-nearest neighbor regression”.
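A minimal NumPy sketch of that idea (inverse-distance weights are one common choice here; the exact weighting scheme in the paper may differ):

```python
import numpy as np

def weighted_knn_regression(X, y, x0, k=5, eps=1e-12):
    """Locally constant fit at x0: a weighted average of the regressands
    of the k nearest regressors. X: (N, d) regressors; y: (N,) regressands."""
    dist = np.linalg.norm(X - x0, axis=1)   # Euclidean distances to x0
    idx = np.argsort(dist)[:k]              # indices of the k closest regressors
    w = 1.0 / (dist[idx] + eps)             # inverse-distance observation weights
    return np.dot(w, y[idx]) / np.sum(w)    # weighted average prediction
```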
Another approach is to use a kernel to weight all regressands.
That gives Nadaraya-Watson regression.
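Again a minimal sketch, here assuming a Gaussian kernel with bandwidth h (any kernel works):

```python
import numpy as np

def nadaraya_watson(X, y, x0, h=1.0):
    """Locally constant fit at x0 with kernel weights on all regressands."""
    d2 = np.sum((X - x0) ** 2, axis=1)      # squared distances to x0
    w = np.exp(-0.5 * d2 / h ** 2)          # Gaussian kernel observation weights
    return np.dot(w, y) / np.sum(w)         # kernel-weighted average
```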
In our approach, sparse approximation weighted regression (SPARROW), we define these weights for a point of interest by its sparse approximation using the dictionary of normalized regressors.
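The locally constant version might look like the following (my sketch of the idea, not our actual code: here I use scikit-learn’s orthogonal matching pursuit as the sparse approximation, and taking coefficient magnitudes as the observation weights is my assumption):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def sparrow_constant(X, y, x0, n_nonzero=5):
    """Locally constant SPARROW sketch: sparsely approximate x0 over the
    dictionary of l2-normalized regressors, then weight the regressands
    by the magnitudes of the sparse coefficients."""
    D = X.T / np.linalg.norm(X, axis=1)     # columns are normalized regressors
    a = orthogonal_mp(D, x0, n_nonzero_coefs=n_nonzero)  # sparse approximation of x0
    w = np.abs(a)                           # observation weights from coefficients
    return np.dot(w, y) / np.sum(w)         # weighted average of regressands
```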
In our experiments, we find that the locally constant version of SPARROW performs competitively with other locally constant regression methods.
Curiously, the locally linear version of SPARROW performs more poorly.
Why does this happen?
We don’t yet know.
