Hello, and welcome to brief presentations of some papers that are somehow relevant to my current research interests. Since there are so many interesting papers, I only take a cursory look at a few and jot down some notes — which are probably inaccurate, but might help me later when I need to find something I read.
“An Ellipsoid-Based, Two-Stage Screening Test for BPDN” by L. Dai and K. Pelckmans
From the title I first thought that BPDN was some malady, but it is the familiar “basis pursuit denoising” problem. Essentially, this paper presents an interesting way to reduce the computational cost of BPDN by first performing a “screening” to find elements that are highly likely to be zero. Previous work in this area comes from: Z. J. Xiang, H. Xu, and P. J. Ramadge, “Learning sparse representations of high dimensional data on large scale dictionaries,” NIPS 2011; Z. J. Xiang and P. J. Ramadge, “Fast LASSO screening tests based on correlations,” ICASSP 2012; and L. E. Ghaoui, V. Viallon, and T. Rabbani, “Safe feature elimination in sparse supervised learning,” arXiv preprint arXiv:1009.3515, 2010. The figures in this paper are useless, so I can’t really conclude anything without reading the paper more closely. :) This is a good opportunity for a public service announcement: please sympathize with readers who really want to understand your work, and create figures that advertise rather than antagonize.
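As a note to self on what “screening” means here: the basic (non-ellipsoid) SAFE test from the El Ghaoui et al. preprint cited above discards any dictionary atom whose correlation with the signal falls below a threshold, since its coefficient is then guaranteed to be zero in the BPDN/lasso solution. A minimal sketch of that simpler test — the function name and random data are mine, and this is not the two-stage ellipsoid test of the present paper:

```python
import numpy as np

def safe_screen(A, y, lam):
    """Basic SAFE screening for min_x 0.5*||y - A x||^2 + lam*||x||_1.

    Returns a boolean mask over the columns of A; atoms whose entry is
    False are guaranteed to have zero coefficients, so they can be
    dropped before running the BPDN solver.
    """
    c = A.T @ y                       # correlation of each atom with y
    lam_max = np.max(np.abs(c))       # smallest lam giving the all-zero solution
    # SAFE test: discard atom j if
    #   |a_j^T y| < lam - ||a_j|| * ||y|| * (lam_max - lam) / lam_max
    thresh = (lam
              - np.linalg.norm(A, axis=0) * np.linalg.norm(y)
              * (lam_max - lam) / lam_max)
    return np.abs(c) >= thresh

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)        # unit-norm dictionary atoms
y = rng.standard_normal(64)

lam_max = np.max(np.abs(A.T @ y))
keep = safe_screen(A, y, 0.9 * lam_max)
print(keep.sum(), "of", A.shape[1], "atoms survive screening")
```

Note the trade-off visible in the threshold: as lam approaches lam_max the test discards nearly everything, while for small lam the threshold goes negative and nothing can be screened out — which is presumably what motivates the tighter ellipsoid bounds in this paper.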
“Online One-Class Machines Based on the Coherence Criterion,” by Z. Noumir, P. Honeine and C. Richard
This is the first time I have heard of one-class classification. My first thought: what’s the use? But then I thought, well, that is just detection: it is or it isn’t. After reading some more, though, I see it is the deeper problem of finding a way to detect an apple while knowing only apples and no other fruit.
This paper builds a method for online learning of a one-class SVM, using the coherence criterion to limit the number of support vectors. Elements are only added to the dictionary of support vectors if they are sufficiently incoherent with those already in it, which is simply to say they point in directions that are nearly orthogonal.
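As I understand the coherence criterion, the dictionary check alone is very simple: a new sample is admitted only if its maximal kernel correlation with the current dictionary stays below a threshold. A minimal sketch — the Gaussian kernel, the threshold value, and the function names are my own assumptions for illustration, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def coherence_sparsify(samples, mu0=0.5, sigma=1.0):
    """Grow a dictionary online under the coherence criterion.

    A sample joins the dictionary only if its coherence (maximal kernel
    correlation) with every element already in the dictionary is at most
    mu0; otherwise it is deemed redundant and skipped.
    """
    dictionary = []
    for x in samples:
        coherence = max((abs(gaussian_kernel(x, d, sigma)) for d in dictionary),
                        default=0.0)
        if coherence <= mu0:
            dictionary.append(x)
    return dictionary

rng = np.random.default_rng(1)
samples = rng.standard_normal((200, 2))
D = coherence_sparsify(samples, mu0=0.3)
print(len(D), "dictionary elements kept from", len(samples), "samples")
```

Since the dictionary size is bounded by the geometry (how many mutually incoherent directions fit) rather than by the number of samples, the per-sample cost stays roughly constant, which would explain the speed-up reported.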
It is impressive to see that the learning is two orders of magnitude faster than the standard one-class SVM. Good work!
“Multi-sensor Joint Kernel Sparse Representation for Personnel Detection”, by N. Nguyen, N. Nasrabadi and T. D. Tran
This paper appears to reinvent kernel sparse representation (P. Vincent and Y. Bengio, “Kernel matching pursuit,” Machine Learning, vol. 48, no. 1, pp. 165-187, July 2002), but it does extend the approach to joint sparse representation, and applies it to an interesting problem.