wednesday 01 february 2012
We (N. Thome, M. Cord, A. Rakotomamonjy and I) received the notification of acceptance for a poster presentation at ESANN 2012. This work is about learning product combinations of kernels.
The sketch is as follows: suppose you have several types of features and signatures, leading to a variety of kernels (typically Gaussian kernels). This is quite a common scheme in Computer Vision. You might want to combine them, and usually people use MKL approaches (i.e. a weighted sum of kernels). However, in most cases these kernels are redundant, and you would be better off with a product combination (think of the different scales in a Spatial Pyramid, or different scales of the same descriptor). The product acts like an 'AND' gate while the sum acts more like an 'OR' gate, so if your features are redundant, the product is more likely to denoise than the sum.
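A toy numpy sketch of this 'AND' vs 'OR' intuition (the data and bandwidths are made up for illustration): two points agree on one feature and disagree on the other, so one Gaussian kernel fires and the other does not.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma):
    """Gaussian kernel exp(-gamma * ||x - y||^2) between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Two points: identical on feature 1, far apart on feature 2 (pure noise).
x = np.array([[0.0, 0.0]])
y = np.array([[0.0, 5.0]])

k1 = gaussian_kernel(x[:, :1], y[:, :1], 1.0)[0, 0]  # similar feature -> close to 1
k2 = gaussian_kernel(x[:, 1:], y[:, 1:], 1.0)[0, 0]  # noisy feature   -> close to 0

k_sum = 0.5 * (k1 + k2)   # 'OR': stays high because one kernel fires
k_prod = k1 * k2          # 'AND': collapses because one kernel vetoes
```

Here `k_sum` is about 0.5 (the sum is dragged up by the clean feature even though the other disagrees), while `k_prod` is essentially zero: the noisy kernel vetoes the match, which is the denoising behaviour described above.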
The bad thing about this product counterpart of MKL is that it is non-convex (we have a nice proof of this). So we proposed an algorithm that finds a local optimum. While this might not be the best combination possible, it is robust enough in practice to remove non-informative kernels. The good thing is that it also performs the kernel parametrization without any need for cross-validation.
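The paper's actual optimization procedure is not reproduced here, but a short sketch shows why a learned product of Gaussian kernels subsumes bandwidth tuning: raising each Gaussian kernel to a weight β_m and taking the product is the same as a single Gaussian with rescaled bandwidths, so learning the weights tunes the kernel parameters. The distances and weights below are hypothetical toy values.

```python
import numpy as np

# Toy per-feature squared distances d_m(x, y) and hypothetical exponent weights.
d2 = np.array([1.0, 4.0, 0.25])
betas = np.array([0.5, 2.0, 1.0])

# Product of weighted Gaussian kernels: prod_m exp(-d2_m)^beta_m ...
prod = np.prod(np.exp(-d2) ** betas)

# ... equals one Gaussian kernel with per-feature bandwidths beta_m:
single = np.exp(-(betas * d2).sum())

assert np.isclose(prod, single)
```

A weight driven to zero removes its kernel from the product entirely, which is how non-informative kernels get pruned without a cross-validation loop over bandwidths.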
Once I've fixed the remaining typos and applied the corrections suggested by the reviewers, I'll put the code online.