JKernelMachines 2.2

Here we go again; it seems I only alternate between new publications and updates to jkms on this page.

Version 2.2.

  • Fast kernel using the Nyström approximation (with the fast active learning procedure as in (Tabia, BMVC13))
  • Large-scale kernel SVM using the Nyström approximation
  • New algorithms and better tuning in the algebra package
  • Multithreading support for algebra
  • Optional dependency on EJML for faster eigen-decomposition (the check is done at runtime, compatible with older code)
  • Revised and online Javadoc

The library can now optionally depend on EJML in order to accelerate the eigen-decomposition. I had a lot of fun implementing some algorithms (Jacobi, QR, Householder transforms, Givens rotations, ...), which allows the library to perform all the BLAS operations it needs on its own. However, it will never be competitive with dedicated libraries. So I checked the current pure Java BLAS libraries, and EJML is probably the best out there (kudos to the people behind it). I made a simple wrapper that checks whether the library is in the classpath and uses it in that case. No older code should break because of this. If it does, email me quickly...
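For illustration, here is a minimal sketch of how such an optional dependency can be probed at runtime. This is not the actual wrapper code: the class name probed and the two placeholder methods are assumptions used purely to show the idea.

    /*
     * Minimal sketch of a runtime check for an optional dependency. NOT the actual
     * JKernelMachines wrapper: the probed class name and the placeholder methods
     * are assumptions for illustration only.
     */
    public class OptionalEjml {

        // Probe the classpath once; EJML is used only if it is found.
        private static final boolean EJML_PRESENT =
                isOnClasspath("org.ejml.simple.SimpleMatrix");

        static boolean isOnClasspath(String className) {
            try {
                Class.forName(className, false, OptionalEjml.class.getClassLoader());
                return true;
            } catch (ClassNotFoundException e) {
                return false;
            }
        }

        // Dispatch the eigen-decomposition to EJML when available, else to pure Java code.
        public static double[][] eig(double[][] symmetricMatrix) {
            if (EJML_PRESENT) {
                // EJML-backed path (kept behind a thin wrapper so the core has no hard dependency).
                return eigWithEjml(symmetricMatrix);
            }
            // Built-in pure Java path (e.g. Jacobi or QR iterations).
            return eigPureJava(symmetricMatrix);
        }

        // Hypothetical placeholders for the two code paths.
        static double[][] eigWithEjml(double[][] m) { throw new UnsupportedOperationException("sketch"); }
        static double[][] eigPureJava(double[][] m) { throw new UnsupportedOperationException("sketch"); }
    }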

Next, I will wrap more things around EJML (i.e. not only eig), but I still want jkms to remain totally autonomous. That is, no existing feature will ever require EJML (nor any other library).

Another new feature is a fast kernel based on the Nyström approximation, with an active learning strategy for fast training. This is part of the work I did with Hedi Tabia and presented at BMVC last September.
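For reference, the Nyström method approximates the full kernel k(x, y) by k_x^T Kmm^{-1} k_y, where Kmm is the kernel matrix of a small set of m landmark samples and k_x collects the kernel values between x and the landmarks. Below is a minimal, self-contained sketch of this approximation in plain Java; it is not the JKernelMachines implementation, and the active learning part (picking the landmarks cleverly) is not reproduced here.

    import java.util.Arrays;

    /*
     * Minimal sketch of the Nystrom kernel approximation (plain Java, NOT the
     * JKernelMachines API): k(x, y) is approximated by kx^T Kmm^{-1} ky, where Kmm
     * is the kernel matrix of m landmarks and kx = [k(x, z_1), ..., k(x, z_m)].
     */
    public class NystromSketch {

        // Gaussian RBF kernel on double[] vectors (the bandwidth gamma is illustrative).
        static double rbf(double[] a, double[] b, double gamma) {
            double d = 0;
            for (int i = 0; i < a.length; i++) {
                double diff = a[i] - b[i];
                d += diff * diff;
            }
            return Math.exp(-gamma * d);
        }

        // Solve A x = b by Gaussian elimination with partial pivoting (A is small: m x m).
        static double[] solve(double[][] A, double[] b) {
            int n = b.length;
            double[][] M = new double[n][];
            for (int i = 0; i < n; i++) {
                M[i] = Arrays.copyOf(A[i], n + 1);
                M[i][n] = b[i];
            }
            for (int col = 0; col < n; col++) {
                int piv = col;
                for (int r = col + 1; r < n; r++)
                    if (Math.abs(M[r][col]) > Math.abs(M[piv][col])) piv = r;
                double[] tmp = M[col]; M[col] = M[piv]; M[piv] = tmp;
                for (int r = col + 1; r < n; r++) {
                    double f = M[r][col] / M[col][col];
                    for (int c = col; c <= n; c++) M[r][c] -= f * M[col][c];
                }
            }
            double[] x = new double[n];
            for (int i = n - 1; i >= 0; i--) {
                double s = M[i][n];
                for (int j = i + 1; j < n; j++) s -= M[i][j] * x[j];
                x[i] = s / M[i][i];
            }
            return x;
        }

        // Approximate kernel value k(x, y) given m landmark points.
        static double nystromKernel(double[] x, double[] y, double[][] landmarks, double gamma) {
            int m = landmarks.length;
            double[][] Kmm = new double[m][m];
            double[] kx = new double[m], ky = new double[m];
            for (int i = 0; i < m; i++) {
                kx[i] = rbf(x, landmarks[i], gamma);
                ky[i] = rbf(y, landmarks[i], gamma);
                for (int j = 0; j < m; j++) Kmm[i][j] = rbf(landmarks[i], landmarks[j], gamma);
                Kmm[i][i] += 1e-8; // small ridge for numerical stability
            }
            double[] alpha = solve(Kmm, ky); // alpha = Kmm^{-1} ky
            double out = 0;
            for (int i = 0; i < m; i++) out += kx[i] * alpha[i];
            return out;
        }
    }

In practice one would factorize Kmm once (or build an explicit feature map) instead of solving a linear system for every kernel evaluation; the sketch only spells out the formula.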

JKernelMachines 2.1 release

I released a new version of JKernelMachines with the following features:
  • New algorithms: SDCA (Shalev-Shwartz, 2013) and SAG (Le Roux, 2012); a sketch of the SDCA update is given after this list
  • New custom matrix kernel to handle train and test data separately
  • Added the fvec file format
  • Added an experimental package for linear algebra and corresponding processing (e.g. PCA, KPCA), use at your own risk!
  • Added an example app to perform VOC-style classification
  • Lots of bug fixes
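For reference, the SDCA update for the hinge loss has a simple closed form, which is what makes it convenient. Here is a minimal plain-Java sketch of one SDCA training loop for a linear SVM, following the closed-form coordinate update of Shalev-Shwartz and Zhang (2013); it is an illustrative sketch, not the JKernelMachines classifier, and all names are mine.

    import java.util.Random;

    /*
     * Minimal sketch of SDCA for a linear SVM with hinge loss (plain Java, NOT the
     * JKernelMachines implementation). x are dense feature vectors, y in {-1, +1}.
     */
    public class SdcaSketch {

        public static double[] train(double[][] x, int[] y, double lambda, int epochs, long seed) {
            int n = x.length, d = x[0].length;
            double[] w = new double[d];    // primal weights, kept equal to (1/(lambda*n)) * sum_i beta_i * y_i * x_i
            double[] beta = new double[n]; // dual variables in [0, 1]
            Random rnd = new Random(seed);

            for (int epoch = 0; epoch < epochs; epoch++) {
                for (int t = 0; t < n; t++) {
                    int i = rnd.nextInt(n); // pick a random coordinate
                    double[] xi = x[i];
                    double dot = 0, sqNorm = 0;
                    for (int j = 0; j < d; j++) {
                        dot += w[j] * xi[j];
                        sqNorm += xi[j] * xi[j];
                    }
                    if (sqNorm == 0) continue;
                    // Closed-form dual maximization along coordinate i (hinge loss):
                    // beta_i <- clip(beta_i + (1 - y_i w.x_i) * lambda * n / ||x_i||^2, 0, 1)
                    double candidate = beta[i] + (1 - y[i] * dot) * lambda * n / sqNorm;
                    double newBeta = Math.max(0, Math.min(1, candidate));
                    double delta = newBeta - beta[i];
                    if (delta == 0) continue;
                    beta[i] = newBeta;
                    // Keep the primal vector consistent with the dual variables.
                    double scale = delta * y[i] / (lambda * n);
                    for (int j = 0; j < d; j++) w[j] += scale * xi[j];
                }
            }
            return w;
        }
    }

Note that there is no step size to tune: the coordinate update is exact, which is the main practical appeal of SDCA over plain SGD.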

The linear algebra package is very rough at the moment. I find it somewhat useful for performing some kind of pre-processing (a PCA for example). At the moment, my matrix code is a bit slow. If I ever find the time to write solid matrix operations, I will add some nice features like low-rank approximations of kernels (Nyström).
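To make the pre-processing idea concrete, here is a minimal plain-Java sketch of a PCA step that extracts the leading principal component by power iteration. Again, this is only an illustration under my own naming, not the code of the experimental algebra package.

    import java.util.Random;

    /*
     * Minimal sketch of a PCA pre-processing step (plain Java, NOT the JKernelMachines
     * algebra package): the leading principal component is found by power iteration on
     * the covariance matrix, without ever forming that matrix explicitly.
     */
    public class PcaSketch {

        // Returns the leading eigenvector of the covariance matrix of the data.
        public static double[] leadingComponent(double[][] data, int iterations, long seed) {
            int n = data.length, d = data[0].length;

            // Center the data.
            double[] mean = new double[d];
            for (double[] row : data)
                for (int j = 0; j < d; j++) mean[j] += row[j] / n;
            double[][] centered = new double[n][d];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < d; j++) centered[i][j] = data[i][j] - mean[j];

            // Power iteration: v <- C v / ||C v||, with C v computed as X^T (X v) / n.
            Random rnd = new Random(seed);
            double[] v = new double[d];
            for (int j = 0; j < d; j++) v[j] = rnd.nextGaussian();
            normalize(v);
            for (int it = 0; it < iterations; it++) {
                double[] cv = new double[d];
                for (double[] row : centered) {
                    double proj = 0;
                    for (int j = 0; j < d; j++) proj += row[j] * v[j];
                    for (int j = 0; j < d; j++) cv[j] += proj * row[j] / n;
                }
                normalize(cv);
                v = cv;
            }
            return v;
        }

        private static void normalize(double[] v) {
            double norm = 0;
            for (double x : v) norm += x * x;
            norm = Math.sqrt(norm);
            if (norm > 0) for (int j = 0; j < v.length; j++) v[j] /= norm;
        }
    }

Projecting each sample onto the returned vector (and onto further components obtained by deflation) gives the usual dimensionality-reduced features.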

Nevertheless, I suggest always picking the latest git version instead of these releases. The API is very stable now and should not change significantly, which means that any code you write now will be supported for the next few years. Picking the latest git thus ensures you always have the bug fixes and so on (I don't cut releases just for bug fixes).

One more thing: JKernelMachines was published in JMLR last month. I encourage you to read the paper and to cite it if you ever use the code for your publications.

JKernelMachines 2.0

JKernelMachines version number bumped to 2.0!

The big changes are:

  • All classes have migrated under fr.lip6.jkernelmachines.*, which breaks backward compatibility! (hence the 2.0 version number)
  • Separation of the core library and unit testing
  • Junit testing added
  • Lots of bug fixes
  • Better examples, and many useless test classes removed
  • A small demo script to benchmark the library was added
As always, check the mloss page or github directly.

ESANN 2012 Special Session on Multimedia

I am organizing a special session with Philippe Gosselin at this year's ESANN conference.

Machine Learning for multimedia applications
David Picard, ETIS – ENSEA, Philippe-Henri Gosselin, INRIA Rennes (France)

In recent years, many multimedia applications have been greatly improved by leveraging machine learning techniques. These applications include image and video classification, object recognition, image and video retrieval, and event detection.

However, these multimedia applications also raise new machine learning problems in areas such as mid-level feature learning, distance learning, feature combination, and so on.

This special session is intended for research papers that apply machine learning to multimedia problems. The following topics are of particular interest:

  • Mid-level feature learning, Deep learning
  • Feature combination, Multiple kernel learning
  • Kernel methods, Kernel learning, Distance learning
  • Machine learning methods specially adapted to image classification, image and video retrieval, object or event recognition, etc.
Important Dates:
Submission of full paper: November 30, 2012
Notification of acceptance: February 1, 2013

Seminar by Hassen Drira, Thursday, March 8th, 2012

Tomorrow, Hassen will be showing us some pretty things on 3D object recognition (in French). It's open to everyone, in room 384 at the ENSEA.

Statistical computation on 3D shape manifolds for identity and expression recognition

We propose a Riemannian framework for comparing, deforming, computing statistics on, and hierarchically organizing facial surfaces. We apply this framework to 3D face biometrics, independently of facial expressions. The same framework is used to recognize expressions independently of identity. Facial surfaces are represented by a set of radial curves; in this case the computations simplify, and the shape space of open curves reduces to a hypersphere in a Hilbert space. Identity recognition is based on an elastic metric in order to cope with non-isometric (i.e. not length-preserving) deformations of the facial surfaces. Expression recognition, for its part, is based on learning the energy needed to deform neutral faces so as to express the six universal emotions. The proposed identity recognition approach has been validated on well-known benchmarks (FRGCv2, GAVAB, BOSPHORUS) and obtained results competitive with state-of-the-art methods. The expression recognition approach has been tested on the BU4D database, a database of 3D sequences, and outperforms state-of-the-art approaches.
