thursday 17 april 2014
2014 is set to be a good year! We already have reviews back for a few papers I've been working on lately. Some are in the ML domain (an ICPR paper with Romain Negrel on supervised sparse subspace learning, an ESANN paper with Jérôme Fellus on decentralized PCA), others in CV (two journal papers in revision on low-level visual descriptors with Olivier Kihl), and one in 3D indexing with Hedi Tabia (CVPR poster).
Other than that, I've been pushing version 2.3 of jkms. I've tagged it the "density edition", since most of the changes are related to density estimators (mostly the one-class SVM). I've introduced the density version of SimpleMKL, which could be useful to perform model selection. Basically, if you set C=1, you get a Parzen estimator, albeit one selecting its kernel from a specific set.
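To see why the C=1 case degenerates this way: with all weights equal, the estimate is just the average kernel value against the training set, which is exactly a Parzen (kernel density) estimator. A toy Python sketch of that textbook formula (illustrative only, not jkms code):

```python
import math

def gauss_kernel(x, y, gamma=1.0):
    # Gaussian RBF kernel between two scalar samples
    return math.exp(-gamma * (x - y) ** 2)

def parzen_density(x, train, gamma=1.0):
    # Parzen estimate: average kernel value against the training set
    # (up to the kernel's normalization constant)
    return sum(gauss_kernel(x, t, gamma) for t in train) / len(train)

train = [0.0, 0.1, -0.1, 0.05]
near = parzen_density(0.0, train)  # high: inside the cluster of samples
far = parzen_density(5.0, train)   # low: far from every sample
```

Selecting the kernel (here, gamma) is then what the MKL machinery buys you on top of this.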
Finally, I'll be in Brugge next week for the ESANN 2014 conference. A good way to start new projects, if anyone volunteers!
monday 10 june 2013
- new algorithms: SDCA (Shalev-Shwartz 2013), SAG (Le Roux 2012)
- new custom matrix kernel to handle train and test separately
- add fvec file format
- add experimental package for linear algebra and corresponding processing (e.g. PCA, KPCA), use at your own risk!
- add example app to perform VOC style classification
- Lots of bug fixes
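For the curious, SDCA is appealing because the hinge-loss coordinate update has a closed form. A toy Python sketch of the linear case (following Shalev-Shwartz & Zhang's update; variable names made up, nothing to do with the actual jkms Java code):

```python
import random

def sdca_train(X, y, lam=0.01, epochs=50, seed=0):
    # Stochastic Dual Coordinate Ascent for a linear SVM with hinge loss:
    # each step maximizes the dual w.r.t. one alpha_i in closed form
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    alpha = [0.0] * n
    w = [0.0] * d
    for _ in range(epochs):
        for i in rng.sample(range(n), n):  # random permutation of samples
            xi, yi = X[i], y[i]
            norm2 = sum(xj * xj for xj in xi)
            if norm2 == 0.0:
                continue
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            # closed-form update, with alpha_i * y_i clipped to [0, 1]
            delta = yi * max(0.0, min(1.0,
                    (1.0 - margin) * lam * n / norm2 + alpha[i] * yi)) - alpha[i]
            alpha[i] += delta
            w = [wj + delta * xj / (lam * n) for wj, xj in zip(w, xi)]
    return w

X = [[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]]
y = [1, 1, -1, -1]
w = sdca_train(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1 for x in X]
```

The nice part is that the dual gap gives a usable stopping criterion, which is harder to get with plain SGD.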
The linear algebra package is very rough at the moment. I find it somewhat useful to perform some kind of pre-processing (like a PCA, for example). Right now, my matrix code is a bit slow. If I ever find the time to write solid matrix operations, I will add some nice features like low-rank approximations of kernels (Nyström).
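Since Nyström came up: the approximation only needs the kernel evaluated between all points and a small set of landmarks, which is what makes it cheap. A quick numpy sketch of the idea (illustrative only, with made-up names):

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Gaussian kernel matrix between the rows of A and the rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(K_nm, K_mm):
    # Nystrom low-rank approximation: K ~ K_nm pinv(K_mm) K_nm^T,
    # built from m landmark points instead of the full n x n matrix
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
landmarks = X[:10]                 # 10 landmarks out of 50 points

K = rbf(X, X)                      # exact kernel matrix, for comparison
K_approx = nystrom(rbf(X, landmarks), rbf(landmarks, landmarks))
err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

With all points taken as landmarks the reconstruction is exact; the game is keeping m small while err stays acceptable.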
Nevertheless, I suggest always picking the latest git version instead of these releases. The API is very stable now and should not change significantly, which means that any code you write now will be supported for the next few years. Picking the latest git thus ensures you always have the bug fixes (I don't publish releases just for bug fixes).
One more thing: JKernelMachines was published in JMLR last month. I encourage you to read the paper and to cite it if you ever use the code in your publications.
wednesday 10 april 2013
The year is beginning with a small batch of publications on the different topics I'm working on.
Two years ago, we developed a new signature for image retrieval and classification based on tensor aggregation, which we named VLAT. The paper giving the full details of the method (plus a bonus for cheap large-scale computation) has now been published in Computer Vision and Image Understanding at Elsevier. In the meantime, Romain Negrel (Ph.D. student) has completely redesigned the method to improve its effectiveness. His work has now been accepted in IEEE Multimedia. There are some nice experiments in this paper, including large-scale retrieval (1M images) at a very low bitrate (less than 64 bytes per image).
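The core idea of the tensor aggregation, very roughly sketched (a simplification of the CVIU paper, with made-up names, and skipping the normalizations that matter in practice): for each visual word, sum the outer products of the centered local descriptors assigned to it, then flatten everything into one signature.

```python
import numpy as np

def vlat_signature(descriptors, centers):
    # assign each local descriptor to its nearest codebook center, then
    # aggregate the outer products of the centered descriptors per cluster
    k, d = centers.shape
    sig = np.zeros((k, d, d))
    for x in descriptors:
        c = np.argmin(((centers - x) ** 2).sum(1))
        r = x - centers[c]
        sig[c] += np.outer(r, r)
    return sig.reshape(-1)  # flattened k * d * d signature

rng = np.random.default_rng(1)
desc = rng.standard_normal((100, 8))   # toy local descriptors
centers = rng.standard_normal((4, 8))  # toy codebook
s = vlat_signature(desc, centers)
```

The second-order statistics are what make the signature discriminative with a mere linear classifier, and also what makes its raw size k·d² (hence the compression work).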
On the video front, we have a paper accepted at MVA 2013 with Olivier Kihl (postdoc), on video descriptors using polynomial expansions. We have very good results on well-known datasets, which makes me think this approach is very promising.
On a totally different topic, I recently wrote a paper with my colleague Aymeric Histace on modeling an insect (the bark beetle) using a multi-agent system. This was something I hadn't done for years, and it was fun to do. The novelty in our approach is that the chemical markers released by the agents into the environment evolve according to a system of partial differential equations modeling their physical spreading. This concurrent evolution between the MAS and the PDE makes the behavior of the agents a lot less predictable. Part of this work was done by Marie-Charlotte Desseroit (undergrad student) during an internship last summer, which I find pretty impressive.
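To give a flavor of the PDE side (a generic textbook sketch, not our actual model): a concentration field of markers deposited by agents can be spread with an explicit finite-difference step of the diffusion equation.

```python
import numpy as np

def diffuse(field, D=0.1, dt=1.0):
    # one explicit Euler step of du/dt = D * laplacian(u),
    # with periodic boundaries via np.roll (for simplicity)
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0)
           + np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)
    return field + D * dt * lap

field = np.zeros((32, 32))
field[16, 16] = 1.0          # an agent deposits a chemical marker
for _ in range(10):          # the marker spreads over time
    field = diffuse(field)
```

The agents then read this evolving field to decide where to go, which is what couples the MAS to the PDE.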
monday 18 june 2012
We have a paper on image categorization accepted at ICPR next November. This is the other part of the work Romain Negrel has been doing with VLAT. This time it's about efficiency in image classification. We tried to put every trick of the latest image categorization techniques (like dense sampling, spatial pyramids, and so on) into our VLAT while still retaining small signatures.
All in all, we managed to achieve 61.5% mAP on VOC2007, which is not bad at all considering we used a single feature and a linear classifier (a stochastic gradient descent from Léon Bottou). Actually, if you push the throttle a bit further, you can expect better results, but then it becomes computationally very heavy. As usual, some code is available here, although it only covers the Holidays dataset right now. At least you can produce the features and then use your own machine learning library (or mine, of course!).
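The learning side is simple enough to sketch: Bottou-style SGD on the regularized hinge loss, with the classic 1/(lambda*t) step size (a toy Python version on made-up data, nothing to do with the actual experiments):

```python
import random

def sgd_svm(X, y, lam=0.01, epochs=100, seed=0):
    # SGD on the primal SVM objective:
    # lam/2 ||w||^2 + mean(max(0, 1 - y * w.x)), step size 1/(lam * t)
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 1
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # regularization shrink, then subgradient step on the hinge
            w = [wj * (1.0 - eta * lam) for wj in w]
            if margin < 1.0:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
            t += 1
    return w

X = [[2.0, 1.0], [1.0, 1.5], [-1.0, -1.0], [-2.0, -0.5]]
y = [1, 1, -1, -1]
w = sgd_svm(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1 for x in X]
```

One pass over the data per epoch, a few floats per dimension: that is why a linear classifier scales where kernel machines choke.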
monday 23 april 2012
We have a paper accepted at ICIP next September. This is the work Romain Negrel has been doing on reducing the size of our VLAT features using kernel-based dimensionality reduction techniques. We focused on search by similarity, and it turned out to give very comparable results on large-scale benchmarks.
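As a pointer for the curious, kernel PCA is the canonical example of such a kernel-based reduction (not necessarily the exact method of the paper): it boils down to an eigendecomposition of the centered Gram matrix. A numpy sketch:

```python
import numpy as np

def kpca(K, dim):
    # kernel PCA on a Gram matrix K: double-center it, take the top
    # eigenvectors, and scale by 1/sqrt(eigenvalue) so the principal
    # directions in feature space have unit norm
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:dim]   # indices of the top eigenvalues
    return Kc @ (vecs[:, idx] / np.sqrt(vals[idx]))

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 10))
K = X @ X.T           # linear kernel, just for the sketch
Z = kpca(K, 3)        # 3-dimensional embedding of the 40 points
```

The appeal for signature compression is that only K is needed, never the (huge) explicit features.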
I hope to release some code on this very soon. In the meantime, you can check this page.
I'll be in Brugge next week for the ESANN conference.