Contact Information

  • Institute: CY Cergy Paris Université
  • Address: Bureau 583, Bâtiment A, Site Saint Martin, 2 av. Adolphe Chauvin, Pontoise 95000 France
  • Email: aikaterini.tzompanaki [at] cyu [dot] fr

Title: Explaining Recommender Systems via Why-Not questions - DEADLINE EXTENDED!


A recommender helps users explore the set of items in a system and find the items most relevant to them. Recommenders fall into two basic categories: content-based and score-based. The first exploits the characteristics of users and items, while the second relies on the scores (ratings) that users give to items. Traditional recommenders build on TF-IDF and nearest-neighbour techniques, while more recent ones follow machine learning approaches such as matrix factorization and neural networks.

A natural question that comes with recommendations is whether the user, or even the system designer, understands the results of the recommender. This problem has given rise to so-called explainable recommenders. Explainable recommendation improves the transparency, persuasiveness, effectiveness, trustworthiness, and satisfaction of recommendation systems, and it also helps system designers debug their systems [Zhang2018]. So far, research in explainable recommendation has focused on the Why question: “Why is an item recommended?”. Solutions either treat the recommendation system as a black box and try to reveal relationships among users and items, or the importance of different features with respect to the predicted value (e.g., [Lundberg2017]), or delve into the intrinsic characteristics of the recommendation system in order to truly explain it [Ghazimatin2020].

What has not yet been studied, however, is the Why-Not aspect of a recommendation: “Why is a specific item not recommended?”. We argue that explaining why certain items, or categories of items, are not recommended can be as valuable as explaining why items are recommended. Why-Not questions have recently gained the attention of the research community in multiple settings, e.g., for relational databases [Bidoit2015]. In machine learning, Why-Not questions have been shown to improve the intelligibility of predictions [Lim2009], but they remain vastly unexplored.
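To illustrate the contrast between Why and Why-Not, consider the following minimal sketch of a score-based recommender. All data and function names here are hypothetical, and a rank-2 truncated SVD merely stands in for a learned matrix-factorization model; the point is only that a first-cut Why-Not answer can report how far an item's predicted score falls short of the recommendation cutoff.

```python
import numpy as np

# Toy user-item ratings matrix (0 = unrated); purely illustrative data.
ratings = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [0.0, 1.0, 5.0, 4.0],
])

# Rank-2 truncated SVD as a stand-in for learned latent factors.
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
pred = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # predicted scores

def top_n(user, n=2):
    """Recommend the n unrated items with the highest predicted score."""
    unrated = np.where(ratings[user] == 0)[0]
    return unrated[np.argsort(pred[user, unrated])[::-1][:n]]

def why_not(user, item, n=2):
    """First-cut Why-Not answer: report the gap between the item's
    predicted score and the score of the weakest recommended item."""
    recs = top_n(user, n)
    if item in recs:
        return f"item {item} IS recommended"
    cutoff = pred[user, recs].min() if len(recs) else float("inf")
    return (f"item {item} misses the top-{n} cutoff: "
            f"predicted {pred[user, item]:.2f} vs cutoff {cutoff:.2f}")
```

A deeper Why-Not explanation, as targeted by the thesis, would go beyond this score gap and trace the answer back to the model's inputs and internals.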

In this thesis proposal, we aim to explore Why-Not explanations for machine-learning-based recommenders. In a second phase, we aim to extend these recommenders so that they can leverage the Why-Not explanations for auto-tuning.

Requirements and skills

The candidate should hold an MSc degree in a field related to Computer Science, Machine Learning, or Applied Mathematics/Statistics. She/He should have solid knowledge of data management, algorithms, and programming. Knowledge of and previous experience with machine learning, recommender systems, or explainability are a plus. She/He should master the English language (oral and written); knowledge of French is not required. She/He must have strong analytical skills, be proactive and self-driven, and be capable of collaborating with a group of international researchers.

Duration and Location

The PhD position is funded for three years, full-time, starting in September 2020. The successful candidate will work at CY Cergy Paris Université. She/He will also be a member of the MIDI team of the ETIS lab, whose researchers specialize in the management of various types of data (e.g., relational, web, multimedia, spatial), data integration, and data mining.


Supervisors

  • Dimitris Kotzinos, Professor, CY Cergy Paris Université, France (thesis director), Email: dimitrios.kotzinos [at]
  • Katerina Tzompanaki, Associate Professor, CY Cergy Paris Université, France (thesis co-supervisor), Email: aikaterini.tzompanaki [at] cyu [dot] fr


Interested candidates are requested to send a single PDF file including:
  • Detailed CV
  • Motivation Letter
  • Copies of study certificates (when available)
  • Copies of transcripts
  • Copy of English language certificate
  • Contact details of two references
to Dr. Katerina Tzompanaki, Email: aikaterini.tzompanaki [at] cyu [dot] fr. We will accept complete applications until the 15th of May 2020, or until the position is filled.


  • [Zhang2018] Zhang, Yongfeng, and Xu Chen. "Explainable recommendation: A survey and new perspectives." arXiv preprint arXiv:1804.11192 (2018).
  • [Lundberg2017] Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in neural information processing systems. 2017.
  • [Ghazimatin2020] Ghazimatin, Azin, et al. "PRINCE: Provider-side Interpretability with Counterfactual Explanations in Recommender Systems." Proceedings of the 13th International Conference on Web Search and Data Mining. 2020.
  • [Bidoit2015] Bidoit, Nicole, Melanie Herschel, and Aikaterini Tzompanaki. "Efficient computation of polynomial explanations of why-not questions." Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. 2015.
  • [Lim2009] Lim, Brian Y., Anind K. Dey, and Daniel Avrahami. "Why and why not explanations improve the intelligibility of context-aware intelligent systems." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2009.