News
CATALPA Lecture Series: Tanja Käser on educational technology and models of learning
[06.01.2026] Generalizability versus interpretability of learning models: that is the topic of Tanja Käser's presentation as part of the CATALPA Lecture Series on January 21. Modeling learners' knowledge and behavior is at the heart of educational technology, yet such models often lack either generalizability or interpretability. Whether and how AI can help here remains an open question.
Photo: EPFL
Insights into science from international colleagues and opportunities for exchange: that is the goal of the CATALPA Lecture Series. Next guest: Prof. Dr. Tanja Käser, a new member of the CATALPA advisory board and tenure-track assistant professor at the EPFL School of Computer and Communication Sciences (IC) in Lausanne, where she heads the ML4ED Laboratory (Machine Learning for Education Laboratory). Her research lies at the intersection of machine learning, data mining, and education. She is particularly interested in creating accurate models of human behavior and learning.
Title of the lecture: Generalizable and interpretable models of learning
Location: Immersive Collaboration Hub (ICH), FernUniversität in Hagen
Date: January 21, 2026
Time: 4:00 p.m. to 6:00 p.m.
Abstract:
Modeling learners’ knowledge, behavior, and strategies is at the heart of educational technology. Learner models serve as a basis for adapting the learning experience to students’ needs and supporting teachers in classroom orchestration. Consequently, a large body of research has focused on creating accurate models of student knowledge and behaviors. However, current modeling approaches are still limited: they are either defined for specific, well-structured domains (e.g., algebra, vocabulary learning), requiring substantial expert work and limiting generalizability, or they lack interpretability. Recent advances in generative AI, in particular large language models (LLMs), have the potential to address these constraints. However, LLMs lack alignment with educational goals and grounded knowledge.
In this talk, I will discuss the key challenges in developing generalizable and explainable models, and our solutions to address them, including models that track learning in open-ended environments and generalize across different environments and populations. I will present our work on explainable AI, including a rigorous evaluation of existing approaches, the development of inherently interpretable models, and studies on effectively communicating model explanations. Finally, I will show some of our recent results combining “traditional” modeling approaches with LLMs to provide interpretable feedback and explanations without compromising model trustworthiness.