Dolores Romero Morales

Copenhagen Business School, Denmark

Dolores Romero Morales is a professor of Operations Research at Copenhagen Business School, Denmark. Her areas of expertise include explainability and fairness in data science as well as sustainable supply chain management. Dolores currently serves as the Editor-in-Chief of TOP and as an associate editor of the Journal of the Operational Research Society and the INFORMS Journal on Data Science. Moreover, she is an Honorary SAS Fellow and a member of the SAS Academic Advisory Board. Among her recent achievements, Dolores, together with her co-authors, was awarded the 2024 Spanish Society of Statistics and Operations Research–BBVA Foundation Award for the best contribution in statistics and operations research applied to data science and big data published in the European Journal of Operational Research. Beyond research, Dolores actively supports early-career researchers through initiatives such as YoungWomen4OR, a program within the EURO WISDOM Forum that aims to increase the visibility of young female researchers in operations research across EURO.

Talk Title: "Local Explainability in Machine Learning: A collective framework"

State-of-the-art Artificial Intelligence (AI) and Machine Learning (ML) algorithms have become ubiquitous across industries due to their high predictive performance. However, despite their widespread deployment, these models are often criticized for their lack of transparency and accountability. Their “black-box” nature obscures the reasoning behind decisions, limiting trust and hindering their integration into critical, data-driven decision-making processes. Moreover, algorithmic decisions can perpetuate or even amplify societal biases, leading to unfair and discriminatory outcomes. This concern is especially pressing in high-stakes domains such as healthcare, criminal justice, and credit scoring, where unfair model behavior can significantly impact individuals' lives.

In the burgeoning field of Explainable Artificial Intelligence, the goal is to shed light on black-box machine learning models. Local Interpretable Model-Agnostic Explanations (LIME) is a popular tool that, given a prediction model and an instance, builds a surrogate linear model yielding similar predictions in a neighborhood of the instance. When LIME is applied to a group of instances, independent linear models are obtained, which may hinder overall explainability.
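
As context, the sketch below shows a minimal LIME-style local surrogate in Python. The sampling scheme, proximity kernel, and hyperparameters are illustrative assumptions, not the speaker's implementation; the `black_box` at the end is a toy stand-in for any prediction model.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lime_explain(black_box_predict, x0, n_samples=1000, kernel_width=0.75,
                 noise_scale=0.5, alpha=0.01, seed=0):
    """Fit a sparse, locally weighted linear surrogate around instance x0.

    black_box_predict: any callable mapping an (n, d) array to scores;
    the surrogate's coefficients serve as the local explanation.
    """
    rng = np.random.default_rng(seed)
    # Sample perturbations in the neighborhood of the instance.
    Z = x0 + rng.normal(scale=noise_scale, size=(n_samples, x0.shape[0]))
    y = black_box_predict(Z)
    # Proximity kernel: perturbations closer to x0 receive larger weights.
    dist = np.linalg.norm(Z - x0, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # A weighted Lasso gives a sparse linear model mimicking the black box locally.
    surrogate = Lasso(alpha=alpha).fit(Z, y, sample_weight=weights)
    return surrogate.coef_, surrogate.intercept_

# Illustrative use with a toy nonlinear "black box".
black_box = lambda Z: np.tanh(Z[:, 0] * Z[:, 1]) + 0.1 * Z[:, 2]
coef, intercept = lime_explain(black_box, np.array([1.0, -1.0, 0.0]))
```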

In this talk we propose a novel framework, called Collective LIME (CLIME), in which the surrogate models built for the different instances are linked, varying smoothly with respect to the coordinates of the instances. With this collective approach, CLIME makes it possible to control global sparsity, i.e., which features are ever used, even though a sparse model is built for each instance. In addition, CLIME builds Generalized Linear Models as surrogates, enabling us to address different prediction tasks with the very same methodology: classification, regression, and regression of count data. We will show how classic Operations Research models, such as the Knapsack Problem, are relevant to obtaining satisfactory CLIME solutions. We will end the talk by illustrating our approach on a collection of benchmark datasets.
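
The abstract does not detail how the Knapsack Problem enters CLIME. As a purely hypothetical illustration of the connection, the sketch below solves a 0-1 knapsack by dynamic programming to decide which features may ever be used across the local models, given assumed per-feature utility scores (e.g., importance aggregated over instances) and a global budget; the scores, costs, and budget are invented for the example.

```python
def knapsack_select(utility, cost, budget):
    """0-1 knapsack via dynamic programming: pick the subset of features
    maximizing total utility subject to a global budget on total cost."""
    n = len(utility)
    # table[i][b]: best utility achievable with the first i features and budget b.
    table = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        u, c = utility[i - 1], cost[i - 1]
        for b in range(budget + 1):
            table[i][b] = table[i - 1][b]          # skip feature i-1
            if c <= b:                             # or take it, if affordable
                table[i][b] = max(table[i][b], table[i - 1][b - c] + u)
    # Backtrack to recover which features were selected.
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if table[i][b] != table[i - 1][b]:
            chosen.append(i - 1)
            b -= cost[i - 1]
    return sorted(chosen)

# Illustrative use: allow at most 2 features to be used across all local models,
# given hypothetical aggregated importance scores and unit costs per feature.
print(knapsack_select(utility=[0.9, 0.1, 0.5, 0.4], cost=[1, 1, 1, 1], budget=2))
# -> [0, 2]
```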

