
The paper "Enhancing XAI Narratives through Multi-Narrative Refinement and Knowledge Distillation" received the Best Paper Award at the HCAI 2025 Workshop (Human-Centric AI: From Explainability and Trustworthiness to Actionable Ethics), co-located with the ACM CIKM 2025 international conference in Seoul, South Korea. The paper was authored by Flavio Giorgi, Matteo Silvestri, and Gabriele Tolomei from the Department of Computer Science, in collaboration with Fabrizio Silvestri (DIAG) and Cesare Campagnano.
The work advances a promising research direction in human-centered explainable AI, aiming to make AI explanations more accessible, interpretable, and practically useful in critical real-world contexts. The authors propose a novel pipeline that combines large and small language models (LLMs and SLMs) to generate coherent natural language narratives from counterfactual explanations of predictive models operating on tabular data.
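To give a rough sense of the idea, the sketch below (hypothetical names only, not the authors' code or data) shows how a counterfactual explanation for a tabular model might be turned into a prompt that a language model, whether a large model or a distilled smaller one, could narrate in plain language.

```python
# Minimal sketch (hypothetical, not the paper's implementation): assembling a
# narration prompt from a counterfactual explanation of a tabular model.

def counterfactual_prompt(original: dict, counterfactual: dict,
                          outcome_from: str, outcome_to: str) -> str:
    """Build a prompt listing the feature changes that flip the prediction."""
    changes = [
        f"- {feature}: {original[feature]} -> {counterfactual[feature]}"
        for feature in original
        if original[feature] != counterfactual.get(feature, original[feature])
    ]
    return (
        f"The model predicted '{outcome_from}'. The following feature changes "
        f"would flip the prediction to '{outcome_to}':\n"
        + "\n".join(changes)
        + "\nExplain these changes to a non-expert as a short, coherent narrative."
    )


if __name__ == "__main__":
    # Hypothetical loan-approval example on tabular data.
    original = {"income": 32000, "credit_history_years": 2, "open_loans": 4}
    counterfactual = {"income": 41000, "credit_history_years": 2, "open_loans": 2}
    print(counterfactual_prompt(original, counterfactual, "loan denied", "loan approved"))
```

The resulting prompt would then be passed to a language model to produce the final narrative; the paper's contribution concerns how such narratives are refined and how the capability is distilled into smaller models.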

