Distinguished Lectures
The Computer Science Department at Sapienza University of Rome is promoting a series of Distinguished Lectures given by renowned speakers on fundamental research topics in computer science. The goal of each lecture (approx. 45 minutes) is to explain why the theme is indeed fundamental, and to summarize the state of the art up to cutting-edge research.
Supported by MIUR under the grant "Dipartimenti di eccellenza 2018–2022" awarded to the Computer Science Department at Sapienza University.
2021
Lecturer: Jeffrey David Ullman
Title: Abstractions and Their Compilers
Location: Zoom meeting
Date: September 16, 2021.
Time: 17.00–18.00 (CET)
Lecturer: Michael Bronstein
Title: Geometric Deep Learning: from Euclid to drug design
Location: Zoom meeting
Date: April 27, 2021.
Time: 11.00–12.00 (CET)
Meeting recording
Michael is the recipient of the Royal Society Wolfson Research Merit Award, Royal Academy of Engineering Silver Medal, five ERC grants, two Google Faculty Research Awards, and two Amazon AWS ML Research Awards. He is a Member of the Academia Europaea, Fellow of IEEE, IAPR, BCS, and ELLIS, ACM Distinguished Speaker, and World Economic Forum Young Scientist. In addition to his academic career, Michael is a serial entrepreneur and founder of multiple startup companies, including Novafora, Invision (acquired by Intel in 2012), Videocites, and Fabula AI (acquired by Twitter in 2019). He has previously served as Principal Engineer at Intel Perceptual Computing and was one of the key developers of the Intel RealSense technology.
The current state of deep learning somewhat resembles the situation in the field of geometry in the 19th century: on the one hand, in the past decade, deep learning has brought a revolution in data science and made possible many tasks previously thought to be beyond reach, including computer vision, playing Go, and protein folding. On the other hand, we have a zoo of neural network architectures for various kinds of data, but few unifying principles. As in times past, it is difficult to understand the relations between different methods, inevitably resulting in the reinvention and rebranding of the same concepts.
Geometric Deep Learning aims to bring geometric unification to deep learning in the spirit of the Erlangen Programme. Such an endeavour serves a dual purpose: it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers, and gives a constructive procedure to incorporate prior knowledge into neural networks and build future architectures in a principled way. In this talk, I will overview the mathematical principles underlying Geometric Deep Learning on grids, graphs, and manifolds, and show some of the exciting and groundbreaking applications of these methods in the domains of computer vision, social science, biology, and drug design.
(based on joint work with J. Bruna, T. Cohen, P. Veličković)
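A minimal sketch (not from the lecture itself) of the symmetry-prior idea at the heart of Geometric Deep Learning: a graph layer built from sum aggregation is permutation-equivariant, so relabelling the nodes simply relabels the output in the same way. The layer, names, and shapes below are illustrative assumptions, not the speaker's code.

```python
import numpy as np

def gnn_layer(X, A, W_self, W_neigh):
    """One message-passing layer: h_i' = relu(x_i W_self + (sum_j A_ij x_j) W_neigh)."""
    return np.maximum(0, X @ W_self + (A @ X) @ W_neigh)

rng = np.random.default_rng(0)
n, d = 4, 3
X = rng.normal(size=(n, d))                                      # node features
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)                        # adjacency matrix
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))

P = np.eye(n)[[2, 0, 3, 1]]                                      # a permutation matrix
out = gnn_layer(X, A, W1, W2)
out_perm = gnn_layer(P @ X, P @ A @ P.T, W1, W2)
print(np.allclose(P @ out, out_perm))                            # the layer commutes with relabelling
```

Because the aggregation is a sum over neighbours and the nonlinearity acts elementwise, permuting the nodes before the layer gives the same result as permuting them after: exactly the kind of symmetry constraint the Erlangen-style viewpoint uses to organize architectures.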
Lecturer: Alessandro Chiesa
Title: From Zero Knowledge to Private Transactions
Location: Zoom meeting
Date: February 24, 2021.
Time: 17.00–18.00 (CET)
Slides
2020
Lecturer: Luciano Floridi
Title: AI, Digital Utopia, and “Asymptopia”
Location: Zoom meeting
Date: December 1, 2020.
Time: 11.00–12.00 (CET)
Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, where he is Director of the OII Digital Ethics Lab. He is a world-renowned expert on digital ethics, the ethics of AI, the philosophy of information, and the philosophy of technology. He has published more than 300 works, translated into many languages. He is deeply engaged with policy initiatives on the socio-ethical value and implications of digital technologies and their applications, and collaborates closely on these topics with many governments and companies worldwide.
Abstract. AI promises to be one of the most transformative technologies of our age. In this lecture, I will argue that AI can actually support the development of a better future, both for humanity and for the planet. In the course of the lecture, I will outline the nature of utopian thinking and its relationship with digital technologies, and introduce what I shall call "asymptopia", or asymptotic utopia: the possibility of a progressive improvement of our society steered by regulative ideals (Kant).
Lecturer: Luca Trevisan
Title: P versus NP
Location: Aula Seminari, Via Salaria 113, 3rd Floor.
Date: February 6, 2020.
Time: 11.30–12.30
Luca Trevisan is a professor of Computer Science at Bocconi University. Luca studied at the Sapienza University of Rome, was a postdoc at MIT and at DIMACS, and was on the faculty of Columbia University, U.C. Berkeley, and Stanford before returning to Berkeley in 2014 and, at long last, moving back to Italy in 2019. Luca's research focuses on computational complexity, the analysis of algorithms, and problems at the intersection of pure mathematics and theoretical computer science.
Luca received the ACM STOC'97 Danny Lewin (best student paper) award, the 2000 Oberwolfach Prize, and the 2000 Sloan Fellowship. He was an invited speaker at the 2006 International Congress of Mathematicians. He is a recipient of a 2019 ERC Advanced Grant.
Abstract. The P versus NP problem asks whether, whenever we have an efficient algorithm to verify the validity of a solution to a computational problem, we must also have an efficient algorithm to construct such a valid solution. It is one of the major open problems in mathematics and computer science and one of the six unresolved "Millennium Problems", with a million-dollar prize for its solution. We will trace the origin of this question and its conceptual implications about the nature of mathematical proofs and the notions of expertise and creativity. We will see why certain proof strategies cannot possibly resolve this problem, and why fundamentally different ideas are needed to make progress. Finally, we will discuss "average-case analysis" extensions of the P versus NP problem, which are needed to reason about the security of cryptographic protocols and blockchains and about the complexity of problems arising in machine learning.
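The verify-versus-construct asymmetry in the abstract can be made concrete with Boolean satisfiability (SAT), the canonical NP-complete problem. The sketch below is purely illustrative, not material from the lecture: checking a proposed assignment takes time linear in the formula size, while the only obvious way to construct one is to try all 2^n assignments.

```python
from itertools import product

# A formula is a list of clauses; each clause is a list of literals,
# where a positive integer k means variable x_k and -k means its negation.

def verify(formula, assignment):
    """Check a proposed assignment in time linear in the formula size (the easy direction)."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def search(formula, num_vars):
    """Naive construction: try all 2^n assignments (exponential time, the hard direction)."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(formula, assignment):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(search(formula, 3))
```

P = NP would mean that for every problem with a fast `verify`, some fast `search` also exists; no such general speed-up is known, and no proof that it is impossible is known either.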