Abstract
It is commonly claimed that explainability is necessary for trust in AI, and that this is why we need it. I argue that for some notions of trust, explainability is plausibly a necessary condition, but that these notions of trust are not appropriate for AI. For notions of trust that are appropriate for AI, explainability is not a necessary condition. I thus conclude that explainability is not necessary for any kind of trust in AI that matters. I then draw out some implications of this result for both trust in AI and the explainability of AI.
Speaker
Sam Baron is an associate professor of philosophy at the University of Melbourne and convenor for AI research. His research lies within metaphysics and the philosophy of science. He has particular interests in the metaphysics of quantum gravity, in explanation within mathematics, and in explainability in artificial intelligence. He has held positions at the University of Sydney (2013-2014), the University of Western Australia (2014-2019) and the Australian Catholic University (2020-2023). He is the recipient of two large grants from the Australian Research Council to study the nature of time in philosophy and physics, and currently holds a grant with the Icelandic Research Fund to study the nature of philosophical progress. He is an executive member of the Australasian Association of Philosophy and a member of the Centre for Time at the University of Sydney.