
Distinguished lectures in AI by Randy Goebel (U. Alberta) and David Israel (SRI)

Speaker: 
Randy Goebel (U. Alberta) and David Israel (SRI)
Event date:
Thursday, 28 September, 2017 - 16:00
Location:
Via Ariosto 25, Aula B2
Contact:
Giuseppe De Giacomo <degiacomo@dis.uniroma1.it>

Speaker: Randy Goebel (University of Alberta) 

Title: What do logic and abduction have to do with deep visual explanation?


Abstract: Artificial Intelligence research that has exploited and adapted the foundations of logic and scientific reasoning has contributed significantly to a literature on the creation and management of formal theories, including the development of reasoning architectures that support the concept of explanation. A good representative is the work on what could broadly be labelled non-monotonic abductive reasoning systems, with typical applications in diagnostic reasoning, theory formation, and causality. Recent impactful advances in machine learning, especially deep learning, have catalyzed new enthusiasm for Artificial Intelligence performance applications, where mathematically sophisticated supervised learning algorithms have demonstrated superhuman performance. But two significant problems ensue: 1) the learned models are mostly if not wholly opaque and cannot be inspected (no debugging, no error analysis, no explanation of unanticipated model predictions), and 2) the models cannot easily be imbued with background knowledge to accelerate their learning.
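
To make the abstract's notion of abductive diagnostic reasoning concrete, here is a minimal sketch; the rule base and observation are invented for illustration and are not from the talk. Abduction selects the hypotheses that, if assumed, would entail the observation under the background theory.

```python
# Minimal, illustrative abduction over a toy diagnostic theory.
# All rule and symptom names are hypothetical examples.
rules = {
    "flat_battery": "car_wont_start",  # hypothesis -> observable effect
    "empty_tank": "car_wont_start",
    "worn_brakes": "car_squeals",
}
observation = "car_wont_start"

# Abduction: every hypothesis H such that the theory contains H -> observation,
# i.e. assuming H would explain what we observed. A full non-monotonic
# abductive system would also check each H for consistency with the theory.
explanations = [h for h, effect in rules.items() if effect == observation]
print(explanations)  # ['flat_battery', 'empty_tank']
```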

We provide a three-component framework for what we ultimately label “deep visual explanation.” Motivated by a recent rebirth of ideas arising from problems 1 and 2 above, we structure the idea of “explainable AI” into three components: first, the foundations of explainability arising from non-monotonic abductive reasoning; second, a sketch of a logical theory of visualization, which provides a basis for consolidating complex n-dimensional data into a form from which the human visual system can derive plausible inferences; and third, a sketch of a system based on the first two components, which we call deep visual explanation. We describe an instance of this third component as a method of instrumenting deep learned but opaque models so that one can observe a visual explanation of a deep model’s internal behaviour.
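
As a purely illustrative analogue of such instrumentation (not the speakers' method), the sketch below computes a gradient-based saliency map in PyTorch: the gradient of the winning class score with respect to the input pixels, which can be rendered as a heat map a human can inspect. The toy network and random input are assumptions made so the example is self-contained.

```python
import torch
import torch.nn as nn

# Toy stand-ins: in practice the model is a trained, opaque network and
# the input is a real image; both are invented here so the sketch runs as-is.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)
logits = model(image)
predicted = logits.argmax(dim=1).item()

# Gradient of the top class score with respect to every input pixel.
logits[0, predicted].backward()

# One sensitivity value per pixel: the largest absolute gradient across the
# colour channels. Plotted as a heat map, this crudely "explains" which
# pixels most influenced the prediction.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```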


Bio: R.G. (Randy) Goebel is currently professor in the Department of Computing Science at the University of Alberta, Associate Vice President (Research) and Associate Vice President (Academic), and principal investigator in the Alberta Machine Intelligence Institute (AMII). He received his B.Sc. (Computer Science), M.Sc. (Computing Science), and Ph.D. (Computer Science) from the Universities of Regina, Alberta, and British Columbia, respectively.

Professor Goebel's theoretical work on abduction, hypothetical reasoning, and belief revision is internationally well known, and his recent research focuses on the formalization of visualization, with applications in areas including web mining, optimization, natural language processing, legal reasoning, precision health, and intelligent transportation.

Randy has previously held faculty appointments at the University of Waterloo, the University of Tokyo, Multimedia University (Kuala Lumpur), and Hokkaido University (Sapporo), as well as visiting researcher engagements at the National Institute of Informatics (Tokyo), the German Research Centre for Artificial Intelligence (DFKI), and National ICT Australia (NICTA, now Data61). He is actively involved in collaborative research projects in Canada, France, Japan, China, and Germany.


Speaker: David Israel (SRI International)


Title: Some Thoughts on RoboEthics -- From a Non-Roboticist and a Complete Amateur at Ethics


Abstract: A lot of very smart people (Stephen Hawking, Elon Musk, Peter Norvig, Stuart Russell, and many others) have expressed deep concerns about the existential threat to our species (!) that may be posed by the development of "super-intelligent" machines. In this talk, I want to address what I consider much more immediate, and much less science-fiction-inspired, worries having to do with autonomous systems, in particular with even partially autonomous weapons systems. Besides frightening my audience, I hope to get them to think about some of the issues, both policy-oriented and ethical, that autonomous systems raise.


Bio: David J. Israel is a Principal Scientist Emeritus in the Artificial Intelligence Center at SRI International, where he was Director of the Natural Language Program (Information and Computing Sciences Division). He has worked in a number of areas in AI, including Knowledge Representation and Reasoning, the Theory of (Rational) Action, and various parts of Natural Language Processing, such as Formal Semantics and the Theory and Design of Machine Reading systems.

