Public seminar by Elena Umili (evaluation procedure for 4 fixed-term Researcher positions, type A - SC 09/H1 SSD ING-INF/05) - Integrating Linear Temporal Logic with Deep-Learning-Based Applications
Linear Temporal Logic (LTL) is a modal logic widely used in domains such as robotics and Business Process Management for specifying temporal relationships and dynamic constraints and for performing automated reasoning. However, exploiting LTL knowledge in real-world applications can be difficult because of its crisp, symbolic nature. This seminar explores techniques for relaxing such knowledge so that it can be applied in continuous domains where symbols are grounded through Deep Learning modules and the symbol grounding function and/or the symbolic temporal specification may be unknown or only partially known. In particular, we propose two techniques: (i) one based on Logic Tensor Networks and (ii) one based on Probabilistic Finite Automata. We apply the first approach to classifying sequences of images and show that it requires less data and is less prone to overfitting than purely deep-learning-based methods. We use the second approach to learn DFA specifications from traces with gradient-based optimization, showing that it can learn larger automata and is more resilient to noise in the dataset than prior work. Finally, we propose an extension of the second approach that we apply to non-Markovian Deep Reinforcement Learning problems. This third contribution has been shown to be more sample-efficient than methods based on Recurrent Neural Networks while requiring less prior knowledge than LTL-based methods such as Reward Machines and Restraining Bolts.
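To illustrate the kind of relaxation the abstract refers to, the sketch below shows (under assumptions of ours, not the speaker's actual model) how a crisp DFA can be turned into a probabilistic finite automaton: 0/1 transitions become row-stochastic matrices, so the acceptance probability of a trace is differentiable with respect to soft symbol predictions such as those produced by a neural grounding module. All sizes, matrices, and trace values here are illustrative.

```python
import numpy as np

# Illustrative sketch of a probabilistic-automaton relaxation of a DFA.
# T[s][i, j] = P(state i -> state j | symbol s); rows sum to 1.
n_states, n_symbols = 2, 2
rng = np.random.default_rng(0)
T = rng.random((n_symbols, n_states, n_states))
T /= T.sum(axis=2, keepdims=True)

final = np.array([0.0, 1.0])    # state 1 is accepting (indicator vector)
belief = np.array([1.0, 0.0])   # start in state 0 with probability 1

# A trace of "soft" symbols: each row is a distribution over the alphabet,
# as a Deep Learning classifier grounding the symbols might output.
trace = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])

for p_sym in trace:
    # Expected transition matrix under the soft symbol distribution.
    T_soft = np.tensordot(p_sym, T, axes=1)   # shape (n_states, n_states)
    belief = belief @ T_soft                  # propagate the state distribution

# Differentiable acceptance probability of the whole trace; in a learning
# setup this quantity could be fed into a gradient-based loss.
acceptance = float(belief @ final)
print(round(acceptance, 4))
```

Because every operation is a matrix product, the same forward pass runs unchanged under an autodiff framework, which is what makes gradient-based learning of the automaton parameters possible in the first place.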