
Publication details

2018, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), Pages 1771-1778

LTLf/LDLf Non-Markovian Rewards (04b Conference paper in volume)

Ronen Israel Brafman, Giuseppe De Giacomo, Fabio Patrizi

In Markov Decision Processes (MDPs), the reward obtained in a state is Markovian, i.e., depends on the last state and action. This dependency makes it difficult to reward more interesting long-term behaviors, such as always closing a door after it has been opened, or providing coffee only following a request. Extending MDPs to handle non-Markovian reward functions was the subject of two previous lines of work. Both use LTL variants to specify the reward function and then compile the new model back into a Markovian model. Building on recent progress in temporal logics over finite traces, we adopt LDLf for specifying non-Markovian rewards and provide an elegant automata construction for building a Markovian model, which extends that of previous work and offers strong minimality and compositionality guarantees.
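The compilation idea sketched in the abstract can be made concrete: the LDLf reward formula is translated into a deterministic finite automaton (DFA), and the MDP state is extended to also track the DFA state, so the reward becomes Markovian in the product model. Below is a minimal Python sketch of this product construction under stated assumptions: the hand-built two-state DFA (for "coffee is rewarded only following a request"), the labeling function, and all names (`dfa_step`, `compile_product`, etc.) are illustrative, not the paper's actual construction or API.

```python
from itertools import product

# Illustrative 2-state DFA over the propositions {req, coffee}:
# q = 0: no request pending; q = 1: a request is pending.
DFA_START = 0

def dfa_step(q, label):
    """Advance the tracking DFA on the set of propositions true in a state."""
    if q == 0:
        return 1 if "req" in label else 0
    # q == 1: the request stays pending until coffee is served.
    return 0 if "coffee" in label else 1

def dfa_rewarding(q, label):
    """Reward fires when coffee is served while a request is pending."""
    return q == 1 and "coffee" in label

def compile_product(mdp_states, mdp_actions, mdp_trans, labeling, reward=10.0):
    """Build the equivalent Markovian (product) MDP.

    States are (mdp_state, dfa_state) pairs; the DFA state summarizes
    exactly the reward-relevant history, so the reward on each product
    transition depends only on the current pair.
    """
    states = list(product(mdp_states, range(2)))
    trans, rewards = {}, {}
    for (s, q), a in product(states, mdp_actions):
        for s2, p in mdp_trans.get((s, a), {}).items():
            q2 = dfa_step(q, labeling[s2])
            trans.setdefault(((s, q), a), {})[(s2, q2)] = p
            if dfa_rewarding(q, labeling[s2]):
                rewards[((s, q), a, (s2, q2))] = reward
    return states, trans, rewards

# Tiny hypothetical usage: requests arrive at random; serving yields coffee.
states, trans, rewards = compile_product(
    mdp_states=["start", "req", "coffee"],
    mdp_actions=["wait", "serve"],
    mdp_trans={("start", "wait"): {"start": 0.5, "req": 0.5},
               ("req", "serve"): {"coffee": 1.0},
               ("coffee", "wait"): {"start": 1.0}},
    labeling={"start": set(), "req": {"req"}, "coffee": {"coffee"}},
)
```

The point matching the abstract is that, in the product, rewards depend only on the last (state, automaton-state) pair and action, so standard MDP solvers apply unchanged; the paper's contribution is an automata construction for this compilation with minimality and compositionality guarantees, which this sketch does not attempt to reproduce.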