In this article, we explore AIXI, a Bayesian optimality notion for general reinforcement learning. Previous approximations of AIXI's Bayesian environment model relied on an a-priori defined set of models, a source of epistemic uncertainty that the agent cannot resolve simply by collecting more data. We address this issue in the context of Human-AI teaming by introducing a new agent, DynamicHedgeAIXI, which maintains an exact Bayesian mixture over dynamically changing sets of models and comes with good performance guarantees. Experiments on epidemic control on contact networks validate the agent's practical utility.

Abstract: Prior approximations of AIXI, a Bayesian optimality notion for general reinforcement learning, can only approximate AIXI's Bayesian environment model using an a-priori defined set of models. This is a fundamental source of epistemic uncertainty for the agent in settings where systematic bias in the predefined model class cannot be resolved by simply collecting more data from the environment. We address this issue in the context of Human-AI teaming by considering a setup where additional knowledge for the agent, in the form of new candidate models, arrives from a human operator in an online fashion. We introduce a new agent called DynamicHedgeAIXI that maintains an exact Bayesian mixture over dynamically changing sets of models via a time-adaptive prior constructed from a variant of the Hedge algorithm. The DynamicHedgeAIXI agent is the richest direct approximation of AIXI known to date and comes with good performance guarantees. Experimental results on epidemic control on contact networks validate the agent's practical utility.
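To make the idea of a Bayesian mixture over a growing model set concrete, here is a minimal Python sketch. It is not the paper's algorithm: weights update multiplicatively from each model's predictive probability, as in Hedge under log-loss, and a newly arrived model receives a fixed `new_model_share` of the current mass, a placeholder for the paper's time-adaptive prior. The `DynamicMixture` and `BernoulliModel` names are hypothetical, introduced purely for illustration.

```python
class DynamicMixture:
    """Sketch of a Bayesian-style mixture over a model set that can
    grow online. Not the DynamicHedgeAIXI algorithm."""

    def __init__(self, new_model_share=0.1):
        self.models = []    # each model exposes prob(obs, history)
        self.weights = []
        self.new_model_share = new_model_share  # assumed prior mass for newcomers

    def add_model(self, model):
        # Give the newcomer a fixed share of the current mass (an
        # assumption; the paper uses a time-adaptive Hedge-based prior).
        if not self.weights:
            self.models, self.weights = [model], [1.0]
            return
        scale = 1.0 - self.new_model_share
        self.weights = [w * scale for w in self.weights]
        self.models.append(model)
        self.weights.append(self.new_model_share)

    def predict(self, obs, history):
        # Mixture probability of `obs` given the history.
        return sum(w * m.prob(obs, history)
                   for w, m in zip(self.weights, self.models))

    def update(self, obs, history):
        # Multiplicative weight update (Bayes rule; Hedge with log-loss).
        self.weights = [w * m.prob(obs, history)
                        for w, m in zip(self.weights, self.models)]
        z = sum(self.weights)
        self.weights = [w / z for w in self.weights]


class BernoulliModel:
    """Toy model predicting a fixed probability for observation 1."""
    def __init__(self, p):
        self.p = p
    def prob(self, obs, history):
        return self.p if obs == 1 else 1.0 - self.p


mix = DynamicMixture()
mix.add_model(BernoulliModel(0.5))
for t, obs in enumerate([1, 1, 0, 1]):
    if t == 2:                       # a new candidate model arrives mid-stream
        mix.add_model(BernoulliModel(0.8))
    mix.update(obs, history=None)
print(mix.weights)                   # mass shifts toward better predictors
```

The demo mirrors the setup described in the abstract: candidate models can join the mixture at any time, as when a human operator supplies them online, while the multiplicative update concentrates posterior mass on whichever models predict the environment best.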

Read the original article