Policy-value concordance for deep actor-critic reinforcement learning algorithms
Date
2024
Authors
Buro, Jonas
Abstract
Designing general agents that optimize sequential decision-making under uncertainty has long been central to artificial intelligence research. Recent advances in deep reinforcement learning (RL) have made progress in this pursuit, achieving superhuman performance in a collection of challenging and visually complex domains in a tabula rasa fashion, without embedding human domain knowledge. Despite this progress towards general problem-solving agents, these methods require far more data than humans to learn effective decision-making policies, preventing their application to most real-world problems for which no simulator exists. How best to learn models intended for downstream purposes such as planning in this setting remains an open question. Motivated by this gap in the literature, we propose a novel learning objective for RL algorithms with deep actor-critic architectures, with the goal of further investigating the efficacy of such methods as autonomous general problem solvers. These algorithms employ artificial neural networks as parameterized policy and value functions, which guide their decision-making processes. Our approach introduces a learning signal that explicitly captures desirable properties of the policy function in terms of the value function, from the perspective of a downstream reward-maximizing agent. Specifically, the signal encourages the policy to favour actions in a manner that is concordant with the relative ordering of value function estimates during training. We hypothesize that, when correctly balanced with other learning objectives, RL algorithms incorporating our method will converge to policies of comparable strength using less real-world data than their original instantiations.
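To illustrate the kind of signal described above, the following is a minimal sketch of one possible instantiation, not the thesis's actual objective: a pairwise hinge penalty that charges the policy whenever its logits fail to preserve the relative ordering of the critic's action-value estimates. The function name, the hinge form, and the margin parameter are all assumptions made for illustration.

```python
import numpy as np

def concordance_loss(policy_logits, q_values, margin=0.0):
    """Hypothetical pairwise concordance penalty (illustrative only).

    For every ordered action pair (i, j) whose Q-value estimates satisfy
    Q(i) > Q(j), add a hinge penalty if the policy's logits do not
    preserve that ordering, averaged over all such pairs.
    """
    n = len(q_values)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(n):
            if q_values[i] > q_values[j]:
                # discordant pair: logit of the lower-valued action
                # exceeds (within the margin) that of the higher-valued one
                loss += max(0.0, margin + policy_logits[j] - policy_logits[i])
                pairs += 1
    return loss / max(pairs, 1)

# Logits fully concordant with the Q-value ordering incur zero penalty:
print(concordance_loss(np.array([2.0, 1.0, 0.0]),
                       np.array([5.0, 3.0, 1.0])))  # 0.0
```

In an actor-critic training loop, a term of this form would be weighted against the usual policy-gradient and value losses, which is the balancing act the hypothesis above refers to.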
To test this hypothesis empirically, we incorporate our technique into state-of-the-art RL algorithms, ranging from simple policy-gradient actor-critic methods to more complex model-based architectures, deploy them on standard deep RL benchmark tasks, and perform statistical analyses of the resulting performance data.
Keywords
Sequential decision making, Artificial intelligence, Reinforcement learning, Machine learning