
Reducing the Time Complexity of Goal-Independent Reinforcement Learning


Ollington, R and Vamplew, P (2004) Reducing the Time Complexity of Goal-Independent Reinforcement Learning. In: AISAT2004: International Conference on Artificial Intelligence in Science and Technology, 21-25 November 2004, Hobart, Tasmania, Australia.

PDF: AISAT-CQL.pdf (407kB). Available under University of Tasmania Standard License.

Abstract

Concurrent Q-Learning (CQL) is a goal-independent reinforcement learning technique that simultaneously learns the action values for reaching every state. These action values may then be used, in a manner similar to eligibility traces, to update many action values at each time step. CQL learns faster than conventional Q-learning techniques, with the added benefit that all experience gained while performing one task can be applied to any new task within the problem domain. Unfortunately, the update time complexity of CQL is O(|S|²×|A|). This paper presents a technique for reducing the update complexity of CQL to O(|A|) with little impact on performance.
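
The abstract only outlines the approach, so the following is a minimal illustrative sketch, not the paper's implementation: a tabular concurrent Q-learning update in Python, where a table Q[g, s, a] holds action values for every possible goal state g and a single observed transition updates the values for all goals at once. All names and constants (N_STATES, N_ACTIONS, ALPHA, GAMMA) are assumptions. The full CQL update described in the paper also propagates each experience across states, which is where the O(|S|²×|A|) per-step cost arises; this per-goal loop omits that propagation and costs O(|S|×|A|) per step.

    import numpy as np

    # Assumed tabular setting: N_STATES states, N_ACTIONS actions.
    # Q[g, s, a] estimates the value of action a in state s when the
    # agent's goal is to reach state g.
    N_STATES, N_ACTIONS = 25, 4
    ALPHA, GAMMA = 0.1, 0.9

    Q = np.zeros((N_STATES, N_STATES, N_ACTIONS))

    def concurrent_update(s, a, s_next):
        """After one transition (s, a, s_next), update the action
        values for every goal concurrently. Each goal g is treated
        as rewarding only the arrival at g itself."""
        for g in range(N_STATES):                      # one update per goal
            r = 1.0 if s_next == g else 0.0            # goal-dependent reward
            target = r + GAMMA * Q[g, s_next].max()    # max over actions
            Q[g, s, a] += ALPHA * (target - Q[g, s, a])

    # Example: the agent took action 2 in state 0 and arrived in state 1.
    concurrent_update(0, 2, 1)

Because every goal's value table is updated from the same stream of experience, experience gathered while pursuing one task is immediately reusable for any other goal in the domain, which is the goal-independence property the abstract refers to.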

Item Type: Conference or Workshop Item (Paper)
Keywords: goal-independent reinforcement learning, hierarchical reinforcement learning
Page Range: pp. 132-137
Date Deposited: 26 Nov 2004
Last Modified: 18 Nov 2014 03:10
