Open Access Repository

Reducing the Time Complexity of Goal-Independent Reinforcement Learning



Ollington, R and Vamplew, P 2004 , 'Reducing the Time Complexity of Goal-Independent Reinforcement Learning', paper presented at the AISAT2004: International Conference on Artificial Intelligence in Science and Technology, 21-25 November 2004, Hobart, Tasmania, Australia.

AISAT-CQL.pdf | Download (407kB)
Available under University of Tasmania Standard License.



Concurrent Q-Learning (CQL) is a goal-independent reinforcement learning technique that learns the action values to all states simultaneously. These action values may then be used, in a manner similar to eligibility traces, to allow many action values to be updated at each time step. CQL learns faster than conventional Q-learning techniques, with the added benefit that all experience gained while performing one task can be applied to any new task within the same problem domain. Unfortunately, the update time complexity of CQL is O(|S|²×|A|). This paper presents a technique for reducing the update complexity of CQL to O(|A|) with little impact on performance.
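The goal-independent update described in the abstract can be illustrated with a minimal tabular sketch: on each transition, the action value of the visited state-action pair is refreshed toward every candidate goal state at once. This is only a reading of the abstract, not the paper's algorithm; the array layout, the binary goal reward, and the function name `cql_update` are all assumptions, and the paper's actual CQL update (and its O(|A|) reduction) differs.

```python
import numpy as np

def cql_update(Q, s, a, s_next, alpha=0.1, gamma=0.9):
    """One goal-independent tabular update (illustrative sketch only).

    Q has shape (num_goals, num_states, num_actions): Q[g, s, a] estimates
    the value of taking action a in state s when the goal is state g.
    Every goal's estimate for the visited (s, a) pair is updated at once.
    """
    num_goals = Q.shape[0]
    for g in range(num_goals):
        # Assumed reward shaping: 1 only when the transition reaches goal g.
        r = 1.0 if s_next == g else 0.0
        # Standard Q-learning target, evaluated per goal.
        target = r + gamma * Q[g, s_next].max()
        Q[g, s, a] += alpha * (target - Q[g, s, a])
    return Q
```

Even this naive loop costs O(|S|×|A|) per step once the max over actions is counted per goal; the eligibility-trace-like propagation the abstract mentions, which updates many state-action pairs per step, is what raises the cost to O(|S|²×|A|) and motivates the paper's reduction.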

Item Type: Conference or Workshop Item (Paper)
Authors/Creators: Ollington, R and Vamplew, P
Keywords: goal-independent reinforcement learning, hierarchical reinforcement learning
