A Multi-Layer Case-Based & Reinforcement Learning Approach to Adaptive Tactical Real-Time Strategy Game AI


dc.contributor.advisor Watson, I en
dc.contributor.author Wender, Stefan en
dc.date.accessioned 2016-03-30T20:13:03Z en
dc.date.issued 2015 en
dc.identifier.citation 2015 en
dc.identifier.uri http://hdl.handle.net/2292/28485 en
dc.description.abstract Real-time strategy (RTS) games offer complex environments that exhibit many interesting problems for research in artificial intelligence (AI). However, research into machine learning (ML) approaches in this domain often focuses on small sub-problems, and architectures that address multiple tasks are often patchworks of different non-adaptive components. This thesis investigates and aims to advance research into adaptive ML approaches to RTS game AI. The thesis also focuses on creating novel techniques and applications for acquiring knowledge through online learning in RTS environments, specifically in their tactical and reactive tasks. As a first step toward an adaptive ML approach, reinforcement learning (RL), a machine learning technique that works well in unknown environments, is evaluated for its performance in the RTS game domain. A number of RL algorithms are compared on their performance in small-scale combat in the commercial RTS game StarCraft. The best-performing algorithm, Q-learning, is used in the subsequent creation of other ML modules. RL is combined with case-based reasoning (CBR) into a hybrid approach that is able to address more complex problems by generalizing over large state-action spaces. This hybrid module is found to increase the speed of convergence compared to simple table-based RL. Furthermore, this step includes an in-depth evaluation of case-base behaviour and the optimisation of algorithmic parameters for future use. Given the complexity inherent in the RTS domain, a hierarchical decomposition of the problem is proposed which subdivides the problem space in order to address the overall task of micromanaging combat units in RTS games. Based on this decomposition, and inspired by other hierarchical paradigms such as layered learning (LL) and hierarchical CBR, a hierarchical architecture is created.
That hierarchical architecture includes individual components for RTS game tasks that fall into the tactical and reactive layers. These tasks are Navigation, Formation, Attack, Retreat, and, on the highest level, Tactical Unit Selection. For each of these tasks, a separate module is created, in many cases using combinations of CBR and RL to acquire and manage the knowledge required to solve the particular task. Each module is individually evaluated and shown to successfully solve its assigned problem. Finally, all modules are integrated in a three-tier layered implementation and are shown to successfully solve tactical and reactive tasks in various evaluation scenarios. en
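The abstract names tabular Q-learning as the best-performing RL algorithm evaluated before the CBR hybrid was introduced. As an illustration only, the sketch below shows the standard tabular Q-learning update on a hypothetical one-dimensional toy environment; the environment, state and action names are invented for the example and are not from the thesis, which applies these techniques to StarCraft combat scenarios.

```python
import random


class ChainEnv:
    """Hypothetical toy environment: a 1-D chain of states 0..4.

    The agent moves left or right; reaching state 4 yields reward 1
    and ends the episode. Stands in for a real RTS combat scenario.
    """

    def reset(self):
        return 0

    def actions(self, state):
        return ["left", "right"]

    def step(self, state, action):
        nxt = min(state + 1, 4) if action == "right" else max(state - 1, 0)
        reward = 1.0 if nxt == 4 else 0.0
        return nxt, reward, nxt == 4


def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    q = {}  # (state, action) -> estimated value, default 0.0
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            actions = env.actions(state)
            if random.random() < epsilon:
                action = random.choice(actions)  # explore
            else:
                # Exploit: pick a greedy action, breaking ties randomly.
                values = [q.get((state, a), 0.0) for a in actions]
                best = max(values)
                action = random.choice(
                    [a for a, v in zip(actions, values) if v == best])
            next_state, reward, done = env.step(state, action)
            best_next = 0.0 if done else max(
                q.get((next_state, a), 0.0) for a in env.actions(next_state))
            td_error = reward + gamma * best_next - q.get((state, action), 0.0)
            q[(state, action)] = q.get((state, action), 0.0) + alpha * td_error
            state = next_state
    return q
```

After training, the greedy policy read from the table moves right toward the rewarding state. The thesis's CBR+RL hybrid replaces the exact `(state, action)` lookup table with retrieved cases, which is what enables generalization over state-action spaces too large to enumerate.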
dc.publisher ResearchSpace@Auckland en
dc.relation.ispartof PhD Thesis - University of Auckland en
dc.relation.isreferencedby UoA99264844406502091 en
dc.rights Items in ResearchSpace are protected by copyright, with all rights reserved, unless otherwise indicated. Previously published items are made available in accordance with the copyright policy of the publisher. en
dc.rights.uri https://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm en
dc.rights.uri http://creativecommons.org/licenses/by-nc-sa/3.0/nz/ en
dc.title A Multi-Layer Case-Based & Reinforcement Learning Approach to Adaptive Tactical Real-Time Strategy Game AI en
dc.type Thesis en
thesis.degree.discipline Computer Science en
thesis.degree.grantor The University of Auckland en
thesis.degree.level Doctoral en
thesis.degree.name PhD en
dc.rights.holder Copyright: The Author en
dc.rights.accessrights http://purl.org/eprint/accessRights/OpenAccess en
pubs.elements-id 525598 en
pubs.record-created-at-source-date 2016-03-31 en
dc.identifier.wikidata Q112911193

