Survey of evolutionary learning for generating agent controllers in synthetic environments

Authors
  1. Taylor, A.
Corporate Authors
Defence R&D Canada - Ottawa, Ottawa ONT (CAN)
Abstract
Using evolutionary methods is a promising approach to overcoming limitations in developing traditional artificial intelligence (AI) controllers in synthetic environments. Traditional AI is difficult and expensive to create and maintain. As a result, exercises and training simulations often require the participation of human players who exist solely to provide believable and realistic behaviour for allied, enemy, and neutral forces. Here we review recent results in evolutionary learning for achieving two goals: reducing the difficulty of creating AI, and creating highly capable and robust AI. Results are organized first according to their architectures; these range from hand-designed controllers with evolved parameters to neural networks that grow in complexity. Second, we compare methods of guiding the evolutionary search process, which is necessary because of the large size of the controller search space. These methods, such as modularization and multiobjective evolution, help to reduce the designer's workload. We explain the need to identify practical applications for evolutionary methods by examining shortcomings in current client use of AI, and outline a research plan for developing evolutionary methods for application to these client requirements. Successful application of these methods will generate better AI, reduce costs, and reduce the human workload in executing distributed simulation exercises.
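To illustrate the simplest architecture mentioned in the abstract (a hand-designed controller with evolved parameters), the sketch below shows a minimal (mu + lambda) evolutionary strategy that tunes a fixed-length parameter vector. This is an illustrative example only, not drawn from the report: the parameter names, fitness function, and population settings are hypothetical stand-ins for running a controller inside a synthetic environment.

```python
# Illustrative sketch: (mu + lambda) evolutionary strategy tuning the parameters
# of a fixed, hand-designed controller. All names and the fitness function are
# hypothetical; a real application would evaluate the controller in simulation.
import random

PARAM_COUNT = 3          # e.g. [aggression, evade_threshold, patrol_radius]
MU, LAMBDA = 5, 20       # parents kept, offspring produced per generation
GENERATIONS = 50
MUTATION_STD = 0.1

def evaluate(params):
    """Hypothetical fitness: stands in for scoring the parameterized controller
    over a simulated engagement. Here it simply rewards proximity to a fixed
    'ideal' parameter setting."""
    ideal = [0.7, 0.3, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, ideal))

def mutate(params):
    """Gaussian perturbation of each controller parameter, clamped to [0, 1]."""
    return [min(1.0, max(0.0, p + random.gauss(0.0, MUTATION_STD))) for p in params]

def evolve():
    # Start from random parameter vectors.
    population = [[random.random() for _ in range(PARAM_COUNT)] for _ in range(MU)]
    for _ in range(GENERATIONS):
        # Produce offspring by mutating randomly chosen parents.
        offspring = [mutate(random.choice(population)) for _ in range(LAMBDA)]
        # (mu + lambda) selection: keep the best MU of parents plus offspring.
        population = sorted(population + offspring, key=evaluate, reverse=True)[:MU]
    best = population[0]
    print("best parameters:", best, "fitness:", evaluate(best))
    return best

if __name__ == "__main__":
    evolve()
```

The same outer loop applies to the more complex architectures surveyed in the report; what changes is the representation being mutated (e.g. a growing neural network rather than a fixed parameter vector) and how the search is guided (e.g. modular decomposition or multiple objectives).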

A French-language abstract is also available.

Report Number
DRDC-OTTAWA-TM-2011-212 — Technical Memorandum
Date of publication
01 Dec 2011
Number of Pages
44
DSTKIM No
CA036954
CANDIS No
536716
Format(s):
Electronic Document (PDF)
