ORTS AI Competition

Advised by John Laird

Jul 11, 2007

During the summer of 2007, I was responsible for creating a new ORTS game-playing agent to be entered in the 2008 ORTS artificial intelligence competition.

ORTS (Open Real-Time Strategy) is a programming environment for real-time strategy (RTS) games. It serves as a platform for studying AI problems in the RTS domain, such as pathfinding, dealing with imperfect information, scheduling, and planning.

SOAR is a general cognitive architecture for developing systems that exhibit intelligent behavior. Research on SOAR draws on both artificial intelligence and cognitive science, and the architecture has been applied across a wide range of domains.

While the main goal of the summer was to overhaul the SORTS agent (our SOAR-based ORTS agent) in preparation for the next ORTS competition, we also decided to create our own non-SOAR agent in order to test the mechanics of the ORTS game engine and to examine how well the SORTS agent performed against less intelligent players.

Much of my work on this project involved designing and implementing an influence map class for use within the ORTS environment. The first stage was researching past uses of influence maps and comparing them with our particular application. Once I had the information I needed, I wrote the code and integrated it into our current ORTS agent. The influence map allowed our units to make intelligent decisions by distinguishing areas of the map with a high density of enemy units from those containing few or none.
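To illustrate the idea, here is a minimal C++ sketch of an influence map, not the actual SORTS code: the class name, method names, and the linear falloff are illustrative assumptions. Each unit stamps a contribution onto a tile grid that fades with distance, with friendly units contributing positive influence and enemies negative.

#include <vector>
#include <cmath>
#include <algorithm>

class InfluenceMap {
public:
    InfluenceMap(int width, int height)
        : width_(width), height_(height), grid_(width * height, 0.0f) {}

    void clear() { std::fill(grid_.begin(), grid_.end(), 0.0f); }

    // Stamp one unit's influence onto the grid. 'strength' is positive for
    // friendly units, negative for enemies; 'radius' bounds the spread.
    void addUnit(int ux, int uy, float strength, int radius) {
        for (int y = std::max(0, uy - radius); y <= std::min(height_ - 1, uy + radius); ++y) {
            for (int x = std::max(0, ux - radius); x <= std::min(width_ - 1, ux + radius); ++x) {
                float dist = std::sqrt(float((x - ux) * (x - ux) + (y - uy) * (y - uy)));
                if (dist <= radius)
                    grid_[y * width_ + x] += strength * (1.0f - dist / float(radius));
            }
        }
    }

    float at(int x, int y) const { return grid_[y * width_ + x]; }

private:
    int width_, height_;
    std::vector<float> grid_;
};

The map is rebuilt each decision cycle (clear, then addUnit for every visible unit), after which any unit can cheaply query the influence at a tile.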

This implementation also enables a number of strategies that would be impossible without influence maps. For example, our units could be instructed to walk only on tiles below a certain influence level, staying outside of enemy range while still achieving goals such as the destruction of enemy bases. Currently, our agent prioritizes its attacks by targeting first the bases with the highest (friendliest) influence levels. This was a marked improvement over previous agent versions, which relied primarily on distance to target when prioritizing attacks.
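The two uses described above might look roughly like the following sketch, which builds on the hypothetical InfluenceMap class shown earlier; the helper names and the sign convention (more negative meaning stronger enemy presence) are assumptions for illustration.

#include <vector>

// A tile is considered safe to walk on only if its influence stays above
// a threshold, i.e. enemy presence there is weak enough.
bool isSafeTile(const InfluenceMap& map, int x, int y, float threshold) {
    return map.at(x, y) > threshold;
}

struct Base { int x, y; };

// Pick the base sitting in the friendliest (highest-influence) region.
const Base* chooseTargetBase(const InfluenceMap& map, const std::vector<Base>& bases) {
    const Base* best = nullptr;
    float bestScore = -1e30f;
    for (const Base& b : bases) {
        float score = map.at(b.x, b.y);
        if (score > bestScore) { bestScore = score; best = &b; }
    }
    return best;
}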

This code was initially tested in our own stand-alone agent (currently unnamed). However, in the last week of the project we integrated this functionality into the more intelligent SORTS agent and saw an improvement in winning percentage against traditional opponents (older versions of the SORTS agent and other agents from previous competitions). We plan to compete in the upcoming ORTS competition this year and will likely enter both the SORTS agent and our own agent for adjudication.

Pathfinding for this project was an interesting problem. Originally we considered using A* exclusively, which would find the optimal path every time. In practice, however, the computing power required to plan individual paths for every unit made this impractical. We ultimately settled on a force-vector method, with A* pathfinding as a fallback when units get stuck. This was implemented with a good deal of success, although difficulties with randomly moving obstacles, and especially with concave terrain features that trap units, were never fully resolved.
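A minimal sketch of the force-vector idea follows; the struct, function names, constants, and the 1/d^2 repulsion weighting are assumptions rather than the agent's actual code. The unit is attracted toward its goal and repelled by nearby obstacles, and a unit that has barely moved over recent ticks would drop back to A*.

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

Vec2 steeringForce(const Vec2& pos, const Vec2& goal,
                   const std::vector<Vec2>& obstacles) {
    // Attraction: unit vector pointing toward the goal.
    Vec2 force{goal.x - pos.x, goal.y - pos.y};
    float len = std::sqrt(force.x * force.x + force.y * force.y);
    if (len > 0.0f) { force.x /= len; force.y /= len; }

    // Repulsion: each nearby obstacle pushes the unit away, weighted by 1/d^2.
    for (const Vec2& o : obstacles) {
        float dx = pos.x - o.x, dy = pos.y - o.y;
        float d2 = dx * dx + dy * dy;
        if (d2 > 0.0f && d2 < 64.0f) {   // only consider obstacles within 8 tiles
            force.x += dx / d2;
            force.y += dy / d2;
        }
    }
    return force;
}

// Fallback logic in the unit's update loop (outline):
//   if the unit has moved less than some epsilon over the last N ticks,
//       plan a full path with A* (hypothetical aStarSearch helper);
//   otherwise, move along steeringForce(pos, goal, nearbyObstacles).

Concave obstacles are the classic failure case for this kind of local steering, since the attraction and repulsion forces can cancel inside a pocket, which is why the A* fallback matters.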

Selecting the best target for every unit is an essential part of the ORTS competition, and we implemented several strategies to do so. Our units attack the enemy within their range that has the least health remaining. Since every unit of a given type deals the same damage regardless of its remaining health, this heuristic removes enemy firepower as quickly as possible. The caveat is that units must be able to determine whether they are actually hitting their target: if a unit has fired at a target but missed, our client allows it to re-select a better one.
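The targeting heuristic can be sketched as below; the Unit struct, parameter names, and the miss-handling convention are hypothetical stand-ins for the client's actual data structures. Among enemies in weapon range, the unit picks the one with the least health, skipping the previous target if the last shot missed.

#include <vector>
#include <limits>
#include <cmath>

struct Unit {
    float x, y;
    int hp;
};

const Unit* chooseTarget(const Unit& self, const std::vector<Unit>& enemies,
                         float range, const Unit* lastTarget, bool lastShotHit) {
    const Unit* best = nullptr;
    int bestHp = std::numeric_limits<int>::max();
    for (const Unit& e : enemies) {
        float dx = e.x - self.x, dy = e.y - self.y;
        if (std::sqrt(dx * dx + dy * dy) > range) continue;   // out of range
        if (&e == lastTarget && !lastShotHit) continue;       // re-select after a miss
        if (e.hp < bestHp) { bestHp = e.hp; best = &e; }
    }
    return best;
}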

Special thanks to Dr. John Laird for his help on this project.