Friday, July 01, 2005

Emergent Behavior

Earlier this week I read an article about a game/simulation/research project wherein virtual robots were trained to fight enemies effectively. The robots were programmed with a simple learning algorithm, and training consisted primarily of rewarding desired behavior and punishing undesired behavior.

By compounding desired traits, such as "move towards the enemy", "shoot the enemy", "don't get too close to the enemy", "don't clump together", etc., the robots would learn how to fight an enemy effectively. Additionally, after a certain period, some of the robots could 'teach' their acquired skills to new robots, imparting a time-compounded education.
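Something like the following toy sketch captures the flavor of that reward/punishment loop. The behavior names and reward values here are made up for illustration; the real system would score behaviors against an actual simulated battlefield:

```python
import random

# Hypothetical behavior names, mirroring the traits described above.
BEHAVIORS = ["advance", "shoot", "keep_distance", "spread_out"]

class Bot:
    def __init__(self):
        # Every behavior starts with an equal weight (no learned preference).
        self.weights = {b: 1.0 for b in BEHAVIORS}

    def choose_behavior(self):
        # Pick a behavior with probability proportional to its weight.
        return random.choices(BEHAVIORS,
                              weights=[self.weights[b] for b in BEHAVIORS])[0]

    def learn(self, behavior, reward):
        # Reward strengthens a behavior; punishment weakens it,
        # with a floor so no behavior disappears entirely.
        self.weights[behavior] = max(0.1, self.weights[behavior] + reward)

bot = Bot()
for step in range(1000):
    action = bot.choose_behavior()
    # Stand-in for the real simulation: advancing and shooting pay off,
    # everything else is mildly discouraged.
    reward = 0.05 if action in ("advance", "shoot") else -0.01
    bot.learn(action, reward)

print(bot.weights)  # the weights drift toward the rewarded behaviors
```

Nobody programs "fight effectively" directly; after enough iterations the rewarded behaviors simply come to dominate the bot's choices.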

Individually, the bots' behavior started out pretty simple: move towards the enemy, don't get too close, and shoot it. However, with a whole squadron of similarly trained bots, they could be taught to work together; the most effective could then be selected to 'train' subsequent generations of bots, which eventually developed unique attack styles as individuals or teams.
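Continuing the sketch above (and again with a made-up fitness function), that 'teaching' step can be modeled as copying the best performers' learned weights, with a little variation, into the next generation:

```python
def fitness(bot):
    # Stand-in score; the real system would measure performance in
    # battle (hits scored, survival time, teamwork, etc.).
    return bot.weights["advance"] + bot.weights["shoot"]

def next_generation(bots, keep=4, size=20, noise=0.1):
    # The most effective bots 'teach' the next generation: their
    # learned weights are copied, with a little variation, into
    # fresh recruits.
    teachers = sorted(bots, key=fitness, reverse=True)[:keep]
    students = []
    for i in range(size):
        teacher = teachers[i % keep]
        student = Bot()
        student.weights = {b: max(0.1, w + random.uniform(-noise, noise))
                           for b, w in teacher.weights.items()}
        students.append(student)
    return students

population = [Bot() for _ in range(20)]
for generation in range(10):
    # ...each bot would train against the enemy here, as in the loop above...
    population = next_generation(population)
```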

This got me thinking: about two and a half years ago I read an article on using emergent behavior in small real-life robots to develop a higher sort of intelligence (in the form of "Swarm Intelligence"). One white-paper application (similar to this one) was to send a swarm of robot ants, about 6-8 inches long, to Mars. Once there, they would interact with the Martian landscape and each other, serving as an exploration rover of sorts that would potentially be much more fault-tolerant than a larger single-unit rover. Since the bots would also be interacting with each other, sharing data, etc., the idea was that a higher form of A.I. would develop. I wasn't able to find the original article, but the concept is outlined here and here.

An existing example of emergent behavior is Conway's Game of Life. With a few basic rules determining whether each cell is born, survives, or dies, complex patterns can arise from an initially random field.
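For the curious, the whole game fits in a few lines. This minimal version uses a wrap-around grid and prints the field after a hundred steps so you can see structure emerge from the noise:

```python
import random

def step(grid):
    # Conway's rules: a live cell survives with 2 or 3 live neighbors;
    # a dead cell is born with exactly 3; everything else dies or stays dead.
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbors = sum(grid[(r + dr) % rows][(c + dc) % cols]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                            if (dr, dc) != (0, 0))
            new[r][c] = 1 if neighbors == 3 or (grid[r][c] and neighbors == 2) else 0
    return new

# Start from a random field (the grid wraps around at the edges)...
grid = [[random.randint(0, 1) for _ in range(40)] for _ in range(20)]

# ...run it for a while, and see what survives.
for _ in range(100):
    grid = step(grid)
print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
```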

Another example, one that's a bit more widely dispersed, is an ant colony or bee hive. Individually, an ant or a bee might not appear to have a great deal of smarts - they react in fixed ways to certain situations, and are (individually) weak. However, when the ants or bees are viewed as a colony or hive, they are very resourceful. They can eliminate threats, store up supplies for winter, regenerate (repopulate) the group, and adapt to changing conditions.

It looks like emergent behavior will probably be the organizing principle of future A.I. systems. The basic idea of simple routines combining into complex behavior is already well established in computer programming, in the form of subroutines and object-oriented programming, so it's not too much of a stretch to apply those principles to a logic-type processing unit.
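To make that concrete, here's a hypothetical sketch of the object-oriented version of the idea: each rule object is trivially simple on its own, and the agent's overall behavior falls out of their weighted combination (the rules, weights, and scenario are invented for illustration):

```python
# Two trivially simple rules: head for the enemy, and back off when close.
class SeekEnemy:
    def steer(self, pos, enemy):
        # Head straight for the enemy.
        return (enemy[0] - pos[0], enemy[1] - pos[1])

class KeepDistance:
    def __init__(self, radius):
        self.radius = radius

    def steer(self, pos, enemy):
        # Push away, but only when inside the comfort radius.
        dx, dy = pos[0] - enemy[0], pos[1] - enemy[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if 0 < dist < self.radius:
            return (dx / dist, dy / dist)
        return (0.0, 0.0)

class Agent:
    def __init__(self, rules):
        self.rules = rules  # list of (rule, weight) pairs

    def move(self, pos, enemy):
        # The agent's behavior is just a weighted sum of simple subroutines.
        x, y = pos
        for rule, weight in self.rules:
            sx, sy = rule.steer(pos, enemy)
            x, y = x + weight * sx, y + weight * sy
        return (x, y)

agent = Agent([(SeekEnemy(), 0.1), (KeepDistance(3.0), 1.0)])
pos, enemy = (0.0, 0.0), (10.0, 0.0)
for _ in range(100):
    pos = agent.move(pos, enemy)
print(pos)  # the agent closes in, then hovers near the stand-off radius
```

Neither rule says "approach, but keep a respectful distance" anywhere; that behavior only exists in the combination.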

Things are going to be interesting.

2 comments:

Anonymous said...

Interesting... Sounds like someone has extra time on his hands...

-Guess Who?!

jhjessup said...

My sister?