
Copyright © 1998, Institute of Electrical and Electronics Engineers, Inc. All rights reserved. This article was published in IEEE Intelligent Systems magazine.

Battling with GA-Joe

By Stephen Grand

For the past twenty years I’ve been advocating soft, bottom-up, massively parallel computing techniques and waging war on top-down, serial, control-freak thinking. Yet just lately I have found myself becoming increasingly disdainful of genetic algorithms and other evolutionary software techniques. What’s happening—have I become a traitor to my cause? I hope not; evolution undoubtedly works. I’ve used it many times out of academic interest and more than once against genuinely difficult, practical problems. In fact, it’s partly because it works that I have such a problem with it. The thing is, I don’t like what evolutionary research is doing to the field of Artificial Life.

A runaway success

A similar thing happened to Artificial Intelligence. Back in the heady days of AI’s "Brave New World", all sorts of things seemed possible and all kinds of mechanisms were being entertained and explored. But then a nasty positive-feedback loop took hold. AI had big ambitions and it was clear that those ambitions would take considerable effort and time to achieve. In the meantime, the field had to justify itself by demonstrating some results. Enter the Expert System. Few would claim that Expert Systems are the embodiment of AI’s original dream. Lists of explicit, hand-coded rules can, at best, only be described as "intelligent" when qualified by inverted commas, since they fail to show key intelligent phenomena like self-learning, conceptual reasoning or creativity. Nevertheless, they do work! Expert systems are not, in my view, on the path leading to true artificial intelligence, yet within their intended domain they were highly successful. Consequently, they attracted funding. Instantly, natural selection kicked in with some positive feedback—expert systems attracted funds, which encouraged their further development, making them ever more successful and attracting ever more funds. Academics who worked on such knowledge-based systems naturally emphasized those ideas in their teaching, thus increasing the proportion of knowledge engineers in the population. To a large extent, then, AI has evolved into one ecological niche, while many others originally held equal promise. I have nothing against expert systems per se, but the broader field of AI has clearly suffered as a result of this "lock-in" effect.

Now the same thing appears to be happening in Artificial Life. A mere decade ago, the world was A-Life’s metaphorical oyster. People were working on cellular automata, neural networks, self-catalyzing reactions and morphogenesis, plus, of course, simulated evolution. These are all very difficult topics, but the first one to show real theoretical and practical success was undoubtedly simulated evolution, albeit in the abstract forms known as genetic programming and genetic algorithms. As I’ve suggested above, things with immediate practical value attract more funding than those that only show potential for the indefinite future (except, for some reason, particle physics). Is this the top of a slippery slope? Is Artificial Life about to suffer lock-in, until "AL" is synonymous with "GA"? I fear so. Based on a straw poll of the work I’m currently aware of, plus a few web searches, I conclude that the bulk of the work currently being done under the Artificial Life banner is connected with evolution. In fact, this ecological specialization goes finer still, and I believe that, out of this broad evolutionary subset, most work is based on one rather abstract and "unnatural" method: genetic algorithms.

Sometimes, this kind of evolutionary lock-in is A Good Thing. Compared to the wide variety of bizarre contraptions at the end of the last century, all modern motor cars are essentially identical—a wheel at each corner, a wheel at the front to steer with, pedals for gas and brake, etc. And thank heavens for that! Design lock-in provides us with de facto standards, and many a car rental company is relieved to hear it. Nevertheless, is it good that the field of A-life should specialize so early in its development? Most particularly, is it wise to focus so narrowly and so hard on simulating evolution when it may well not be the key to artificial life, just as expert systems, powerful or not, are not on the route towards the primary goals of AI?

Chuck-it-in-a-bucket science

The appeal of the GA is obvious—especially to students! It is a magic solution: a way of finding answers that requires little more than the ability to recognize them. Designing a good GA is a non-trivial problem, I don’t deny it, but it is undoubtedly their magic-solution appeal and the fact that they work that has made them catch on. This has led to the emergence of what I call the "chuck it in a bucket" brigade. Some people think that all they have to do is cook up a primordial soup of random ingredients, give it a stir and let it simmer for a while, and complex, living beings will eventually crawl out over the top, complaining about the heat. When it comes to small-scale problems (grad student projects, say), GAs can indeed be spectacularly successful. I once had a very messy and non-linear problem to do with converting video color palettes. I spent some considerable time fruitlessly trying to find a mathematical solution to the problem. Finally I decided to evolve a solution. It took one hour to write a GA, followed by one hour per run (roughly a million generations) to optimize the palettes, and in half a day everything was rosy (or at least, the closest color I could get to rosy). Nevertheless, the total phase space being explored by the system contained only 256^16 points—a relatively small number, compared to the design space of most moderately interesting systems.
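For readers who have never written one, here is a minimal sketch of the kind of GA involved. It is emphatically not the original palette code: the fitness function, "image" data, population size and mutation rate are all invented for illustration. A candidate is a 16-entry palette of 8-bit values, which gives exactly the 256^16-point search space mentioned above.

    import random

    # Toy stand-in for the palette problem, purely for illustration; all the
    # figures below (image data, population size, mutation rate) are invented.
    PALETTE_SIZE = 16                                     # 256**16 possible palettes
    PIXELS = [random.randrange(256) for _ in range(500)]  # fake image to match against

    def fitness(palette):
        # Negative total error: each pixel is scored against its nearest palette entry.
        return -sum(min(abs(p - c) for c in palette) for p in PIXELS)

    def mutate(palette, rate=0.05):
        return [random.randrange(256) if random.random() < rate else c for c in palette]

    def crossover(a, b):
        cut = random.randrange(1, PALETTE_SIZE)
        return a[:cut] + b[cut:]

    def evolve(pop_size=50, generations=500):
        population = [[random.randrange(256) for _ in range(PALETTE_SIZE)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]          # simple truncation selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    best = evolve()
    print("best palette:", sorted(best), "error:", -fitness(best))

Even this toy version runs comfortably on a desktop machine, which is exactly why the technique is so seductive; the question is what happens when the exponent grows.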

It’s the scale of things that trips many people up and makes evolution seem like a wonderful panacea, capable of curing all ills. People’s imaginations fail them when they try to conceive of large populations and long time scales. Let’s try a rough and dirty thought experiment to get a feel for the scales involved. Suppose we want to create an intelligent machine with the approximate complexity of a mouse. I accept that a mouse is an extremely complex machine compared to many artificial intelligence systems, but mice still aren’t very bright, so I’m not asking much. To compensate for this horrendous amount of target complexity, let’s shortcut 90% of the problem by assuming we don’t need to start with a primordial soup, but already know how to hand-engineer something as sophisticated as a lobster (lobster soup?). Lobsters aren’t that much different in complexity or brainpower from mice, really, so it’s not a huge step in evolutionary terms. So we’re talking about an evolution from moderately complex arthropods to simple mammals. How long did that take in Nature? Let’s say 200 million years, in round figures. How long does it take to establish the fitness of a lobster (or mouse, or intermediate)? There are no extrinsic fitness functions in a task like this—the only way to discover the fitness of an individual is to let it live out its life in a realistic environment until it dies or successfully breeds. Let’s say it takes a year from birth to reproductive maturity. So we’re talking about 2×10^8 generations, with some millions of individuals per generation. There are all sorts of objections that pro-GA people will rightly bring to bear on these assertions. So, to appease them a little, let’s assume that they have a truly phenomenal amount of computer power available—enough to compute fitness at a generation per minute, roughly half a million times faster than real time. This is many orders of magnitude faster than all the computers in the world could muster, but I’m feeling generous. How long will it take for our almost-there arthropod to reach the dizzying heights of mousehood? Even at that hugely accelerated speed, the program will have to run for roughly four hundred years. That is rather longer than I, personally, am willing to wait, especially if the computer might crash part way through!

[Postscript: Yes, I know mice did not evolve from lobsters, and maybe their common ancestor was substantially different from either and a good deal simpler than both, but I don't think it makes any huge difference. I gave you enough concessions—if you're not careful, I'll demand that you start with a primordial soup and then you'll have to explain away 3 billion years instead of 200 million!]
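The arithmetic behind those figures is easy to reproduce. The sketch below simply restates the assumptions already made in the text (200 million years from arthropod to simple mammal, one year per generation, one simulated generation per minute); no new numbers are introduced.

    # Back-of-envelope check of the lobster-to-mouse figures above.
    years_of_evolution = 200_000_000   # arthropod to simple mammal, in round figures
    years_per_generation = 1           # birth to reproductive maturity
    generations = years_of_evolution // years_per_generation   # 2 x 10^8 generations

    minutes_per_generation = 1         # the wildly generous computing assumption
    minutes_per_year = 365 * 24 * 60   # ~525,600, i.e. ~half a million times real time

    run_time_years = generations * minutes_per_generation / minutes_per_year
    print(f"{generations:.1e} generations take about {run_time_years:.0f} years to run")
    # prints roughly 380 years: "roughly four hundred years"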

The above reasoning has many faults, but it does serve to remind us how easy it is to be complacent about exponents—we all know that 10^44 is "somewhat bigger" than 10^40, but it’s so easy to lose sight of the fact that it’s 10,000 times bigger. The problem with design space is that it grows exponentially as more degrees of freedom are added. Evolution is a truly excellent search strategy, which tends to flatten the curve again. Nevertheless, even in the practical palettization problem described above, simply doubling the size of the palette to 32 colors would increase the search volume by a factor of 256^16, or 3×10^38.
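To make that last figure concrete, the same kind of two-line check applies; the palette sizes are the ones quoted above, and the rest is just arithmetic.

    # Growth of the palette search space with the number of entries.
    space_16 = 256 ** 16   # the original 16-color problem: ~3.4 x 10^38 points
    space_32 = 256 ** 32   # doubling the palette to 32 colors
    print(f"16 entries: {space_16:.1e} points")
    print(f"32 entries: {space_32:.1e} points, {space_32 // space_16:.1e} times larger")
    # the growth factor is itself 256**16, i.e. about 3 x 10^38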

This is not to decry the excellent work being done to speed up the process of simulated evolution. Simulated annealing, neutral genes, new kinds of genotype:phenotype mapping and suchlike are tremendous advances. Yet trying to speed up evolution in the face of large numbers of degrees of freedom is rather like trying to reach escape velocity by streamlining one’s bicycle—sometimes the numbers are just intractable and another method must be found. My personal (and somewhat heretical, amongst A-lifers) view is that simulating evolution, whilst informative scientifically, is of limited use to an engineer. In short, evolution is good, but it’s not that good. It is a valuable topic of study, but not the AI philosopher’s stone.

I evolve, therefore I am

Besides a natural lock-in and a tendency to "exponent-blindness", the final reason that evolutionary methods are beginning to drown out all other approaches in A-life is to do with the meaning of life itself. The currently fashionable definitions of life revolve around self-replication, evolutionary potential and other fairly mechanistic criteria. At a reductionist level, many evolving systems can be considered either to be examples of life, or examples of life-like systems, depending on your view. Because evolutionary systems satisfy some of the necessary conditions for life, there has been an increasing tendency to assume that this is all life is. There is something of a syllogistic fallacy hidden in here. By definition, your uncle is simply your father’s or mother’s brother. But this doesn’t mean that "uncle-ness" can be explored and understood purely in terms of genealogy—uncles have many other properties that, whilst not necessary for their classification, are extremely important to us as human observers. Similarly, you may be satisfied with replication and evolvability as necessary conditions for defining life, but these are not the whole of what it means to be alive, by any stretch of the imagination. Artificial Life, one of the most holistic, synthetic fields in science, is falling foul of reductio ad absurdum. In practical terms, this means that research into morphogenetic, chemical and neural mechanisms, communication, perception and intelligence is being eclipsed by the overwhelming shadow of evolution. I happen to believe that evolution is deeply limited in its power to deliver results within reasonable time scales for reasonably complex problems. Therefore, from a technological perspective at least, I fear that Artificial Life is rapidly heading down what is known in the trade as an evolutionary dead-end.

Artificial Life

1987 – 1998

In Memoriam

 