The following article was published in "IEEE Expert", November/December 1997. Copyright © IEEE 1997.

Three observations that changed my life

By Stephen Grand

Exactly whose childhood do I remember? Why is it that splashes leave only ripples? Could I copy myself into a computer? These three questions have, over the years, shaped my perception of the universe, of science, and above all of artificial intelligence. The first is a question about materialism, the second about persistence, and the third about simulation. My attempts to answer them have brought me firmly into the "strong" camps of both AI and artificial life.

Strong A-life asserts that, under certain conditions, we can describe computer software as "alive," rather than merely "lifelike." Similarly, strong AI asserts, or at least used to assert before it became unpopular, that machines are capable of being conscious. The weak-AI camp has no truck with such notions and insists that, although AI can make machines that demonstrate intelligent behavior, there is some fundamental objection (as opposed to a merely practical one) to conferring consciousness on any mechanical contraption. Now I appreciate that I might be on shaky ground here—this magazine was, after all, dominated until recently by the world of expert systems, which I’d imagine are the epitome of the weak AI craft—good, pragmatic, and useful applications of machine "intelligence," with no airy-fairy pretensions. However, I believe there’s quite a sniffy, "don’t be silly," attitude toward anything as metaphysical as consciousness in workaday AI, which I consider to be a shame, a limitation, and an admission of failure.

If you read my article in the July/August Expert (see "Creatures: an exercise in creation," on pp. 19–24), you know that I was the chief creator of a frivolous but popular piece of entertainment software, called Creatures. I set out to entertain people by letting them keep what amounted to virtual pets on their computer. However, unlike all similar programs that I know of, I specifically set out to generate a significant rapport between the users and their pets, not by trying to fool them into believing that the latter were alive, but by trying to make the pets alive. In other words, I assumed the validity of the strong A-life argument, came to some conclusions about the relevant principles and criteria, and then set out deliberately (and laboriously) to create life. Now that I have finished my first iteration toward this goal, more than a quarter of a million users and well over a million creatures (or "creature-like systems," if you must) are out there. Many of those users, if not some of the creatures, would understandably like to know whether I succeeded. Actually, so would I.

I decided to speak on this subject at a recent conference, and thought I might first ask the users of Creatures whether they believed the creatures were alive (not conscious). So I put out a request on the Creatures Usenet group, alt.games.creatures. Of the replies I received, the majority were "believers," which is perhaps not surprising, because they are self-selected enthusiasts. As evidence, many of them cited the usual criteria for life that can be found in any biology textbook. Nevertheless, even though I was generally satisfied that these criteria were meaningful and that Creatures more or less fulfilled them, I saw that a strong, intuitive counter-argument existed, even in the minds of believers. To quote a Usenet post from D. Benge:

When I saw my first Norn die, I was shocked at how much emotional strain I went through… I kept having to remind myself that this is just computer software and that it’s not real.

Interestingly, this is exactly the response I often hear in implicit or explicit defense of the weak-A-life viewpoint. People say, "Of course it can’t be alive—it’s just a computer program." One of the esteemed editors of this very magazine said in the July/August issue (in relation to a competitor’s product, I hasten to add), "I don’t think you can have a true friendship with executing code."1

Contained in this argument and almost invariably prefixed by "of course," to show how self-evident they are, are two specific objections. First, "It’s just software and therefore not real," and second, "It’s just an algorithm, blindly following instructions." The first is an argument about virtuality versus reality, and the second is an appeal to indeterminacy. Both of them at first sight seem very reasonable—hence, the "of course." However, I believe them both to be wrong and, what’s more, symptomatic of a general failure in our philosophy of science and our understanding of reality—the uglier side of the doctrines of Reductionism, Materialism, and Mechanism.

However, enough preamble; let’s start with the first observation. It’s simple and obvious, but it might point out the traps we can fall into when we forget that the materialism forced on us by our sensory and cognitive processes doesn’t necessarily reflect reality.

Whose childhood do you remember?

All I ask you to do is remember some episode from your childhood. Muse on it for a moment and relive it; then ask yourself, how come you believe you experienced those events at all? After all, you weren’t there at the time! Probably not a single atom that’s in your body now was there then. You still consider yourself to be the same person, yet you’ve been replaced many times over—you’re not even the same shape as you were then. Whatever you are, therefore, you are clearly not the stuff of which you are made. Matter flows from place to place and momentarily comes together to be you. That is both obvious and startling at the same time, to me at least. We normally take it for granted that people are things, whereas really they are phenomena; in fact, they are persistent phenomena, something I’ll discuss further in Observation Two.

The first time I watched a "glider" move across the screen in a computer simulation of Conway’s Game of Life and realized that something was moving and yet no thing was moving, I started to see how wrong it is to view the world as made of stuff—it is really made of phenomena. You don’t need to draw any particular conclusions from this observation, except to note that our materialistic viewpoint, our division of the world into objects, and our belief in stuff are deeply ingrained, rarely questioned, and just plain wrong. Even our language makes a pejorative discrimination between stuff and nonstuff: "solid" and "substantial" are "good" words; "insubstantial" is an insult. "Tangible assets" are better than "intangible" ones. "Material facts" are to be approved of, while "immaterial" means "irrelevant." Our intuitive materialism is so strong that we even find it paradoxical to learn that a cloud, which floats above our heads, supposedly weighs many tons. There is no paradox, of course—like us, a cloud is a persistent phenomenon, not a thing, and it is meaningless to assign weight to a "mere" region of space through which moist air passes and condensation occurs.
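Since the glider is where this observation struck me, here is a minimal sketch of Conway's Life in Python; the coordinates and the number of generations are simply convenient choices of mine for illustration.

    from collections import Counter

    def step(live):
        # Advance Conway's Life by one generation. 'live' is a set of (x, y) cells.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is live next generation if it has exactly three live
        # neighbors, or exactly two and is already live.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # the classic glider
    for generation in range(12):
        cells = step(cells)
    print(sorted(cells))
    # The same five-cell shape reappears, shifted three cells along the
    # diagonal. No cell has moved anywhere; each has only switched on or
    # off in place. The thing that traveled was the pattern.

Run it and the coordinates tell the story: no thing has moved, yet something undeniably has.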

All right, let’s move swiftly on to Observation Two, which leads to somewhat more controversial conclusions.

There’s no such stuff as stuff

Imagine a smooth water surface—maybe a swimming pool. Imagine that nearby is a switch that allows you to switch off gravity. Having done this, pull the surface of the imaginary water into strange shapes. We’ll assume there’s no surface tension either, so you can make any shape you like—in my mind I’ve made a wholly authentic scale model of San Francisco, but then I have a vivid imagination! Okay, take one last look at your water sculpture and then switch the gravity back on again. What happens? Well, all hell breaks loose and San Francisco collapses into a mass of different forms. However, after a while, everything settles out, and you are left with nothing but sinusoidal ripples (and maybe a vortex or two), moving at a uniform speed across the surface. What does this tell us? Well, it tells us that on water, ripples are a persistent phenomenon, while the Golden Gate Bridge is not.

To my mind, the most fundamental law of cosmology is this:

Things that persist, persist; things that don’t, don’t.

Is that profound, or what? But actually, it’s the key to everything you see around you. Many kinds of phenomena have arisen in the history of the universe, but few have lasted, and everything you see is an example of a phenomenon that has succeeded in persisting.

Rudyard Kipling wrote a poem that had something to do with "How, what, why, when and where," and science is the study of the how, what, why, when, and where of persistence. The older natural sciences were preoccupied with the questions "What phenomena persist?" "Where do they persist?" and "When did they persist?" However, these were and are only data-gathering exercises for the more important questions of "How do things persist?" and "Why do things persist?" By "why," I mean, what is the filter or ratchet that discriminates persistent from transient? And by "how," I mean, how does the phenomenon exploit the filter?

When you start to analyze the universe in terms of classes of persistent phenomena instead of objects, and categorize them according to how and why they persist, a striking continuity exists. Things as diverse as photons and minds seem to be made of the same (non) stuff. Our naturally materialist intuition elevates hardware and demotes software. From this new standpoint, hardware is relegated to being a mere subset of software.

Let’s start with ripples. Why do ripples persist? The wholly unsatisfying but adequately true reason is that ripples are a stable phenomenon. I’ll distinguish that kind of stability from others in a moment. How do ripples persist? Well, the key point is that they persist by something akin to "running to stay upright," a metaphor that, in my fevered mind, conjures up images of Scotsmen attempting to toss the caber in the Highland Games. To prepare to toss a caber, you must balance a large tree trunk in your hands, let the tree start falling, and then run forward to prevent it tilting too far. If you run too slowly, the caber will fall right in front of you, while running too quickly causes you to overtake it and probably get crushed. Either way, you cease very quickly to be a caber-tossing Scotsman, if not a Scotsman altogether! In a population of Scotsmen accelerating at random rates, most will go too fast or too slow and fall over, and the distribution of speeds among those remaining will consequently converge on the critical value. Ripples are not really like Scotsmen of course, largely because they are not things and therefore can’t run. What they actually do is propagate their form across space—I just find the Scotsmen image mildly more entertaining.
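If you like, you can watch "propagating form" happen in a few lines of code. Here is a sketch of a one-dimensional wave on a string, written in Python with NumPy; every number in it (the grid size, the Courant number, the shape of the bump, the step count) is an arbitrary choice of mine, made purely for illustration.

    import numpy as np

    n = 400                        # sample points along the string (arbitrary)
    r = 1.0                        # Courant number c*dt/dx; 1.0 keeps the scheme stable

    x = np.arange(n)
    u0 = np.exp(-0.01 * (x - n / 2) ** 2)    # a single bump, released from rest

    # Special first step for a string released from rest (zero initial velocity).
    u1 = u0.copy()
    u1[1:-1] = u0[1:-1] + 0.5 * r**2 * (u0[2:] - 2 * u0[1:-1] + u0[:-2])

    u_prev, u = u0, u1
    for _ in range(150):
        u_next = np.zeros_like(u)            # ends of the string held fixed at zero
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r**2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next

    # By now the bump has split into two half-height copies traveling
    # steadily in opposite directions. No element of the array ever moves
    # sideways; each merely rises and falls where it is. What travels, and
    # what persists, is the waveform.

Nothing in the array travels along the string; only the form does, which is the whole point.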

Another phenomenon that persists by propagation is light. A photon is (if you’ll forgive my simplistic description for the sake of brevity) a disturbance in a magnetic field that collapses, generating a disturbance in an electrical field that collapses… ad infinitum. In doing so, the phenomenon propagates through space at a characteristic speed. A photon in that model is not a moving thing, but a propagating waveform—a phenomenon that persists by running to stay upright. In the Big Bang, we can imagine all sorts of field disturbances being set up, but most of them failed to be stable and so disappeared, while phenomena of the right type, hurtling around at 3 × 10^8 m/s (somewhat faster than the critical velocity for Scotsmen) happened to persist.

By now, all the physicists will have abandoned me as an ignorant fool and gone off to play with their superstrings, so I can go on with impunity to draw an even more naive model of perhaps the next most primitive form of persistence—the atom. Photons and free particles rush headlong through the universe, but much the same phenomenon can exist trapped like a collection of standing waves (or whatever quantum physicists conjure up in their fevered imaginations) and gain the ability to remain localized in space. An atom is a more complex disturbance than a photon (images of Scotsmen rotating around their interlocked cabers might be taking the analogy too far), which doesn’t have to hurtle headlong through the universe. It’s still made of the same "stuff," though—we can still visualize it as a distortion of fields, rather than a "thing." The "why" is still the same (it happens to be a stable configuration), but the "how" has changed somewhat. However, not having to run to stay upright gives it a whole new set of properties: while photons must rush straight through each other (a condition of their persistence), atoms hang around long enough to interact. They form molecules.

So far, the "why" has remained unchanged, but bigger things are now afoot. We imagine a Big Bang in which most kinds of disturbance failed to persist for very long, except for atoms, free particles, and photons. The atoms tended to clump into molecules, and the Law of Cosmology stated above ensured that once persistent configurations had emerged, they tended to hang around, as the "persistent" suggests. The universe is a big place, and just being stable in some absolute sense was quite enough to ensure survival. That is, until replicators got invented! At some point or points in history, a suitable soup of molecules emerged with a neat new trick for persisting—they made copies of themselves. This was very clever, because although the molecular mishmash itself might not be very stable, if it had time to replicate before it got destroyed, then the pattern, the persistent metaphenomenon, would remain.

This was the point at which life (for want of a better name) got invented, and with it came a completely new "why." Before life, the filter was that more stable things simply lasted longer than less stable things, and so more of them were around. Everyone still had plenty of room. The snag with replicators, though, is that their offspring are created right next to the parents, and the darn things just keep on reproducing exponentially. The consequent overcrowding changed all the rules. It was no longer enough to be absolutely stable; now the key thing was to be relatively stable—in other words, more stable than the other guys nearby. The "why" is, of course, "survival of the fittest," and the resulting arms race is called coevolution.
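The shift from absolute to relative stability is easy to caricature in code. In the Python sketch below, two kinds of replicator share a pool of fixed capacity; the copying and decay probabilities are pure inventions of mine, chosen only to make the point visible.

    import random

    CAPACITY = 1000                       # only so much room in the soup
    COPY_CHANCE = {"A": 0.6, "B": 0.5}    # A copies itself slightly more reliably
    DEATH_CHANCE = 0.3                    # identical absolute stability for both

    pool = ["A"] * 10 + ["B"] * 10

    for generation in range(60):
        offspring = [r for r in pool if random.random() < COPY_CHANCE[r]]
        survivors = [r for r in pool if random.random() > DEATH_CHANCE]
        pool = survivors + offspring
        if len(pool) > CAPACITY:          # overcrowding: persistence is now relative
            pool = random.sample(pool, CAPACITY)

    print(pool.count("A"), pool.count("B"))

On almost every run, A ends up overwhelmingly in the majority. Neither kind is unstable in any absolute sense; A simply persists a little better than the neighbors it is competing with for room, and under crowding that is all that matters.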

The problem with evolution is that it is constantly upping the stakes—the "fitness landscape" keeps changing as new, more competent forms of persistence emerge. This (happily for us) quickly set the scene for a new range of "hows," called adaptation. It is a great idea, if you want to persist, to change your own configuration in response to changes in those around you (your environment), and any phenomena that show such adaptive behavior are likely to persist longer than those that don’t.

Adaptation is the precursor of intelligence, and we might even risk honoring it with the title reactive intelligence. This ability to change in response to circumstances is a neat trick. Even neater, however, is the ability to change before the stressful event occurs. Developing the ability to shelter when it rains is not as smart as seeing the gathering clouds and deciding to remain indoors. Such preadaptation, or predictive intelligence, once it emerged, was not going to go away.

After the ability to stay indoors in expectation of rain comes the ability to invent the umbrella. This is true creative intelligence and dramatically increases the persistence of phenomena that exhibit it, because they are no longer slaves of their environment but masters of it. By this point, I think we can safely describe phenomena of this class as "minds," because to invent, you must be able to rehearse chains of predictions without actually implementing them, and for that you need the ability to make internal models of the world. From such recursion perhaps comes consciousness.

This process of adding new "hows" goes further—one clever trick for persisting is to create colonies of cooperating specialists. This worked a long time ago for multicellular creatures and works now for societies. And there are intermediates and sideline phenomena too. The aforementioned "weighty" clouds are a persistent phenomenon, because they contain the mechanism for their own regeneration, making them "dynamically stable." Vortices, such as whirlpools and hurricanes, are also somewhat dynamically stable, because they can reform when disturbed.

One of the key assertions I’m making about the above characterization is that everything is made from the same (non) stuff. At no point is there a clear break between hardware and software, between matter and form. Such a break is largely an illusion caused by the metalevel at which our own sensory organs exist. To my way of thinking, no fundamental distinction exists between photons, molecules, minds, and societies—they are all persistent disturbances in the basic fields of the universe. Matter is no more real than my mind is, whatever you may think of my mind.

I’ve no doubt that this is contentious, and perhaps you feel it is irrelevant. However, I now want to draw the subject toward artificial intelligence. For the moment, just note that the above classification forms a rough but discernible hierarchy—metaphenomena superimposed on lower-order phenomena.

Figure 3. A very rough hierarchy of forms of persistence.

Can somebody get me out of here?

On to Observation Three. Let’s imagine two computer simulations of conscious, living things. One looks alive but clearly is not; the other, I hope you’ll conclude, has to be both alive and conscious. The latter simulation is an impossible dream, but by looking at the continuum between the two extremes, we might be able to see how far down we can draw the line. More importantly, we might see a vital property that changes as we cross the border between living and nonliving or conscious and unconscious.

Our not-really-living-but-seems-so program is easy enough to choose—how about Eliza? This is not ridiculously far from a candidate for the Turing Test, but nobody who is aware of Eliza’s internal structure would be fooled for an instant. For the other end of the scale, we must do some imagining. Imagine a computer model of quantum theory. It really doesn’t matter what internal representation we use, as long as its behavior fits that observed in the real world to a sufficient degree (please allow me to gloss over what "sufficient" means, as I don’t have space to deal with it now). To generate the randomness for quantum uncertainty, for example, we can hook our program up to a radioactive source, or simply link it to the local lottery.
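To see just how shallow the Eliza end of the scale is, here is a toy in the same spirit, stripped to its bare mechanism: match a keyword, echo back a canned reply. The keyword table is my own invention for illustration, not Weizenbaum's original script.

    import random

    RULES = {
        "mother":  ["Tell me more about your family.",
                    "How do you feel about your mother?"],
        "i feel":  ["Why do you feel that way?",
                    "Do you often feel like this?"],
        "because": ["Is that the real reason?"],
    }
    DEFAULT = ["Please go on.", "I see. Can you elaborate?"]

    def respond(sentence):
        lowered = sentence.lower()
        for keyword, replies in RULES.items():
            if keyword in lowered:          # first matching keyword wins
                return random.choice(replies)
        return random.choice(DEFAULT)

    print(respond("I feel nobody listens to me"))
    # Prints one of the two canned "i feel" replies.

Five minutes with the source code is enough to dispel any illusion of a mind at work. The quantum-level model at the other end of the scale is a different matter entirely.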

It’s not unreasonable to assume that such a quantum-level model can be made—indeed, might already have been made. Now use this simulation of quantum behavior to build some atoms—if you can’t make atoms from it, it isn’t a model of quantum theory. These atoms would undoubtedly diverge rapidly in their quantitative behavior from any real atoms they have been modeled on (it’s unlikely that God gets his uncertainty values from the same lottery as we do). However, we can reasonably suppose that they would continue to display the appropriate qualitative behavior—it’s not hard to imagine using a quantum-level simulation to model real atomic phenomena, such as bonding.

Now let’s use the atoms to make some molecules—maybe even some protein molecules. Perhaps our simulation’s accuracy is breaking down and our simulated proteins won’t fold up quite like their natural counterparts, but for the sake of argument, let’s assume they do. While we’re at it, let’s assume that we have a gigantic computer to play with, and memory and speed are no object. If we knew how, we could group our massed simulated protein molecules into a simulation of E. coli (or C. elegans, to be more apt, since something quite close is already being attempted2). However, let’s be bold and allow ourselves another fun but impossible toy to play with. Let’s have a "copying gun" that can scan the detailed atomic structure of any object and create, using our simulator, an exact replica in cyberspace. It doesn’t matter if such a thing is impossible or awkwardly recursive—this is a thought experiment. Point the gun at the room and create a simulation of it. Now point the gun at yourself.

What will happen? Well, assuming the scanning process is nondestructive (and I’m sure you wouldn’t dream of pointing the thing at yourself if it weren’t), there will now be two of you—your experience of the world will bifurcate. One of you will put the gun down and look at the computer screen to see the very startled face of your own doppelgänger. He (or she), on the other hand, will be convinced that he has been transported bodily into the computer, once he realizes that beyond the door lies only blackness. The important point is that the simulated "you" will, unless something is fundamentally wrong with my logic, believe that he is still you. He will therefore believe he is alive, and he will have no doubt that he is conscious. And who are we to disagree with him?

As far as I can see, apart from a "religious" (that is, faith-driven) retreat into Cartesian dualism, the only reason you could disagree with this thesis is if you think that such a detailed simulation is impossible in principle. There are several roads to explore if you want to argue for this impossibility: Sensitive dependence, criticality, nonpolynomial behavior, undecidability, and quantum computation are all good buzzwords, and Roger Penrose is one of those who have attempted to raise such objections.3 Nevertheless, I find such arguments unconvincing and occasionally even verging on the hysterical. Penrose is arguing against explaining away consciousness as merely a computation, and although I disagree with his rationale, I agree with his sentiment. To me, accepting consciousness as an algorithm in the generally understood meaning of that term would indeed be a mistake, but to regard it as a law-bound metaphenomenon demeans it not at all. As I have tried to outline above, I consider all such phenomena to be real in their own right, and certainly no less real than matter itself. To argue that such phenomena are computable actually increases our regard for them, by further isolating them from dependence on a particular substrate.

So if you accept, with me, that such a thought experiment would result in a hypothetical simulation that is both alive and conscious (and not simply lifelike and apparently conscious), we have to consider what the significant difference is between this case and Eliza. Sheer complexity is an obvious choice—the imagined program is many orders of magnitude more complex than Eliza. However, I would like to assert that there is another, more fundamental distinction between Eliza and the thought experiment.

Remember the metaphenomena—the way "mind" is a persistent phenomenon not distinct in fabric from, but hierarchically superimposed on, "atom"? I believe that this remains true in computer software, too. So here is my big claim: a raw computer simulation of a phenomenon is not an instance of that phenomenon, no matter how much it looks like it (an algorithm directly simulating the motion of a particle does not itself have mass and inertia). However, a metaphenomenon built from such simulated building blocks is fundamentally indistinguishable from the same metaphenomenon built from so-called "real" building blocks—they occupy different universes, but are equivalent. Simulated atoms are not atoms. Molecules built from simulated atoms are molecules. Organisms constructed faithfully out of simulated molecules will be alive.

I thus distinguish between first-order and higher-order simulations. First-order simulations are not instances of the phenomena they simulate, but higher-order simulations (those that exist in the simulated universe) are. A first-order simulation is simply an algorithm. A second-order simulation is also (necessarily) an algorithm, but simultaneously it is a phenomenon. One paradigm views the computer as a "processing device," while the other uses it as a "place to build things," and these are fundamentally different concepts. Plurality has a lot to do with it—it is populations of virtual objects constructed from first-order algorithms that perhaps hold the key. Algorithms simulate virtual objects, which must act in concert to create a virtual organism.

Despite all the metaphysical buildup, this assertion is apparent in much simpler, practical cases—some simulations simply try to emulate a system’s outward behavior, while others attempt to reproduce it by constructing it out of multiple, interacting building blocks (that themselves need only be emulations). Some flight simulators designed for entertainment, for example, are mere emulations, and as such have to be "kludged" by the programmer to exhibit explicitly what in reality are emergent phenomena, such as stalling and spinning. Others are realistic emulations of flight surfaces and fluid dynamics, collected together into a second-order model. In these, stalls and spins emerge naturally from the system. Life and mind are emergent phenomena, too, and it doesn’t seem improbable that the same logic applies.
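To make the flight-simulator contrast concrete, here is a deliberately tiny caricature in Python. Both functions and all the numbers are inventions of mine; neither is anybody's real flight model.

    STALL_SPEED = 55.0                     # knots; an invented figure

    def lift_emulated(airspeed):
        # Emulation: the outward behavior is scripted directly.
        # The stall exists because the programmer wrote a rule saying so.
        if airspeed < STALL_SPEED:
            return 0.0
        return 0.001 * airspeed ** 2

    def lift_constructed(airspeed, angle_of_attack):
        # Construction (in miniature): lift comes from a crude lift curve.
        # Past the critical angle the coefficient collapses, so the loss of
        # lift falls out of the components rather than out of a rule.
        critical = 15.0                    # degrees; also invented
        if angle_of_attack <= critical:
            cl = 0.11 * angle_of_attack
        else:
            cl = max(0.0, 0.11 * critical - 0.3 * (angle_of_attack - critical))
        return 0.001 * cl * airspeed ** 2

In a flight loop built on the second function, flying ever more slowly forces the nose up to hold altitude, the angle of attack drifts past the critical value, and the aircraft stalls, even though no line of that function mentions stalling. That, in miniature, is the difference between scripting a behavior and constructing the things from which it emerges.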

I conclude, therefore, that a computer cannot be alive or conscious, nor indeed can a computer program. On the other hand, things built inside computer programs can. Creating nth-order model organisms from simulations of biological building blocks seems to be a road that is at least headed in the right direction, although other forms of representation might also cause similar phenomena to emerge. Perhaps Good Old-fashioned AI's abstract, symbolic representations of mental structures and processes will do the trick, but I doubt it. The more explicit the relationship between the internal representation and the outward behavior—in other words, the fewer orders of abstraction that lie between form and function—the less likely it is that this will happen.

These concepts of "virtual components" and "computation by population" are at the heart of connectionism, data-driven programming, and contemporary thinking on agent-oriented software. Sometimes the practitioners forget or fail to realize it, but such paradigms are founded on "computation by simulation"—a view of the computer as a place to build things. Procedural thinking is still sadly dominant, however, and leads to a top-down viewpoint that is ultimately doomed to failure. What we need is a belief in cyberspace—a recognition that such concepts as "mind" and "life" are not themselves algorithmic, yet can be constructed within universes built from algorithms. The waters are still very murky, but in my mind at least, such quirky thought experiments as those I’ve outlined help to cast a little light into the depths. I still can’t tell my Creatures users whether their pets are alive (although I doubt if they come close enough for such a classification to have significant meaning). Nevertheless, I’m sure I approached the problem in the right way, and I mean to keep on trying until I get there.

References

1. D. Taylor, "Three ways to get A-life," IEEE Expert, Vol. 12, No. 4, July/Aug. 1997, pp. 25–30.

2. H. Kitano et al., "The Virtual Biology Laboratories: A New Approach of Computational Biology," Proc. Fourth European Conf. Artificial Life, MIT Press, Cambridge, Mass., 1997, pp. 274–283.

3. R. Penrose, The Emperor’s New Mind, Oxford Univ. Press, 1989.

 