
This article was published in IEEE Intelligent Systems magazine. Copyright © 1998, Institute of Electrical and Electronics Engineers, Inc. All rights reserved.

Anarchy in action

Steve Grand

In a previous life I trained to be a teacher of young children. Of the various visits I made to schools during that period, two stand out in my memory. School A was a very old-fashioned place, where the children sat in neat rows, the teacher was in complete control and everyone did the same thing at the same time. School B, on the other hand, was an anarchy. The whole building was one large, open-plan area. The children were not divided into classes, and each child acted as an autonomous unit, managing her own curriculum from her own, personal timetable. To the casual observer, school A was a model of organization, and school B was a disaster, with higher noise levels, no obvious structure and apparently no discipline. But the casual observer would have been completely mistaken.

School A may have been a well-oiled machine, but school B was an organism. Educationally, the children in school B were better motivated, more self-sufficient and were actively learning, rather than passively being taught. However, education aside, what I want to draw your attention to is the organizational dynamics of the two systems. Take robustness, for example. Dropping an inexperienced and, frankly, pretty useless student teacher into the system was a good test for each organization’s robustness. When I was let loose on school A, the result was an almost immediate catastrophe. Trying to do thirty children's thinking for them was beyond my underdeveloped powers of control. While I was dealing with one child, the other twenty-nine were skittering around like ice cubes in an earthquake. My attempts to institute learning in groups, using work-cards, rapidly had to be abandoned. Like the teachers before me, I was forced to subsume thirty individuals into a single class unit, simplifying the system purely in order to control it. On the other hand, when I visited school B it embraced and absorbed me like a skin graft. I had no problems of control, because there was nothing for me to control. I simply looked up my timetable and went to the place allocated. Children who needed expert help came to that place when it suited them and asked me, indifferent to the fact that I was a stranger. If I hadn't been there, they'd have sought help elsewhere, or simply sorted it out amongst themselves. I was left with the distinct impression that school A was a heavily damped, simmeringly sub-critical system, just waiting to explode into chaos at the slightest perturbation, while school B contained a mass of feedback loops that kept it in a lively dance, while allowing it to resist disturbance and heal itself when damaged.

Now, which of those schools provides the best metaphor for conventional computer software? Hands up all who said "school A". Yes, computer programming is undoubtedly the ultimate exercise for control freaks. Like school A, the whole direction of control in the bulk of computer software is top-down. Routines or objects high up in the hierarchy tell things lower down what to do and when to do it. The programmer has to be able to encompass the whole system in his head, and most of the features of programming languages are there to help minimize the risk of the whole thing getting out of his control. But school B suggests that there might be another way. Perhaps our obsession with control is a mistake. Perhaps it is the obvious and rational way to deal with simple systems, but fails badly when applied to complex ones. Certainly school A appeared to be at the limit of complexity for that kind of autocratic system - larger class sizes might not have been manageable as single units, no matter how good the teachers were. School B may have had its problems, too, were it to be scaled up. Nevertheless, the entire school was behaving as a single entity, and so school B was already demonstrating greater scalability than the subdivided classes of school A. As software complexity grows (which it inevitably does), the problems of managing that complexity grow exponentially. Nowhere is this more significant than in Artificial Intelligence, where we are attempting to emulate some of the most sophisticated and complex forms of behavior. It is a reasonable assertion (one to be backed up another time) that intelligent systems cannot be simplified - that their behavior is necessarily a result of their immense complexity. If we are to make intelligent computer programs, they must be complex. If that complexity exceeds our ability to manage it by conventional, top-down modes of thought, then we must find another way. To do this, we need to understand how anarchy works.

One of the largest anarchies on the face of this planet is your own body. Over one trillion cells could not conceivably be controlled in a top-down, centralized hierarchy reminiscent of school A. That would be like trying to apply state socialism in a country whose population is 200 times that of the whole world (and we know that it fails at scales much smaller than that). And yet the trillion cells that make up your body work together in such a coherent way that the net result believes itself to be a single entity. How does this work, and can we learn anything from it that will help us to write exceptionally complex software, such as that needed to emulate the behavior of the human brain?

Sadly for us, one of the most significant reasons it works is that the human body was evolved, not designed. Evolution can afford to make countless mistakes, unlike most computer programmers, and works by the simple expedient of discarding the rubbish and keeping (and building upon) the good bits. Evolution doesn't have to understand its creations, and so has no limit to the complexity it can create. Nevertheless, it seems to me that, for one reason or another, evolution has still exploited a number of complexity-management tricks that we can steal for our own, man-made constructions.
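As a toy illustration of that discard-and-keep loop (purely illustrative, and certainly not how evolution or any real evolutionary software operates in detail), here is a minimal selection-and-mutation sketch in Python. The bit-counting fitness function and every parameter are arbitrary stand-ins; the point is that the loop never needs to understand why a candidate is good, only to keep it:

    import random

    def fitness(genome):
        return sum(genome)                 # stand-in score: count the 1 bits

    population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]        # keep the good bits
        children = []
        for parent in survivors:
            child = parent[:]
            i = random.randrange(len(child))
            child[i] ^= 1                  # blind variation
            children.append(child)
        population = survivors + children  # discard the rubbish

    print(fitness(population[0]), "of 16 bits set after 50 generations")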

Perhaps the most striking feature of complex organisms is their cellularity. Rather than being made from one continuous but varying substance, or from many totally dissimilar parts, organisms are made from repeated building blocks, derived from a common prototype. The historical and developmental reasons for this are obvious, but being cellular has important implications for complexity management, too. The cool thing about cells is that they all look like cells - whether they are oxygen transporters, computing elements, acid secretors or any of the thousands of different things cells can be. Because all cells are essentially examples of the same class, they implicitly know how to talk to each other and react to each other. Similarity provides them with a common language or interface. Without this uniformity, each cell would need explicit knowledge of the other cell types with which it needs to interact. As the number of functional types increased, the number of types of interaction would increase explosively. Because of the essential similarity between cells with different functions, none of this explosion takes place. Cellular uniformity does for an organism what a telephone exchange does for the otherwise horrendous complexity of a telephone network.
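To put that in programming terms: because every cell type derives from one common prototype, N cell types need only one interaction protocol between them, not N-squared pairwise ones. A minimal sketch in Python (the class and signal names are invented for the example):

    class Cell:
        """Common prototype: every cell, whatever its job, speaks the same language."""
        def receive(self, signal, strength):
            pass  # default behavior: ignore the signal

    class OxygenTransporter(Cell):
        def receive(self, signal, strength):
            if signal == "low_oxygen":
                print(f"releasing oxygen (demand {strength})")

    class AcidSecretor(Cell):
        def receive(self, signal, strength):
            if signal == "food_present":
                print(f"secreting acid (level {strength})")

    # Any cell can address any other through the shared interface, so adding
    # a new cell type adds no new kinds of interaction.
    tissue = [OxygenTransporter(), AcidSecretor()]
    for cell in tissue:
        cell.receive("low_oxygen", 0.8)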

A second thing to note about cells is that they aren't autocratic. Cells don't issue orders, they make requests. We humans are naturally autocratic creatures, and this love of command and control is reflected in our design of programming languages. Routines call subroutines, functions invoke sub-functions, objects execute methods on other objects. This is fine (well, reasonable) as long as those subroutines exist and are well defined, but things can go horribly wrong if a part of the system fails or goes missing. Recall how removing a teacher and replacing him with an untrained fool had a rather more dramatic effect in school A than it did in school B. In natural systems, things rarely have such a direct and brittle effect on each other. Usually, one component will emit a token of some kind (hormone, neurotransmitter, etc.) and another component will independently choose whether and how to respond to it. The important point is that the sender needs to know nothing of the recipient and vice versa. The two components are isolated from each other and hence do not rely on each other. Cells are not proactive but reactive. Natural systems thus rarely suffer from the equivalent of dangling pointers.
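The hormone metaphor maps naturally onto a broadcast, or publish-and-subscribe, style of programming. In this illustrative sketch (again with invented names), the emitter drops a token into a shared medium and carries on; each receptor independently decides whether to react, and a token nobody is listening for is simply ignored rather than causing the software equivalent of a dangling pointer:

    class Medium:
        """The shared chemical soup: tokens go in, reactions come out."""
        def __init__(self):
            self.receptors = []            # (token, reaction) pairs

        def attach(self, token, reaction):
            self.receptors.append((token, reaction))

        def emit(self, token, concentration):
            for name, reaction in self.receptors:
                if name == token:          # each receptor chooses whether to respond
                    reaction(concentration)

    medium = Medium()
    medium.attach("adrenaline", lambda c: print(f"heart rate up by {c}"))
    medium.attach("adrenaline", lambda c: print(f"pupils dilate by {c}"))

    # The emitter knows nothing about who, if anyone, is listening.
    medium.emit("adrenaline", 0.5)
    medium.emit("insulin", 0.2)            # no receptor for this: ignored, no error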

The most significant feature of a cell (perhaps even its defining characteristic) is its membrane. The purpose of a membrane is of course to keep the outsides out and the insides in, and in biological systems this was one of the first "clever tricks" that made life possible. Modern software uses a similar concept, object-oriented programming, to isolate components from each other. However, there is another way in which the concept of a membrane as a barrier can play a part in software. If you read my piece in the last issue, you may recall that I was asserting a distinction between two kinds of computer model. First-order models are computer programs (based on procedural concepts) that simply "emulate" the behavior of some physical phenomenon, but cannot be said to be true instances of that phenomenon. Second-order models are those that are built from aggregates of first-order models, for example simulations of molecules built from aggregated first-order simulations of atoms. My assertion was that, at least in principle, such second-order models could actually be said to be instances of the phenomena they simulate. So in the above example, the atoms are not actually atoms, but the molecules are potentially equivalent to "real" molecules, albeit occupying a parallel, virtual universe. The concept of a membrane-bounded cell is highly appropriate as a mechanism for isolating these two conceptual realms from each other. Metaphorical cells can act as the basic building blocks of a simulation, bounded by membranes. Inside the membrane is the procedural, computational, "Von Neumann" world, while outside is cyberspace - the virtual universe in which second- and higher-order metaphenomena exist.
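A sketch of how such a membrane might look in software (illustrative only; this is not the actual mechanism of any particular system): inside the object, ordinary procedural code updates private state, while the only things that ever cross the membrane are incoming stimuli and outgoing tokens, and those tokens are the raw material of the second-order world outside:

    class MembraneCell:
        def __init__(self):
            self._charge = 0.0          # internal state: invisible from outside

        def absorb(self, stimulus):
            """The membrane: the one route into or out of the cell."""
            self._charge += stimulus    # inside: plain procedural bookkeeping
            if self._charge > 1.0:      # threshold crossed: fire
                self._charge = 0.0
                return 1.0              # the emitted token is all the world sees
            return None

    # The outer simulation - "cyberspace" - deals only in stimuli and tokens.
    cell = MembraneCell()
    for step in range(5):
        print(f"step {step}: emitted {cell.absorb(0.4)}")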

In many respects, then, the biological concept of cellularity is applicable to computer software, especially if you have a predilection for designing bottom-up, massively parallel simulations for creating intelligent artificial life forms, as I do. Indeed, ever keen to practice what I preach, I designed (and my company has built) a computer modeling system based heavily on the concept of cellularity, both as a means of separating procedural space from cyberspace and as a way of coping with complexity growth and the need for flexibility and robustness.

Cellularity is only one of many ways in which biological systems can inspire computer science and, in turn, artificial intelligence. Yet still we tend to persist with our serial, top-down, control-freak attitude to software design. To some extent we find ourselves trapped inside a paradigm born of post-mediaeval reductionism, but some of it goes much deeper, traceable back through our language constructs to something deep in our psyche. Nevertheless, as the things we make get ever more complex, this is an attitude we need to shake off. In many other walks of life this is indeed starting to happen. Sadly, it didn't happen in education, and organizations like school A are still far more common than those of type B. However, the last few years have seen a decline in top-down, centralized thinking (not least in politics) and systems like the Internet are helping us to understand effective bottom-up methods of organization. This paradigm shift is happening not a moment too soon, because the complexity of software and hardware is rapidly exceeding our ability to regulate and control it. My New Year's wish is that we all keep a close eye on how nature has tackled these problems and learn from its solutions before it is too late. I hope that 1998 will be the Year of Anarchy in computer science.
