Copyright © 1999, Institute of Electrical and Electronics Engineers, Inc. All rights reserved. This article was published in IEEE Intelligent Systems magazine.

The Year 2001 Bug: Whatever happened to HAL?

Steve Grand

In 1950, Alan Turing made a prediction. Within fifty years, he said, the idea that machines can think will be commonplace and computers will routinely pass the Turing Test. Well, it’s now 1999, so we in the AI business have only one year left in which to make his prediction come true. Bear in mind that a mere twelve months after that, the general public is going to be wondering how we’re getting on with building HAL. Forty-nine years down, only one to go. Tricky! Perhaps the time has come to do what any sensible person does when up against a tight deadline: let’s spend the remaining time concocting some watertight excuses for why it’s not our fault we’ve failed! I’ve certainly been scratching around for my own list of things to blame, and here are some of the get-out clauses I’m planning to invoke if challenged:

1. The Digital Computer

A friend of mine is the passionate protector of a couple of very special robots. These robots are capable of navigating back to base to recharge themselves when their batteries run low, among other clever tricks. "So what?" you might ask. After all, many of the state-of-the-art machines currently blundering their way around the robot labs of the world can and do carry out these very functions every day. Sometimes they even succeed.

However, these particular robots were built way back when Turing made his prediction, by a very forward-thinking Englishman called Grey Walter. Grey Walter’s "tortoises" are three-wheeled vehicles, with a Plexiglas dome that acts as an all-round bump sensor, plus a small rotating turret containing a photocell. In this respect they are barely different from modern mobile robots. However, my friend is fond of pointing out a slightly more prominent difference: the tortoises contain about 4,999,999 fewer transistors! Where today’s robots habitually use a Pentium chip, Grey Walter’s robots were controlled by a single vacuum tube. By the marvels of modern technology, controllers have increased in potential power by a factor of five million in the last half-century. In the same period the state of the art in robotics has progressed… well, not a jot, really. Somehow, the invention of the digital computer must have diverted our attention from making any further progress in robot intelligence. Perhaps it’s an effect similar to the one that prevents people who kit themselves out in the finest hi-tech hiking equipment from ever wandering more than half a mile from their cars. Perhaps if we’d stuck to vacuum tubes we’d have made more progress. It’s clearly all the fault of the digital computer.

2. Chess

Conveniently, we can blame the digital computer in great part on Alan Turing himself (note for American readers: Von Neumann, Eckert and Mauchly didn’t invent computers all by themselves, whatever certain US presidents would like you to believe. Turing provided most of the theoretical and practical underpinnings of the field and yet is a sadly unsung hero—especially in his native Britain, where perhaps only one in a hundred people has ever heard of him). If we can blame Turing for one part of our failure to fulfil his own prediction, then we can probably blame him for another, too. Turing was the person who first thought that chess would be a good Standard Problem for AI. The logic seemed to be that humans are very intelligent, and yet even they find it hard to play chess. Chess must therefore be an excellent indicator of intelligence. The facts that humans also find it hard to balance on one leg or remember long strings of digits seem somehow to have escaped his notice. If they hadn’t, the history of AI might have been altogether very different (though probably even less successful).

To be fair to Turing, he recognised that "situatedness" (being embedded in and responding to a real environment) was a very important factor in intelligence. However, he expected to work with extremely primitive computers and so wanted a challenge that could be carried out using no other sensory device than a paper tape reader. So he chose chess. Nevertheless, chess is a very misleading guide to intelligent behaviour and the game has cast a cloud over AI ever since. Even now, some people seem to think that if you can make a machine play better chess than a grand master, then that machine must be very intelligent. A telegraph pole can stand on one leg better than an Olympic athlete, and telephone directories can remember more digits than a numerical prodigy but, as I say, somehow these things don’t count. Anyway, chess is a handy scapegoat for us, and a very justifiable one, in my humble opinion.

3. Testosterone

Chess is a war game, and war is all about domination. In many people’s minds, the words "domination" and "control" are considered to be roughly synonymous, and the idea of controlling something therefore implies dominating it, by force, from the top. This attitude is something I put down to an excess of testosterone amongst (male) computer scientists. Most women, on the other hand, know that control doesn’t necessarily require force—ask any First Lady. Likewise, most fatalists (and I count myself as one) recognise that control is as much an effect as it is a cause. In many fields, the notion of "command and control" has finally begun to give way to "nudge and cajole". Take sea defences, for example. Like King Canute, people now recognise that they cannot hold back the tide by brute force, so they gently steer and guide the force of the sea instead, turning its own strength around in their favour. Most computer programs used in AI, on the other hand, are still very top-down, centrally controlled structures. Even our understanding of the brain is heavily conditioned by the macho notion of control. We assume the brain "does things to data", whereas it is just as reasonable (and perhaps a great deal more illuminating) to think of the brain as "being done unto by the data"—it is not the brain that controls the data but the data which do things to the brain. I always imagine one of those old-fashioned coin-sorting machines, where the coins slide down a slope past a set of holes of increasing diameter. When they reach a hole wider than themselves, they fall through, and thus the coins become sorted by type. It is not the machine (brain) that sorts the coins, but the coins (data) that sort themselves. Anyhow, this top-down, centralised, macho attitude to intelligence and software design is something I believe to have hindered progress, so if called upon I’ll lay some of the blame on testosterone.
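
As a purely illustrative sketch of that coin-sorting picture, the little Python fragment below (hole and coin sizes invented for the example) contains no central sorting routine at all. Each coin simply drops through the first hole wider than itself, and the sorted bins emerge from that interaction:

# Toy model of the coin-sorting machine described above (sizes invented
# for illustration). Note there is no "sort" call anywhere: each coin
# slides past holes of increasing diameter and falls through the first
# one wide enough for it, so the sorted bins simply emerge.

HOLE_DIAMETERS = [19, 22, 25, 28]        # millimetres, smallest hole first
COINS = [24, 18, 27, 21, 20, 26]         # coins arrive in no particular order

bins = {d: [] for d in HOLE_DIAMETERS}   # one collecting bin beneath each hole

for coin in COINS:
    for hole in HOLE_DIAMETERS:          # the coin slides down the slope...
        if coin < hole:                  # ...and drops through the first hole
            bins[hole].append(coin)      # wider than itself
            break

print(bins)   # {19: [18], 22: [21, 20], 25: [24], 28: [27, 26]}

It is the data (the coins), rather than any controlling procedure, that decide where everything ends up, which is really all the "being done unto by the data" view amounts to.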

4. Blinkers

We can only use our eyes to look at one thing at a time. When we are doing sums in our heads, we can’t simultaneously compose poetry. Even when we’re drunk we find (to our dismay) that we cannot walk in two directions at once. In short, the human mind appears very serial in its "higher" activities, due at least partly to attention, concentration and physical unity. It seems natural, then, to think of thinking itself as a serial process. Thus many if not most AI techniques are based upon serial algorithms. Here again Turing can take a lot of the responsibility. When he was thinking about problem solving in relation to computability, he discussed the notion of a "definite method" and implied by this a serial set of operations: an algorithm. Turing machines thus epitomise the concept of a serial process, and we’ve lived with this serial mentality ever since. Yet the brain, for all the apparent unity of the mind, is clearly a massively parallel device, and I think this is overwhelmingly important. For a machine to be creative (and creative ingenuity is the hallmark of intelligence, at least at the human level) it must by definition perform in a way that the designer didn’t "program in". Its behaviour is therefore emergent, and emergent behaviour (also by definition) is only exhibited by parallel systems, since it is the simultaneous interaction of many parts that creates behaviour at new levels of description. Happily, even a serial digital computer can be made to behave like a massively parallel device, simply by slicing up time. However, the vast majority of AI programming techniques (neural networks and GAs are notable exceptions) do not begin with a time-slicing loop, and so will never be creative, ingenious or (I believe) intelligent.
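
To make the time-slicing idea concrete, here is a minimal Python sketch (every name and number in it is invented for illustration, not taken from any real system). A strictly serial loop updates a population of simple units once per tick, always computing the next states from the previous snapshot, so the units interact as though they all acted at the same instant:

import random

# A serial machine emulating a massively parallel one by slicing up time.
# Each unit holds a state and is wired to a few neighbours; every tick,
# ALL next states are computed from the current snapshot before any unit
# is overwritten, so the units behave as if they updated simultaneously.

NUM_UNITS = 100
NEIGHBOURS = 4

state = [random.random() for _ in range(NUM_UNITS)]
links = [random.sample(range(NUM_UNITS), NEIGHBOURS) for _ in range(NUM_UNITS)]

def tick(old):
    # New state of each unit: a blend of its own old state and its
    # neighbours' old states, all read from the untouched snapshot.
    return [0.5 * old[i] + 0.5 * sum(old[j] for j in links[i]) / NEIGHBOURS
            for i in range(NUM_UNITS)]

for _ in range(1000):   # the top-level time-slicing loop
    state = tick(state)

Whatever interesting behaviour such an ensemble shows belongs to the population as a whole, not to any single unit's update rule, which is roughly what "emergent" means in the paragraph above.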

So there we have it. We AI researchers are clearly blinkered, domineering, chess-playing computer nerds, and that’s why we’ve never got round to finding all the solutions that Turing expected of us! But we do have our caring side. Maybe our trump card is actually a sense of responsibility and concern for our fellow man. We’ve carefully avoided creating HAL, and have therefore saved humanity from hearing those awful words from the film 2001: A Space Odyssey: "I’m sorry, Dave. I’m afraid I can’t do that". Robots will never take over the world now, and it’s all thanks to us. When Christmas 1999 comes round and people start to say to me "you AI guys really screwed up, didn’t you?" I’ll simply have to tell them what heroes we are and how grateful they should be that we did.

 