About AI

 


Artificial intelligence has a chequered history, and for most of its fifty years the name has been rather a misnomer.

To mildly paraphrase Marvin Minsky, much AI is intended to "make machines do what humans use intelligence to do". I need intelligence to find the square root of a 16-digit number; my pocket calculator can do this task; therefore my pocket calculator is AI. But of course this doesn't make it intelligent!

This is often known as Soft AI - it is really an attempt to create substitutes for intelligence. When marketing people claim that your microwave contains artificial intelligence, they usually just mean that it contains circuitry that is highly conditional in its behaviour. Of course what they mean and what they are trying to make you infer aren't always the same thing!

Hard AI is concerned with making things that are really intelligent (whatever that means), and there have been several distinct flavours of Hard AI over the years:

Symbolic AI is the oldest. It assumes intelligence is defined as reasoning, logic and all those high-level things that really intelligent human beings are good at. Reasoning can, up to a point, be defined as a set of symbolic operations (for example Boolean algebra, which defines logical propositions in terms of AND, OR and NOT and then performs mathematical operations on them). Since formal symbolic operations can be automated (that's what computers were designed to do), it follows that a computer can reason.
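To see how reasoning reduces to mechanical symbol-pushing, here is a minimal sketch (the propositions and names are purely illustrative) in which a computer verifies the classic inference rule modus ponens - "if p implies q, and p holds, then q holds" - by grinding through every possible truth assignment:

```python
from itertools import product

def implies(p, q):
    # Material implication: "p implies q" is equivalent to (NOT p) OR q.
    return (not p) or q

# Mechanically check that ((p -> q) AND p) -> q is true under
# every one of the four possible truth assignments to p and q.
modus_ponens_valid = all(
    implies(implies(p, q) and p, q)
    for p, q in product([False, True], repeat=2)
)
print(modus_ponens_valid)  # True
```

The machine has no idea what p or q mean; it simply applies formal operations to symbols, which is exactly the Symbolic AI bet.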

Nevertheless, this is a very impoverished notion of intelligence and runs up against a lot of serious snags, not least of which is finding a formal set of operators and symbols that can do everything a reasoning person can do (especially as people often don't think logically yet can solve really hard problems). Using logic to try to program a computer to ride a bicycle would be an absurd idea, for example.

Connectionism came next (or around the same time, but grew in popularity later). Connectionism draws attention to the fact that the only significantly intelligent machines we know of are brains, and brains are not like serial computers. They are made from billions of small neurons - each of which is very stupid on its own, yet incredibly intelligent in concert with its fellows. This is an emergent viewpoint, of course (see A-life).

Connectionists tried to build networks of simulated neurons that would be emergently intelligent. Sadly they did it without anything like enough understanding of biology! When people talk about neural networks, they are usually referring to a particular connectionist design called a back-propagation network. These have no more in common with real brains than a knight on a chessboard has with the kind in shining armour.
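For the curious, here is roughly what a back-propagation network boils down to - a minimal sketch (the network shape, learning rate and variable names are arbitrary choices, not taken from any particular implementation): a handful of simulated "neurons" whose connection weights are repeatedly nudged downhill on an error measure, here while being shown the XOR function:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Shape: 2 inputs -> 3 hidden units -> 1 output, all sigmoid.
# Each weight row carries one extra entry for a bias term.
n_in, n_hid = 2, 3
w_ih = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_ho = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x + [1.0]))) for ws in w_ih]
    o = sigmoid(sum(w * hi for w, hi in zip(w_ho, h + [1.0])))
    return h, o

# The XOR truth table as (input, target) pairs.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
before = total_error()
for _ in range(10000):
    for x, t in data:
        h, o = forward(x)
        # Output delta: derivative of squared error through the sigmoid.
        d_o = (o - t) * o * (1 - o)
        # Hidden deltas: output error propagated BACK through w_ho.
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        # Gradient-descent weight updates.
        for j in range(n_hid):
            w_ho[j] -= lr * d_o * h[j]
        w_ho[n_hid] -= lr * d_o  # output bias
        for j in range(n_hid):
            for i in range(n_in):
                w_ih[j][i] -= lr * d_h[j] * x[i]
            w_ih[j][n_in] -= lr * d_h[j]  # hidden bias
after = total_error()
print(before, "->", after)
```

Nothing here resembles real neurobiology: the error signal flows backwards along the very same weights, a trick no one has found in actual brains - which is the point being made above.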

New AI turned up in the late 1980s as a result of dissatisfaction with the assumptions behind the old approaches. In New AI, intelligence arises at a much lower level (having more to do with riding a bicycle than playing chess), and the main inspiration is not the human brain but the "brains" of extremely simple invertebrates.

All three schools of AI have a lot wrong with them. It's commonly believed that AI research is making steady progress, but I don't think that's true at all. I think we fundamentally don't have a clue how to do it. All AI research has been valuable - nobody learns anything without making mistakes - and Soft AI has given us many useful kinds of technology. But we're still just poking around at the problem in the hope that something gives. Until we understand the fundamental operating principles of the mammalian brain I don't think we'll even know where to start. On the other hand, traditional reductionist neuroscience isn't doing very well at telling us these principles, so it may well be that the answer to both problems comes from AI researchers poking around to see what works.

 
Copyright © 2004 Cyberlife Research Ltd.
Last modified: 06/04/04