“Small Furry Creatures”

Design Definition


Steve Grand

Millennium Interactive Ltd.
19th July 1995

1.     Overview

This section provides a brief overview of the product and its features, for reference.

1.1     The Concept

“Small Furry Creatures” is the working title of a computer entertainment product that enables people to keep a ‘pet’ inside their computer, either at home or in the office. These ‘virtual pets’ are intelligent, lifelike and curious creatures, who live inside a virtual world filled with objects and locations for them and their owners to explore. Virtual pets can be taught a rudimentary language, need care, attention and an education if they are to thrive, and can even, in expert hands, breed and produce unique offspring.

The premise is that this product is not a conventional game but an entertaining experience in the widest sense. What we are saying to the public is:

“Here is a new form of artificial life. It is intelligent enough to provide you with interest and amusement. Here is a tank to keep it in. Here is a forum where you can communicate with experts and like-minded enthusiasts. Go forth and multiply them. Tell us what you find. Let’s see how they develop. Where will it all lead?”

1.2     Key features

·       A believable, living world inside the computer. Creatures to interact with; places and objects to explore.

·       New Artificial Intelligence technologies provide, as far as we know, the first ever commercially available artificial lifeforms.

·       No plot or imposed sequence. Something to ‘play with’, rather than something to ‘play’.

·       Capitalises on the user's imagination, curiosity and sympathy. Largely non-aggressive and also not male-biased.

·       Easy to use, but not limited in scope. Opportunities for everyone from young children, through pet lovers and the generally curious computer user, right up to the specialist A.I. and A-Life enthusiast.

1.3     Platform

The PC version of the product is designed to run under Microsoft Windows, using 32-bit code. It will run in Windows 3.1X using the Win32s library, or Windows NT and Windows 95 in native mode. The design is optimised for Windows 95. Full use is being made of the Windows operating system, and the product is fully capable of multitasking alongside other applications in the office environment, such as spreadsheets.

1.4     Technology

A key element in this product is the use of a proprietary Artificial Life system, comprising a neural network brain model, a simulated biochemistry and a ‘genetic’ coding scheme. These are described in detail in the technical paper appended to this document.

Two other significant technological elements of the product are the extensive use of a specialised script language for controlling objects and creatures, and the use of Microsoft’s Dynamic Data Exchange (DDE) mechanism to provide external connectivity. The combination of these sub-systems offers the following features:-

·       Flexibility — The behaviour of objects and creatures is controlled by editing their event scripts, using an external editor. This facilitates easy development, debugging and enhancing of objects without the need to change compiled code.

·       Efficiency — DDE and script links between the main application and external ‘mini-apps’ allow for a rich and attractive user interface without excessive memory demands.

·       Extensibility — New objects can be imported into an existing world, creatures can be moved to new worlds and new tools and facilities may be provided after publication of the initial product.

1.5     User Support

Millennium Interactive Ltd. plan to provide extensive user support by developing a ‘Small Furry Creatures’ World Wide Web site on the Internet (the “InterPet”). This will provide a forum in which users and developers can discuss the project, and also a medium for the trading and dissemination of information, creatures, new objects and tools.

2.     Specification

This section specifies the features of the design, as at 19/7/95. Minor modifications and improvements to the following are likely as the project develops further.

2.1     The virtual world

2.1.1     Landscape

The planet Albia is a thin, discoid world, where East and West are the dominant directions, and North and South are largely irrelevant. Imagine two sheets of glass enclosing a thin film of soil, rather like a wormery or formicarium.

Much Albian life occurs underground, in burrows. Changes of level are effected by the use of elevators and cable cars. Settlements contain burrows for living in, schoolrooms, hospitals, libraries, engine rooms and others, each containing useful and interesting objects.

The surface is divided naturally into several regions, with characteristic climate, geology and architecture.

The virtual world is being prepared as a three-dimensional model, which will be digitised and used to provide realistic backdrop and scenery elements for the display.

2.1.2     Life-forms

Albia comprises a motley collection of strange life-forms. Aside from the unintelligent and purely decorative fish and birds, there are a range of creatures (Norns, Grendels and Ettins) equipped with fully working brains. The Norns' brains are unformed at birth, and it is up to the user to train them and help them develop. The other species will be driven by different sets of needs, and will have been ‘trained’ to some extent before being frozen onto CD-ROM, so that they exhibit characteristic behaviour.

Some of the creatures are alive and kicking right from the start of the game, though few will be seen until the world has been explored somewhat. The user’s pet Norns, however, start off as large eggs, a range of which is available (perhaps on a separate floppy disc). The user will choose one or more of these eggs and place them into the world, whereupon they will hatch and a baby Norn will emerge.

The user will be able to rear and educate these creatures and eventually breed them. Any ‘deaths’ can be made up for by hatching another egg from the stock. Excess offspring may be distributed over the Internet or from friend to friend. Offspring are genetically related to their parents, and show inherited characteristics. Mutation is also possible, allowing some limited potential for evolution.

2.1.3     Objects

Strewn around the landscape are the following types of object:-

·       Intelligent Creatures — These are drawn as ‘articulated’ composite objects, allowing smooth movement, a wide range of poses (well over a million) and lifelike actions.

·       Stupid Creatures — Small, animated but not articulated birds, fish, etc. will travel around the landscape to add a bit of movement.

·       Simple Objects — The landscape will be littered with a wide range of simple objects, such as food, fuel and toys.

·       Compound Objects — More complex objects, consisting of several parts, will include machines (steam engines, computers...) and vehicles (trucks, boats, airships...).

·       External objects — Some objects, when activated by the user (as opposed to by a creature), will generate ‘magnified’ versions of themselves in the form of ‘mini-applications’, allowing the user to make further interactions. For example, clicking on a book will show a magnified book whose pages can be read; clicking on an EEG machine will allow the user to watch a creature's brainwaves on a magnified version of the machine. Some magnified objects will automatically register themselves as icons on the toolbar, for future access.

2.2     Graphics

2.2.1     Viewpoint

As described above, the world is viewed from the side, as a thin slice. Scrolling is from side to side and vertically.

2.2.2     Backdrop

The backdrop will be made by digitising a ‘real’, three-dimensional model of the world. This model is extensive, and the final graphics will occupy approximately 20x5 default-sized windows in area. All graphics are in 236 colours (256 minus the 20 reserved Windows system colours).

2.2.3     Objects

Many objects will also be modelled in 3D, digitised and then edited by pixel artists. The creatures themselves may finally be modelled from clay or they may be fully rendered on a graphics workstation.

2.3     Sound

Extensive digitised sound effects will be included, both to signal and enhance the actions of creatures and other objects, and to provide background ‘atmospherics’ (for example jungle noises).

Because the sound effects are provided through the Windows multimedia interface, they support a very wide range of sound cards.

2.4     User Interface

2.4.1     WIMP

The general user interface will conform to Microsoft’s guidelines for Windows, and especially for Windows 95. For example, the window is sizeable and moveable, remembers its last state and can be minimised in the usual way. Normal Windows menus, toolbars, dialogues and scrollbars are used. The main exception is the cursor — a ‘hand’ that exists within the virtual world will take over from the Windows mouse cursor whenever the latter moves onto the main window.

2.4.2     Scrolling

The ‘camera’ normally follows the user’s pet creature automatically, but can be panned a short distance to either side by means of scroll bars. The intention is that the user cannot simply see everything by himself, but must encourage his pet Norn to travel if he is to explore the world.

2.4.3     Pick Up and Drop

Many of the smaller objects can be picked up by the user (and often by other creatures) and dropped elsewhere (e.g. in a vehicle, for transporting) or occasionally ‘used’ on other objects. To pick up an object, the user clicks the right mouse button over it.

2.4.4     Activating

Many smaller objects can be activated by clicking on them with the left mouse button. Larger objects like machines and vehicles may have several discrete ‘buttons’ or other active areas to click on.

Activating some objects brings a magnified version onto the screen (as a complete mini application), for further interaction or monitoring.

2.4.5     Interacting with Creatures

Creatures can be clicked on with the mouse pointer (or other mouse-carried objects). Clicking the pointer over a creature's head ‘pats’ it (eliciting pleasure), while clicking over its body ‘slaps’ it (eliciting pain).

Creatures can also be ‘talked to’, by typing, recalling or editing a word or phrase, which will then appear in a speech bubble next to the mouse pointer. The effect of speaking to creatures depends on whether, and how well, the user has taught them to respond to a given word. Teaching can be carried out by speaking a word in association with an object, or by typing that word onto a blackboard in a schoolroom and associating it with a given picture. The user may use his own language, or any other that uses the Roman alphabet.

Creatures can see the mouse cursor and mouse-carried objects, and may well respond to having objects waved in front of their noses (perhaps like a carrot in front of a donkey).

These are the only (normal) forms of interaction with creatures, and the user will often have to be ingenious if he wants to make the creature do a particular thing.

2.4.6     Saving

The game will automatically restart from where the user last left off, unless reset. There is no purpose in allowing the user to deliberately save his position, since this is not a conventional game, and doing so would merely destroy the illusion that he was witnessing reality.

2.4.7     Online Help

We will provide extensive and interesting support materials via the Windows Help system. Some multimedia elements will be included, such as video clips containing discussions and interviews, demonstrations and other items.

The help system will consist of topics within the following areas:-

·       How to use the application.

·       How to look after your norn.

·       Informative articles on Albia, its objects, geography and creatures.

·       Background reading on the fields of Artificial Life and Artificial Intelligence.

2.5     Distribution medium

The product will be distributed on CD-ROM. This will possibly be accompanied by a single floppy disc containing the ‘eggs’ (‘wrapped in cotton wool’, as it were, to increase the sense of pet ownership and ‘specialness’).

2.6     System Requirements (PC platform)

·       486 / 25MHz minimum. 50MHz DX or better recommended.

·       VGA or SVGA display.

·       Mouse.

·       MS-DOS 3.2 to 6.0 or equivalent.

·       Microsoft Windows 3.1X, WFW 3.11, Windows NT or Windows 95. Video for Windows, Win32s and WinG runtime modules will be supplied for 16-bit Windows versions.

·       Minimum 4MB memory. 8MB recommended.

·       Sound cards: any MPC standard sound card

·       Double speed CD-ROM drive

3.     Appendix: A.I. technology

The following is an extract from a scientifically-oriented paper on the technology developed for use in Small Furry Creatures. This material is copyright Millennium Interactive Ltd., and is not for publication or duplication.

3.1     Objectives

Our intention is to produce an entertainment product for home computers which allows people to own, interact with and breed ‘intelligent’ virtual creatures (purely for fun), in a temporal and spatial context which emphasises the individuality of those creatures - their personality, autonomy and ‘lovableness’ (as opposed to simulations which focus on large communities of organisms and/or many generations of evolution). We are using A-Life philosophy and techniques to program the simulation, because we believe that this is the best, if not the only way to produce convincingly lifelike and autonomous creatures. The aim is that a lifelike dynamic should emerge from the mechanisms that model the creatures’ brains, physiology and genetics. In order to convince people (to their own satisfaction) that these creatures are in some meaningful sense ‘alive’, we are trying our best to avoid any hard-wired, pre-programmed or rule-following algorithms. Instead we are using a form of neural net for the creatures’ brains, a simple simulated biochemistry for their physiology and a genetically determined mechanism for their morphogenesis.

Technically, then, the aim is to perform a ‘grand synthesis’ of techniques from several A-Life fields (neural nets, autocatalytic sets, genetic algorithms, virtual-world simulation), combining them to create a single piece of software which aims to get as close as possible to the genesis of a ‘whole’ artificial lifeform. Unlike a research project, we are not concerned with establishing any truths from the behaviour that results; all that we require is that the outcome is entertaining to the general public.

Despite our commitment to the ‘bottom-up’ philosophy and the A-Life ideal, this product is obliged to run happily in a multi-tasking environment on an average domestic PC. Not only, therefore, do we have to forgo the elegant 3D environments used in many academic simulations, but we also have to be prepared to cut corners where necessary, in order to keep the processor load within bounds. For example, the neural net that provides a creature with a ‘brain’ concerns itself solely with the business of reactive decision-making. No attempt has been made to develop distributed pattern-recognition circuits for sensory pre-processing, nor to emulate the motor areas of the brain with regard to the sequencing of actions - both of these tasks are handled by algorithms. Similarly, we are not in the business of hopeful experimentation: we have to produce a finished product that does what we intended it to do, within a limited time frame. Therefore, we have been relatively conservative in our models, keeping to designs whose behaviour we can reasonably predict in advance, and whose performance is limited but substantially assured.

3.2     The neural network.

Each creature contains a NN to represent its brain. As explained above, this NN is concerned solely with the business of making high-level decisions about a sensible course of action, given information about the current sensory situation. Sensory data are pre-processed by algorithms, and often extracted directly from the environment - objects are recognised by their I.D., not by their shape. These data are presented as ‘analogue’ signals to a heterogeneous array of 1,024 ‘neurones’, whose collective job is to perceive the current situation, relate it to past experiences and determine an appropriate response, either from memory or by generalisation. The output of the NN is a set of signals, the strongest of which determines the course of action (such as ‘approach this object’ or ‘attempt to activate it’). The action is then executed by a ‘script’, written in a high-level script language.

The NN runs asynchronously with the other physical and environmental processes, with its state updated at approximately 100 ms intervals. All neurones are similar in structure, but their parameters and inter-wiring change with position to divide the net into several distinct regions.

Each ‘neurone’ is a state machine, whose internal state is determined by summing the inputs from external senses or other neurones (each signal modulated by the ‘synapse’, through which it passes). The summed inputs exert a ‘nudge’ on the state value - the state is an elastic quantity which tends to revert exponentially at a given relaxation rate to a given rest state. The cell’s output is the current state when filtered by a threshold. By varying relaxation rates, rest states and thresholds, a number of different neurone dynamics may be created. One important consequence of the exponential relaxation of neurone states is that they become increasingly resistant to change, the further they get from equilibrium. This generates a valuable damping mechanism, which prevents the system from locking up in response to positive feedback or excessive input.
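The elastic-state dynamics described above might be sketched in Python as follows. This is purely an illustrative reconstruction from this description, not product code, and all parameter values are invented:

```python
class Neurone:
    """Sketch of the elastic-state neurone described above.
    All names and parameter values are illustrative."""

    def __init__(self, rest_state=0.0, relax_rate=0.5, threshold=0.2):
        self.state = rest_state          # internal state value
        self.rest_state = rest_state     # equilibrium the state relaxes towards
        self.relax_rate = relax_rate     # fraction of the gap closed per tick
        self.threshold = threshold       # output filter

    def update(self, inputs):
        """One ~100 ms tick: sum the synapse-modulated inputs, nudge the
        state, then relax exponentially back towards the rest state."""
        nudge = sum(signal * weight for signal, weight in inputs)
        # The further the state is from equilibrium, the weaker the nudge's
        # effect -- a crude version of the damping mechanism in the text.
        damping = 1.0 / (1.0 + abs(self.state - self.rest_state))
        self.state += nudge * damping
        # Exponential relaxation towards the rest state.
        self.state += (self.rest_state - self.state) * self.relax_rate
        return self.output()

    def output(self):
        """Output is the current state filtered by a threshold."""
        return self.state if abs(self.state) >= self.threshold else 0.0
```

Varying `relax_rate`, `rest_state` and `threshold` per region is what yields the different neurone dynamics mentioned above.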

Each neurone can have one or more ‘dendrites’, which synapse onto other neurones. Each synapse has a signed weight (excitatory or inhibitory). This too is an elastic quantity, that tends to relax towards a rest state. The rest state is in turn an elastic variable that tends to relax towards the current weight, but at a much slower rate. Therefore, disturbing the synaptic weight creates a difference between it and its rest state: the weight then relaxes rapidly back to rest, but in the mean time the rest state rises a little more towards the displaced weight. The effect of this is reminiscent of a short-term and a long-term memory: feedback during learning has a strong but temporary influence on certain synapses’ weight values, yet repeated feedback in the same direction will create a much longer term, or permanent, shift in synaptic weight. If one puts one’s hand on a hot surface, the experience immediately overrides any motivation to repeat the action in the short term, yet does not altogether rule out that course of action in the future, unless the reinforcement is reliably repeated. This feature has an important role in providing a lifelike response to learning situations and is similar in some ways to the habituation reflex shown by biological neurones.
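The short-term/long-term interplay between a synaptic weight and its rest state might be sketched as follows (an illustrative model only; the relaxation rates are invented):

```python
class Synapse:
    """Sketch of the two-level 'elastic' synapse: the weight relaxes
    quickly towards its rest state, while the rest state drifts slowly
    towards the (displaced) weight."""

    def __init__(self, weight=0.0, fast_rate=0.5, slow_rate=0.05):
        self.weight = weight        # short-term, signed (excitatory/inhibitory)
        self.rest = weight          # long-term component
        self.fast_rate = fast_rate  # weight -> rest relaxation per tick
        self.slow_rate = slow_rate  # rest -> weight drift per tick

    def reinforce(self, delta):
        """Learning feedback perturbs the weight directly (short-term)."""
        self.weight += delta

    def tick(self):
        # The rest state creeps a little towards the displaced weight...
        self.rest += (self.weight - self.rest) * self.slow_rate
        # ...while the weight relaxes rapidly back towards rest.
        self.weight += (self.rest - self.weight) * self.fast_rate
```

A single reinforcement therefore fades quickly, while repeated reinforcement in the same direction leaves a lasting shift in the rest state, as described in the text.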

The NN is divided in terms of neural/synaptic parameters and wiring mode into several areas. Some of these use neurones simply as storage devices - simple ‘memories’ that retain information about internal and external changes elsewhere in the system. In this mode, the elasticity of their internal states makes them far more useful than straightforward computer variables. One region of this type stores the current levels of the organism’s ‘drives and needs’ - a single neurone stores the current hunger level, for example. Perturbations to these ‘drive’ neurones are the mechanism that provides reinforcement during learning (things which increase drives are considered negative reinforcement, while events that decrease them are considered positive). Another region is concerned with ‘attention-directing’: Each environmental object of which the creature is currently ‘aware’ (because it has come into view, or made a noise, etc.) is ‘assigned’ one of these neurones. Every subsequent visible or audible stimulus emitted by that object is used as an excitatory input to the relevant neurone. The states of these neurones therefore comprise a graph of the probable ‘significance’ of each object at any moment. The creature’s ‘attention’ is directed towards the object perceived as most significant. This attended-to object is the source for many of the NN’s sensory inputs, and is also the object to which the creature’s actions are applied.
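The attention-directing region might be sketched as below. Object identifiers, stimulus strengths and the relaxation rate are all invented for illustration:

```python
RELAX = 0.3  # per-tick decay of significance towards zero (illustrative)

class AttentionDirector:
    """One elastic 'significance' neurone per object the creature is
    currently aware of; attention goes to the strongest."""

    def __init__(self):
        self.significance = {}   # object id -> current significance level

    def stimulus(self, obj_id, strength):
        """A visible or audible event from obj_id excites its neurone."""
        self.significance[obj_id] = self.significance.get(obj_id, 0.0) + strength

    def tick(self):
        """Significance relaxes towards zero, so quiet objects fade away."""
        for obj_id in self.significance:
            self.significance[obj_id] *= (1.0 - RELAX)

    def attended_object(self):
        """The object currently perceived as most significant -- the one
        the creature's senses and actions are bound to."""
        if not self.significance:
            return None
        return max(self.significance, key=self.significance.get)
```

A loud one-off event thus grabs attention briefly, while a quieter but persistent stimulus eventually wins it back.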

The rest of the brain is devoted to the main activity of perceiving the sensory situation and deciding upon a response. The vast bulk of the neurones constitute a region called ‘Concept Space’ (although ‘Percept Space’ would have been a better choice). The duty of Concept Space is to combine the various sensory inputs and represent the current ‘situation’. Output from concept neurones is channelled to the Decision Layer - a small region of massively dendritic neurones which accumulate impulses from Concept Space and ‘vote’ for the course of action that they each represent.

The duty of Concept Space is to accept discrete sensory data such as ‘A is true; B is very true’ and de-multiplex them to provide an output that says ‘A+B is the current situation’. When (as is usual) many sensory inputs are firing simultaneously, Concept Space not only has to represent the overall situation (such as A+B+C) but also the ‘sub-situations’ A+B, A+C, B+C. This is because the creature has no means of knowing initially whether it is the total sensory situation that is important, or only some aspect or aspects of it.

Clearly, the total number of concept neurones that would be needed to represent all the possible combinations of all sensory inputs (approximately 80) would be astronomical (2^80 - 1). Therefore Concept Space has to ‘wire itself up’ to represent sensory situations as they are experienced, since it is self-evident that any one creature will only experience a tiny proportion of the space of possible experiences. Concept Space is therefore the creature’s ‘event memory’, and combines with the Decision Layer to constitute its ‘rule memory’.
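The combinatorial argument can be made concrete: even enumerating the non-empty sub-situations of a handful of firing inputs grows exponentially, which is why pre-allocation is hopeless and Concept Space must allocate on demand. An illustrative sketch (input names are invented):

```python
from itertools import combinations

def sub_situations(active_inputs):
    """Every non-empty combination of the currently firing inputs is a
    candidate sub-situation. With ~80 sensory inputs there are 2**80 - 1
    such combinations in total -- far too many to pre-allocate a concept
    neurone for each, hence the self-wiring scheme described above."""
    subs = []
    for r in range(1, len(active_inputs) + 1):
        subs.extend(frozenset(c) for c in combinations(sorted(active_inputs), r))
    return subs
```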

Each decision neurone makes a connection with the output of every concept neurone that fires. The synaptic weights of the concept-decision synapses are adjusted in response to reinforcement. Making a decision is then a simple matter of accumulating all the inputs from Concept Space into the neurones of the Decision Layer, then selecting the action whose decision neurone is firing most strongly. A number of other influences are brought to bear on the response patterns of the Decision Layer in order to ensure that, for example, creatures are not fickle, forever changing their mind as the sensory situation gets modified during the course of their current action. Both concept and decision cell dendrites atrophy over time to implement a ‘pruning’ mechanism, freeing up cells/connections that turned out to have no value.
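The Decision Layer vote might be sketched as follows. The persistence bonus is one invented illustration of the ‘anti-fickleness’ influences mentioned above, and all weights are hypothetical:

```python
def decide(firing_concepts, weights, persistence_bonus=0.3, current=None):
    """Sketch of the Decision Layer: each firing concept neurone
    contributes, via its learned synaptic weight, to the score of every
    action; the strongest-firing decision neurone wins.
    `firing_concepts` maps concept -> signal strength;
    `weights` maps (concept, action) -> signed synaptic weight."""
    scores = {}
    for concept, signal in firing_concepts.items():
        for (c, action), w in weights.items():
            if c == concept:
                scores[action] = scores.get(action, 0.0) + signal * w
    # A small bonus for the action already in progress stops the creature
    # forever changing its mind as the sensory situation shifts.
    if current is not None and current in scores:
        scores[current] += persistence_bonus
    return max(scores, key=scores.get) if scores else None
```

Reinforcement learning then amounts to adjusting the concept-decision weights after each outcome, and pruning amounts to deleting entries whose weights have atrophied.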

One important part of the concept-forming process is generalisation. If a situation A+B exists and has been experienced before, then there will be a concept neurone representing A+B, and it will synapse onto one or more decision neurones, according to which action(s) the creature took in that situation in the past. The synaptic weight of each connection represents how appropriate or otherwise that action was, and the best action to take this time will follow directly from the strength of the resultant signals (moderated by the influence of other current and recent situations). However, what happens the very first time that A+B is experienced? There are no non-zero-weighted synapses between A+B and any of the decision neurones to guide the creature. A random course of action would be, at the least, very un-lifelike. Therefore, the creature must generalise from previous, similar situations, to provide an educated guess at how to react to this novel situation. The initial hope was that this could be achieved by encouraging concept neurones to wire themselves up to represent situations, such that similar situations were represented by nearby neurones. Then a simple ‘lateral excitation’ effect, whereby firing concept neurones encourage their neighbours to fire in inverse proportion to distance, would provide the necessary influence: A+B fires strongly, but makes no recommendations; however, A+B’s neighbours represent similar, previously experienced situations, and their lateral excitation makes recommendations (albeit weakly) on A+B’s behalf. If the total vote recommends an action that turns out to be profitable or painful, then A+B will ‘learn’ how to react in the future.

This would be a powerful mechanism. However, it requires the invention of a ‘rule’ for dendritic growth that encourages the development of a geographical relationship between similar concepts - something akin to the development of ‘place fields’ in the Hippocampus. A ‘grow towards the nearest source of signal’ rule can be imagined for biological neurones, but this leads to practical difficulties when converted to a computational model. In any case, all geographically based solutions of this kind are likely to suffer from the ‘problem of common centres’. For example, suppose A and B form diagonally opposite corners of a rectangle with C and D. The logical place for a neurone to represent A+B is near the centre of the rectangle. Unfortunately, that is also the logical place for C+D. If A+B fires for the first time, it will be strongly influenced by the lateral excitation of the pre-existing and nearby C+D. Unfortunately, there is no reason to suppose that situation C+D and A+B are related. Thus the generalisation is inappropriate. It is still possible that a self-organising mechanism can be invented that does not suffer unduly from the common centres problem, and such a mechanism would have a most interesting structure. Work in this direction will, however, have to wait until less stressful circumstances exist!

However, in the meantime, a less interesting system has been implemented, which has no need for a geographical juxtaposition of similar concepts. Already, whenever A+B+C fires, so too do A, B, A+B, and so on. These (especially the single-input species) are more than likely to have occurred previously, and almost by definition A+B is a similar situation to A+B+C. Therefore the sub-concepts automatically provide the sought-for generalising influence, and no geographical relationship is necessary (although for aesthetic reasons it is still desirable). The present model, therefore, employs a dendritic growth rule that allows concept neurones to form into binary trees, representing composite situations in the manner: A, A+B, AB+C, ABC+D, and so on. This is unsatisfactory in many ways, and is one of the places where a departure from A-Life principles was necessary - some pragmatic top-down processing is needed to ensure the proper development of concept trees. However, it is computationally efficient, and it works well enough. Perhaps before the end of the project, another mechanism will emerge to replace it. Perhaps one might even evolve to replace it - see below.
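The binary-tree growth rule produces a simple chain of composites for any ordered set of inputs, which can be sketched in a few lines (illustrative only):

```python
def concept_chain(inputs):
    """For inputs arriving in order [A, B, C, D], the binary-tree rule
    yields the chain of composite concepts A, A+B, AB+C, ABC+D -- each new
    concept pairing the accumulated situation with one further input.
    When the full chain fires, the shorter (similar) sub-concepts fire
    too, providing the generalising influence described above."""
    chain, seen = [], ""
    for x in inputs:
        chain.append(x if not seen else seen + "+" + x)
        seen += x
    return chain
```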

3.3     Non- or Semi-neural mental mechanisms

Virtual creatures provide no tactile interaction with the user, and their ‘body language’ is limited in its ability to communicate the creature’s ‘feelings’. To overcome this inevitable lack of ‘rapport’ between user and pet, it was decided to implement a limited form of language. Thus creatures can feed back to the user information on how they are feeling and what they are trying to do, and the user can issue ‘commands’ to his creature and play the role of ‘trainer’. It was felt unlikely that any NN support for the genuine learning of language could be developed in time, yet it was important that language did not override the creature’s autonomy by being coded as a ‘bypass’ to the NN. Therefore, a simple scheme was developed that 1) uses the state of the brain’s attention-directing neurones as a guide to establishing a ‘learned’ connection between objects and words during teaching, and 2) feeds algorithmically-recognised spoken commands into the net as sensory inputs that are pre-wired before birth to achieve the desired effects (this pre-wiring is performed by effectively ‘training’ the brain in embryo to associate word stimuli with the desired responses - a technique that is also used to pre-wire a few ‘instinctive reactions’ into the pre-natal NN). Nouns, for example, stimulate neurones in the attention director relating to the object they represent, thus encouraging the creature to divert attention to that object. Verbs (those words relating to the actions that the creature can perform) are presented to Concept Space as sensory inputs that have pre-wired positive associations with the equivalent decision neurones. If you say to your creature “get the ball”, the word “ball” tempts the creature to attend to the ball object (if any such object is present) and the word “get” stimulates (via a concept neurone) the ‘pick up’ decision neurone. All other things being equal, the creature will then do as commanded. 
Despite this system being a ‘cheat’ in philosophical terms, it at least works with the brain, rather than overriding it: the creature will refuse to shift attention if another object is perceived as too significant, and will refuse to respond to the “get” command if other actions recommend themselves more highly. The more ‘practice’ the creature has had in learning the words, the more likely they are to be responded to (practice increases the weights on the sensory inputs). Generally, the technique works well, and some interesting phenomena have occasionally been observed when several creatures interact with each other via this language mechanism.
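The essential point, that words are fed in as ordinary (if pre-wired) sensory nudges rather than as a bypass, might be sketched as follows. The lexicons, practice scaling and identifiers are all invented for illustration:

```python
def hear_phrase(phrase, noun_lexicon, verb_lexicon,
                attention_levels, decision_inputs, practice=1.0):
    """Nouns excite the attention neurone of the object they name; verbs
    excite the (pre-wired) sensory inputs feeding the matching decision
    neurone. Each nudge is scaled by how well the word has been
    'practised'. Unknown words are simply ignored, and nothing is forced:
    the brain can still overrule any nudge that loses to stronger
    influences elsewhere in the net."""
    for word in phrase.split():
        if word in noun_lexicon:
            obj = noun_lexicon[word]
            attention_levels[obj] = attention_levels.get(obj, 0.0) + practice
        elif word in verb_lexicon:
            act = verb_lexicon[word]
            decision_inputs[act] = decision_inputs.get(act, 0.0) + practice
    return attention_levels, decision_inputs
```

So “get the ball” nudges attention towards the ball object and nudges the ‘pick up’ decision, exactly as in the example above.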

Another ‘cheat’ that has been grafted onto the design for pragmatic reasons is goal orientation. The NN is a purely reactive mechanism, and only concerns itself with reacting to objects currently within observational range. If a creature is hungry and food is nearby, then the creature will soon learn to eat the food. It is unlikely, however (though it is possible indirectly), that he will seek out distant food. To simulate this behaviour a table of numbers is provided that describes to what degree each object type is beneficial in satisfying each need. This combines with a ‘memory’ of where such an object was last seen, and a table of ‘navigational heuristics’ to provide ‘prompts’ to the NN (in a similar manner to the language system) that tend to nudge the creature to perform actions that seek out the object which best satisfies his most pressing need. Currently these tables’ contents are statically defined. However, it should be possible, given time, to provide feedback paths that allow the creature to modify his perception of each object’s value and perhaps even tune the heuristics that control the seeking-out process.
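The table-driven seeking mechanism might be sketched as follows; the benefit table, drive names and locations are all invented for illustration:

```python
BENEFIT = {                      # object type -> {drive: benefit} (invented)
    "food": {"hunger": 0.9, "boredom": 0.1},
    "toy":  {"hunger": 0.0, "boredom": 0.8},
}

def seek_prompt(drives, last_seen):
    """Pick the most pressing drive, then the object type that best
    satisfies it and has a remembered location. The result would be
    turned into 'prompts' nudging the NN (much as the language system
    does), with navigational heuristics guiding the journey.
    Returns (object type, remembered place)."""
    need = max(drives, key=drives.get)
    best, best_score = None, 0.0
    for obj, benefits in BENEFIT.items():
        score = benefits.get(need, 0.0)
        if obj in last_seen and score > best_score:
            best, best_score = obj, score
    return (best, last_seen[best]) if best else (None, None)
```

The feedback paths mentioned above would amount to letting experience adjust the `BENEFIT` entries rather than leaving them statically defined.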

The above brief summary of the NN dynamics glosses over a number of problems and necessary extra ‘features’ of the design. However, despite the fact that the system is largely feed-forward (dull), containing no totally internal positive feedback loops (exciting), the resultant behaviour is generally quite gratifying to observe, and the necessary requirements of a lifelike dynamic, elementary learning and rudimentary intelligence seem to have been met.

3.4     Physiology

A brain is not enough: for our customers to have an enjoyable relationship with their ‘virtual pets’, the creatures must also show such phenomena as the ability to contract disease, or suffer from malnutrition. Moreover, it must be possible for their owners to ‘cure’ these problems by the appropriate use of herbs and pharmaceuticals provided within the virtual world. Therefore, the creatures need a reasonably convincing physiology.

Currently such facilities have not been programmed, but the aim is to incorporate a simple ‘biochemistry’, comprising the following components: a ‘bloodstream’, or ‘protoplasm’, which acts as a reservoir for a collection of ‘chemical factors’; a set of ‘emitters’, which are attached to various objects inside a creature (sensory organs, neurones, etc.) and when stimulated generate one of these chemical factors; a set of ‘reaction sites’, which are capable of converting one or more reactant factors into a product factor (e.g. A+B->C, A->nothing); and, finally, a set of ‘receptors’, which are sensitive to a given chemical factor, to a given degree, and which cause changes in the object (neurone, muscle, or whatever) that they are attached to.
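The four-part scheme above might be sketched as follows. This is a minimal assumption-laden illustration: the chemical names, rates and thresholds are invented, and an ‘emitter’ is represented here simply as a call that deposits a factor into the bloodstream.

```python
# A minimal sketch of the proposed 'biochemistry'. All chemical names,
# rates and thresholds are invented for illustration.

class Bloodstream:
    """Reservoir holding the concentration of each chemical factor."""
    def __init__(self):
        self.conc = {}                     # chemical factor -> concentration

    def add(self, chem, amount):
        self.conc[chem] = self.conc.get(chem, 0.0) + amount

    def get(self, chem):
        return self.conc.get(chem, 0.0)

class ReactionSite:
    """Converts reactant factors into a product, e.g. A + B -> C."""
    def __init__(self, reactants, product, rate=1.0):
        self.reactants, self.product, self.rate = reactants, product, rate

    def react(self, blood):
        amount = min(blood.get(r) for r in self.reactants) * self.rate
        for r in self.reactants:
            blood.conc[r] = blood.get(r) - amount
        if self.product is not None:       # A -> nothing is also allowed
            blood.add(self.product, amount)

class Receptor:
    """Fires an object-specific function when its chemical is present
    above a given threshold (the 'sensitivity')."""
    def __init__(self, chem, threshold, action):
        self.chem, self.threshold, self.action = chem, threshold, action

    def update(self, blood):
        if blood.get(self.chem) > self.threshold:
            self.action(blood.get(self.chem))

# Example: an 'emitter' puts starch into the blood on eating; a reaction
# site digests starch into glucose; a receptor then reduces the hunger
# drive, closing the 'needs and drives' reinforcement loop.
blood = Bloodstream()
blood.add("starch", 1.0)                   # emitter fires when food is eaten
ReactionSite(["starch"], "glucose").react(blood)
hunger = [0.9]
Receptor("glucose", 0.5,
         lambda c: hunger.__setitem__(0, hunger[0] - 0.5)).update(blood)
```

Because receptors and emitters can attach to neurones as well as organs, the same machinery could in principle mediate between the biochemistry and the neural net, which is what makes alternative ‘physiologies’ expressible within one scheme.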

Such a system is capable of simulating digestion and respiration, the driving of the reproductive process, an ‘immune system’ and the control of the ‘needs and drives’ reinforcement mechanism. For example, the following diagram represents a simple model of digestion and its effects on needs and drives:

[Diagram: a simple model of digestion and its effects on needs and drives]

It should be relatively straightforward to use this scheme to express the necessary interactions required to simulate disease, medicine and their physiological effects. Just as important is the fact that this model is only one of the possible ‘physiologies’ that the scheme is capable of implementing. This makes evolutionary changes possible (see below).

3.5     Genetics

One of the things we think people are going to want to do with their creatures is breed them. Not only that, but they will want to be proud of their offspring as individuals. Therefore, offspring must inherit characteristics from their parents in a believable and lifelike way. We would also like our customers to feel that these phylogenetic changes have the potential to introduce novel alterations to the makeup of their creatures. In short, they will be pleased if they can believe it possible that these creatures can evolve.

Clearly, although there will be hundreds of thousands of creatures in existence, and although we will do our best to encourage cross-breeding via the Internet and suchlike, no genuinely productive evolution is likely in such a limited number of generations. However, it is important to us that we can be honest and say that such things are possible, no matter how remote that possibility may be. In any case, we have to provide for the inheritance of characteristics, and it is only a small step beyond this to provide for mutation. Our aim is to create a ‘whole’ living organism, and a proper genetic scheme is more or less demanded by that aim.

The physiological model outlined above is clearly very easy to encode within a ‘gene’, can be mutated to generate ‘alternative’ physiologies and is capable of expressing a multitude of different ‘reaction maps’ - most of these will work, some will lead to (hopefully) lifelike ‘handicaps’ and some might even turn out beneficial. Likewise, the creature’s physical appearance - limb shape, skin markings, etc. - can be determined genetically (if only by a mix-’n-match process from a limited repertoire of possibilities). The brain, too, especially because of the ability of biochemical ‘receptor sites’ to bond to neurones, can have its structure determined by genes. What is less certain, at the moment, is the extent to which we can provide for genuinely creative evolution. We can satisfy our public by allowing genetics to ‘tweak’ neural, physiological and physical parameters, but we and they would be even more intrigued if it were even remotely possible that new brain models or never-before-seen body markings could emerge - that the species that we release into the wild is just the beginning of a trail of evolving, improving and surprising intelligent lifeforms. This is perhaps just a wild dream, but it is at the back of our minds as we develop the product.

A fairly obvious genetic ‘code’ can be used to control the structure of a new-born creature. Each ‘gene’ will have codons that specify the location of a ‘site’ (a group of neurones, a sensory organ, etc.), followed by a ‘type’ code that specifies what is to follow, followed by a type-specific ‘instruction’ for the item that must be built and attached to that site. For example, a gene might start with the triplet “RGI”, which perhaps denotes a particular group of 32 neurones in the NN, then “R”, which specifies that the codes to follow are going to be used to create a ‘chemical receptor site’, which will attach to that group of neurones. The nature of the receptor might next be represented by “CSF”, where “C” is the chemical that binds to that receptor, “S” specifies the sensitivity, and “F” specifies the object-specific function that is invoked when the receptor is stimulated. Other genes might code for reaction sites, emitters, dendritic wiring rules and physical appearance factors (the latter two might actually result indirectly from the specification of biochemical structures).
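Reading a gene of the form described above might look like the following sketch. The field lengths and letter meanings are assumptions drawn only from the worked example in the text (“RGI” + “R” + “CSF”); the real codon scheme would no doubt differ.

```python
# A sketch of parsing one gene: a three-letter site locator, a one-letter
# type code, then a type-specific body. Lengths and meanings are assumed
# from the worked example in the text, not from the real design.

BODY_LENGTH = {"R": 3}   # 'R' = receptor: chemical, sensitivity, function

def parse_gene(gene):
    site = gene[0:3]                     # e.g. "RGI": a group of neurones
    kind = gene[3]                       # e.g. "R": build a receptor
    body = gene[4:4 + BODY_LENGTH[kind]]
    if kind == "R":
        chem, sens, func = body          # "CSF" from the example
        return {"site": site, "type": "receptor", "chemical": chem,
                "sensitivity": sens, "function": func}
    raise ValueError("unknown gene type: " + kind)

# The gene from the worked example in the text.
spec = parse_gene("RGIRCSF")
```

Genes for reaction sites, emitters and wiring rules would simply add further entries to the type table, each with its own body format.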

A string of these genes represents a ‘chromosome’, and several of these chromosomes (which may be diploid) engage in crossing-over during ‘meiosis’, with subsequent mutation, repetition and other cutting errors. It is not hard to produce a fairly involved and sophisticated reproductive mechanism, even though the user may never know of its subtlety.
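The crossing-over and mutation step might be sketched as below. Single-point crossover and a uniform per-codon mutation rate are simplifying assumptions; the design also mentions repetition and other cutting errors, which are omitted here.

```python
# A rough sketch of the reproductive step: single-point crossing-over
# between two parental chromosomes, then point mutation. The alphabet,
# rates and crossover scheme are invented for illustration; repetition
# and other cutting errors are not modelled.

import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def crossover(mum, dad):
    """Single-point crossing-over between two gene strings."""
    point = random.randrange(1, min(len(mum), len(dad)))
    return mum[:point] + dad[point:]

def mutate(chromosome, rate=0.01):
    """Each codon has a small chance of being replaced at random."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in chromosome)

# Two hypothetical parental chromosomes of equal length.
child = mutate(crossover("RGIRCSF" * 4, "TQKRDSG" * 4))
```

Keeping mutation at the codon level, rather than inserting or deleting characters, guarantees that every child chromosome still parses into whole genes, which is one pragmatic way to ensure that most mutants remain viable.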

The genotype will be used once, at birth, to specify the makeup of the new creature. No attempt is being made to emulate a true, self-organising, morphogenetic process, as this is one of those systems whose total behaviour is currently far too hard to predict, and thus is too dangerous to include in a commercial product.

Perhaps we will be able to code for the replication process within the gene itself. This would perhaps allow the possibility that ‘viral’ DNA could become inserted into the genome, amongst other things. Simulating the transmission of disease by ‘viruses’ is an entertaining possibility, but it requires that the ‘reading’ of genetic information be performed continuously (as in protein synthesis), rather than once at conception.