Can machines be conscious? A simulated CyberChild, and five axioms.

Researching the binding problem, I came across a special edition of the Journal of Consciousness Studies about “Machine Consciousness”, and in particular the CyberChild project, as well as a lot of discussion about what consciousness really is. There's a sort of Turing test implicit here as well: can you simulate consciousness? (And if you do, is it still consciousness? And if you succeed, do you cease to be a simulation and become a chose en soi, a thing in itself?)

The special JCS edition arose from a meeting in 2001 at the Swartz Foundation to discuss whether a machine could be conscious. The edition's editor describes CyberChild as “the computer simulation of the brain, body, and environment of a very young infant; the architecture of the child’s brain is a close neural model of … the relevant parts of the mammalian nervous system …the strategy is developmental and interactive, in that the child must signal its needs to the experimenter — for example, by crying appropriately — and the experimenter must respond.”

The abstract of the article by CyberChild's creator, Rodney Cotterill, says: “The underlying model is based on the known circuitry of the mammalian nervous system, the neuronal groups of which are approximated as binary composite units. The simulated nervous system includes just two senses — hearing and touch — and it drives a set of muscles that serve vocalisation, feeding and bladder control. These functions were chosen because of their relevance to the earliest stages of human life, and the simulation has been given the name CyberChild. The system’s pain receptors respond to a sufficiently low milk level in the stomach, if there is simultaneously a low level of blood sugar, and also to a full bladder and an unchanged diaper. It is believed that it may be possible to infer the presence of consciousness in the simulation through observations of CyberChild’s behaviour, and from the monitoring of its ability to ontogenetically acquire novel reflexes.”
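The pain-receptor conditions in that abstract amount to a simple trigger: pain fires when the stomach's milk level is low while blood sugar is simultaneously low, or when the bladder is full, or the diaper is unchanged. A minimal sketch of that logic (the state fields, thresholds, and function names here are my own illustrative assumptions — the real model works with simulated neuronal groups, not explicit conditionals):

```python
# Illustrative sketch of the pain-trigger conditions described in
# Cotterill's abstract. All names and thresholds are invented for
# illustration; they are not taken from the actual CyberChild model.

from dataclasses import dataclass

@dataclass
class BodyState:
    milk_level: float      # fraction of stomach capacity, 0.0-1.0
    blood_sugar: float     # fraction of a normal level, 0.0-1.0
    bladder_full: bool
    diaper_changed: bool

def pain_receptors_fire(state: BodyState,
                        milk_threshold: float = 0.2,
                        sugar_threshold: float = 0.3) -> bool:
    """Pain fires on (low milk AND low blood sugar), a full bladder,
    or an unchanged diaper -- per the conditions in the abstract."""
    hungry = (state.milk_level < milk_threshold
              and state.blood_sugar < sugar_threshold)
    return hungry or state.bladder_full or not state.diaper_changed

# A hungry infant with low blood sugar registers pain (and would cry);
# a fed, dry, comfortable one does not.
print(pain_receptors_fire(BodyState(0.1, 0.2, False, True)))  # True
print(pain_receptors_fire(BodyState(0.8, 0.9, False, True)))  # False
```

Note that low milk alone is not enough — the abstract makes low blood sugar a simultaneous requirement for the hunger-pain pathway, while a full bladder or unchanged diaper each suffices on its own.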

You can see that this might be more conscious (though less realistic) than the Laerdal simulated baby used to train medical staff: although that is similarly programmed to react to pain, it presumably does not learn or change its behaviour.

Behaviourism is Back, an article by Majid Amini in the journal Minerva, puts Cotterill's work firmly in the neo-behaviourist school: that is, the possession of consciousness is demonstrated by the ability to make muscular (i.e. visible) responses, or “by linking conscious experience to observations from neuroscience and cortical anatomy about correlated brain activity and function, and to the descriptions of mental processing that come from psychology and cognitive science”.

I can't find anything on the internet about CyberChild after about 2003, and no summary of the results of this project. According to the Technical University of Denmark website, Prof Cotterill is currently engaged in a project which is “a formal collaboration with Bjørn Nielsen and Claes Hougaard of Interactive Television Entertainment A/S, and it aims to introduce artificial consciousness to the Internet.” I've emailed him to ask if more information is available on the internet.

There's also a website about machine consciousness which lists several projects. It also refers to five axioms, proposed by Prof. Igor Aleksander, which are prerequisites for consciousness:

Aleksander's five axioms of consciousness

Axiom 1: a sense of place
We feel that we are at the centre of an “out there” world, and we have the ability to place ourselves in the world around us

Axiom 2: imagination
We can “see” things that we have experienced in the past, and we can also conjure up things we have never seen. Reading a novel can conjure up mental images of different worlds, for example

Axiom 3: directed attention
Our thoughts are not just passive reflections of what is happening in the world – we are able to focus our attention, and we are conscious only of that to which we attend

Axiom 4: planning
We have the ability to carry out “what if?” exercises. Scenarios of future events and actions can be mapped out in our minds even if we are just sitting still

Axiom 5: decision/emotion
Emotions guide us into recognising what is good for us and what is bad for us, and into acting accordingly

The machine consciousness website also points to a US company called Imagination Engines Inc, which is active in the field of robotics. Their concept of databots seems at first glance to be a top-down attempt to do what Web 2.0 is doing from the bottom up (tags, linking, memes, click streams, etc.) – but I'm getting off my subject now.
