I, COMPUTER

 

Many biologists deride the idea that computers can tell us anything about human consciousness, let alone develop it themselves. Think again, says Igor Aleksander. The machines are awakening.

Will there come a day when a machine declares itself to be conscious? An increasing number of laboratories around the world are trying to design such a machine. Their efforts are not only revealing how to build artificial beings, they are also illuminating how consciousness arises in living beings.

At least, that’s how those of us doing this research see it. Others are not convinced. Generally speaking, people believe that consciousness has to do with life, evolution and humanity, whereas a machine is a lifeless thing designed by a limited mind, with no inherent feeling or humanity. So it is hardly surprising that the idea of a conscious machine strikes some people as an oxymoron.

It’s certainly fashionable among biologists searching for the roots of consciousness to be suspicious of computer-based explanations. The psychologist and writer Susan Blackmore insists that the brain does not directly represent our experience, implying that constructing a machine that is conscious in the way we are would be impossible.

Susan Greenfield of the University of Oxford is another vocal objector to the idea of machine consciousness. She argues that such computer models “focus on tasks such as learning and memory which a PC can do without subjective inner states”.

My view is that Greenfield’s theory does nothing to help us understand consciousness. And while her charge that researchers are focusing on tasks a PC can handle may be true of some efforts, the computing research with which I am involved attempts to put flesh on what it is for living organisms to have memory and learning, which has nothing to do with the capabilities of PCs.

Trying to explain the mechanisms that make us conscious is not going to be simple. But I am convinced that one way to face this complexity is to try to design conscious machines.

Laboratories around the world approach machine consciousness at a variety of levels. At one end of the spectrum are researchers creating detailed neurological models of the brain. At the other end are the unashamed users of pre-programmed rules that control the behaviour of an artificial intelligence: essentially, a computer program that gives a particular output for a specified input.

The latter may seem a rigid approach that misses the whole point of creating consciousness, but Aaron Sloman of the University of Birmingham in the UK believes it neatly clears up the confusions and contradictions that surround what consciousness is. He argues that, when it comes to consciousness, nobody really understands what they are talking about, whereas the rules he writes are unambiguous. If these rules lead to apparently conscious behaviour in machines, they must form a basis for an explanation of consciousness.
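To give a flavour of what such a rule-driven program looks like, here is a minimal sketch of an agent whose every action is traceable to a hand-written, unambiguous rule. The states and actions are invented for illustration; this is not Sloman and Chrisley's actual system.

```python
# Toy illustration of the rule-based end of the spectrum: behaviour is
# driven entirely by hand-written condition-action rules, so each output
# can be traced to a specific, unambiguous rule.
# (Illustrative only; the states and actions are made up.)

RULES = {
    # perceived state -> action
    "obstacle ahead": "turn left",
    "goal visible": "move forward",
    "battery low": "return to charger",
}

def act(perceived_state: str) -> str:
    """Return the action the rules prescribe for a given input."""
    return RULES.get(perceived_state, "wait")  # default when no rule fires

if __name__ == "__main__":
    for state in ["goal visible", "obstacle ahead", "unknown noise"]:
        print(f"{state!r:18} -> {act(state)!r}")
```

Whether behaviour generated this way could ever count as conscious is exactly the point of contention; what the approach offers, on Sloman's argument, is that there is nothing vague about the mechanism producing it.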

According to Sloman, his creations are conscious within the virtual world; the computer itself is not conscious. With his colleague Ron Chrisley, he has built various virtual creations based on rules.

Closer to the centre of the machine consciousness spectrum is Bernard Baars, a psychologist at the Neurosciences Institute in San Diego, California, who has developed a model that accepts that the brain is different from a programmed machine.

Stan Franklin, a computer scientist at the University of Memphis in Tennessee, has turned Baars’s idea into “conscious software” called IDA (short for Intelligent Distribution Agent). Each agent represents one of the competing mechanisms in Baars’s model. Franklin has created a system using IDA to help the US navy automate the work of some personnel, such as deciding how and where to billet a service person when they come off a tour of duty. This work usually involves a great deal of human knowledge, judgement, understanding and emotion. The feedback IDA gets from its users is akin to the emotional feedback humans get for performing a task well or badly, Franklin says. This helps IDA improve the way it performs its tasks, by modifying the relevance value of the rules used in the task being appraised, so that it doesn’t repeat its mistakes.
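The feedback loop Franklin describes can be pictured as a simple relevance-weighted choice among rules: the rule judged most relevant is applied, and praise or complaint from the user nudges its weight up or down. The sketch below is a hypothetical illustration of that idea, with invented rule names and a crude update step; it is not IDA's actual implementation.

```python
# Hypothetical sketch of feedback-driven rule weighting: each candidate rule
# carries a "relevance" weight, the highest-weighted rule is applied, and
# user feedback raises or lowers that weight so a mistake is less likely
# to be repeated. Rule names, weights and learning rate are all made up.

LEARNING_RATE = 0.2

# Candidate billeting rules for one imagined situation, with relevance weights.
rules = {
    "prefer nearest base": 0.5,
    "prefer sailor's home port": 0.5,
    "prefer lowest moving cost": 0.5,
}

def choose_rule(weights: dict) -> str:
    """Pick the rule with the highest current relevance."""
    return max(weights, key=weights.get)

def apply_feedback(weights: dict, rule: str, satisfied: bool) -> None:
    """Raise the applied rule's relevance after good feedback, lower it after bad."""
    delta = LEARNING_RATE if satisfied else -LEARNING_RATE
    weights[rule] = min(1.0, max(0.0, weights[rule] + delta))

if __name__ == "__main__":
    for feedback in [False, False, True]:  # two complaints, then praise
        chosen = choose_rule(rules)
        apply_feedback(rules, chosen, satisfied=feedback)
        print(f"applied {chosen!r}, satisfied={feedback}, weights={rules}")
```

On this reading, the "emotional" quality of the feedback lies simply in its role: it is an evaluation of performance that reshapes which rules the system will reach for next time.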