Computers hold the key

But how do we know that our machines really do have something like inner sensations? The key is that depictions in the brain model can be displayed on a computer screen: although the depictive neurons are not confined to one region of the model, we know exactly where they are and we can decode their messages. At the moment this cannot be done with a real brain, as even the most accurate scanner shows only very roughly which parts of the brain are active. But demonstrations of sensory depiction, depictive memory, attention and planning all currently run on our machines.
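To give a flavour of why scattered depictive neurons can still be read out as a picture, here is a minimal sketch, not the actual model: it assumes each simulated neuron carries known receptive-field coordinates (an assumption introduced here for illustration), so their joint activity can be reassembled into a viewable image.

```python
import numpy as np

# Hypothetical sketch: depictive neurons are scattered through the model,
# but each one's depicted (x, y) coordinate is known to the experimenter,
# so their activity can be decoded back into a screen image.

rng = np.random.default_rng(0)
N = 500                                    # number of depictive neurons
coords = rng.integers(0, 64, size=(N, 2))  # known (x, y) each neuron depicts
activity = rng.random(N)                   # current firing rates, 0..1

screen = np.zeros((64, 64))
for (x, y), a in zip(coords, activity):
    screen[y, x] = max(screen[y, x], a)    # brightest response wins per pixel

# 'screen' is now a 64x64 image of what the model is currently depicting
```

A real brain offers no such lookup table from neuron to depicted location, which is exactly why the same decoding cannot yet be done with a scanner.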

To test our hypothesis that much of consciousness depends on muscular interaction with the world, we have also built a mobile robot equipped with mechanisms for most of the first four axioms. It has learned to “develop an interest” in the objects in its environment, so as to plan its movements from one to another.
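One simple way to picture what “developing an interest” might mean computationally is the following toy sketch, which is my illustration rather than the robot's actual controller: each object carries an interest score that trades off against travel cost, and visiting an object makes it less novel, so the robot moves on.

```python
import math

# Hypothetical sketch of interest-driven planning: the robot repeatedly
# heads for the object with the best interest-per-travel-cost, then
# habituates to it, letting the next object win.

objects = {"ball": (2.0, 1.0), "cup": (0.0, 3.0), "box": (4.0, 4.0)}
interest = {name: 1.0 for name in objects}   # all equally novel at first
position = (0.0, 0.0)
route = []

for _ in range(3):
    def score(name):
        ox, oy = objects[name]
        dist = math.hypot(ox - position[0], oy - position[1])
        return interest[name] / (1.0 + dist)  # novelty discounted by distance

    target = max(objects, key=score)
    route.append(target)
    position = objects[target]
    interest[target] *= 0.1                   # habituate: novelty fades fast

print(route)  # → ['ball', 'cup', 'box']
```

The habituation step is what turns raw attraction into a plan: without it the robot would simply park beside the nearest interesting object.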

Will building machines like this help us understand what it is to be conscious? I believe so. Are five axioms adequate? From a deep inner questioning of what is important to me in my own claim that I am conscious, the five axioms seem to me to be a necessary minimum. But the field is open for others to add to this list.

Of course my robots will be infinitely less conscious of their worlds than I am of mine. But if their five axiomatic mechanisms are up and running, I wonder by what argument one could deny them their embryonic piece of consciousness? I may regret having said this, but I predict that machine consciousness will become a commonplace way of talking pragmatically about human consciousness. I would also predict that, in the same unspecified future, many machines will themselves claim to be conscious.