
Computers and the Inferential Basis of Consciousness

Luciano Floridi

 

Abstract

The paper has three goals. The first is to introduce the “knowledge game”, a new, simple yet powerful tool for analysing some intriguing philosophical questions. The second is to apply the knowledge game as a test to discriminate between conscious and unconscious agents, depending on which version of the game they can win. And the third is to use the test to provide an answer to Dretske’s question “how do you know you are not a zombie?”.

Keywords

Artificial agents, consciousness, inferentialism, knowledge game, “muddy children” theorem, “the three wise men” theorem, zombies.


“Silently Peyton weighed his opponent. It was clearly a robot of the very highest order. […]
‘Who are you?’ exclaimed Peyton at last, addressing not the robot, but the controller behind it. […]
‘I am the Engineer.’
‘Then come out and let me see you.’
‘You are seeing me’. […]
There was no human being controlling this machine. It was as automatic as the other robots of the city – but unlike them, and all other robots the world had ever known, it had a will and a consciousness of its own.”

A. C. Clarke, The Lion of Comarre, 1949.

 

1. Introduction: how do you know you are not a zombie?

Consciousness is one of those fish we seem to be unable to catch, much like intelligence. We recognise its presence, traces and effects, but its precise nature, workings and “location” still escape our grasp. Tired of ending up empty-handed, some philosophers have recently tried to approach the problem of consciousness indirectly. If you can’t hook it, try to corner it. To this new approach belongs a series of mental experiments involving the possibility of conscious-less agents (see for example Symposium [1995]).

Imagine three populations of agents: robots (conscious-less artificial agents), zombies (conscious-less human-like agents) and humans (conscious agents). I shall say more about the first two types of agents presently. At the moment, the assumption is that you are neither a robot nor a zombie. The question is how you know it.[1] Dretske [2003] phrases the problem neatly: “I’m not asking whether you know you are not a zombie [or a robot, my addition].  Of course you do.  I’m asking how you know it.  The answer to that question is not so obvious.  Indeed, it is hard to see how you can know it.  Wittgenstein (1921/1961: 57) didn’t think he saw anything that allowed him to infer he saw it.  The problem is more serious.  There is nothing you are aware of, external or internal, that tells you that, unlike a zombie, you are aware of it.  Or, indeed, aware of anything at all.” Whatever your answer to Dretske’s question is, it will cast some light on your conception of consciousness, but before we embark on any further discussion, let me introduce our dramatis personae.

Artificial agents are not science fiction but advanced transition systems capable of interactive, autonomous and adaptable behaviour.[2] Interactivity means that artificial agents and their environments can act upon each other effectively. Autonomy means that the agents can perform internal transitions to change their states teleologically, without direct responses to interactions. This property imbues agents with a certain degree of complexity and decoupled-ness from their environments. Adaptability means that the agents’ interactions can change the transition rules by which they change states. This property ensures that agents might be viewed, at a given level of abstraction (Floridi and Sanders [2004]), as learning their own mode of operation in a way that depends critically on their past interactions and future goals.
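
As a purely illustrative sketch (none of the names or structure below comes from the paper), an artificial agent in this sense might be modelled as a simple transition system exhibiting the three properties just listed:

```python
# A minimal sketch (not from the paper) of an artificial agent as a transition
# system exhibiting interactivity, autonomy and adaptability.
class ArtificialAgent:
    def __init__(self, goal):
        self.state = 0
        self.goal = goal
        self.rule = lambda s: s + 1       # current transition rule
        self.history = []                 # record of past interactions

    def interact(self, stimulus):
        # Interactivity: agent and environment act upon each other; the
        # stimulus changes the agent, and the returned value acts back on E.
        self.state += stimulus
        self.history.append(stimulus)
        return self.state

    def autonomous_step(self):
        # Autonomy: an internal, teleological transition performed without
        # any direct response to a current interaction.
        if self.state != self.goal:
            self.state = self.rule(self.state)

    def adapt(self):
        # Adaptability: past interactions change the transition rule itself,
        # so the agent "learns" its own mode of operation.
        step = 2 if len(self.history) > 10 else 1
        self.rule = lambda s: s + step
```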

Zombies require more specifications since they do not exist. An agent Ag can be said to be conscious in four main senses. Ag may be environmentally conscious if

a.1) Ag is not “switched-off”, e.g., if Ag is not asleep, comatose, fainted, anaesthetised, drugged, hypnotised, in a state of trance, stupor, catalepsy, or somnambulism; or (disjunctive or)

a.2) Ag is able to process information about, and hence to interact with, Ag’s surroundings, its features and stimuli effectively, under normal circumstances.

But Ag may also be phenomenally conscious if

b.1) Ag experiences the qualitative, subjective, personal or phenomenological properties of a state in which Ag is. This is the sense in which Nagel [1974] speaks of being conscious of a certain state as having the experience of “what it is like to be” in that state.

Or (and this is at least an inclusive “or”, and at most a misleading place-holder for a double implication) Ag may be self-conscious if

b.2) Ag has a (second- or higher-order) sense of, or is (introspectively) aware of Ag’s personal identity (including Ag’s knowledge that Ag thinks) and (first- or lower-order) experiences, both mental and perceptual (including Ag’s knowledge of what Ag is thinking).

All four states are informational in character: e-consciousness (a.1 and a.2) is externally oriented and first-order, whereas p-consciousness (b.1) and s-consciousness (b.2) are internally oriented and second- or higher-order.

The four sketches are not definitions or even approximate analyses. They only constitute a compass to clarify our initial position, which is that zombies lack p-consciousness and s-consciousness. Zombies may also lack e-consciousness in the (a.1) sense. For example, depending on how you wish to use the mental experiment, there may be no interesting difference between a zombie and a somnambulist. However, it is unclear whether zombies are also entirely e-unconscious in the (a.2) sense. Zombies are embodied cognitive systems capable, like us and some artificial agents, of some kind of first-order informational and practical interactions with their environment. The “kind” does not have to be anything human-like; it only needs to be indistinguishable (and not even intrinsically but only by us) from an ordinary human agent’s way of dealing effectively with the world.

The literature admits many different types of zombies, depending on whether

  • consciousness and related qualia are absent, inverted, alien or dancing;
  • their equivalence or identity to conscious agents is behavioural, functional or physical; and
  • their possibility is logical, metaphysical or natural (Polger [2000] and Polger [2003] provide a clear overview).

These distinctions are crucial in the debate on materialism and the conceivability of zombies starting with Kirk [1974]. However, they can be safely ignored here, where the most minimal conditions will be assumed, namely the logical possibility of zombie-like agents that are behaviourally indistinguishable by us from their conscious counterparts but that lack (in whatever sense you wish, from strong absence to weird “dancing”) all types of consciousness, apart from some minimal sense in which consciousness is discussed in (a.2). Zombies must (appear to) be able to exchange information with the environment as effectively as any ordinary conscious agent, for otherwise it would be idle to ask how you know that you are not a zombie. This seems to be Dretske’s assumption too: “The properties you are aware of are properties of – what else? – the objects you are aware of.  The conditions and events you are conscious of – i.e., objects having and changing their properties – are, therefore, completely objective. They would be the same if you weren’t aware of them. Everything you are aware of would be the same if you were a zombie.[footnote] In having perceptual experience, then, nothing distinguishes your world, the world you experience, from a zombie’s. (my emphasis)”. And in the footnote Dretske adopts the minimalist view specified above: “For purposes of this paper I take zombies to be human-like creatures who are not conscious and, therefore, not conscious of anything – neither objects (cars, trees, people), properties (colors, shapes, orientations), events (an object falling off the table, a sunrise), or facts (that the cup fell from the table, that the sun is rising).”

 
2. The knowledge game

One way to ascertain whether x has the property P is to set up a P-test and check whether x passes it. You know you are a car driver, a chess master, a medical doctor, a scuba-diver, or that you are not visually impaired, if you satisfy some given standards or requirements, perform or behave in a certain way, win games and tournaments, pass an official examination, and so forth. This also holds true for being intelligent, at least according to Turing [1950]. I shall argue that it applies to consciousness as well. I agree with Dretske that mental and perceptual experiences may bear no hallmarks of consciousness or any further property indicating our non-zombie (and non-artificial) nature. Consciousness does not piggyback on experience, which tells us nothing over and above itself. Blame this on the transparency of consciousness itself (it is there, but you can’t see it) or on the one-dimensionality of experience (experience is experience, only experience, and nothing but experience). But this does not mean one cannot devise a reliable test for the presence of consciousness. The knowledge game is such a test.

The knowledge game is a flexible and powerful tool with which to tackle a variety of epistemic issues.[3] It exploits a classic result variously known as the “muddy children” or the “three wise men” theorem, the Drosophila of epistemic logic and distributed AI.[4] The game is played by a multi-agent system comprising a finite group of at least two interacting agents with communicational and inferential capacities. Agents are assigned specific states in such a way that acquiring a state S is something different from being in S and different again from knowing that one is in S. The states are chosen by the experimenter from a commonly known (in the technical sense of the expression introduced in epistemic logic) set of alternatives. The experimenter questions the agents about their states, and they win the game if they answer correctly. Agents can determine the nature of their state inferentially and only on the basis of the informational resources available. They cannot rely on any innate, a priori or otherwise privileged access (Alston [1971]). Most notably they have no introspection, internal diagnosis, self-testing, meta-theoretical processes, inner perception or second order capacities or thoughts.[5] Since the game blocks the system from invoking any higher-order, mental or psychological deus ex machina to ascertain directly the state in which the system is, we test the presence of consciousness indirectly and avoid the problem of dealing with consciousness by means of concepts that are at least equally troublesome.

Let me now sketch the argument. Some versions of the knowledge game can be won by all inferential agents. This guarantees initial fairness and avoids begging the question. But some more difficult versions can be won only by non-artificial agents like us and the zombies. And still others can be won only by conscious agents like us. So, if you win all versions, you are neither an artificial agent nor a zombie, and this is how you know that you are not one. And since passing the test is a sufficient but not a necessary condition for qualifying as a conscious agent, nothing is lost if you or your children do not pass it. So relax and enjoy the game.

 

3. The classic knowledge game: externally inferable states

In the classic version of the game, a guard challenges three prisoners A, B and C. He shows them five fezzes, three red and two blue, blindfolds them and makes each of them wear a red fez, thus minimising the amount of information provided. He then hides the remaining fezzes from sight. When the blindfolds are removed, each prisoner can see only the other prisoners’ fezzes. At this point, the guard says: “If you tell me the colour of your fez you will be free. But if you make a mistake or cheat you will be executed”.

He interrogates A first. A checks B’s and C’s fezzes and declares that he does not know the colour of his fez. The guard then asks B. B has heard A, checks A’s and C’s fezzes, but he too must admit he does not know. Finally, the guard asks C. C has heard both A and B and immediately answers: “My fez is red”. C is correct and the guard sets him free. As Dretske would put it: C is indeed in the state in which he thinks and says he is; the question is, how does he know it?

Take the Cartesian product of the possible colours for each prisoner. If there were three fezzes of each colour, all eight combinations would be possible, giving Table 1 (1 = red, 0 = blue):

Table 1

      a   b   c   d   e   f   g   h
A     1   1   1   1   0   0   0   0
B     1   1   0   0   1   1   0   0
C     1   0   1   0   1   0   1   0

The prisoners know that there are only two blue fezzes, so ¬h is common knowledge. This is a crucial piece of external information, without which no useful reasoning would be possible. Consider now A’s reasoning. A knows that, if B and C are both wearing blue fezzes, he must be wearing a red one (situation d). However, A says that he does not know, so now ¬d is also common knowledge. B knows that if both A and C are wearing blue fezzes, he must be wearing a red one (situation f). However, B too says that he does not know, so C also knows that ¬f. Moreover, since B knows that ¬d, he also knows that, if he sees A wearing a red fez and C wearing a blue one, then he can only have a red fez (situation b). Since B says he does not know, C also knows that ¬b. Updating Table 1 accordingly, we obtain the final Table 2, available to C (the top row indicates which agents know that the corresponding situation is excluded):

Table 2

          BC        ABC        BC        ABC
      a   ¬b    c   ¬d    e   ¬f    g   ¬h
A     1   1     1   1     0   0     0   0
B     1   1     0   0     1   1     0   0
C     1   0     1   0     1   0     1   0

At this point, the game is over, since in all the remaining situations {a, c, e, g} C is wearing a red fez. Note that C does not need to see A and B, so C could be blind. In a slightly different version, the three prisoners stand in a queue facing a wall, with A seeing B and C, B seeing C, and C looking at the wall. Despite appearances, C is still the best off.
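
The elimination steps behind Tables 1 and 2 can be replayed mechanically. The following sketch is purely illustrative (the representation of situations as colour triples and all the names are mine, not the paper’s); it enumerates the possible situations and simulates the sequential announcements:

```python
# A possible-worlds sketch of the classic (sequential) game; illustrative only.
# A world is a triple of colours for (A, B, C): 'R' = red, 'B' = blue.
from itertools import product

AGENTS = ("A", "B", "C")

def initial_worlds():
    # All assignments compatible with the common knowledge that only two
    # blue fezzes exist (this is the exclusion of situation h).
    return [w for w in product("RB", repeat=3) if w.count("B") <= 2]

def known_colour(i, world, worlds):
    """Return agent i's colour if it is settled by what i can see, else None."""
    seen = tuple(c for j, c in enumerate(world) if j != i)
    candidates = {w[i] for w in worlds
                  if tuple(c for j, c in enumerate(w) if j != i) == seen}
    return candidates.pop() if len(candidates) == 1 else None

def sequential_game(actual=("R", "R", "R")):
    worlds = initial_worlds()
    for i, name in enumerate(AGENTS):
        colour = known_colour(i, actual, worlds)
        if colour is None:
            print(f"{name}: I do not know.")
            # The public "I do not know" eliminates every world in which
            # this agent would have known its own colour.
            worlds = [w for w in worlds if known_colour(i, w, worlds) is None]
        else:
            print(f"{name}: my fez is {'red' if colour == 'R' else 'blue'}.")
            return

sequential_game()   # two "I do not know", then C's correct "red"
```

Run on situation a (all red), the sketch prints A’s and B’s “I do not know” and C’s correct answer, eliminating exactly d (after A’s answer) and b and f (after B’s answer) along the way.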


3.1. A fairer version of the game

Sometimes the agents have a letter attached to their back or a muddy forehead, or play a card game (Fagin et al. [1995]). We only used 1s and 0s. The details are irrelevant, provided we are considering externally inferable states. Given an agent Ag, a state S and an environment E, S is an externally inferable state if and only if Ag can come to know that Ag is in S purely by inference from information available in E (including the other agents’ observable states and answers), without any direct or privileged access to S.

In the classic version of the knowledge game, the prisoners exploit three environmental resources:

a) the nature and number of available states;

b) the observable states of the other prisoners;[6]

c) the other prisoners’ answers;

plus the fact they have common knowledge of (a)-(c).

Resource (c) is the only one that increases in the course of the game. This is unfair, for A can take no advantage of (c), whereas C cashes in on all the previous answers. B is the most frustrated. He knows that, given his answer, if he were C he would be able to infer his (C’s) state. The fact that B cannot answer correctly before C, despite knowing that his answer will allow C to answer correctly without (C) even looking at A and B (recall that C may be blind), shows that B knows what it is like to be C – both in the sense of being the agent whose turn it is to answer the question, and in the sense of being in C’s given state – but that C is still “another agent” to B. B knows Table 2 as well as C, but cannot put this information to any use because he is not C. If there were no “other minds”, there would be no difference in the location of B and C in the logical space of the knowledge game, but there is such a difference, so B and C are different, and B knows it.

To give a chance to every prisoner, the guard must interrogate all of them synchronically. Mutatis mutandis, the fair challenge goes like this:

Guard: “Do you know the colour of your fez?”

A, B and C together: “No”.

Now they are all in the state in which B was in the unfair game.

Guard: “Think twice. Do you know the colour of your fez?”

A, B and C together: “No”.

Now they are all in the state in which C was in the unfair game, so they can immediately add, without being asked a third time:

A, B and C together: “Yes, it is red”.

In the fair challenge, the prisoners no longer work sequentially: they work synchronically and in parallel, as a multiagent system. The system is entirely distributed. It still relies on shared memory, but there is no centralised decision-taker, planner or manager, no CPU or homunculus that collects, stores and processes information and organises the interactions between the components. An interesting consequence is a net increase in efficiency. The multiagent system can now take full advantage of the resources (a)-(c) and extract more information from the environment, in fact all the available information, by excluding more alternatives under more constraints. Hearing C’s reply in the unfair challenge we only know that he is correct, but we do not know what A and B are wearing. Hearing the system, on the contrary, we come to know that all prisoners wear a red fez (situation a).
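
A parallel sketch of the fair challenge (again illustrative only, with my own choice of names and world representation) makes both points explicit: the synchronous protocol requires two unanimous rounds of “No”, and an outside observer is then left with situation a as the only possibility:

```python
# A sketch of the fair (synchronous) challenge; illustrative only.
# Parameterised by the number of prisoners and of available blue fezzes.
from itertools import product

def possible_worlds(n_prisoners, n_blue_available):
    # Worlds compatible with the commonly known stock of fezzes.
    return [w for w in product("RB", repeat=n_prisoners)
            if w.count("B") <= n_blue_available]

def settled(i, world, worlds):
    # True if agent i's own colour is fixed by what i can see, given 'worlds'.
    seen = tuple(c for j, c in enumerate(world) if j != i)
    return len({w[i] for w in worlds
                if tuple(c for j, c in enumerate(w) if j != i) == seen}) == 1

def fair_game(actual=("R", "R", "R"), n_blue_available=2):
    worlds = possible_worlds(len(actual), n_blue_available)
    rounds = 0
    # Rounds in which every prisoner answers "No" (nobody's colour is settled).
    while not any(settled(i, actual, worlds) for i in range(len(actual))):
        rounds += 1
        print(f"Round {rounds}: every prisoner answers 'No'.")
        # The unanimous "No" is common knowledge: it eliminates every world
        # in which at least one prisoner would already have known its colour.
        worlds = [w for w in worlds
                  if not any(settled(i, w, worlds) for i in range(len(w)))]
    for i in range(len(actual)):
        if settled(i, actual, worlds):
            print(f"Prisoner {i + 1}: my fez is "
                  f"{'red' if actual[i] == 'R' else 'blue'}.")
    print("Worlds an outside observer still considers possible:", worlds)

fair_game()   # two rounds of 'No'; then all answer 'red'; only RRR remains
```

Since the loop is parameterised by the number of prisoners and of available blue fezzes, the same elimination procedure runs unchanged for any finite number of interacting systems, which anticipates the generalisation mentioned in the next subsection.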

 

3.2. Winners of the classic knowledge game

Since all the informational load is outsourced to the external environment and A, B and C are tabulae rasae that behave like inferential engines, they are replaceable by artificial agents or zombies. Three Turing machines in a network could ascertain, inferentially, that they are all switched on (three green LEDs) and not off (two red LEDs). Put them in a black box, query the box about its own state, and you will obtain a correct answer. Once the details are hidden, the box seems magically conscious of its own “switched-on” state. Indeed, the solution of the prisoners’ problem can be transformed into a computable algorithm, generalisable to any finite number of interacting computer systems, something that turns out to be quite useful in industry.[7]

In one more variant of the classic version, we can imagine that a robot, a zombie and a human prisoner team up to win the game. Externally inferable states do not allow us to discriminate between types of inferential agents. We need a tougher game and I shall suggest three alternatives. We can make the agents rely on some information made available by their newly acquired states themselves. This game will be analysed in the next section, where we shall see that Dretske is right: in this case too we cannot discriminate among different types of agents, although for different reasons from those just discussed. We can then make the agents exploit whatever information is provided by the question itself. Artificial agents lose this version of the game. Finally, we can make the agents exploit the information implicit in their own answers. And this version can be won only by conscious agents, thus providing an answer to Dretske’s question.

 

4. The second version of the knowledge game

The prisoners are shown five pairs of boots, all identical but for the fact that the three worn by the prisoners are torturing instruments that crush the feet, while the remaining two are ordinary boots. The guard plays the fair version of the challenge. Of course the three prisoners answer correctly at once. Fezzes have only useless tassels, but torturing boots can bootstrap.

Bootstrapping is a technique that uses the input of a short sequence of instructions to make a system receptive to a larger set of instructions. We can slightly adapt the term to describe the new state of wearing torturing boots because the “short” information provided by the painful state makes the agent receptive to the “larger” information that the boots it is wearing are the torturing, not the ordinary, ones.

In bootstrapping states, the information about the “large” states becomes inferable through the interaction between the “short” state, its carrier and its receiver (Barwise and Seligman [1997]).[8] Unfortunately, one cannot reach a satisfactory taxonomy of agents on these grounds. 

Bootstrapping states are useless for discriminating between humans and zombies because the inference requires no p-consciousness but only some sort of registration of the bootstrapping state as a premise to the successful inference. Since the underlying conscious life is the only difference between humans and zombies, whatever state is bootstrapping for the former may be assumed to be (at least functionally) so also for the latter and vice versa. But bootstrapping states are also useless for discriminating between artificial and non-artificial agents. 

A state is bootstrapping only relationally, depending on the source and the receiver of the information that indicates the state. So other types of agents can have their own types of bootstrapping states, which may or may not be bootstrapping for human agents (imagine the boots bear a barcode label). Since not all types of bootstrapping states are necessarily so for any human-like agent, a general bootstrapping game does not allow the necessary distinction between being able to play the game and winning it. For either the three types of agents are assessed on the basis of the same (types of) bootstrapping states accessible to all of them, in which case the game is useless, for they all win (participating is winning). Or the agents are assessed on the basis of different, i.e., their own, idiosyncratic (types of) bootstrapping states, accessible to only some of them, in which case the game is still useless, since each type of agent wins the game in which its own idiosyncratic states are in question (again, participating is winning).

One may object that precisely because some specific types of agents can be nomically associated to some types of bootstrapping states, the game can be modified so that the chosen (types of) bootstrapping states allow one to discriminate at least between artificial and non-artificial agents. But biological chauvinism won’t help, as already argued by Turing [1950]. Selecting some specific (types of) feelings or experiences or perceptions to show that we and the zombies can perceive them as bootstrapping, but artificial agents do not, would be like choosing “heart-beating” as a criterion of discrimination. First, we are back to the “participating = winning” situation, this time in the converse sense that losing the game is equivalent to being unable to play it, not to playing it unsuccessfully. This makes the game not only unfair but above all uninteresting, for it is trivial to show that agents with access to different resources perform differently. Second, the game either presupposes the difference between types of agents that it is supposed to identify, thus begging the question, or it misses a fundamental point, the indiscernibility of the differences between the bootstrapping experiences. Zombies are almost like us: they z-feel the z-pain of the bootstrap and they z-verify the corresponding z-state in ways that are either identical to ours or at least not discernibly different (for us) from them anyway. Dretske draws roughly the same conclusion when discussing zombies’ protopain. Likewise, it would be very simple to engineer artificial agents capable of a-feeling the pressure of the “painful” boot or any other bootstrapping state we may choose. In either case, as far as we know, there is no difference between experiencing, z-experiencing and a-experiencing torturing boots that can be usefully exploited within the game. So Dretske is right. Appeal to self-booting experiences won’t do. We need another version of the game.

 

5. The third version of the knowledge game

So far the players have taken advantage of (their common knowledge of) the information provided by (i) the nature and number of assignable states; (ii) the observable states of the other agents; (iii) the other agents’ answers; (iv) the assigned states, when they are bootstrapping. A source that has not yet been exploited is the question itself.

The prisoners are offered five glasses, three containing a partially-deafening beverage and two containing a totally-deafening beverage. The first thing the prisoners hear is the guard shouting his question. Of course the prisoners answer correctly at once. To them and to us, the question is trivially self-answering, yet why it is so is less obvious. The guard’s question (Q) can be received only by an agent who is not totally deaf, so the mere fact of hearing Q already tells each prisoner that his beverage was the partially-deafening one.

Self-answering questions are not the subject of much analysis in erotetic logic.[9] Perhaps they are too trivial. Sometimes they are even confused with rhetorical questions, which are really assertions under cover. Yet a self-answering question is not one that requires no answer, or for which the questioner intends to provide his or her own answer. It is a question that answers itself, and this can be achieved in several ways. The erotetic commitment of the question can be external. For example, asking a yes/no question while nodding may count as an externally, pragmatically self-answering question. Or the erotetic commitment can be internal. “How many were the four evangelists?” is an internally, semantically self-answering question. In our case, the erotetic commitment is relational. The question about the agent being in a certain state is self-answering in a more complex way, for the answer is counterfactually embedded in Q and it is so somewhat “indexically”[10] since, under different circumstances, the question or the questioning would give nothing away (henceforth this is what I shall mean by self-answering question).

For A to extract from Q the information that A is in S, something like the following is required:

  1. A, B and C can each be set in a new state, either S or ¬S
  2. A receives the information contained in (1)
  3. A is set in a new state, either S or ¬S
  4. A receives the information contained in (3)
  5. A’s new state is S
  6. A does not receive the information contained in (5)
  7. A receives the question Q about the nature of A’s new state
  8. A receives the information contained in (7)
  9. A reasons that if A were in ¬S then A would be in D (a state, like total deafness, in which Q could not be received); but if A were in D then A could not have received Q; but A received Q, so A could receive Q, so A is not in D, so A is not in ¬S; but A is in either S or ¬S, so A is in S.
  10. A answers that A is in S.
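
A toy rendition of the schema for the deafening-beverage case may help; the names and the helper function below are mine, introduced only for illustration, and the counterfactual knowledge used in step (9) is simply hard-coded:

```python
# A toy rendition (names are mine, not the paper's) of steps (1)-(10) for the
# deafening-beverage game: the agent infers its state from the bare fact
# that the shouted question Q was received at all.

STATES = {"partially_deaf", "totally_deaf"}     # S and not-S in the schema

def can_receive_spoken_question(state):
    # Counterfactual background knowledge: in state D (total deafness)
    # the shouted question Q could not have been received.
    return state != "totally_deaf"

def answer(question_received):
    if not question_received:
        return "I have received no question, so I have nothing to infer."
    # Step (9): if I were in not-S I would be in D; in D I could not have
    # received Q; I did receive Q; hence not D, hence not not-S, hence S.
    compatible = {s for s in STATES if can_receive_spoken_question(s)}
    assert compatible == {"partially_deaf"}
    return "I am in S: the beverage I drank was the partially-deafening one."

print(answer(question_received=True))
```

This also shows why such a simulation is no counterexample to the argument: the counterfactual truth is hard-wired rather than understood, exactly the ad hoc and brittle trick dismissed below.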
An interesting example of this new version of the knowledge game is provided by Hobbes and Gassendi. At different stages, they both object to Descartes that states such as “walking” or “jumping” may replace “thinking” within the Cartesian project. “Ambulo ergo sum” would do equally well, they argue. However, Descartes correctly replies that they are both mistaken. “Are you thinking?” is self-answering but “are you walking?” is not. As we shall see in the next game, zombies can jump and walk but they still cannot infer (let alone be certain) that they exist, for they do not know that they themselves are jumping and walking. We, by contrast, even if we perceive ourselves jumping and walking, may still wonder whether we are dreaming, in which case it is the activity of wondering (in other words: thinking) that one may be dreaming that makes the difference, not the dreamt state itself; or we may wonder whether we are zombies, and if so, whether we are zombies dreaming that they are walking, in which case too there is still nothing intrinsic to the jumping or to the walking that will enable us to tell the difference, i.e., to answer Dretske’s “how” question.


Extracting (as opposed to verifying or deriving) information (the erotetic commitment) about states from self-answering questions about those very states requires agents endowed with advanced semantic capacities. These are often clustered under broader and more general labels such as intellect, reason, intelligence, understanding, higher-order cognition or mind. In order to be less inclusive and to stress their procedural nature, I suggest we opt for reflection.

Reflection is not to be understood here as some higher-order awareness or cognition, if lower-order awareness or cognition is a sense of self, or consciousness. So far we have avoided relying on consciousness and we should not beg the question now. Reflection is not meant as privileged access, introspection or psychological awareness of internal states either, no matter of what kind and order. Rather, it is to be understood as the logical capacity of transcendental (backward) inference, from the question to its condition of possibility and hence to its answer.

Reflection so understood is something that artificial agents do not enjoy yet. The reader may be acquainted with “reflective” artificial agents that can win the classic knowledge game (Brazier and Treur [1999]), but that description is only evocative. Architectures or programs for computational systems (of AI) and systems for machine learning are technically called “reflective” when they contain an accessible representation of themselves that can be used e.g. to monitor and improve their performance. This seems to be what Ned Block [1995] has termed, in human agents, “access-consciousness” (see Bringsjord [1997]). But “reflective computing” is only a case of metaprogramming,[11] and the knowledge game does not contradict this deflationist view, since, strictly speaking, the axiomatization of the reasoning involved requires only standard first-order logic (McCarthy [1971-1987] and McCarthy [1990]), even if it can be analysed using e.g. epistemic logic or the BDI (Belief, Desire, Intention) architecture (Rao and Georgeff [1991]).

Current artificial agents are unable to take advantage of the self-answering nature of the question because they are intellectually and semantically impaired, a bit like Descartes’ animal automata. Reflection is an AI-complete problem, i.e., a problem whose solution presupposes a solution to the “strong AI problem”, the synthesis of a decent (to many this is synonymous with human) level of commonsensical intelligence endowed with some semantic skills. As we still lack anything even vaguely resembling a semantically proficient and intelligent artificial agent, this version of the knowledge game suffices to discriminate between them (the artificial) and us (zombies and humans). Let me now qualify this claim.

First, to be answered correctly, a self-answering question requires both understanding of the content of, and a detachment from, the question itself. Self-answering questions are part of the frame problem. A normal query works like an instruction that pushes an agent Ag into a search-space where a correct symbolic manipulation can identify the state of the agent itself. But a self-answering question pulls a reflective agent in the opposite direction, in search of what his own state must be if the question is receivable/accessible in the first place. Now, some counterfactual truths concerning a variety of type-situations can be soft-encoded, hard-wired or “interfaked” (faked through an interface) in our artificial agents. As in the Turing test, this is trivially achievable, but the claim is that this is not a solution but only an ad hoc and brittle trick. Unlike artificial agents, zombies and humans can be assumed to have a full and intelligent command of semantics, and hence enjoy a counterfactual-reflective understanding of the semantics of an open-ended number of indexical questions. In principle, if the question is self-answering, zombies and humans should be able to appreciate it, but artificial agents cannot. Any digital make-up is here only a boring “catch me if you can” of no conceptual interest.

Second, the claim comes with a “best before” date. Current artificial agents cannot answer self-answering questions insofar as the latter require understanding. What is logically possible for (or achievable at some distant time in the future by) an artificial agent is not in question, and it should not be, since it is a rather pointless question anyway. I explained at the outset that we are not assuming some science fiction scenario. “Never” is a long time, and I would not like to commit myself to any statement like “artificial agents will never be able (or are in principle unable) to answer self-answering questions”. The knowledge game cannot be used to argue that AI or AC (artificial consciousness) is impossible. In particular, its present format cannot be used to answer the question “How do you know you are not a computer simulation or a brain-in-a-vat?”. It is a test to check whether AI and AC have been achieved.[12]

Unlike artificial agents, zombies are reflective in the sense specified above, for they share with us everything but consciousness. So Descartes, Leibniz [1995] and Dretske are right: we and they can win this version of the knowledge game and nobody could spot the difference. Is there any other source of information that conscious agents can exploit inferentially but zombies cannot? Recall that the difference lies in the subjective nature of the states in the (b.1) or (b.2) senses. This is mirrored in the nature of the corresponding reports. A zombie Z knows that Z is in state S, but does not know that he is Z, or that S is his state. A human agent H, on the contrary, will find it difficult to dissociate himself from his own states, which are always “proprietary”. To H, H’s states are first of all his own states or states as he experiences them (thus-states), or of which he is conscious, at least in so far as his attention can be drawn to S. A detached (third-person or zombie-like) perspective on one’s own thus-states can be acquired, but it is not the default option, and if adopted may seem rather contrived.

This intuition can be put to work by exploiting a last source of information about the agent’s state, i.e., his own answer, thus coming full circle (in the classic version, the agents take advantage of the other agents’ answers, not yet of their own).

 

6. The fourth version of the knowledge game

The three prisoners are offered five tablets: three are completely innocuous and two cause total aphasia (the agent cannot speak). The prisoners cannot know which tablet they have taken in terms of externally inferable, bootstrapping or self-answering states. As usual, all forms of privileged or direct access to their state are excluded ex hypothesi. However, as soon as they acknowledge that they do not know, that is, as soon as they utter “I…” as in “I do not know”, of course they know at once and can answer correctly.

The three human prisoners are able to answer correctly but zombies are not, because each human prisoner knows that the “I” in “I do not know” is himself, that the voice he hears is his voice, whereas the zombies do not.[13] Zombies, lacking any form of consciousness, are unable to identify themselves as the subjects appearing in their own descriptions or to whom the voice belongs, so they cannot apply to themselves the predicates qualifying the subjects they refer to in their own descriptions. They are therefore bound to lose the new game.

The fourth version of the game is not restricted to aphasic states, of course. One can take advantage of the detachment between reporting and reported agent by identifying a state S such that the reporting agent Ag knows that he is in S only if Ag knows that he himself is the reported agent x in his own description of x. I suggest we label such states “Cartesian”, for obvious reasons.

A linguistic version of the test may help to clarify the point. In De Bello Gallico Caesar usually speaks of himself in the third person. Suppose the new tablets irreparably damage the capacity to issue third-person reports. Caesar could still use the first person to speak about himself and identify his states correctly, but the zombie would be dumbstruck. Other, less fair versions of the new game can easily be envisaged along similar lines. Take the probabilistic zombies: there are n tablets, n-m cause total aphasia and m are completely innocuous. Make n increasingly large and give the innocuous tablets to the m zombies. The system of zombies will have increasing support for the belief that it (the system) cannot speak, and it will say so, without being able to correct itself. Or we can have the cheated zombies, a game similar to the self-referential version, with five tablets, three innocuous and two causing total aphasia. This time the zombies are told that the distribution is two innocuous and three “aphasing”. After zombies A and B have mistakenly answered “I do not know”, the third infers that C (itself) has taken an “aphasic” tablet, says so and does not correct himself.

A difference between artificial agents and zombies is that the latter are capable of counterfactual reflection. A difference between zombies and human prisoners is that the latter are also capable of subjective reflection. The time has come to draw out some consequences.

 

7. Some consequences of the knowledge game

In the knowledge game considered there are four sources of information: the environment, the state, the question and the answers. Agents are assigned predetermined states using the least informative setting (the initial difference in states is not imported within the system, whose components are assigned equal states). They are then assessed according to their capacities to obtain information about their own states from these sources inferentially. Depending on the chosen state, each source can identify a type of game and hence a class of players able to win it. The communicative and logical nature of the game excludes even very intelligent mammals from participation, including infants, chimpanzees, orang-utans and bottlenose dolphins, who consistently pass the test of mirror self-recognition (Allen [2003]). This, however, is not an objection, since the question addressed in this paper is not how you (a grown-up person who can fully understand the question) know that you are an animal. Conversely, whatever the answer to the latter question is, it certainly cannot rely on some unique logical capacities you enjoy.

The knowledge game is not meant to provide a definition of intellectual or semantic abilities or of consciousness. It is not a defence of an inferential theory of consciousness either, and provides no ammunition for the displaced perception model. Like the Turing test for AI, it purports to offer something much weaker, namely a reliable criterion to discriminate between types of (inferential) agents without relying on that foggy phenomenon that is human introspection.

The criterion employed is more than a successful means of identification though, because it is also informatively rich. An informatively poor criterion would be one that used an otherwise irrelevant property P to identify x successfully. For example, at a party you may successfully identify Mary’s friend as the only man in the room – this also conveys quite a bit of extra information about him – or as the person who is closer to the window, which is true at that precise moment but otherwise very poor informatively. The knowledge game relies on relevant and significant properties that characterise agents in an informatively rich way. This is like cataloguing animals according to their diets. The trap is to reduce (or simply confuse) what matters most in the nature of x to (or with) what makes an informatively rich difference between x, which has P, and y, which lacks P. An agent Ag may have the unique capacity to infer its Cartesian states and yet this may not be the most important thing about Ag, pace Descartes. We avoid the trap if we recall that our task is to answer Dretske’s question. Consciousness-centrism may be perfectly justified and even welcome, but it is a different thesis, which requires its own defence; it is not what the knowledge game is here to support.

The last version of the game suggests a view of consciousness as subjective reflectivity, a state in which the agent and the I merge and “see each other” as the same subject. It seems that artificial agents and zombies suffer from severe schizophrenia, being entirely decoupled from their selves, whereas animals are wholly coupled to external information, with humans half-way in between. The latter appear to constitute themselves as centres of semantic comprehension, prompted by, but independently of, the environment. Consciousness is then comparable to a mathematical fixed point: it occurs as a decoupling from reality and a collapsing of the referring agent and the referred subject. I suppose this is what lies behind the description of the acquisition of consciousness as a sort of “awakening”.

This perspective has some surprising consequences. The most obvious concerns the transcendental nature of the I. In the game, Cartesian states are inferentially appropriated by the agent as his own (p-consciousness) only because the agent is already conscious of himself as himself (s-consciousness). Once unpacked, this logical priority may mean that agents are p-conscious of their perceptual/internal contents not only after but also because they are s-conscious of themselves. It certainly means that it is not true that they are s-conscious because they are p-conscious. Perceptual or internal contents of which the agent is conscious do not carry with themselves the (information that the agent has) consciousness of the content itself or of himself as an extra bonus. Perhaps s-consciousness is not constructed from perceptual and internal knowledge bottom-up but cascades on it top-down. This IBM (“I Before the Mine”) thesis is a strong reading of Searle’s view that “the ontology of the mental is an irreducibly first-person ontology” (Searle [1992], 95). Adapting Harnad’s phrase, zombies are empty homes but your home is wherever your self is.

If self-consciousness really has a logical primacy over the conscious-ed contents, I doubt whether the IBM thesis can be reconciled with some sort of naturalism. It is certainly not externalist-friendly, if by externalism one basically refers to a position about where the roots of consciousness are – outside the mind – rather than about where the search for them can start. For the knowledge game shows that, in explaining consciousness without relying on introspection, we still cannot progress very far by relying only on environmental information.

The knowledge game seems to support an internalist perspective, with an important proviso. Sometimes semantic and rational agents can obtain information about their own states only if they interact successfully in a collaborative context rather than as stand-alone individuals.[14] The external source is a “Platonic”, maieutic device, which has an eliciting role functionally inverse to the one attributed to the malicious demon by Descartes.[15] Thus, the knowledge game promotes an intersubjective conception of conscious agenthood, moving in the same direction as Grice’s Cooperative Principle and Davidson’s Charity Principle, while favouring an internalist view of self-consciousness.

 

8. Conclusion

“By intersubjective and inferential interaction” is the short answer to Dretske’s question. Here is how we reach it. Dretske asks “How do you know you are not a zombie?”. He comments: “Everything you are aware of would be the same if you were a zombie. In having perceptual experience, then, nothing distinguishes your world, the world you experience, from a zombie’s.” I agree. Dretske then asks: “This being so, what is it about this world that tells you that, unlike a zombie, you experience it?”. He answers “nothing”, and I agree, for externally inferable states are useless. Dretske finally asks: “What is it you are aware of that indicates you are aware of it?”. His and my answer is again “nothing”, since self-booting states are also useless. Dretske asks no more questions and concludes: “We are left, then, with our original question: How do you know you are not a zombie?  Not everyone who is conscious knows they are.  Not everyone who is not a zombie, knows they are not.  Infants don’t.  Animals don’t.  You do.  Where did you learn this?  To insist that we know it despite there being no identifiable way we know it is not very helpful.  We can’t do epistemology by stamping our feet. Skeptical suspicions are, I think, rightly aroused by this result. Maybe our conviction that we know, in a direct and authoritative way, that we are conscious is simply a confusion of what we are aware of with our awareness of it”. I have argued that this conclusion is premature because it does not take into account the agent’s inferential interactions with other agents. We have seen that there are two other versions of the knowledge game that clarify how you know you are neither an artificial agent nor a zombie. Just play them.

 

 Acknowledgements

I am very grateful to Olga Pombo for the invitation to give a series of lectures on the philosophy of information at the Universidade de Lisboa, during which I discussed the topic of this paper. I wish to thank the participants in these meetings for their helpful discussions. I would also like to acknowledge the useful comments and criticisms by Selmer Bringsjord, Gian Maria Greco, Patrick Grim, Paul Oldfield, Gianluca Paronitti, Jeff Sanders, and Matteo Turilli on a previous draft. Fabrizio Floridi provided me with the example of the three fezzes. If there are still obvious mistakes after so much feedback, I am the only person responsible for them.

 

References

Allen, C. 2003, "Animal Consciousness" in The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta.

Alston, W. 1971, "Varieties of Privileged Access", American Philosophical Quarterly, 8, 223-41.

Alston, W. 1986, "Epistemic Circularity", Philosophy and Phenomenological Research, 47, 1-30.

Barklund, J. 1995, "Metaprogramming in Logic" in Encyclopedia of Computer Science and Technology, edited by A.  Kent and J. G. Williams (New York: Marcel Dekker), vol. 33, 205-27.

Barwise, J. 1988, The Situation in Logic (Stanford, CA: Center for the Study of Language and Information).

Barwise, J., and Etchemendy, J. 1987, The Liar: An Essay on Truth and Circularity (New York; Oxford: Oxford University Press).

Barwise, J., and Seligman, J. 1997, Information Flow: The Logic of Distributed Systems (Cambridge: Cambridge University Press).

Block, N. 1995, "On a Confusion About a Function of Consciousness", Behavioral and Brain Sciences, 18, 227-47.

Brazier, F. M. T., and Treur, J. 1999, "Compositional Modelling of Reflective Agents", International Journal of Human-Computer Studies, 50, 407-31.

Brazier, F. M. T., Treur, J., Wijngaards, N. J. E., and Willems, M. 1995, "Formal Specification of Hierarchically (De)Composed Tasks", in Proceedings of the 9th Banff Knowledge Acquisition for Knowledge-based Systems workshop, KAW'95, Calgary, edited by B. R.  Gaines and M. A. Musen (SRDG Publications, Department of Computer Science, University of Calgary), 25/1-25/20.

Bringsjord, S. 1997, "Consciousness by the Lights of Logic and Commonsense", Behavioral and Brain Sciences, 20, 144-46.

Bringsjord, S. 1999, "The Zombie Attack on the Computational Conception of Mind", Philosophy and Phenomenological Research, 59(1), 41-69.

Conway, J. H., and Guy, R. K. 1996, The Book of Numbers (New York: Copernicus).

Costantini, S. 2002, "Meta-Reasoning: A Survey" in Computational Logic: Logic Programming and Beyond - Essays in Honour of Robert A. Kowalski, edited by A. C. Kakas and F. Sadri (Springer-Verlag).

Ditmarsch, H. P. v. 2000, Knowledge Games (Amsterdam). University of Groningen, doctoral thesis in computer science, available online at http://www.ai.rug.nl/~hans/.

Dretske, F. 2003, "How Do You Know You Are Not a Zombie?" in Privileged Access and First-Person Authority, edited by B. Gertler (Burlington: Ashgate).

Elmer, J. 1995, "Blinded Me with Science: Motifs of Observation and Temporality in Lacan and Luhmann", Cultural Critique, 30, 101-36.

Fagin, R., Halpern, J. Y., Moses, Y., and Vardi, M. Y. 1995, Reasoning About Knowledge (Cambridge, Mass ; London: MIT Press).

Floridi, L. 1996, Scepticism and the Foundation of Epistemology: A Study in the Metalogical Fallacies (Leiden: Brill).

Floridi, L., and Sanders, J. W. 2004, "The Method of Abstraction" in Yearbook of the Artificial. Nature, Culture and Technology. Models in Contemporary Sciences, edited by M. Negrotti (Bern: Peter Lang). Available online at http://www.wolfson.ox.ac.uk/~floridi/pdf/loa.ps.

Floridi, L., and Sanders, J. W. forthcoming, "On the Morality of Artificial Agents" in Ethics of Virtualities. Essays on the Limits of the Bio-Power Technologies, edited by A. Marturano and L. Introna (London: Athlone Press). Preprint available at http://www.wolfson.ox.ac.uk/~floridi/.

Groenendijk, J., and Stokhof, M. 1994, "Questions" in Handbook of Logic and Language, edited by Van Benthem and Ter Meulen (North-Holland: Elsevier Science).

Groenendijk, J. A. G., Janssen, T. M. V., and Stokhof, M. J. B. (ed.) 1984, Truth, Interpretation, and Information: Selected Papers from the Third Amsterdam Colloquium (Dordrecht, Holland ; Cinnaminson, U.S.A: Foris Publications).

Kirk, R. 1974, "Zombies Vs. Materialists", Proceedings of the Aristotelian Society, Supplementary vol. 48, 135–52.

Lacan, J. 1988, "Logical Time and the Assertion of Anticipated Certainty", Newsletter of the Freudian Field, 2, 4-22. Written in March 1945 and first published in Écrits, pp. 197-213, 1966.

Langevelde, I. A. v., Philipsen, A. W., and Treur, J. 1992, "Formal Specification of Compositional Architectures", in Proceedings of the 10th European Conference on AI, ECAI-92, edited by B. Neumann (John Wiley & Sons), 272-76.

Leibniz, G. W. 1995, "Monadology" in Philosophical Writings, edited by G. H. R. Parkinson (London: Dent; first published in Everyman's Library in 1934; published, with revisions in Everyman's University Library in 1973), 179-94.

Lycan, W. G. 2003, "Dretske’s Ways of Introspecting" in Privileged Access and First-Person Authority, edited by B. Gertler (Burlington: Ashgate).

McCarthy, J. 1971-1987, "Formalization of Two Puzzles Involving Knowledge", manuscript available online at http://www-formal.stanford.edu/jmc/puzzles.html, first published in McCarthy [1990].

McCarthy, J. 1990, Formalizing Common Sense: Papers by John McCarthy (Norwood, NJ: Ablex).

Moody, T. C. 1994, "Conversations with Zombies", Journal of Consciousness Studies, 1(2), 196-200.

Nagel, T. 1974, "What Is It Like to Be a Bat?" Philosophical Review, 83(4), 435-50.

Polger, T. W. 2000, "Zombies Explained" in Dennett's Philosophy: A Comprehensive Assessment, edited by A. Brooks, D. Ross, and D. Thompson (Cambridge, Mass.: MIT Press).

Polger, T. W. 2003, "Zombies, V. 1.0" in Field Guide to Philosophy of Mind, edited by M. Nani and M. Marraffa (http://host.uniroma3.it/progetti/kant/field/zombies.htm).

Rao, A., and Georgeff, M. 1991, "Modeling Rational Agents within a Bdi-Architecture" in Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, edited by J. Allen, R. Fikes, and E. Sandewall (San Mateo, CA: Morgan Kaufmann), 473-84.

Searle, J. R. 1992, The Rediscovery of the Mind (Cambridge, Mass ; London: MIT Press).

Shimojo, S., and Ichikawa, S. 1989, "Intuitive Reasoning About Probability: Theoretical and Experimental Analyses of the 'Problem of Three Prisoners'", Cognition, 32, 1-24.

Symposium 1995, "Symposium on "Conversations with Zombies"", Journal of Consciousness Studies, 2(4).

Turing, A. M. 1950, "Computing Machinery and Intelligence", Mind, 59(236), 433-60.

Walton, D. N. 1991, "Critical Faults and Fallacies of Questioning", Journal of Pragmatics, 15, 337-66.

Werning, M. forthcoming, "Self-Awareness and Imagination" in Mind and Action, edited by J. Saagua.

Wooldridge, M. J. 2002, An Introduction to Multiagent Systems (Chichester: J. Wiley).

 



[1] Compare this with the skeptical problem about propositional justification: one may be justified in believing that p, without this warranting that one is also able to know that one is justified (Alston [1986]). As Descartes saw, one may try to get out of this predicament by making sure that the test (for him, the method of doubt) run to check whether one is justified in believing that p brings out one’s knowledge that one is justified in believing that p (Floridi [1996]).

[2] For an introduction to agents and distributed systems see Wooldridge [2002].

[3] Two very different uses of the knowledge game can be found for example in Lacan [1988], discussed in Elmer [1995], and in Shimojo and Ichikawa [1989]. I hope to show the applicability of the knowledge game to the dreaming argument and the brain-in-a-vat or malicious demon hypothesis in another paper.

[4]  The classic version of the knowledge game has been around for decades. It is related to the Conway-Paterson-Moscow theorem and the Conway paradox (see Groenendijk et al. [1984], pp. 159-182 and Conway and Guy [1996]) and was studied, among others, by Barwise and Etchemendy [1987] and Barwise [1988]. For some indications on its history see Fagin et al. [1995], p. 13. The social game Cluedo is based on the knowledge game. Its logic is analysed in Ditmarsch [2000]. The Logics Workbench is a propositional theorem prover that uses various versions of the knowledge game as benchmarks (http://www.lwb.unibe.ch/index.html).

[5] For an approach to Dretske’s question in terms of self-awareness see Werning [forthcoming]. Lycan [2003] argues that the inner sense theory can be defended against Dretske’s criticism.

[6] The whole point of having a distributed system is that the components can communicate about their states. However, in our case this cannot be done by explicit acknowledgement of one’s state, since the experiment relies on the agents not knowing already in which states they are. Therefore, the communication must be in terms of external observation, which requires some form of access. All this is easily modelled in terms of observable states, but it does not have to be. For example, the prisoners could be blindfolded and made to choose the fez to wear, one after the other, in such a way that each would know only which fez the other two have chosen. In this case, they would have to rely on their memories of observable processes. Note, finally, that in our version the communication is verbal and explicit, but in another version the prisoners are merely asked to walk silently towards the door of the cell as soon as they know the answer. They all walk together after a given time.

[7] See for example DESIRE, a computational framework for DEsign and Specification of Interacting REasoning components, used to model distributed air traffic control on the basis of the classic version of the knowledge game (Langevelde et al. [1992]; Brazier et al. [1995]).

[8] The verification process of a bootstrapping state requires more information (the “short” state) but less time (number of logical steps) than the derivation of the same state through external inference (imagine the case in which the torturing boots are also red and the prisoners are sitting at three tables and cannot see their own boots). Thus, verification capacities confer a selective advantage on the agent displaying them.

[9] In their excellent survey, Groenendijk and Stokhof [1994] pay no attention to self-answering questions.

[10] Selmer Bringsjord and Patrick Grim have pointed out to me that this use of “indexically” may not be entirely appropriate and could generate confusion. I appreciate their concern, but I do not know of any other term that would express the point equally well. The reader is warned that the qualification is slightly unorthodox.

[11] Barklund [1995] and Costantini [2002] are two valuable surveys with further references to the “three wise men” problem.

[12] Having made this much clear, I entirely agree with Searle [1992] and Bringsjord [1999] in their criticism of computationalism. As Bringsjord writes in his defence of Searle’s position against Dennett, in current, computational AI “[…] “the person building project” will inevitably fail, but […] it will manage to produce artifacts capable of excelling in the famous Turing Test, and in its more stringent relatives. What sort of artifacts will these creatures be? I offer an unflattering one-word response: Pollock, Dennett, and like-minded researchers are busy building… zombies”. Indeed, I am even more pessimistic than Bringsjord. I do wonder, however, whether neo-Frankensteinian AI may be computer-ethically questionable (Floridi and Sanders [forthcoming]).

[13] Of course nothing hangs on the language used. “Nescio” would support the same inference.

[14] This “social” point is emphasized in Moody [1994], see also the several contributions and discussions of Moody’s position in Symposium [1995], especially Bringsjord’s convincing analysis.

[15] The examiner/guard and the questioning father in the muddy children version have a crucial role, for they guarantee common knowledge among the players, see Fagin et al. [1995]. This external source is a sort of ghost outside the machine.