But how could this capacity of self-monitoring and self-criticism develop? We want to satisfy ourselves that such a development is possible. We can postpone the true details until science gets around to discovering them, so all we need at this point is a plausible "Just So Story." (On the use and abuse of "Just So Stories," see Humphrey 1982, and the commentary and response in Dennett 1983b.) Here is such a story.
Once upon a time there were creatures who had a full complement of working sense organs "informing" them of conditions in the world, but who were entirely unconscious. Their lives, however, were quite complex—so complex that when one applied the "Need to Know" principle to them, one found that they needed to "know" (in their merely behavioral way) quite a lot about the point of their activities. In particular, they had coordination problems with other members of their species, and the apparently optimal solutions to these problems required rudimentary forms of "communication"—rather like the bee dances and other forms of social insect message-passing, but differing in the fact that when one creature communicated to another, it "knew" what it was talking about and why. That is to say, it did not just communicate in a sphexish sort of way, whenever some environmental "trigger" presented itself (like the bee "driven" to dance by the presence of sugar water in a certain location).
It (or its ancestors) had "noticed" that sometimes better results were obtainable when one "discriminated" between different audiences on different occasions, depending on what both parties "knew" and "believed" and "wanted" (in their merely behavioral way).
For instance, creature Alf wouldn't bother trying to get creature Bob to "believe" there was no food in the cave if Alf "believed" Bob already "knew" there was food in the cave. And if Bob "thought" Alf "wanted" to deceive him, Bob would be apt to "disbelieve" what Alf said.
There are Artificial Intelligence (AI) programs today that model in considerable depth the organizational structure of systems that must plan "communicative" interactions with other systems, based on their "knowledge" about what they themselves "know" and "don't know," what their interlocutor system "knows" and "doesn't know," and so forth. We may suppose that our imagined progenitors in the thought experiment were no more conscious than AI robots would be—whatever that comes to. (I have no idea what that comes to in the minds of those who insist upon it; my strategy is to concede whatever point it is to them at this stage.)
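The nested-belief bookkeeping that the Alf and Bob example relies on can be sketched in a few lines of code. The toy model below is purely illustrative (the class, its fields, and the agents are inventions for this sketch, not any actual AI program): an agent holds a model of what each other agent believes and consults it before bothering with a deceptive assertion.

```python
# A toy model of nested belief: each agent keeps, alongside its own
# beliefs, a model of what each OTHER agent believes, and consults that
# model before attempting a deceptive assertion. All names and data
# structures here are illustrative, not drawn from any real AI system.

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}        # topic -> what this agent holds true
        self.beliefs_about = {}  # other agent's name -> its modeled beliefs

    def worth_deceiving(self, topic, audience):
        """Bother with a deceptive assertion only if the audience is not
        already modeled as holding a settled belief on that topic."""
        audience_model = self.beliefs_about.get(audience.name, {})
        return topic not in audience_model

alf = Agent("Alf")
bob = Agent("Bob")

# Alf models Bob as already "knowing" there is food in the cave...
alf.beliefs_about["Bob"] = {"food in cave": True}

# ...so trying to get Bob to "believe" the opposite is pointless:
print(alf.worth_deceiving("food in cave", bob))   # False

# Toward a newcomer Alf has no such model, so the ploy is "worth" a try:
carl = Agent("Carl")
print(alf.worth_deceiving("food in cave", carl))  # True
```

The point of the sketch is only that "knowing what the other knows" requires nothing mysterious: one extra level of bookkeeping over ordinary beliefs suffices for the merely behavioral discriminations described above.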
Now it sometimes happened that when one of these creatures was stymied on a project, it would "ask for help," and in particular, it would "ask for information." Sometimes the audience present would respond by "communicating" something that had just the right effects on the inquiring creature, breaking it out of its rut, or causing it to "see" a solution to its problem. Now for this practice to gain a foothold in a community, the askers would have to be able to reciprocate on occasion in the role of answerers. That is to say, they would have to have the behavioral capacity to be provoked into making occasionally "helpful" utterances when subjected to "request" utterances of others. For instance, if one system "knew" something and was "asked" about it, this might usefully have the normal (but by no means exceptionless) effect of provoking it to "tell what it knew."
Then one fine day an "unintended" short-circuit effect of this new social institution was "noticed" by a creature. It "asked" for help in an inappropriate circumstance, where there was no helpful audience to hear the request and respond. Except itself! When the creature heard its own request, the stimulation provoked just the sort of other-helping utterance production that the request from another would have caused. And to the creature's "delight" it found that it had just provoked itself into answering its own question!
How could the activity of asking oneself questions be any less systematically futile than the activity of paying oneself a tip for making oneself a drink? So long as we adhere to "naive perfectionism" (Dawkins 1980) about the mind or self, and view it, as Descartes did, as an indivisible and perfectly self-communicating whole, the possibility that such reflexive activities could serve some purpose is hard to imagine. But think of all the occasions on which we remind ourselves, commend ourselves, promise ourselves, scold ourselves, and warn ourselves. Surely all this self-administration has some effect that preserves it so securely in our repertoires.
Under what conditions would the activity of asking oneself questions be useful? All one needs to suppose is that there is some compartmentalization and imperfect internal communication between components of a creature's cognitive system, so that one component can need the output of another component but be unable to address that component directly. Suppose the only way of getting component A to do its job is to provoke it into action by a certain sort of stimulus that normally comes from the outside, from another creature. If one day one discovers that one can play the role of this other and achieve a good result by autostimulation, the practice will blaze a valuable new communicative trail between one's internal components, a trail that happens to wander out into the public space of airwaves and acoustics. (Recent experiments with "split-brain" subjects (people whose corpus callosum has been severed, breaking the normal highway of interhemispheric communication) dramatically reveal the brain's virtuosity in finding and exploiting novel channels of communication. (Note that it is the brain or the cerebral hemisphere, not the whole person, who is to be credited with the clever discovery—the person is utterly unaware of the tricky communicative ploys the brain comes to exploit.) See Gazzaniga and LeDoux 1978.) Crudely put, pushing some information through one's ears and auditory system may stimulate just the sorts of connections one is seeking, may trip just the right associative mechanisms, tease just the right mental morsel to the tip of one's tongue. One can then say it, hear oneself say it, and thus get the answer one was hoping for.
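The compartmentalization story can be made concrete with a small toy simulation (all classes and the sample "knowledge" are invented for illustration): component A holds information but answers only question-shaped stimuli arriving over the shared public channel, normally from another creature. Speaking a question aloud routes it out onto that channel and back in through one's own ears, reaching A all the same.

```python
# A toy sketch of cognitive autostimulation: the planner has no direct
# line to component A, but A is wired to respond to any heard question,
# regardless of who spoke it. Asking aloud therefore reaches A by way
# of the "public" channel. Purely illustrative, not a brain model.

class ComponentA:
    """Holds knowledge, but only answers stimuli from the auditory channel."""
    def __init__(self):
        self._knowledge = {"where is water": "by the tall rocks"}

    def on_heard(self, utterance):
        # Triggered by ANY heard question, whoever uttered it.
        return self._knowledge.get(utterance)

class Creature:
    def __init__(self):
        self._component_a = ComponentA()

    def hear(self, utterance):
        # The auditory pathway: everything heard is passed along to A.
        return self._component_a.on_heard(utterance)

    def ask_aloud(self, question):
        # Speaking sends the question into public space; the speaker's
        # own ears pick it up, stimulating A indirectly.
        return self.hear(question)

c = Creature()
# The planner has no method that queries ComponentA's store directly;
# the only route is out through speech and back in through hearing.
print(c.ask_aloud("where is water"))  # autostimulation yields the answer
```

The design choice mirrors the text: the "valuable new communicative trail" is nothing but a detour through machinery already in place for talking to others.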
There is considerable evidence, drawn from experiments and from studies of aphasias and other disorders, showing that the processes of speech production and speech comprehension are not mirror-images of one another; in hearing and understanding a sentence, one does not more or less just suck it in through the same brain machinery one otherwise uses for formulating and uttering a sentence, only running in reverse. So there is no reason to suppose that the process of formulating, uttering, hearing, and comprehending a sentence would simply leave one back where one started cognitively. In particular, the cognitive tasks "automatically" subcontracted in the course of sentence generation and comprehension would seem to be just the right sort of jobs to stir up otherwise dormant pockets of knowledge that might contain the missing piece of some current puzzle.
So in this Just So Story, the creatures got into the habit of talking (aloud) to themselves. And they found that it often had good results—often enough, in fact, to reinforce the practice. They got better and better at it. In particular, they discovered an efficient shortcut: sotto voce talking to oneself, which later led to entirely silent talking to oneself. The silent process maintained the loop of self-stimulation, but jettisoned the peripheral vocalization and audition portions of the process, which weren't contributing much. This innovation had the further benefit, opportunistically endorsed, of achieving a certain privacy for the practice of cognitive autostimulation. And privacy was especially useful when "comprehending" members of the same species were within earshot—for we must not suppose that the "helpful" commerce that was the seed for this process was an entirely altruistic and noncompetitive affair.
Thus a variety of silent, private, talking-to-oneself behaviors evolved in a social setting in which reciprocally useful communication occurred. It was not necessarily the best imaginable sort of cognitive process. It was relatively slow and laborious (compared to other unconscious cognitive processes) because it had to make use of large tracts of machinery "intended" for other purposes (for audible speech production and comprehension). It was as linear (limited to one topic at a time) as the social communication it evolved from. And it was dependent, at least at the outset, on the public words that composed the social practice.
Suppose such a phenomenon evolved on some planet. Would we call it consciousness? Would we be inclined to include creatures endowed with these "merely behavioral" activities among our conscious and self-conscious brethren? Or would their internal information-processing activity be just more merely behavioral, unconscious pseudo-thinking? To outward appearance they would be well nigh indistinguishable from us: cooperative but also devious, communicative but also secretive, and apt on occasion to sit mumbling and staring into space until some cognitive breakthrough occurred.
Would we find some temptation welling in us to deem their internal cognitive activity conscious? If my intuition pump has done its job, you should now be feeling some temptation to judge these creatures conscious. That is all I ask: some temptation.
If you are still skeptical, note that we wouldn't have to restrict these internal activities to talking to oneself silently. Why do people draw pictures and diagrams for their own eyes to look at? Why do composers bother humming or playing their music to themselves for their own benefit? (Goodman 1982) We can suppose that the creatures in our Just So Story would also be able to engage (profitably) in internal diagramming and humming. And they would be just as capable as we are of benefitting from playing an "inner game of tennis." (Gallwey 1979)
The techniques of autostimulation are extremely various. Just as one can notice that stroking oneself in a certain way can produce certain only partially and indirectly controllable but definitely desirable effects (and one can then devote some time and ingenuity to developing and exploring the techniques for producing those desirable effects in oneself), so one can also come to recognize that talking to oneself, making pictures for oneself, singing to oneself, and so forth, are practices that often have desirable effects. Some people are better at these activities than others. Cognitive autostimulation is an acquired and intimately personal technique, with many different styles.
But suppose your deepest intuition is that these imagined creatures would still not be conscious at all—not the way you are! So be it. Then I will change my tack. Recall that this digression about consciousness was inspired by Strawson's example of the psychoanalyst who "restored freedom" to a patient by rendering that agent's behavior intelligible "in terms of conscious purposes." The appeal to conscious (as opposed to unconscious, freedom-impairing) purposes suggested that before the treatment the patient had no freedom because she herself (the famous "conscious self") was uninformed about the wellsprings of her behavior. Once she was made capable of informing herself about these wellsprings, and hence capable of bringing them into the arena of rational consideration and discussion, she was free. But our imagined creatures, once they have their linguistic and reflective information-handling capacities in good shape, would be equally susceptible to persuasion and equally able to engage in rational self-evaluation. They would be equipped to react appropriately when, as Hobbes says, we "represent reasons to them." Isn't that what freedom hinges upon, whether or not it amounts to consciousness?
(Daniel C. Dennett: Elbow Room: The Varieties of Free Will Worth Wanting, Clarendon Press, Oxford, 1984.)