
Psychologism and Behaviorism

Ned Block

Let psychologism be the doctrine that whether behavior is intelligent behavior depends on the character of the internal information processing that produces it. More specifically, I mean psychologism to involve the doctrine that two systems could have actual and potential behavior typical of familiar intelligent beings, that the two systems could be exactly alike in their actual and potential behavior, and in their behavioral dispositions and capacities and counterfactual behavioral properties (i.e., what behaviors, behavioral dispositions, and behavioral capacities they would have exhibited had their stimuli differed)--the two systems could be alike in all these ways, yet there could be a difference in the information processing that mediates their stimuli and responses that determines that one is not at all intelligent while the other is fully intelligent.

This paper makes two claims: first, psychologism is true, and thus a natural behaviorist analysis of intelligence that is incompatible with psychologism is false. Second, the standard arguments against behaviorism are inadequate to defeat this natural behaviorist analysis of intelligence or to establish psychologism.

While psychologism is of course anathema to behaviorists,[1] it also seems wrong-headed to many philosophers who would not classify themselves as behaviorists. For example, Michael Dummett says:

If a Martian could learn to speak a human language, or a robot be devised to behave in just the ways that are essential to a language speaker, an implicit knowledge of the correct theory of meaning for the language could be attributed to the Martian or the robot with as much right as to a human speaker, even though their internal mechanisms were entirely different. (Dummett, 1976)

Dummett's view seems to be that what is relevant to the possession of a certain mental state is a matter of actual and potential behavior, and that internal processing is not relevant except to the extent that internal processing affects actual and potential behavior. I think that this Dummettian claim contains an important grain of truth, a grain that many philosophers wrongly take to be incompatible with psychologism.

This grain of truth can be elucidated as follows. Suppose we meet Martians, and find them to be behaviorally indistinguishable from humans. We learn their languages and they learn ours, and we develop deep commercial and cultural relations with them. We contribute to their journals and enjoy their movies, and vice versa. Then Martian and human psychologists compare notes, only to find that in underlying psychological mechanisms the Martians are very different from us. The Martian and human psychologists soon agree that the difference could be described as follows. Think of humans and Martians as if they were the products of conscious design. In any artificial intelligence project, there will be a range of engineering options. For example, suppose one wants to design a machine that makes inferences from information fed into it in the form of English sentences. One strategy would be to represent the information in the machine in English, and to formulate a set of inference rules that operate on English sentences. Another strategy would be to formulate a procedure for translating English into an artificial language whose sentences wear their logical forms on their faces. This strategy would simplify the inference rules, though at the computational cost of implementing the translation procedure. Suppose that the Martian and human psychologists agree that Martians and humans differ as if they were the products of a whole series of engineering decisions that differ along the lines illustrated. Should we conclude that the Martians are not intelligent after all? Obviously not! That would be crude human chauvinism. I suggest that philosophers reject psychologism in part because they (wrongly) see psychologism as involving this sort of chauvinism.

One of my purposes in this paper will be to show that psychologism does not in fact involve this sort of chauvinism.

If I succeed in showing psychologism to be true, I will have provided aid and comfort to those of us who have doubts about functionalism (the view that mental states are functional states--states definable in terms of their causal roles). Doubts about functionalism stem in part from the possibility of entities that look and act like people (and possess a network of internal states whose causal roles mirror those of our mental states), but differ from people in being operated by a network of homunculi whose aim is to simulate the functional organization of a person.[2] The presence of the homunculi can be used to argue that the homunculi-heads lack mentality. Defenders of functionalism are often inclined to ``bite the bullet,'' replying along the following lines: ``If I were to discover that my best friend and most valuable colleague was a homunculi-head, that should not lead me to regard him as lacking intelligence (or other aspects of mentality), since differences in internal goings-on that do not affect actual or potential behavior (or behavioral counterfactuals) are not relevant to intelligence.'' If this paper shows psychologism to be true, it blocks this line of defense of functionalism.

Let us begin the main line of argument by focusing on the well-known Turing Test. The Turing Test involves a machine in one room, and a person in another, each responding by teletype to remarks made by a human judge in a third room for some fixed period of time, e.g., an hour. The machine passes the test just in case the judge cannot tell which are the machine's answers and which are those of the person. Early perspectives on the Turing Test reflected the contemporary view of what it was for something to be intelligent, namely that it act in a certain way, a way hard to define, but easy enough to recognize.

Note that the sense of ``intelligent'' deployed here--and generally in discussion of the Turing Test[3]--is not the sense in which we speak of one person being more intelligent than another. ``Intelligence'' in the sense deployed here means something like the possession of thought or reason.

One popular way of construing Turing's proposal is as a version of operationalism. ``Being intelligent'' is defined as passing the Turing Test, if it is administered (or alternatively, a la Carnap: if a system is given the Turing Test, then it is intelligent if and only if it passes). Construed operationally, the Turing Test conception of intelligence shares with other forms of operationalism the flaw of stipulating that a certain measuring instrument (the Turing Test) is infallible. According to the operationalist interpretation of the Turing Test as a definition of intelligence, it is absurd to ask of a device that passes the Turing Test whether it is really intelligent, and it is equally absurd to ask of a device that fails it whether it failed for some extraneous reason, but is nonetheless intelligent.

This difficulty can be avoided by going from the crude operationalist formulation to a familiar behavioral disposition formulation. On such a formulation, intelligence is identified not with the property of passing the test (if it is given), but rather with a behavioral disposition to pass the test (if it is given). On this behaviorist formulation, failing the Turing Test is not taken so seriously, since we can ask of a system that fails the test whether the failure really does indicate that the system lacks the disposition to pass the test. Further, passing the test is not conclusive evidence of a disposition to pass it, since, for example, the pass may have been accidental.

But the new formulation is nonetheless subject to deep difficulties. One obvious difficulty is its reliance on the discriminations of a human judge. Human judges may be able to discriminate too well--that is, they may be able to discriminate some genuinely intelligent machines from humans. Perhaps the responses of some intelligent machines will have a machinish style that a good human judge will be able to detect.

This problem could be avoided by altering the Turing Test so that the judge is not asked to say which is the machine, but rather is asked to say whether one or both of the respondents are, say, as intelligent as the average human. However, this modification introduces circularity, since ``intelligence'' is defined in terms of the judge's judgments of intelligence. Further, even ignoring the circularity problem, the modification is futile, since the difficulty just crops up in a different form: perhaps human judges will tend chauvinistically to regard some genuinely intelligent machines as unintelligent because of their machinish style of thought.

More importantly, human judges may be too easily fooled by mindless machines. This point is strikingly illustrated by a very simple program (Boden, 1977; Weizenbaum, 1966) (two hundred lines in BASIC), devised by Joseph Weizenbaum, which can imitate a psychiatrist by employing a small set of simple strategies. Its major technique is to look for key words such as ``I,'' ``you,'' ``alike,'' ``father,'' and ``everybody.'' The words are ranked--for example, ``father'' is ranked above ``everybody,'' and so if you type in ``My father is afraid of everybody,'' the machine will respond with one of its ``father'' responses, such as ``What else comes to mind when you think of your father?'' If you type in ``I know everybody laughed at me,'' you will get one of its responses to ``everybody,'' for example, ``Who in particular are you thinking of?'' It also has techniques that simultaneously transform ``you'' into ``I'' and ``me'' into ``you,'' so that if you type in ``You don't agree with me,'' it can reply: ``Why do you think that I don't agree with you?'' It also stores sentences containing certain key words such as ``my.'' If your current input contains no key words, but if you had earlier said ``My boyfriend made me come here,'' it will ``ignore'' your current remark, saying instead, ``Does that have anything to do with the fact that your boyfriend made you come here?'' If all other tricks fail, it has a list of last-ditch responses such as, ``Who is the psychiatrist here, you or me?'' Though this system is totally without intelligence, it proves remarkably good at fooling people in short conversations. Of course, Weizenbaum's machine rarely fools anyone for very long if the person has it in mind to explore the machine's capacities. But the program's extraordinary success (Weizenbaum's secretary asked him to leave the room in order to talk to the machine privately) reminds us that, human gullibility being what it is, some more complex (but nonetheless unintelligent) program may be able to fool almost any human judge. Further, since our tendency to be fooled by such programs seems dependent on our degree of suspicion, sophistication about machines, and other contingent factors, it seems silly to adopt a view of the nature of intelligence or thought that so closely ties it to human judgment. Could the issue of whether a machine in fact thinks or is intelligent depend on how gullible human interrogators tend to be?
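
The following is a minimal sketch, in Python, of the kind of keyword strategies just described: ranked key words, canned responses, pronoun swapping, and a stored ``my ...'' sentence used as a fallback. It is not Weizenbaum's actual program; the keyword list, responses, and function names are all illustrative assumptions, meant only to show how little machinery the trick requires.

```python
import random
import re

# Ranked keywords (highest rank first) with canned responses; all contents
# are toy illustrations, not Weizenbaum's actual tables.
KEYWORDS = [
    ("father", ["What else comes to mind when you think of your father?"]),
    ("everybody", ["Who in particular are you thinking of?"]),
    ("you", ["Why do you think that {swapped}?"]),
]
LAST_DITCH = ["Who is the psychiatrist here, you or me?", "Please go on."]
PRONOUN_SWAPS = {"you": "I", "i": "you", "me": "you", "your": "my", "my": "your"}

stored_my_sentence = None  # remembered "my ..." remark for the fallback trick

def swap_pronouns(text):
    words = [PRONOUN_SWAPS.get(w.lower(), w) for w in re.findall(r"[\w']+", text)]
    return " ".join(words)

def respond(line):
    global stored_my_sentence
    lowered = line.lower()
    if lowered.startswith("my "):
        stored_my_sentence = swap_pronouns(line.rstrip(".!?"))
    for keyword, responses in KEYWORDS:          # ranked: first hit wins
        if keyword in lowered:
            return random.choice(responses).format(
                swapped=swap_pronouns(line.rstrip(".!?")))
    if stored_my_sentence:                        # no keyword: fall back on memory
        return ("Does that have anything to do with the fact that "
                f"{stored_my_sentence}?")
    return random.choice(LAST_DITCH)

print(respond("My father is afraid of everybody."))   # one of the "father" responses
print(respond("You don't agree with me."))             # pronoun-swapped reply
```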

In sum, human judges may be unfairly chauvinist in rejecting genuinely intelligent machines, and they may be overly liberal in accepting cleverly-engineered, mindless machines.

The problems just described could be avoided if we could specify in a non-question-begging way what it is for a sequence of responses to verbal stimuli to be a typical product of one or another style of intelligence. For then we would be able to avoid the dependence on human powers of discrimination that lies at the root of the problems of the last paragraph. Let us suppose, for the sake of argument, that we can do this, that is, that we can formulate a non-question-begging definition--indeed, a behavioristically acceptable definition--of what it is for a sequence of verbal outputs to be, as we shall say, ``sensible,'' relative to a sequence of inputs. Though of course it is very doubtful that ``sensible'' can be defined in a non-question-begging way, it will pay us to suppose it can, for as we shall see, even such a definition would not save the Turing Test conception of intelligence.

The role of the judge in Turing's definition of intelligence is to avoid the problem of actually specifying the behavior or behavioral dispositions thought to constitute intelligence. Hence my supposition that ``sensible'' can be defined in a non-question-begging way amounts to the suggestion that we ignore one of the usual criticisms of behaviorists--that they cannot specify their behavioral dispositions in a non-question-begging way. This is indeed an enormous concession to behaviorism, but it will not play an important role in what follows.

We can now propose a version of the Turing Test conception of intelligence that avoids the problems described:

Intelligence (or more accurately, conversational intelligence) is the disposition to produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be.

The point of the ``whatever they may be'' is to emphasize that this account avoids relying on anyone's ability to come up with clever questions; for in order to be intelligent according to the above-described conception, the system must be disposed to respond sensibly not only to what the interlocutor actually says, but to whatever he might have said as well.

While the definition just given is a vast improvement (assuming that ``sensible'' can be adequately defined), it is still a clearly behaviorist formulation. Let us now review the standard arguments against behaviorism with an eye towards determining whether the Turing Test conception of intelligence is vanquished by them.

Probably the most influential argument against behaviorism is due to Chisholm and Geach (Chisholm, 1957). Suppose a behaviorist analyzes someone's wanting an ice cream cone as his having a set of behavioral dispositions such as the disposition to grasp an ice cream cone if one is ``presented'' to him. But someone who wants an ice cream cone will be disposed to grasp it only if he knows it is an ice cream cone (and not in general if he thinks it is a tube of axle grease being offered to him as a joke) and only if he does not believe that taking an ice cream cone would conflict with other desires of more importance to him (for example, the desire to avoid an obligation to return the favor). In short, which behavioral dispositions a desire issues in depends on the other mental states of the desirer. And similar points apply to behaviorist analyses of belief and of many other mental states. Conclusion: one cannot define the conditions under which a given mental state will issue in a given behavioral disposition without adverting to other mental states.

Another standard argument against behaviorism flows out of the Chisholm-Geach point. If a person's behavioral dispositions depend on a group of mental states, perhaps different mental groups can produce the same behavioral dispositions. This line of thought gave rise to the ``perfect actor'' family of counterexamples. As Putnam (1975b) argued in convincing detail, it is possible to imagine a community of perfect actors (Putnam's super-super-spartans) who, in virtue of lawlike regularities, lack the behavioral dispositions envisioned by the behaviorists to be associated with pain, even though they do in fact have pain. This shows that no behavioral disposition is necessary for pain, and an exactly analogous example of perfect pain-pretenders shows that no behavioral disposition is sufficient for pain either.

Another less important type of traditional counterexample to behaviorism is illustrated by paralytics and brains in vats. Like Putnam's super-super-spartans, they can have pain without the usual dispositions.

When I speak of the ``standard objections to behaviorism'' in what follows, I shall have these three types of objection in mind: the Chisholm-Geach objection, the perfect actor objection, and the objection based on paralytics and the like.[4]

1 Do the Standard Objections to Behaviorism Dispose of Behaviorist Conceptions of Intelligence?

The three arguments just reviewed are generally and rightly regarded as decisive refutations of behaviorist analyses of many mental states, such as belief, desire, and pain. Further, they serve to refute one quite plausible behaviorist analysis of intelligence. Intelligence is plausibly regarded as a second order mental property, a property that consists in having first order mental states--beliefs, desires, etc.--that are caused to change in certain ways by changes in one another and in sensory inputs. If intelligence is indeed such a second order property, and given that the behaviorist analyses of the first order states are false, one can conclude that a plausible behaviorist view of intelligence is false as well.[5]

But it would be unfair to behaviorism to leave the matter here. Behaviorists generally mean their dispositions to be ``pure dispositions.'' Ryle (1949), for example, emphasized that ``to possess a dispositional property is not to be in a particular state or to undergo a particular change.'' Brittleness, according to Ryle, is not a cause of breaking, but merely the fact of breaking easily. Similarly, to attribute pain to a person is not to attribute a cause or effect of anything, but simply to say what he would do in certain circumstances. However, the notion just mentioned of intelligence as a second order property is at its most plausible when first order mental states are thought of as entities that have causal roles. Since pure dispositions do not have causal roles in any straightforward sense, the analysis of intelligence as a second order property should seem unsatisfactory to a behaviorist, even if it is the right analysis of intelligence. Perhaps this explains why behaviorists and behaviorist-sympathizers do not seem to have adopted a view of intelligence as a second order property.

Secondly, an analysis of intelligence along roughly the lines indicated in what I called the Turing Test conception of intelligence is natural for the behaviorist because it arises by patching a widely known operationalist formulation. It is not surprising that such a position is popular in artificial intelligence circles.[6] Further, it seems to be regarded sympathetically by many philosophers who accept the standard arguments against behaviorist analyses of beliefs, desires, etc.[7]

Another attraction of an analysis along the lines suggested by the Turing Test conception of intelligence is that such an analysis can escape the standard objections to behaviorism. If I am right about this, then it would certainly be foolish for the critic of behaviorism to regard behaviorism with respect to intelligence as obliterated by the standard objections, ignoring analyses along the lines of the Turing Test conception of intelligence. For these reasons, I will now return to an examination of how well the Turing Test conception of intelligence fares when faced with the standard objections.

The Turing Test conception of intelligence offers a necessary and sufficient condition of intelligence. The standard objections are effective against the necessary condition claim, but not against the sufficient condition claim. Consider, for example, Putnam's perfect actor argument. The super-super-spartans have pain, though they have no disposition to pain behavior. Similarly, a machine might be intelligent, but not be disposed to act intelligently because, for example, it might be programmed to believe that acting intelligently is not in its interest. But what about the converse sort of case? A perfect actor who pretends to have pain seems as plausible as the super-super-spartans who pretend to lack pain, but this sort of perfect actor case does not seem to transfer to intelligence. For how could an unintelligent system perfectly pretend to be intelligent? It would seem that any system that is that good at pretending to be intelligent would have to be intelligent. So no behavioral disposition is necessary for intelligence, but as far as this standard objection is concerned, a behavioral disposition may yet be sufficient for intelligence. A similar point applies with respect to the Chisholm-Geach objection. The Chisholm-Geach objection tells us that a disposition to pain behavior is not a sufficient condition of having pain, since the behavioral disposition could be produced by a number of different combinations of mental states, e.g., [pain + a normal preference function] or [no pain + an overwhelming desire to appear to have pain]. Turning to intelligent behavior, we see that it normally is produced by [intelligence + a normal preference function]. But could intelligent behavior be produced by [no intelligence + an overwhelming desire to appear intelligent]? Indeed, could there be any combination of mental states and properties not including intelligence that produces a lawful and thoroughgoing disposition to act intelligently? It seems not. So it seems that the Chisholm-Geach objection does not refute the claim of the Turing Test conception that a certain disposition is sufficient for intelligence.

Finally, the standard paralytic and brain in the vat examples are only intended to apply to claims of necessary conditions--not sufficient conditions--of mental states.

The defect I have just pointed out in the case against the behaviorist view of intelligence is a moderately serious one, since behaviorists have tended to focus on giving sufficient conditions for the application of mental terms (perhaps in part because of their emphasis on the connection between the meaning of ``pain'' and the circumstances in which we learned to apply it). Turing, for example, was willing to settle for a ``sufficient condition'' formulation of his behaviorist definition of intelligence.[8] One of my purposes in this paper is to remedy this defect in the standard objections to behaviorism by showing that no behavioral disposition is sufficient for intelligence.

I have just argued that the standard objections to behaviorism are only partly effective against the Turing Test conception of intelligence. I shall now go one step further, arguing that there is a reformulation of the Turing Test conception of intelligence that avoids the standard objections altogether. The reformulation is this: substitute the term ``capacity'' for the term ``disposition'' in the earlier formulation. As mentioned earlier, there are all sorts of reasons why an intelligent system may fail to be disposed to act intelligently: believing that acting intelligently is not in its interest, paralysis, etc. But intelligent systems that do not want to act intelligently or are paralyzed still have the capacity to act intelligently, even if they do not or cannot exercise this capacity.

Let me say a bit more about the difference between a behavioral disposition and a behavioral capacity. A capacity to behave in a given way need not result in a disposition to behave in that way unless certain internal conditions are satisfied--say, the appropriate views or motivation or not having curare in one's bloodstream. To a first approximation, a disposition can be specified by a set (perhaps infinite) of input-output conditionals.

If I1 obtains, then O1 is emitted
If I2 obtains, then O2 is emitted
and so on.[9]

A corresponding first stab at a specification of a capacity, on the other hand, would involve mentioning internal states in the antecedents of the conditionals.

If I1 and S1 obtain, then O1 is emitted
If I2 and S2 obtain, then O2 is emitted
and so on,

where S1 and S2 are internal states.[10] In humans, such states would include beliefs and desires and working input and output organs at a minimum, though a machine could have a capacity the exercise of which is contingent only on nonmental internal parameters, e.g., whether its fuses are intact.
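
As a rough illustration of the two conditional schemas above, here is a minimal Python sketch, with all names (the lookup tables, the particular I's, O's, and S's) purely hypothetical: a disposition maps inputs to outputs outright, while a capacity yields an output only when the relevant internal condition also obtains.

```python
# Sketch of the distinction: "if I_k obtains, then O_k is emitted" versus
# "if I_k and S_k obtain, then O_k is emitted". All names are illustrative.

def disposition(stimulus, io_table):
    """A pure disposition: each input I_k determines an output O_k outright."""
    return io_table.get(stimulus)

def capacity(stimulus, internal_state, io_table, required_states):
    """A capacity: the same input-output pairs, but the output is emitted only
    when the required internal condition S_k (motivation, working organs,
    intact fuses, ...) also obtains."""
    if internal_state == required_states.get(stimulus):
        return io_table.get(stimulus)
    return None  # the capacity is possessed but not exercised

io_table = {"I1": "O1", "I2": "O2"}
required_states = {"I1": "S1", "I2": "S2"}
print(disposition("I1", io_table))                           # O1
print(capacity("I1", "S1", io_table, required_states))       # O1
print(capacity("I1", "not-S1", io_table, required_states))   # None: think of the paralytic
```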

What I have said about the difference between a disposition and a capacity is very sketchy, and clarification is needed, especially with regard to the question of what sorts of internal states are to be specified in the antecedents of the conditionals. If paralytics are to be regarded as possessing behavioral capacities, these internal states will have to include specifications of functioning input and output devices. And if the systems that believe that acting intelligently is not in their interest are to have the required capacity, internal states will have to be specified such that if they were to obtain, the system would believe that acting intelligently is in its interest. Notice, however, that the behaviorist need not be committed to these mentalistic descriptions of the internal states; physiological or functional descriptions will do.[11]

The reader may suspect that the reformulation of behaviorism in terms of capacities that I have suggested avoids the standard objections to behaviorism only because it concedes too much. The references to internal states--even under physiological or functional descriptions--may be seen as too great a concession to psychologism (or other nonbehavioristic doctrines) for any genuine behaviorist to make. I reply: so much the better for my purposes, for I intend to show that this concession is not enough, and that the move from behavioral dispositions to behavioral capacities will not save behaviorism.

I now propose the reformulation suggested by the preceding remarks; let us call it the neo-Turing Test conception of intelligence.

Intelligence (or, more accurately, conversational intelligence) is the capacity to produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be.

Let us briefly consider the standard objections to behaviorism in order to show that the neo-Turing Test conception avoids them. First, intelligent paralytics and brains in vats provide no counterexample, since they do have the capacity to respond sensibly, though they lack the means to exercise the capacity. Second, consider the ``perfect actor'' objection. An intelligent being who perfectly feigns stupidity nonetheless has the capacity to respond sensibly. Further, as in the disposition case, it would seem that no one could have the capacity to pretend perfectly to be intelligent without actually being intelligent. Third, the new formulation entirely disarms the Chisholm-Geach objection. There are many combinations of beliefs and desires that could cause an intelligent being to fail to be disposed to respond sensibly, but these beliefs and desires would not destroy the being's capacity to respond sensibly. Further, as I have mentioned repeatedly, it is hard to see how any combination of mental states not including intelligence could result in the capacity to respond in an intelligent manner to arbitrary sequences of stimuli.

One final point. Notice that my concession that ``sensible'' can be defined in a behavioristically adequate way is not what is responsible for the fact that the neo-Turing Test conception of intelligence evades the standard objections. What does the job is, first, the difficulty of conceiving of someone who can pretend perfectly to be intelligent without actually being intelligent, and second, the move from dispositions to capacities.

2 The Argument for Psychologism and Against Behaviorism

My strategy will be to describe a machine that produces (and thus has the capacity to produce) a sensible sequence of verbal responses to verbal stimuli. The machine is thus intelligent according to the neo-Turing Test conception of intelligence (and also according to the cruder versions of this conception). However, according to me, a knowledge of the machine's internal information processing shows conclusively that it is totally lacking in intelligence.

I shall now describe my unintelligent machine. First, we require some terminology. Call a string of sentences whose members can be typed by a human typist one after another in an hour or less, a typable string of sentences. Consider the set of all typable strings of sentences. Since English has a finite number of words (indeed, a finite number of typable letter strings), this set has a very large, but nonetheless finite, number of members. Consider the subset of this set which contains all and only those strings which are naturally interpretable as conversations in which at least one party's contribution is sensible in the sense described above. Call a string which can be understood in this way a sensible string. For example, if we allot each party to a conversation one sentence per ``turn'' (a simplification I will continue to use), and if each even-numbered sentence in the string is a reasonable conversational contribution, then the string is a sensible one. We need not be very restrictive as to what is to count as sensible. For example, if sentence 1 is ``Let's see you talk nonsense,'' it would be sensible for sentence 2 to be nonsensical. The set of sensible strings so defined is a finite set that could in principle be listed by a very large and clever team working for a long time, with a very large grant and a lot of mechanical help, exercising imagination and judgment about what is to count as a sensible string.

Presumably the programmers will find that in order to produce really convincing sensible strings, they will have to think of themselves as simulating some definite personality with some definite history. They might choose to give the responses my Aunt Bertha might give if she were brought to a room with a teletype by her errant nephew and asked to answer ``silly'' questions for a time.

Imagine the set of sensible strings recorded on tape and deployed by a very simple machine as follows. The interrogator types in sentence A. The machine searches its list of sensible strings, picking out those that begin with A. It then picks one of these A-initial strings at random, and types out its second sentence, call it ``B''. The interrogator types in sentence C. The machine searches its list, isolating the strings that start with A followed by B followed by C. It picks one of these ABC-initial strings and types out its fourth sentence, and so on.[12]
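
A minimal Python sketch of the string-searcher just described may help; the list of sensible strings is assumed to be given (here a hand-written toy list standing in for the programmers' exhaustive enumeration), and matching is simplified to exact sentence equality.

```python
import random

# Toy stand-in for the programmers' enumerated set of sensible strings:
# each entry is a whole conversation, alternating judge / machine sentences.
SENSIBLE_STRINGS = [
    ["Hello.", "Hello, how are you?", "Fine, thanks.", "Glad to hear it."],
    ["Hello.", "Hello, how are you?", "Terrible.", "I'm sorry to hear that."],
    ["Let's see you talk nonsense.", "Flibber globber wump."],
]

def respond(conversation_so_far):
    """Find every sensible string that begins with the conversation so far,
    pick one at random, and emit its next sentence. No understanding is
    involved; all the work was done in advance by whoever built the list."""
    n = len(conversation_so_far)
    candidates = [s for s in SENSIBLE_STRINGS
                  if s[:n] == conversation_so_far and len(s) > n]
    if not candidates:
        return None  # the programmers failed to anticipate this input
    return random.choice(candidates)[n]

# One exchange: the judge types A, the machine types B.
history = ["Hello."]
history.append(respond(history))
print(history)
```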

The reader may be helped by seeing a variant of this machine in which the notion of a sensible string is replaced by the notion of a sensible branch of a tree structure. Suppose the interrogator goes first, typing in one of the sentences A1, ..., An. The programmers produce one sensible response to each of these sentences, B1, ..., Bn. For each of B1, ..., Bn, the interrogator can make various replies, so many branches will sprout below each of B1, ..., Bn. Again, for each of these replies, the programmers produce one sensible response, and so on. In this version of the machine, all the X-initial strings can be replaced by a single tree with a single token of X as the head node; all the XYZ-initial strings can be replaced by a branch of that tree with Y and Z as the next nodes, and so forth. This machine is a tree-searcher instead of a string-searcher.
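
Here is a corresponding sketch of the tree-searching variant, again with toy contents and illustrative names only: the programmers' one response to each possible judge remark is stored as a nested dictionary, so shared prefixes of conversations are stored once rather than once per string.

```python
# Nested-dictionary tree: each judge remark maps to (canned response, subtree).
TREE = {
    "Hello.": ("Hello, how are you?", {
        "Fine, thanks.": ("Glad to hear it.", {}),
        "Terrible.": ("I'm sorry to hear that.", {}),
    }),
    "Let's see you talk nonsense.": ("Flibber globber wump.", {}),
}

def converse(tree, judge_remarks):
    """Walk down the tree, emitting the programmers' canned response at each
    node reached by the judge's remarks so far."""
    replies = []
    for remark in judge_remarks:
        if remark not in tree:
            return replies  # unanticipated input: the tree runs out
        response, tree = tree[remark]
        replies.append(response)
    return replies

print(converse(TREE, ["Hello.", "Terrible."]))
# ['Hello, how are you?', "I'm sorry to hear that."]
```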

So long as the programmers have done their job properly, such a machine will have the capacity to emit a sensible sequence of verbal outputs, whatever the verbal inputs, and hence it is intelligent according to the neo-Turing Test conception of intelligence. But actually, the machine has the intelligence of a toaster. All the intelligence it exhibits is that of its programmers. Note also that its limitation to Turing Tests of an hour's length is not essential. For a Turing Test of any given length, the machine could in principle be programmed in just the same way to pass a Turing Test of that length.

I conclude that the capacity to emit sensible responses is not sufficient for intelligence, and so the neo-Turing Test conception of intelligence is refuted (along with the older and cruder Turing Test conceptions). I also conclude that whether behavior is intelligent behavior is in part a matter of how it is produced. Even if a system has the actual and potential behavior characteristic of an intelligent being, if its internal processes are like those of the machine described, it is not intelligent. So psychologism is true.

I haven't shown quite what I advertised initially, since I haven't shown that the machine could duplicate the response properties of a real person. But what I have shown is close enough for me, and besides, it doesn't change the essential point of the example if we imagine the programmers deciding exactly what Aunt Bertha would say on the basis of a psychological or physiological theory of Aunt Bertha.

We can now see why psychologism is not incompatible with the point made earlier in connection with the Martian example. The Martian example suggested that it was doubtful that there would be any single natural kind of information processing that must be involved in the production of all intelligent behavior. (I argued that it would be chauvinist to refuse to classify Martians as intelligent merely because their internal information processing is very different from ours.) Psychologism is not chauvinist because psychologism requires only that intelligent behavior not be the product of a (at least one) certain kind of internal processing. One can insist that behavior which has a certain etiology cannot be intelligent behavior without holding that all intelligent behavior must have the same ``kind'' of etiology.

The point of the machine example may be illuminated by comparing it with a two-way radio. If one is speaking to an intelligent person over a two-way radio, the radio will normally emit sensible replies to whatever one says. But the radio does not do this in virtue of a capacity to make sensible replies that it possesses. The two-way radio is like my machine in being a conduit for intelligence, but the two devices differ in that my machine has a crucial capacity that the two-way radio lacks. In my machine, no causal signals from the interrogators reach those who think up the responses, but in the case of the two-way radio, the person who thinks up the responses has to hear the questions. In the case of my machine, the causal efficacy of the programmers is limited to what they have stored in the machine before the interrogator begins.

The reader should also note that my example is really an extension of the traditional perfect pretender counterexample, since the machine ``pretends'' to be intelligent without actually being intelligent. Once one notes this, it is easy to see that a person could have a capacity to respond intelligently, even though the intelligence he exhibits is not his--for example, if he memorizes responses in Chinese though he understands only English.[13] An idiot with a photographic memory, such as Luria's famous mnemonist, could carry on a brilliant philosophical conversation if provided with strings by a team of brilliant philosophers.[14]

The machine, as I have described it thus far, is limited to typewritten inputs and outputs. But this limitation is inessential, and that is what makes my argument relevant to a full-blooded behaviorist theory of intelligence, not just to a theory of conversational intelligence. What I need to show to make my point is that the kind of finiteness assumption that holds with respect to typewritten inputs and outputs also holds with respect to the whole range of sensory stimulation and behavior. If I can show this, then I can generalize the idea of the machine I described to an unintelligent robot that nonetheless acts in every possible situation just as intelligently as a person.

The sort of finiteness claim that I need can be justified both empirically and conceptually. The empirical justifications are far too complex to present here, so I will only mention them briefly. First, I would claim that enough is now known about sensory physiology to back up the assertion that every stimulus parameter that is not already ``quantized'' could be quantized without making any difference with respect to effects on the brain or on behavior, provided that the ``grain'' of quantization is fine enough. Suppose that all of your sense organs were covered by a surface that effected an ``analog-to-digital conversion.'' For example, if some stimulus parameter had a precisely specified analog value, the surface might change it to that value rounded to a fixed number of decimal places. Provided that the grain was fine enough (not too many decimal places are ``lopped off''), the analog-to-digital conversion would make no mental or behavioral difference. If this is right, then one could take the output of the analog-digital converter as the relevant stimulus, and so there would be a finite number of possible sequences of arrays of stimuli in a finite time.
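
The ``analog-to-digital conversion'' point can be made concrete with a few lines of Python; the grain size and the range of the stimulus parameter here are arbitrary illustrative choices, not figures from the text.

```python
# With a fine enough grain, a quantizing surface over the sense organs leaves
# only finitely many distinguishable stimulus values. Figures are arbitrary.
def quantize(value, decimal_places=3):
    """Round an analog stimulus parameter to a fixed grain."""
    return round(value, decimal_places)

grain = 3
# Any analog value in [0, 100) now maps to one of finitely many stimuli:
number_of_possible_values = 100 * 10**grain
print(quantize(77.6598234), number_of_possible_values)  # 77.66 100000
```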

I am told that a similar conclusion can actually be reached with respect to any physical system that can be regarded as having inputs and outputs. The crucial claim here is that no physical system could be an infinitely powerful amplifier, so given a ``power of amplification,'' one could impose a corresponding quantization of the inputs that would not affect the outputs. I don't know enough physics to pursue this line further, so I won't.

The line of argument for my conclusion that I want to rely on is more conceptual than empirical. The point is that our concept of intelligence allows an intelligent being to have quantized sensory devices. Suppose, for example, that Martian eyes are like movie cameras in that the information that they pass on to the Martian brain amounts to a series of newspaper-like ``dot'' pictures, i.e., matrices containing a large number of cells, each of which can be either black or white. (Martians are color-blind.) If Martians are strikingly like us in appearance, action, and even internal information processing, no one ought to regard their movie camera eyes (and other finitary sense organs) as showing they are not intelligent. However, note that since there are a finite number of such ``dot'' pictures of a given grain, there are a finite number of sequences of such pictures of a given duration, and thus a finite number of possible visual stimuli of a given duration.
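
The finiteness claim here is simple counting. As a sketch with made-up figures (the cell count and frame rate are assumptions, not from the text): a binary ``dot'' picture with C cells has 2 to the power C possible frames, and a stimulus lasting F frames has (2 to the C) to the F possible sequences, astronomically large but finite.

```python
import math

# Counting behind the finiteness claim, with made-up figures.
cells_per_picture = 1000 * 1000     # hypothetical grain of the Martian eye
frames = 24 * 3600                  # hypothetical one hour at 24 frames per second
digits = cells_per_picture * frames * math.log10(2)
print(f"about 10**{digits:.0f} possible hour-long visual stimuli")  # finite, though huge
```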

It is easy to see that both the empirical and the conceptual points support the claim that an intelligent being could have a finite number of possible sequences of types of stimuli in a finite time (and the same is also true of responses). But then the stimulus sequences could in principle be catalogued by programmers, just as can the interrogator's remarks in the machine described earlier. Thus, a robot programmed along the lines of the machine I described earlier could be given every behavioral capacity possessed by humans, via a method of the sort I have already described. In sum, while my remarks so far have dealt mainly with a behaviorist account of conversational intelligence, broadening the argument to cover a behaviorist theory of intelligence simpliciter would raise no new issues of principle. In what follows, I shall return for convenience to a discussion of conversational intelligence.

By this time, the reader may have a number of objections. Given the heavy use of the phrase ``in principle'' above, you may feel that what this latest wrinkle shows is that the sense of ``in principle possible'' in which any of the machines I described are in principle possible is a bit strange. Or you may object: ``Your machine's capacity to pass the Turing Test does depend on an arbitrary time limit to the test.'' Or: ``You are just stipulating a new meaning for the word `intelligent'.'' Or you may want to know what I would say if I turned out to be one.

I will now attempt to answer these and other objections. If an objection has a subscripted numeral (e.g., 3a), then it depends on the immediately preceding objection or reply. However, the reader can skip any other objection or reply without loss of continuity.

Objection 1.

Your argument is too strong in that it could be said of any intelligent machine that the intelligence it exhibits is that of its programmers.

Reply.

I do not claim that the intelligence of every machine designed by intelligent beings is merely the intelligence of the designers, and no such principle is used in my argument. If we ever do make an intelligent machine, presumably we will do it by equipping it with mechanisms for learning, solving problems, etc. Perhaps we will find general principles of learning, general principles of problem solving, etc., which we can build into it. But though we make the machine intelligent, the intelligence it exhibits is its own, just as our intelligence is no less ours, even if it was produced mainly by the enormously skillful efforts of our parents.

By contrast, if my string-searching machine emits a clever pun P, in response to a conversation C, then the sequence CP is literally one that was thought of and included by the programmers. Perhaps the programmers will say of one of their colleagues, ``Jones thought of that pun--he is so clever.''

The trouble with the neo-Turing Test conception of intelligence (and its predecessors) is precisely that it does not allow us to distinguish between behavior that reflects a machine's own intelligence, and behavior that reflects only the intelligence of the machine's programmers. As I suggested, only a partly etiological notion of intelligent behavior will do the trick.

Objection 2.

If the strings were recorded before this year, the machine would not respond the way a person would to a sentence like ``What do you think of the latest events in the Mid-East?''

Reply.

A system can be intelligent, yet have no knowledge of current events. Likewise, a machine can imitate intelligence without imitating knowledge of current events. The programmers could, if they liked, choose to simulate an intelligent Robinson Crusoe who knows nothing of the last twenty-five years. Alternatively, they could undertake the much more difficult task of reprogramming periodically to simulate knowledge of current events.

Objection 3.

You have argued that a machine with a certain internal mechanical structure is not intelligent, even though it seems intelligent in every external respect (that is, in every external respect examined in the Turing Test). But by introducing this internal condition, aren't you in effect merely suggesting a linguistic stipulation, a new meaning for the word ``intelligent''? We normally regard input-output capacities as criterial for intelligence. All you are doing is suggesting that we adopt a new practice, involving a new criterion which includes something about what goes on inside.

Reply.

Jones plays brilliant chess against two of the world's foremost grandmasters at once. You think him a genius until you find out that his method is as follows. He goes second against grandmaster G1 and first against grandmaster G2. He notes G1's first move against him, and then makes the same move against G2. He awaits G2's response, and makes the same move against G1, and so on. Since Jones's method itself was one he read about in a comic book, Jones's performance is no evidence of his intelligence. As this example[15] illustrates, it is a feature of our concept of intelligence that, to the degree that a system's performance merely echoes the intelligence of another system, the first system's performance is thereby misleading as an indication of its intelligence. Since my machine's performances are all echoes, these performances provide no reason to attribute intelligence to it.[16]
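
A toy Python rendering of Jones's relay trick may make the ``echo'' structure vivid; the grandmaster objects are hypothetical stand-ins that simply play from a fixed script, and all names are illustrative.

```python
class ScriptedPlayer:
    """Hypothetical stand-in for a grandmaster: plays from a fixed script."""
    def __init__(self, script):
        self.script = iter(script)
    def make_move(self, history):
        return next(self.script)

def relay_game(g1, g2, moves=2):
    """Jones contributes no chess thinking at all: G1 moves first against him,
    and he replays every move of that game as his own move against G2, and
    every reply of G2's as his own reply to G1."""
    game1, game2 = [], []   # Jones is second in game1, first in game2
    for _ in range(moves):
        m = g1.make_move(game1)   # G1's move against Jones...
        game1.append(m)
        game2.append(m)           # ...becomes Jones's move against G2
        r = g2.make_move(game2)   # G2's reply...
        game2.append(r)
        game1.append(r)           # ...becomes Jones's reply to G1
    return game1, game2

g1 = ScriptedPlayer(["e4", "Nf3"])
g2 = ScriptedPlayer(["e5", "Nc6"])
print(relay_game(g1, g2))
# (['e4', 'e5', 'Nf3', 'Nc6'], ['e4', 'e5', 'Nf3', 'Nc6'])
```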

The point is that though we normally ascertain the intelligence of a system by trying to assess its input-output capacities, it is part of our ordinary concept of intelligence that input-output capacities can be misleading. As Putnam has suggested, it is part of the logic of natural kind terms that what seems to be a stereotypical X can turn out not to be an X at all if it fails to belong to the same scientific natural kind as the main body of things we have referred to as X's. (Kripke, 1972; Putnam, 1975a) If Putnam is right about this, one can never accuse someone of ``changing the meaning'' of a natural kind term merely on the ground that he says that something that satisfies the standard ``criteria'' for X's is not an X.

Objection 3a.

I am very suspicious of your reply to the last objection, especially your introduction of the Putnam point. Is it not rather chauvinist to suppose that a system has to be scientifically like us to be intelligent? Maybe a system with information processing very unlike ours does not belong in the extension of our term ``intelligence''; but it is equally true that we do not belong in the extension of its term ``shmintelligence.'' And who is to say that intelligence is any better than shmintelligence?

Reply.

I have not argued that the mere fact of an information processing difference between my machine and us cuts any ice. Rather, my point is based on the sort of information processing difference that exists. My machine lacks the kind of ``richness'' of information processing requisite for intelligence. Perhaps this richness has something to do with the application of abstract principles of problem solving, learning, etc. I wish I could say more about just what this sort of richness comes to. But I have chosen a much less ambitious task: to give a clear case of something that lacks that richness, but nonetheless behaves as if it were intelligent. If someone offered a definition of ``life'' that had the unnoticed consequence that small stationery items such as paper clips are alive, one could refute him by pointing out the absurdity of the consequence, even if one had no very detailed account of what life really is with which to replace his. In the same way, one can refute the neo-Turing Test conception by counterexample without having to say very much about what intelligence really is.

Objection 4.

Suppose it turns out that human beings, including you, process information in just the way that your machine does. Would you insist that humans are not intelligent?

Reply.

I'm not very sure of what I would say about human intelligence were someone to convince me that human information processing is the same as that of my machine. However, I do not see that there is any clearly and obviously correct response to this question against which the responses natural for someone with my position can be measured. Further, none of the more plausible responses that I can think of are incompatible with what I have said so far.

Assume, for example, a theory of reference that dictates that in virtue of the causal relation between the word ``intelligence'' and human information processing, human information processing is intelligent whatever its nature.[17] Then, if I were convinced that humans process information in the manner of my machine, I should admit that my machine is intelligent. But how is this incompatible with my claim that my machine is not in fact intelligent? Tweaking me with ``What if you turned out to be one?'' is a bit like tweaking an atheist with ``What if you turned out to be God?'' The atheist would have to admit that if he were God, then God would exist. But the atheist could concede this counterfactual without giving up atheism. If the word ``intelligence'' is firmly anchored to human information processing, as suggested above, then my position is committed to the empirical claim that human information processing is not like that of my machine. But this is a perfectly congenial claim, one that is supported both by common sense and by empirical research in cognitive psychology.

Objection 5.

You keep insisting that we do not process information in the manner of your machine. What makes you so sure?

Reply.

I don't see how someone could make such an objection without being somewhat facetious. You will have no difficulty coming up with responses to my arguments. Are we to take seriously the idea that someone long ago recorded both what I said and a response to it and inserted both in your brain? Common sense recoils from such patent nonsense. Further, pick any issue of any cognitive psychology journal, and you will see attempts at experimental investigation of our information processing mechanisms. Despite the crudity of the evidence, it tells overwhelmingly against the string-searching idea.

Our cognitive processes are undoubtedly much more mechanical than some people like to think. But there is a vast gap between our being more mechanical than some people like to think and our being a machine of the sort I described.

Objection 6.

Combinatorial explosion makes your machine impossible. George Miller long ago estimated (Miller et al., 1960) that there are on the order of 10^20 grammatical sentences 20 words in length. Suppose (utterly arbitrarily) that only a fraction of these are semantically well formed as well. An hour-long Turing Test would require perhaps 100 such sentences. That makes a number of strings far greater than the number of particles in the universe.

Reply.

My argument requires only that the machine be logically possible, not that it be feasible or even nomologically possible. Behaviorist analyses were generally presented as conceptual analyses, and it is difficult to see how conceptions such as the neo-Turing Test conception could be seen in a very different light. Could it be an empirical hypothesis that intelligence is the capacity to emit sensible sequences of outputs relative to input sequences? What sort of empirical evidence (other than evidence from linguistics) could there be in favor of such a claim? If the neo-Turing Test conception of intelligence is seen as something on the order of a claim about the concept of intelligence, then the mere logical possibility of an unintelligent system that has the capacity to pass the Turing Test is enough to refute the neo-Turing Test conception.

It may be replied that although the neo-Turing Test conception clearly is not a straightforwardly empirical hypothesis, still it may be quasi-empirical. For it may be held that the identification of intelligence with the capacity to emit sensible output sequences is a background principle or law of empirical psychology. Or it may be offered as a rational reconstruction (of our vague common sense conception of intelligence) which will be fruitful in future empirical psychological theories. In both cases, while no empirical evidence could directly support the neo-Turing Test conception, still it could be held to be part of a perspective that could be empirically supported as a whole.[18]

This reply would carry some weight if any proponent of the neo-Turing Test conception had offered the slightest reason for thinking that such a conception of intelligence is likely to contribute to the fruitfulness of empirical theories that contain it. In the absence of such a reason (and, moreover, in the presence of examples that suggest the contrary--behaviorist psychology and Turingish approaches to artificial intelligence--see footnote 6), why should we take the neo-Turing Test conception seriously as a quasi-empirical claim?

While this reply suffices, I shall add that my machine may indeed be nomologically possible. Nothing in contemporary physics prohibits the possibility of matter in some part of the universe that is infinitely divisible. Indeed, whenever the latest ``elementary'' particle turns out not to be truly elementary, and when the number and variety of its constituents multiply (as has now happened with quarks), physicists typically entertain the hypothesis that our matter is not composed of any really elementary particles.

Suppose there is a part of the universe (possibly this one) in which matter is infinitely divisible. In that part of the universe there need be no upper bound on the amount of information storable in a given finite space. So my machine could perhaps exist, its tapes stored in a volume the size of, e.g., a human head. Indeed, one can imagine that where matter is infinitely divisible, there are creatures of all sizes, including creatures the size of electrons who agree to do the recording for us if we agree to wipe out their enemies by putting the lumps on which the enemies live in one of our particle accelerators.

Further, even if the story of the last paragraph is not nomologically possible, still it is not clear that the kind of nomological impossibility it possesses is relevant to my objection to the neo-Turing Test conception of intelligence. For if the neo-Turing Test conception of intelligence is an empirical ``background'' principle or law, it is a background principle or law of human cognitive psychology, not of physics. But a situation can contravene laws of physics without contravening laws of human psychology. For example, in a logically possible world in which gravity obeyed an inverse cube law instead of an inverse square law, our laws of physics would be different, but our laws of psychology might not be.

Now if my machine contravenes laws of nature, these laws are presumably laws of physics, not laws of psychology. For the question of how much information can be stored in a given space and how fast information can be transferred from place to place depends on such physical factors as the divisibility of matter and the speed of light. Even if the electron-sized creatures just described contravene laws of physics, still they need not contravene laws of human psychology. That is, humans (with their psychological laws intact) could coexist with the little creatures.[19]

But if my machine does not contravene laws of human psychology--if it = exists=20 in a possible world in which the laws of human psychology are the same = as they=20 are here--then the neo-Turing Test conception of intelligence is false = in a=20 world where the laws of human psychology are the same as they are here. = So the=20 neo-Turing Test conception of intelligence cannot be one of the laws of = human=20 psychology.

In sum, the neo-Turing Test conception of intelligence can be = construed=20 either as some sort of conceptual truth or as a kind of psychological = law. And=20 it is false on both construals.

One final point: various sorts of modifications may make a variant of my machine nomologically possible in a much more straightforward sense. First, we could limit the vocabulary of the Turing Test to Basic English. Basic English has a vocabulary of only 850 words, as opposed to the hundreds of thousands of words in English, and it is claimed that Basic English is adequate for normal conversation, and for expression of a wide range of ideas. Second, the calculation made above was based on the string-searching version of the machine. The tree-searching version described earlier, however, avoids enormous amounts of duplication of parts of strings, and is no more intelligent.
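To fix ideas about the tree-searching version, here is a toy sketch (in Python, offered purely as an illustration; the sentences are arbitrary stand-ins, and nothing this small approaches the machine actually described): conversations that begin the same way share a single stored beginning and branch only where they diverge, which is the sense in which the tree avoids duplicating parts of strings.

    # Toy prefix tree ("trie") of conversation openings -- an illustration
    # of shared storage, not the machine itself.
    def add_sequence(tree, words):
        # Walk downward, creating a new branch only where sequences diverge.
        node = tree
        for w in words:
            node = node.setdefault(w, {})

    tree = {}
    add_sequence(tree, ["how", "are", "you", "today"])
    add_sequence(tree, ["how", "are", "you", "feeling"])
    add_sequence(tree, ["how", "is", "the", "weather"])

    # The shared opening "how are you" is stored once, not once per string.
    print(tree)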

More importantly, the machine as I have described it is designed to perform perfectly (barring breakdown); but perfect performance is far better than one could expect from any human, even ignoring strokes, heart attacks, and other forms of human ``breakdown.'' Humans whose mental processes are functioning normally often misread sentences, or get confused; worse, any normal person engaged in a long Turing Test would soon get bored, and his attention would wander. Further, many loquacious souls would blather on from the very beginning, occasionally apologizing for not listening to the interlocutor. Many people would respond more by way of free association to the interlocutor's remarks than by grasping their sense and giving a considered reply. Some people might devote nearly every remark to complaints about the unpleasantness of these interminable Turing Tests. If one sets one's sights on making a machine that does only as well in the Turing Test as most people would do, one might try a hybrid machine, containing a relatively small number of trees plus a bag of tricks of the sort used in Weizenbaum's program.

Perhaps many tricks can be found to reduce the memory load without making the machine any smarter. Of course, no matter how sophisticated the memory-reduction tricks we find, they merely postpone, but do not avoid, the inevitable combinatorial explosion. For the whole point of the machine is to substitute memory for intelligence. So for a Turing Test of some length, perhaps a machine of the general type that I have described will be so large that making it any larger will cause collapse into a black hole. My point is that technical ingenuity being what it is, the point at which nomological impossibility sets in may be beyond what is required to simulate human conversational abilities.
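The force of ``substituting memory for intelligence'' can be put in a back-of-envelope calculation (the figures below are arbitrary stand-ins, chosen only to display the shape of the growth): even with a Basic-English-sized vocabulary, the number of word sequences the programmers must anticipate grows exponentially with the length of the exchange, so memory-reduction tricks shrink the base but leave the exponential form intact.

    # Back-of-envelope illustration of the combinatorial explosion.
    vocabulary = 850                     # roughly the size of Basic English
    for words_per_exchange in (5, 10, 20):
        sequences = vocabulary ** words_per_exchange
        print(words_per_exchange, "words:", sequences, "possible sequences")

    # Pruning senseless sequences lowers the base, but any fixed base still
    # grows exponentially in the exponent -- the tricks postpone, not avoid.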

Objection 7.

The fault of the Turing Test as you describe it is one of experimental design, not experimental concept. The trouble is that your Turing Test has a fixed length. The programmers must know the length in order to program the machine. In an adequate version of the Turing Test, the duration of any occasion of testing would be decided in some random manner. In short, the trouble with your criticism is that you've set up a straw man.

Reply.

It is certainly true that my machine's capacity to pass Turing Tests depends on there being some upper bound to the length of the tests. But the same is true of people. Even if we allow, say, twelve hours between question and answer to give people time to eat and sleep, still, people eventually die. Few humans could pass a Turing Test that lasted ninety years, and no humans could pass a Turing Test that lasted five hundred years. You can (if you like) characterize intelligence as the capacity to pass a Turing Test of arbitrary length, but since humans do not have this capacity, your characterization will not be a necessary condition of intelligence, and even if it were a sufficient condition of intelligence (which I very much doubt--see below) a sufficient condition of intelligence that humans do not satisfy would be of little interest to proponents of views like the neo-Turing Test conception.

Even if medical advances remove any upper bound on the human life span, still people will die by accident. There is a nonzero probability that, in the course of normal thermal motion, the molecules in the two halves of one's body will move in opposite directions, tearing one in half. Indeed, the probability of escaping such accidental death literally forever is zero. Consider the ``half-life'' of people in a world in which death is put off as long as is physically possible. (The half-life for people, as for radioactive atoms, is the median life span, the time it takes for half to pass away.) Machines of my sort could be programmed to last for that half-life and (assuming they are no more susceptible to accidental destruction than people) their median life span would be as long as that of the median person.
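The point that the probability of escaping accidental death forever is zero, and the notion of a half-life, can be illustrated with a toy calculation (the yearly risk figure is a pure invention, used only to show the form of the argument):

    import math

    p = 0.001                      # hypothetical constant yearly risk of accidental death
    def survive(years):
        return (1 - p) ** years    # probability of surviving that many years

    print(survive(1000))           # small
    print(survive(100000))         # effectively zero: surviving "forever" has probability zero

    # The half-life (median life span) is where survival probability first
    # drops below one half.
    print(round(math.log(0.5) / math.log(1 - p)), "years")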

Objection 7a.

Let me try another tack. Cognitive psychologists and linguists often claim that cognitive mechanisms of one sort or another have ``infinite capacities.'' For example, Chomsky says that our mechanisms for understanding language have the capacity to understand sentences of any length. An upper bound on the length of sentences people can understand in practice is a matter of interferences due to distraction, boredom, going mad, memory limitations, and many other phenomena, including, of course, death. This point is often put by saying that under the appropriate idealization (i.e., ignoring ``interfering'' phenomena of the sort mentioned) we have the capacity to understand sentences of any length. Now here is my point: under the same sort of idealization, we presumably have the capacity to pass a Turing Test of any length. But your string-searcher does not have this capacity, even under the appropriate idealization.

Reply.

You seem to think you have objected to my claim, but really you have capitulated to it. I cheerfully concede that there is an idealization under which we probably have an ``infinite'' capacity that my machine lacks. But in order to describe this idealization, you will have to indulge in a kind of theorizing about cognitive mechanisms that would be unacceptable to a behaviorist.

Consider the kind of reformulation of the neo-Turing Test conception of intelligence suggested by the idealization objection; it would be something like: ``intelligence = the possession of language-processing mechanisms such that, were they provided with unlimited memory, and were they directed by motivational mechanisms that assigned at least a moderately high preference value to responding sensibly, and were they `insulated' from `stop' signals from emotion centers, and so forth, then the language-processing mechanisms would have the capacity to respond sensibly indefinitely.'' Notice that in order to state such a doctrine, one has to distinguish among various mental components and mechanisms. As an aside, it is worth noting that these distinctions have substantive empirical presuppositions. For example, memory might be inextricably bound up with language-processing mechanisms so as to make nonsense of talk of supplying the processing mechanisms with unlimited memory. The main point, however, is that in order to state such an ``idealization'' version of the neo-Turing Test conception one has to invoke mentalistic notions that no behaviorist could accept.

Objection 7b.

I believe I can make my point without using mentalistic notions by idealizing away simply from nonaccidental causes of death. In replying to Objection 7, you said (correctly) that if medical advances removed an upper bound on human life, still the median string-searching machine could do as well as the median person. However, note that if nonaccidental causes of death were removed, every individual human would have no upper bound on how long he could go on in a Turing Test. By contrast, any individual string-searching machine must by its very nature have some upper bound on its ability to go on.

Reply.

What determines how long we can go on in a Turing Test is not just how long we live, but the nature of our cognitive mechanisms and their interactions with other mental mechanisms. Suppose, for example, that we have no mechanisms for ``erasing'' information stored in long term memory. (Whether this is so is not known.) If we can't ``erase,'' then when our finite memories are ``used up,'' normal conversational capacity will cease.

If the behaviorist identifies intelligence with the capacity to go on indefinitely in a Turing Test, idealizing away only from nonaccidental death, then people may turn out not to be intelligent in his sense. Further, even if people do turn out to satisfy such a condition, it can't be regarded as necessary for intelligence. Beings that go senile within two hundred years because they lack ``erase'' mechanisms can nonetheless be intelligent before they go senile.

Of course, the behaviorist could avoid this difficulty by further idealizing, explicitly mentioning erase mechanisms in his definition of intelligence. But that would land him back in the mentalistic swamp described in the last reply.

It is worth adding that even if we do have ``erase'' mechanisms, and even if nonaccidental causes of death were eliminated, still we would have finite memories. A variant of my string-searcher could perhaps exploit the finiteness of our memories so as to do as well as a person in an indefinitely long Turing Test. Suppose, for example, that human memory cannot record more than two hundred years of conversation. Then one of my string-searchers could perhaps be turned into a loop-searcher that could go on indefinitely. Instead of ``linear'' strings of conversation, it would contain circular strings whose ends rejoin the beginnings after, say, one thousand years of conversation. The construction of such loops would take much more inventiveness than the construction of ordinary strings. Even if it could be done, such a machine would seem intolerably repetitious to a being whose memory capacity far exceeded ours, but human conversation would seem equally repetitious to such a being.
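A crude sketch of the loop idea (mine, not part of the construction just described; the contents are placeholders) is simply a string indexed modulo its own length, so that reading it never runs off the end:

    # Minimal sketch of a "loop-searcher": a circular string read by
    # modular indexing, so the conversation rejoins its own beginning.
    loop = ["hello", "nice day", "indeed", "as I was saying"]   # stand-in contents

    def nth_remark(n):
        return loop[n % len(loop)]

    print([nth_remark(i) for i in range(10)])

The inventiveness, of course, lies entirely in composing a loop long enough that no hearer with a bounded memory could notice the rejoining.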

Here is one final kind of rejoinder to the ``unbounded Turing Test'' objection. Consider a variant of my machine in which the programmers simply continue on and on, adding to the strings. When they need new tape, they reuse tape that has already been passed by.20 Note that it is logically possible for the ever-extending strings to come into existence by themselves--without the programmers (see footnote 16). Thus not even the capacity to go on indefinitely in a Turing Test is logically sufficient for intelligence.

Continuing on this theme, consider the infinitely divisible matter mentioned in the reply to Objection 6. It is logically and perhaps nomologically possible for a man-sized string-searching machine to contain creatures of ever-decreasing size who work away making the tapes longer and longer without bound. Of course, neither of the two machines just mentioned has a fixed program, but since the programmers never see the stimuli, it is still the machines and not the programmers that are doing the responding. Contrast these machines with the infamous ``machine'' of long ago that contained a midget hidden inside it who listened to the questions and produced the answers.

Objection 8.

You remarked earlier that the neo-Turing Test conception of intelligence is widespread in artificial intelligence circles. Still, your machine cannot be taken as refuting any AI (artificial intelligence) point of view, because as Newell and Simon point out, in the AI view, ``the task of intelligence...is to avert the ever-present threat of the exponential explosion of search.'' (Newell and Simon, 1979) (In exponential explosion of search, adding one step to the task requires, e.g., 10 times the computational resources, adding two steps requires 10² (100) times the computational resources, adding three steps requires 10³ (1,000) times the computational resources, etc.) So it would be reasonable for AIers to amend their version of the neo-Turing conception of intelligence as follows:

Intelligence is the capacity to emit sensible sequences of responses to stimuli, so long as this is accomplished in a way that averts exponential explosion of search.21

Reply.

Let me begin by noting that for a proponent of the neo-Turing Test conception of intelligence to move to the amended neo-Turing Test conception is to capitulate to the psychologism that I have been defending. The amended neo-Turing Test conception attempts to avoid the problem I have posed by placing a condition on the internal processing (albeit a minimal one), viz., that it not be explosive. So the amended neo-Turing Test conception does characterize intelligence partly with respect to its internal etiology, and hence the amended neo-Turing Test conception is psychologistic.

While the amended neo-Turing Test conception is an improvement over the original neo-Turing Test conception in this one respect (it appeals to internal processing), it suffers from a variety of defects. One difficulty arises because there is an ambiguity in phrases such as ``averts the exponential explosion of search.'' Such phrases can be understood as equivalent to ``avoids exponential explosion altogether'' (i.e., uses methods that do not require computational resources that go up exponentially with the ``length'' of the task) or, alternatively, as ``postpones exponential explosion long enough'' (i.e., does use methods that require computational resources that go up exponentially with the ``length'' of the task, but the ``length'' of the task is short enough that the required resources are in fact available). If it is postponing that is meant, my counterexample may well be untouched by the new proposal, because as I pointed out earlier, my machine or a variant on it may postpone combinatorial explosion long enough to pass a reasonable Turing Test.
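The difference between the two readings can be displayed numerically (the budget and costs below are arbitrary): an exponentially costly method is perfectly serviceable so long as the task stays short enough for the resources one actually has, and that is all ``postponing'' requires.

    # Illustration of "postponing" versus "avoiding" exponential explosion.
    resources_available = 10 ** 12       # hypothetical budget
    def cost(task_length):
        return 10 ** task_length         # an exponentially costly method

    for length in (6, 12, 18):
        print(length, cost(length) <= resources_available)
    # Lengths 6 and 12 fit the budget; 18 does not. The method postponed,
    # but did not avoid, the explosion.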

On the other hand, if it is avoiding combinatorial explosion altogether that is meant, then the amended neo-Turing Test conception may brand us as unintelligent. For it is certainly possible that our information processing mechanisms--like those of many AI systems--are ones that succeed not because they avoid combinatorial explosion altogether, but only because they postpone combinatorial explosion long enough for practical purposes.

In sum, the amended neo-Turing Test conception is faced with a dilemma. If it is postponing combinatorial explosion that is meant, my machine may count as intelligent. If it is avoiding combinatorial explosion altogether that is meant, we (or other intelligent organisms) may not count as intelligent.

Further, the proposed amendment to the neo-Turing Test conception is an entirely ad hoc addition. The trouble with such ad hoc exclusion of counterexamples is that one can never be sure whether someone will come up with another type of counterexample which will require another ad hoc maneuver.

I shall now back up this point by sketching a set of devices that have sensible input-output relations, but arguably are not intelligent.

Imagine a computer which simulates your responses to stimuli by computing the trajectories of all the elementary particles in your body. This machine starts with a specification of the positions, velocities, and charges (I assume Newtonian mechanics for convenience) of all your particles at one moment, and computes the changes of state of your body as a function of these initial conditions and energy impinging on your sensory mechanisms. Of course, what is especially relevant for the Turing Test is the effect of light from your teletype monitor on your typing fingers. Now though this takes some discussion, I opine that a machine that computes your elementary particle trajectories in this way is not intelligent, though it could control a robot which has the capacity to behave exactly as you would in any situation. It behaves as you do when you are doing philosophy, but it is not doing philosophy; rather, what it is doing is computing elementary particle trajectories so as to mimic your doing philosophy.
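The kind of computation in question can be sketched schematically (this is a one-dimensional Euler step under Newtonian assumptions, offered only to show the character of the procedure, not as a workable simulation of a person):

    # Advance every particle by one small time increment, given the forces.
    def step(positions, velocities, forces, masses, dt):
        new_velocities = [v + (f / m) * dt
                          for v, f, m in zip(velocities, forces, masses)]
        new_positions = [x + v * dt
                         for x, v in zip(positions, new_velocities)]
        return new_positions, new_velocities

    # Given initial conditions (and sensory energy folded into the forces),
    # the machine does nothing but iterate this step.
    print(step([0.0], [1.0], [0.5], [1.0], 0.01))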

Perhaps what I have described is not nomologically possible. Indeed, it may be that even if God told us the positions and velocities of all the particles in your body, no computer could compute the complex interactions, even assuming Newtonian mechanics. However, notice one respect in which this machine may be superior to the one this paper has been mainly concerned with: namely, if it can simulate something for an hour, it may be able to simulate it for a year or a decade with the same apparatus. For continuing the simulation would be simply a matter of solving the same equations over and over again. For a wide variety of types of equations, solving the same equations over and over will involve no exponential explosion of search. If there is no exponential expansion of search here, the ad hoc condition added in the objection is eluded, and we are left with the issues about nomological possibility that we discussed in Objection 6.

The idea of the machine just sketched could be applied in another machine which is closer to nomological possibility, namely one that simulates your neurophysiology instead of your elementary particle physics. That is, this machine would contain a representation of some adequate neurological theory (say, of the distant future) and a specification of the current states of all your neurons. It would simulate you by computing the changes of state of your neurons. Still more likely to be nomologically possible would be a machine which, in an analogous manner, simulates your psychology. That is, it contains a representation of some adequate psychological theory (of the distant future) and a specification of the current states of your psychological mechanisms. It simulates you by computing the changes of state of those mechanisms given their initial states and sensory inputs. Again, if there is no exponential expansion of search, the modification introduced in the objection gains nothing.
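All three devices share one schematic shape, which can be put in a few lines (this is my own gloss, with a placeholder standing in for the ``adequate theory of the distant future''; nothing here says what such a theory would contain):

    # The simulator holds a state description and a transition theory, and
    # produces behavior by repeatedly applying the theory to state and input.
    def simulate(theory, state, stimuli):
        responses = []
        for stimulus in stimuli:
            state, response = theory(state, stimulus)
            responses.append(response)
        return responses

    # Placeholder theory, purely for illustration.
    def toy_theory(state, stimulus):
        return state, "uh-huh"

    print(simulate(toy_theory, {}, ["Who is your favorite philosopher?"]))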

I said that these three devices are arguably unintelligent, but since I have little space to give any such arguments, this part of my case will have to remain incomplete. I will briefly sketch part of one argument.

Consider a device that simulates you by using a theory of your psychological processes. It is a robot that looks and acts as you would in any stimulus situation. Instead of a brain it has a computer equipped with a description of your psychological mechanisms. You receive a certain input, cogitate about it, and emit a certain output. If your robot doppelganger receives that input, a transducer converts the input into a description of the input. The computer uses its description of your cognitive mechanisms to deduce the product of your cogitations; it then transmits a description of your output to a mechanism that causes the robot body to execute the output. It is hardly obvious that the robot's process of manipulation of descriptions of your cogitation is itself cogitation. It is still less obvious that the robot's manipulations of descriptions of your experiential and emotional processes are themselves experiential and emotional processes.

To massage your intuitions about this a bit, substitute for the description-manipulating computer in your doppelganger's head a very small intelligent person who speaks only Chinese, and who possesses a manual (in Chinese) describing your psychological mechanisms. You get the input ``Who is your favorite philosopher?'' You cogitate a bit and reply ``Heraclitus.'' Your robot doppelganger on the other hand contains a mechanism that transforms the question into a description of its sound; then the little man deduces that you would emit the noise ``Heraclitus,'' and he causes the robot's voice box to emit that noise. The robot could simulate your input-output relations (and in a sense, simulate your internal processing, too) even though the person inside the robot understands nothing of the question or the answer. It seems that the robot simulates your thinking your thoughts without itself thinking those thoughts. Returning to the case where the robot has a description-manipulating computer instead of a description-manipulating person inside it, why should we suppose that the robot has or contains any thought processes at all?22

The string-searching machine with which this paper has been mainly concerned showed that behavior is intelligent only if it is not the product of a certain sort of information processing. Appealing to the Martian example described at the beginning of the paper, I cautioned against jumping to the conclusion that there is any positive characterization of the type of information processing underlying all intelligent behavior (except that it have at least a minimal degree of ``richness''). However, what was said in connection with the Martian and string-searching examples left it open that though there is no single natural kind of information processing underlying all intelligent behavior, still there might be a kind of processing common to all unintelligent entities that nonetheless pass the Turing Test (viz., very simple processes operating over enormous memories). What this last machine suggests, however, is that it is also doubtful that there will be any interesting type of information processing common to such unintelligent devices.23

Bibliography

Block, Ned. 1978a.

Reductionism.
In Encyclopedia of bioethics. New York, NY: Macmillan.

-----. 1978b.

Troubles with functionalism.
In Perception and cognition: Issues in the foundations of psychology, ed. C. W. Savage, vol. 9 of Minnesota Studies in the Philosophy of Science. Minneapolis, MN: University of Minnesota Press.

-----. 1980.

Are absent qualia impossible?
Philosophical Review LXXXIX.

Boden, M. 1977.

Artificial intelligence.
New York: Basic Books.

Chisholm, Roderick. 1957.

Perceiving, chap. 11.
Ithaca, NY: Cornell University Press.

Dennett, Daniel. 1978.

Brainstorms.
Cambridge, MA: MIT Press.

Descartes, René. 1985.

The philosophical writings of René Descartes.
Cambridge, England: Cambridge University Press.
Translated by John Cottingham, Robert Stoothoff, and Dugald Murdoch.

Donne, John. 1980.

Paradoxes and problems.
Oxford, England: Clarendon Press.
Edited with an introduction and commentary by Helen Peters.

Dummett, Michael. 1976.

What is a theory of meaning (II).
In Truth and meaning, ed. G. Evans and J. McDowell. London: Oxford University Press.

Fodor, Jerry. 1968.

Psychological explanation.
New York: Random House.

Gunderson, Keith. 1964.

The imitation game.
Mind 73(290): 234-245.

Kripke, Saul. 1972.

Naming and necessity.
In Semantics and natural language. Dordrecht, Holland: Reidel.

Lycan, William. 1979.

New lilliputian argument against machine functionalism.
Philosophical Studies 35.

-----. 1981.

Form, function, and feel.
Journal of Philosophy LXXVIII: 24-50.

Miller, George, Eugene Galanter, and Karl H. Pribram. 1960.

Plans and the structure of behavior.
New York, NY: Holt, Rinehart, and Winston.

Newell, Allen, and Herbert Simon. 1979.

Computer science as empirical inquiry: Symbols and search.
Communications of the Association for Computing Machinery 19.

Putnam, Hilary. 1975a.

The meaning of `meaning'.
In Language, mind, and knowledge, ed. Keith Gunderson, vol. 7 of Minnesota Studies in the Philosophy of Science. Minneapolis, MN: University of Minnesota Press.

-----. 1975b.

Mind, language, and reality.
Cambridge, England: Cambridge University Press.

Rorty, Richard. 1979.

Philosophy and the mirror of nature.
Princeton, NJ: Princeton University Press.

Ryle, Gilbert. 1949.

The concept of mind.
London, England: Hutchinson.

Schank, Roger C., and Robert P. Abelson. 1977.

Scripts, plans, goals, and understanding: An inquiry into human knowledge structures.
Hillsdale, NJ: Lawrence Erlbaum Associates.

Searle, John R. 1980.

Minds, brains, and programs.
Behavioral and Brain Sciences 3: 417-457.

Shoemaker, Sydney. 1975.

Functionalism and qualia.
Philosophical Studies 27.

-----. 1982.

The missing absent qualia argument--a reply to Block.
Cited by Block as forthcoming.

Turing, Alan M. 1950.

Computing machinery and intelligence.
Mind LIX(236): 433-460.

Weizenbaum, Joseph. 1966.

ELIZA -- a computer program for the study of natural language communication between man and machine.
Communications of the Association for Computing Machinery 9(1): 36-45.

-----. 1976.

Computer power and human reason.
San Francisco, CA: W. H. Freeman and Co.



Footnotes

...1

Indeed, Ryle's The Concept of Mind (Ryle, 1949) is a direct attack on psychologism. Ryle considers what we are judging ``in judging that someone's performance is or is not intelligent,'' and he concludes: ``Our inquiry is not into causes ... but into capacities, skills, habits, liabilities and bents.'' See Jerry Fodor's Psychological Explanation (Fodor, 1968) for a penetrating critique of Ryle from a psychologistic point of view.

...2

See my ``Troubles with Functionalism'' (Block, 1978b). Direct criticisms appear in William Lycan's ``New Lilliputian Argument against Machine Functionalism'' (Lycan, 1979) and ``Form, Function, and Feel'' (Lycan, 1981). See also Sydney Shoemaker's ``Functionalism and Qualia'' (Shoemaker, 1975), my reply, ``Are Absent Qualia Impossible?'' (Block, 1980), and Shoemaker's rejoinder, ``The Missing Absent Qualia Argument--a Reply to Block'' (Shoemaker, 1982).

... Test3

Turing himself said the question of whether the machine could think should ``be replaced by'' the question of whether it could pass the Turing Test, but much of the discussion of the Turing Test has been concerned with intelligence rather than thought. (Turing's paper (1950, this volume) was called ``Computing Machinery and Intelligence'' [emphasis added].)

...4

While the Chisholm-Geach objection and the perfect actor objection ought in my view to be considered the main objections to behaviorism in the literature, they are not on everybody's list. Rorty (1979, p. 98), for example, has his own list. Rorty and others make heavy weather of one common objection that I have ignored: that behaviorism's analyses of mental states are supposed to be analytic or true in virtue of the meanings of the mental terms. I have ignored analyticity objections in part because behaviorism's main competitors, physicalism and functionalism, are often held in versions that involve commitment to analytic truth (for example, by Lewis and Shoemaker). Further, many behaviorists have been willing to settle for conceptual connections ``weaker'' than analyticity, and I see no point in exploring such weakened versions of the thesis when behaviorism can be refuted quite independently of the analyticity issue.

...5

I am indebted here to Sydney Shoemaker.

...6

See Schank and Abelson (1977). See also Weizenbaum's (1976) description of the reaction to his ELIZA program.

...7

There is, admittedly, something odd about accepting a behaviorist analysis of intelligence while rejecting (on the standard grounds) behaviorist theories of belief, desire, etc. Dennett's view, as I understand it, comes close to this (see footnote 21), though the matter is complicated by Dennett's skepticism about many first order mental states. (See Brainstorms (Dennett, 1978), especially the Introduction, and Dennett's support of Ryle against Fodor's psychologism (p. 96). See also Dennett's Mary-Ruth-Sally parable (Dennett, 1978, p. 105).) In discussions among computer scientists who accept something like the Turing Test conception, the ``oddness'' of the position doesn't come to the fore because these practitioners are simply not interested in making machines that believe, desire, feel, etc. Rather, they focus on machines that are intelligent in being able to reason, solve problems, etc.

...8

Turing says:

The game may perhaps be criticized on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection. (Turing, 1950, this volume, p. 435)

...9

A disposition to emit O would be more revealingly described in terms of conditionals all of whose consequents are ``O is emitted.'' But in the cases of the ``pain behavior'' or ``intelligent behavior'' of interest to the behaviorist, what output is appropriate depends on the input.

...10

Of the inadequacies of this sort of analysis of dispositions and capacities of which I am aware, the chief one is that it seems implausible that in attributing a disposition or a capacity, one commits oneself to an infinite (or even a very large) number of specific conditionals. Rather, it seems that in saying that x has the capacity to Φ, one is saying something quite vague about the sort of internal and external conditions in which x would Φ. Notice, however, that it won't do to be completely vague, to analyze ``x has the capacity to Φ'' as ``possibly, x Φs,'' using a notion of possibility that holds entirely unspecified features of the actual world constant. For such an analysis would commit its proponents to ascribing too many capacities. For example, since there is a possible world in which Jimmy Carter has had a womb and associated paraphernalia surgically inserted, Jimmy Carter (the actual one) would have the capacity to bear children. There is a difference between the capacities someone has and the capacities he might have had, and the analysis of ``x has the capacity to Φ'' as ``possibly, x Φs'' does not respect this distinction.

...11

The departure from behaviorism involved in appealing to internal states, physiologically or functionally described, is mitigated somewhat when the point of the previous footnote is taken into account. The physiological/functional descriptions in a proper analysis of capacities may be so vague as to retain the behavioristic flavor of the doctrine.

...12

A version of this machine was sketched in my ``Troubles with Functionalism'' (Block, 1978b).

...13

This sort of point is discussed in somewhat more detail at the end of the paper.

...14

What I say here should not be taken as indicating that the standard objections really do vanquish the neo-Turing Test conception of intelligence after all. If the idiot can be said to have the mental state [no intelligence + an overwhelming desire to appear intelligent], the sense of ``intelligence'' used is the ``comparative'' sense, not the sense we have been concerned with here (the sense in which intelligence is the possession of thought or reason). If the idiot wants to appear intelligent (in the comparative sense) and thinks that he can do so by memorizing strings, then he is intelligent in the sense of possessing (at least minimally) thought or reason.

Whether one thinks my objection is really just a variant of the ``perfect actor'' objection depends on how closely one associates the perfect actor objection with the Chisholm-Geach objection. If we associate the perfect actor objection quite closely with the Chisholm-Geach objection, as I think is historically accurate (see Putnam (1975b, p. 324)), then we will take the point of the perfect actor objection to be that different groups of mental states can produce the same behavioral dispositions. [Mental state x + a normal preference function] can produce the same behavioral disposition as [lack of mental state x + a preference function that gives infinite weight to seeming to have mental state x]. My machine is not a perfect actor in this sense, since it has no mental states, and hence no groups of mental states either.

... example15

Such examples were suggested by Dick Boyd and Georges Rey in their comments on an earlier rendition of this paper. Rey tells me the chess story is a true tale.

...16

The reader should not conclude from the ``echo'' examples that what makes my machine unintelligent is that its responses are echoes. Actually, what makes it unintelligent is that its responses are mere echoes, i.e., its information processing is of the most elementary sort (and the appearances to the contrary are merely the echoes of genuinely intelligent beings). Notice that such a machine would be just as unintelligent if it were produced by a cosmic accident rather than by the long creative labors of intelligent people. What makes this accidentally produced machine unintelligent is, as before, that its information processing is of the most elementary sort; the appearances to the contrary are produced in this case not via echoes, but by a cosmic accident.

...17

The theory sketched by Putnam (1975a) might be taken to have this consequence. Whether it does have this consequence depends on whether it dictates that there is no descriptive component at all to the determination of the reference of natural kind terms. It seems certain that there is some descriptive component to the determination of the reference of natural kind terms, just as there is some descriptive component to the determination of the reference of names. There is a possible world in which Moses was an Egyptian fig merchant who spread tall tales about himself, but is there a possible world in which Moses was a brick? Similarly, even if there is a possible world in which tigers are automata, is there a possible world in which tigers exist, but are ideas? I would argue, along these lines, that the word ``intelligence'' attaches to whatever natural kind our information processing belongs to (assuming it belongs to a single natural kind) unless our information processing fails the minimal descriptive requirement for intelligence (as ideas fail the minimal descriptive requirement for being tigers). String-searchers, I would argue, do fail to have the minimal requirement for intelligence.

...18

What follows is one rejoinder for which I have space only for a brief sketch. If intelligence = sensible response capacity (and if the terms flanking the ``='' are rigid), then the metaphysical possibility of my machine is enough to defeat the neo-Turing Test conception, even if it is not nomologically possible. (The claim that there are metaphysical possibilities that are not also nomological possibilities is one that I cannot argue for here.)

What if the neo-Turing Test conception of intelligence is formulated not as an identity claim, but as the claim that a certain capacity is nomologically necessary and sufficient for intelligence? I would argue that if F is nomologically necessary and sufficient for G, then one of the following holds:

(a)

This nomological coextensivity is an ultimate law of nature.

(b)

This nomological coextensivity can be explained in terms of an underlying mechanism.

(c)

F = G.

In case (c), the claim is vulnerable to the point of the previous paragraph. Case (a) is obviously wrong. And in case (b), intelligence must be identifiable with something other than the capacity to give sensible responses. Suppose, for example, that we can give a mechanistic account of the correlation of intelligence with sensible response capacity by showing that intelligence requires a certain sort of cognitive structure, and creatures with such a cognitive structure have the required capacity. But then intelligence should be identified with the cognitive structure and not with the capacity. See my ``Reductionism'' (Block, 1978a) for a brief discussion of some of these ideas.

...19

It may be objected that since brute force information processing methods are far more effective in the world in which matter is infinitely divisible than in ours, the laws of thought in that world do differ from the laws of thought in ours. But this objection begs the question, since if the string-searching machine I described cannot think in any world (as I would argue), the nomological difference which makes it possible is a difference in laws which affect the simulation of thought, not a difference in laws of thought.

...20

This machine would get ever larger unless the programmers were allowed to abandon strings which had been rendered useless by the course of the conversation. (In the tree-searching version, this would amount to pruning by-passed branches.)

...21

I am indebted to Dan Dennett for forcefully making this objection in his role as respondent to an earlier version of this paper at the University of Cincinnati Philosophy of Psychology Conference in 1978. Dennett tells me that he advocates the neo-Turing Test conception as amended above.

...22

Much more needs to be said to turn this remark into a serious argument. Intuitions about homunculi-headed creatures are too easily manipulable to stand on their own. For example, I once argued against functionalism by describing a robot that is functionally equivalent to a person, but is controlled by an ``external brain'' consisting of an army of people, each doing the job of a ``square'' in a machine table that describes a person. William Lycan (1981) objected that the intuition that the aforementioned creature lacked mentality could be made to go away by imagining yourself reduced to the size of a molecule, and standing inside a person's sensory cortex. Seeing the molecules bounce about, it might seem absurd to you that what you were watching was a series of events that constituted or was crucial to some being's experience. Similarly, Lycan suggests, the intuition that my homunculi-heads lack qualia is an illusion produced by missing the forest for the trees, that is, by focusing on ``the hectic activities of the little men..., seeing the homunculi-head as if through a microscope rather than as a whole macroscopic person.'' (David Rosenthal made the same objection in correspondence with me.)

While I think that the Lycan-Rosenthal point does genuinely alter one's intuitions, it can be avoided by considering a variant of the original example in which a single homunculus does the whole job, his attention to a given column of a machine table posted in his compartment playing precisely the causal role required for the robot he controls to have qualia. (See ``Are Absent Qualia Impossible?'' (Block, 1980) for a somewhat more detailed description of this case.) No ``forest for the trees'' illusion can be at work here. Nonetheless, the Lycan-Rosenthal point does illustrate the manipulability of intuitions, and the danger of appealing to intuition without examining the source of the intuition. The role of most of the early objections and replies in this paper was to locate the source of our intuitions about the stupidity of the string-searching machine in its extremely simple information processing.

Another difficulty with the description-manipulator example is that it may seem that such an example could be used to show that no symbol manipulation theory of thought processes (such as those popular in cognitive psychology and artificial intelligence) could be correct, since one could always imagine a being in which the symbol-manipulating mechanisms are replaced by homunculi. (John Searle uses an example of the same sort as mine to make such a case in ``Minds, Brains and Programs'' (Searle, 1980, this volume). See my reply in the same issue.) While I cannot defend it here, I would claim that some symbol-manipulating homunculi-heads are intelligent, and that what justifies us in regarding some symbol-manipulating homunculus-heads (such as the one just described in the text) as unintelligent is that the causal relations among their states do not mirror the causal relations among our mental states.

...23

Previous versions of this paper were read at a number of universities and meetings, beginning with the 1977 meeting of the Association for Symbolic Logic. I am indebted to the following persons for comments on previous drafts: Sylvain Bromberger, Noam Chomsky, Jerry Fodor, Paul Horwich, Jerry Katz, Israel Krakowski, Robert Kirk, Philip Kitcher, David Lewis, Hugh Lacey, William Lycan, Charles Marks, Dan Osherson, Georges Rey, Sydney Shoemaker, George Smith, Judy Thomson, Richard Warner, and Scott Weinstein.

 

