Why Isn't My Pocket Calculator a Thinking Thing?

Larry Hauser

Minds and Machines, Vol. 3, No. 1 (February 1993), pp. 3-10.

Introduction

The problem of Other Minds is not just about how one knows that other
people are thinking or intelligent beings; it is also about how we know that
rocks and roses aren't. From professor to paramecium to paperweight is a
slippery slope; and a sense of this is one thing motivating Descartes'
notorious views about the mindlessness of lower animals. Thus he defends his
denial that any of them have any mental states at all (1646, p.208) on the
grounds that "there is no reason for believing it of some animals
without believing it of all, and many of them such as oysters and sponges are
too imperfect for this to be credible." Similarly, I suppose, one reason
for denying that any computers have any mental states at all is that there
are likewise some of them, e.g., our pocket calculators, "too imperfect
for us to believe it of them"; or conversely, it may be feared that if
any computing device is rightly credited with any mental abilities at all --
even if 'only' the ability to add -- "then the barriers between mind and
machine have been breached and there is no reason to think they won't
eventually be removed." (Dretske 1985, p.25). Note, here, that the
mental abilities we are tempted (if not practically compelled) to attribute
to computers are precisely those higher rational abilities, e.g.,
calculation, which have traditionally been held to distinguish us from and
exalt us above the lower animals; and it is ironic that so many contemporary
arguments would deny mind to machines on the basis of their lack of faculties
such as emotion and sense-perception that we share with the lower animals,
when traditional arguments sought to deny soul and mind to these animals on
the basis of their lack of faculties such as mathematical calculation which,
more and more it seems, we are coming to share with computers.

My inquiry concerns the mental ability -- to calculate -- of my pocket
calculator: call him Cal, for the sake of argument. I suppose most people's
intuitions are that their pocket calculators don't think; yet we all allow --
or speak as if we allow -- that they add, subtract, multiply, and divide. In
short, calculate. The trouble is that now, given the seemingly obvious thesis
that calculation is thinking (indeed, a paradigm case), we have premises for
a valid syllogism whose conclusion contradicts 'our' original intuitions:

    Calculating is thinking.
    Cal calculates.
    Therefore, Cal thinks.

The premises seem not only true on their face, but obvious. The
brunt of my argument will consist in considering various ways in which this
argument might be challenged, and showing that the cost of each of the strategies proposed is insupportable. Either it turns out we ourselves can't
be said to think or calculate if our performances are judged by the
standards being proposed to rule out Cal; or else it turns out the standards
are such that it is impossible to verify whether anything or anyone (save,
perhaps, oneself) meets them. The main objections I consider attempt to show this syllogism commits
the fallacy of four terms on the grounds that "calculation" is only
equivocally predicable of Cal; that Cal doesn't really calculate
because Cal's performances lack features essential for genuine cognition or
calculation. These features generally fall under the headings of four
traditional "marks of the mental": consciousness, autonomy,
intentionality, and unity. Of these, the last two underwrite the most serious challenges to the claim that Cal calculates.

Consciousness

The argument from consciousness holds that the essence of thinking is
its subjectivity: there must be something that it's like to be a pocket
calculator for the calculator, or else it's not really calculating.
The trouble with this objection is that there is no way to substantiate just how far (beyond myself) this mysterious 'inner light' of consciousness
extends. This "other minds" reply does not, as Searle (1980, p.422)
jeers, "feign anesthesia": it only requires critics of AI to
consistently apply the criterion they propose to disqualify computers' claims
to think. What the Other Minds Reply says is that if consciousness were
our basis for deciding whether any intelligent-seeming thing was really a
thinking subject, then one should have skeptical doubts about other
minds. So, if we don't, and shouldn't, seriously entertain such doubts, this
seems to show that we don't (or shouldn't) appeal to consciousness to decide
what is and isn't thinking. The general argumentative strategy is that no proposed criterion of
thinking is acceptable if its application leads to the conclusion that
people don't think, or that we have no way of telling this, in cases where we
think we know well enough that they do. The demand that criteria proposed be
applied consistently to human and machine, and not selectively to machines, is unexceptionable: otherwise one begs the question.

Autonomy

Autonomy objections, like the Objection from Consciousness, touch some
deep chord; and such objections -- that computers lack freedom or wills of
their own, that they "can only do what we tell them" -- are among
the most frequently heard arguments against artificial intelligence.
Autonomy, as a criterion for distinguishing genuine from apparent cognition,
faces problems akin to those which arise for consciousness. If one appeals to
introspection -- if our awareness of and basis for attributing free will or
autonomy is supposed to be phenomenological -- then autonomy objections
inherit all the problems of the Objection from Consciousness: it would be
impossible to know (or even have justified belief) that anything or anyone
(save oneself) really is a thinking subject of mental states. On the other
hand, if we reject the phenomenological criterion of autonomy, as I suspect
we should, the Autonomy Objection becomes even less supportable. With
consciousness of autonomy as the criterion of autonomy, it seems we can never
know that anyone else has it: without consciousness it seems we can't
even know that we ourselves do. Note that the burden of the Autonomy Objector
here is not just to show that there are free acts in some strong
libertarian sense -- though this would be onerous enough -- but to show that
certain acts of ours are free in this sense, and no acts of computers
are, or (presumably) ever could be. I have no idea how -- without appeal to
one's introspective sense of freedom as evidence of actual freedom -- one
might propose to discharge this burden. If a conscious or introspective sense of freedom is supposed to
evidence freedom, the Autonomy Objection can bear no more weight than the
appeal to consciousness it presupposes. Yet without such appeal, our own
claims to be thinking could be no better grounded (and might be worse)
than libertarian metaphysical doctrine. Yet my beliefs that I have beliefs,
desires, and such -- even my belief that you do -- seem nowise so dubious as
that. Intentionality A third line of objection to the claim that computers generally -- and
What Dretske takes to be the missing ingredient -- what we have that
computers lack -- are causal connections between the signs and the things
they signify. Put crudely, the difference between my contentful belief that
dogs are animals and a computer's 'representation' of this same
information -- say, by storing in RAM a Prolog clause that says animal(X) :- dog(X) -- is that my representation came to be, or could be, elicited by the
actual presence -- the actual sights and sounds -- of dogs. It is these
perceptually mediated connections between my tokenings of the English word
"dog" and actual dogs that makes that word signify those animals
for me; and it is for want of such connections that computer representations
-- tokenings, say, of the Prolog atom dog -- lack such signification
for the computer.

Here Dretske sets out an Intentionality Objection that stands or falls independently -- as Searle's formulations do not -- of appeals to consciousness.
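To fix ideas, consider a minimal sketch of the representation in question (the clause is the one cited above; the fact dog(fido) and the query are merely illustrative additions):

    % The machine's entire 'grasp' of the fact that dogs are animals:
    animal(X) :- dog(X).    % whatever is a dog is an animal
    dog(fido).              % an illustrative particular fact

    % ?- animal(fido).
    % true.

The machine will flawlessly derive animal(fido); the point of the objection is that, for want of causal-perceptual links to actual dogs, the atom dog signifies nothing to it all the same.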
Yet despite the superiority of Dretske's formulation (in providing a
positive account of signification), it has two serious shortcomings. First,
it has less force than Dretske seems to think, even in the most favorable
cases, i.e., of signs for perceptible things such as dogs or (perhaps most
favorably of all) perceptible qualities such as color or pitch. Second, not
all words or signs are as rich in sensory associations as "red" or
"dog". Where signs, such as numerals, represent abstractions such
as numbers, it seems less plausible to think significance requires causal
links to the things signified or any very robust causal-perceptual
links with anything. With regard to the first, most favorable cases, Dretske's account of
reference threatens the conclusion that perceptually deficient humans are
meaning-deprived also. Presumably (on this view) someone blind from birth
cannot signify anything by color words; nor the congenitally deaf by words
like "pitch" and "music". Yet, I believe there are good
reasons (see, e.g., Landau & Gleitman 1985) to hold that such persons can
use words to refer to such things despite their lack of perceptual access to
them. Someone blind from birth, e.g., could know that fire engines are red, and that red is a color, and even that "color" refers, roughly, to those qualities of things, deriving from their reflective properties, which the sighted see, but I don't, and which stand to vision in a relation similar to that of tone to hearing. More dramatically, on Dretske's view, how are
we to avoid the absurd consequence that most of the words in Helen Keller's
books, though signifying things to us, signified nothing (or at least much
less) to Helen Keller? Even for words with very considerable ostensive or
sensuous content, the ability to apply these words to the world on the basis
of this content seems less crucial to signification than Dretske's views
suggest. Perhaps some causal-perceptual associations for some terms in one's
lexicon are prerequisite for signifying anything by any term at all, but
Helen Keller's case seems to suggest these connections needn't be so direct
or robust as Dretske has to require to make his case against (present-day)
machine understanding. But whatever the upshot of these reflections on the most favorable
cases for Dretske -- terms ordinarily rich in ostensive or sensuous
associations -- plainly cases such as the numerals are less favorable.
Whatever weight Dretske's appeal to the perceptual disabilities of
(present-day) digital computers has against their claims to mean anything by
words such as "dog" and "red" this argument will be
impotent to establish the more general claim Dretske wants -- that none of
the symbols (present-day) computers process signify anything to these
computers -- if Cal's understanding of the numerals he processes, e.g., is
immune to such objections. How is Dretske's causal-perceptual account of
reference supposed to work here, where the referents, numbers, don't seem
capable either of being perceived or of entering into causal relations?

At this point, the only plausible move for a causal-perceptual theory
of reference seems something like this: in order to mean numbers by numerals
one must be able to apply numerals to items and events in the world, e.g. in
counting; so in order to mean two by '2', say, one must be capable of
reliably tokening '2' when presented with various pairs of objects or events. Yet
even if this is correct, and causal-perceptual links are in some sense
required for reference even to numbers, the senses of 'perceptual' and 'in
the world' here cannot be very robust. Someone in a state of total
(external) sensory deprivation might still count their breaths, or even how
many times they performed a carry in doing a bit of mental addition; and if this is all that's required, it's clear that it's not enough to rule out Cal.
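To see how thin the required linkage might be, consider a minimal sketch of such a counting routine (the predicate name and the list encoding of 'presented items' are merely illustrative):

    % count(+Items, -N): token a numeral for a sequence of presented items.
    count([], 0).
    count([_|Items], N) :- count(Items, M), N is M + 1.

    % ?- count([breath, breath, breath], N).
    % N = 3.

If reliably producing a numeral in response to presented items is all the causal-perceptual contact that meaning a number requires, a routine as thin as this seems able to supply it.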
If this is right, it means the Intentionality Objection cannot be
sustained across the board, with regard to all the symbols computers process.
Perceptually impoverished as Cal is, the numerals he processes may still signify numbers to him.

Unity

Since Plato, at least, unity has been advanced as a distinguishing
attribute of minds: the idea is that minds are not composed of parts (as
bodies are), but are rather indissoluble units; a claim that not only underwrites
perhaps the most important traditional argument for the immortality of the
soul, but has also been continually in the background of our whole
discussion to this point. Each of the various objections we considered makes
some tacit appeal to unity: each claims that disconnected from some further
mental abilities or events (consciousness, or autonomy, or intentionality), performances such as Cal's don't amount to genuine calculation or thought.

On the other hand, while this may suffice to exclude hard-wired
special-purpose devices like Cal, it is unclear how much more unity should be demanded, or why. This presents us with a conundrum: suppose my laptop computer -- call her Sparky -- were programmed to emulate Cal.

What the Unity Objection seems to require -- and offers no prospect
that I can see of providing -- is some account not only of how many
and which other mental abilities a thing must have in order to
calculate (or think), but why. If Cal follows the same addition
procedure as Sparky, and Sparky the same addition procedure as I, then it
seems to me that Cal adds if I do; and when we do,
"calculation" is predicable of both of us in exactly the same
sense, regardless of whatever further mental abilities of mine Sparky lacks,
or whatever further capacities of Sparky's are absent in Cal. Nor is it even essential that the procedures followed be the same (though the sketch below shows how such a shared procedure might look).
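By way of illustration, here is one rendering of such a shared procedure: the familiar schoolbook routine of summing column by column and propagating a carry. The digit-list encoding (least significant digit first, lists padded to equal length) is merely one illustrative way of setting it out:

    % add_digits(+Xs, +Ys, +Carry, -Sum): schoolbook column addition.
    add_digits([], [], 0, []).
    add_digits([], [], C, [C]) :- C > 0.
    add_digits([X|Xs], [Y|Ys], C, [D|Ds]) :-
        T is X + Y + C,      % sum the current column
        D is T mod 10,       % write the digit
        C1 is T // 10,       % carry the rest
        add_digits(Xs, Ys, C1, Ds).

    % ?- add_digits([9, 9], [3, 0], 0, S).   % 99 + 03
    % S = [2, 0, 1].                         % i.e., 102

Whether this routine runs in Cal's circuitry, in Sparky's software, or in a schoolchild's head, it is the same procedure; and that, on the present argument, is enough for "adds" to apply to each in the same sense.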
Conclusion

What the preceding arguments show -- I take it -- is that none of the
four traditional marks of the mental considered provides a supportable basis
for denying that Cal calculates in the same sense as you or I; i.e., I have
sought to show that our initial syllogism does not commit the fallacy of four
terms by equivocating on "calculates", its middle. I will conclude
by remarking why the argument -- at least as I intend it, and on its least
tendentious reading -- doesn't equivocate on its major, "thinks",
either. Ordinarily "think" is a generic term for any of several
different mental activities or states. According to Descartes a thing that
thinks is "a thing which doubts, understands, affirms, denies, is
willing, is unwilling, and also imagines and has sensory perceptions"
(1642, p.19); and similarly, my dictionary (Webster's New Collegiate), under
"think", mentions conceive, judge, consider, surmise, expect,
determine, resolve, reason, intend, purpose, reflect, infer, opine and
decide. In this ordinary generic sense of the term, I take it, it's
undeniable that calculating is thinking, and -- if my arguments are sound --
that my pocket calculator calculates and consequently thinks.
References

Descartes, R. 1637. Discourse on Method. Translated in J. Cottingham, R. Stoothoff, and D. Murdoch, The Philosophical Writings of Descartes, Vol. 1.

Descartes, R. 1642. Meditations on First Philosophy. Translated in J. Cottingham, R. Stoothoff, and D. Murdoch, The Philosophical Writings of Descartes, Vol. 2.

Descartes, R. 1646. "Letter to the Marquess of Newcastle."

Dretske, F. 1985. "Machines and the Mental". Proceedings and Addresses of the American Philosophical Association, Vol. 59. 23-33.

Landau, Barbara and Gleitman, Lila. 1985. Language and Experience: Evidence from the Blind Child. Cambridge, MA: Harvard University Press.

Nagel, T. 1974. "What is it Like to Be a Bat?". Philosophical Review 83. 435-450.

Sartre, J. P. 1956. Being and Nothingness. Trans. H. Barnes. Citadel Press.

Searle, J. R. 1980. "Minds, brains, and programs". Behavioral and Brain Sciences 3. 417-424.

Turing, A. M. 1950. "Computing machinery and intelligence". Mind LIX. 433-460.

Wittgenstein, L. W. 1958. Philosophical Investigations. Basil Blackwell.