You Can't Argue with a Zombie
By Jaron Lanier
It is impossible to awaken someone
who is pretending to be asleep.
- Navajo proverb1
The Zombies Among Us
It turns out that it is possible to distinguish a zombie2 from a person. A
zombie has a different philosophy. That is the only difference. Therefore,
zombies can only be detected if they happen to be philosophers. Dennett3 is
obviously a zombie.
Zombies and the rest of us do not have a symmetrical relationship.
Unfortunately, it is only possible for non-zombies to observe the tell-tale
sign of zombie-hood. To zombies, everyone looks the same.
Arguing with zombies is generally futile, of course. But I have found it to
be a valuable experience for two reasons. First, I believe that zombies are
having a significant indirect influence on cultural and political thought,
and I wish to thwart them. And then, there's a lot to be learned from zombies;
they are useful, at the very least, as conversation pieces.
Part One:
Entering Zombie Culture: Playing the Intuition Pump Game4
Zombies are concerned for some of us. They believe we are zombies too, but
that we are confused, so they produce literary devices to help us think like
zombies. For example, Dennett proposes a thing called a zagnet, which is like
a magnet in every way except for some ineffable difference. He then suggests
that consciousness (the old kind5) in people is a superfluous hypothetical
quality just like zagnet-hood, and that philosophy gets so much easier
without it. It doesn't occur to him that a zagnet like me might write its own
essay, but, then again, he's a zombie. He can only think about zagnets from
the outside.
The zombie/zagnet dialog has surpassed any other in my experience as a source
of passionate lack of communication. Only a zombie like Dennett could write a
book called "Consciousness Explained" that doesn't address
consciousness at all. Only zombies could respond with a book called
"Dennett and His Critics" which doesn't criticize Dennett at all.
What other topic could drive a fine physicist into the strange
speculations of Roger Penrose? I feel as though I'm able to occasionally
communicate across the zombie/zagnet gap, because I'm in the unusual position
of being a zagnet and a computer scientist, a sadly rare combination. It's
easy to see why computer scientists are so ready to embrace zombiehood.
Wouldn't you love it if a major school of philosophy claimed that what you
did was uniquely at the root and the apex of everything?
My Arguments with Zombies
I faced a class composed mostly of zombies at Dartmouth a few years ago. Here is what I told
them. I decided to push a tired old intuition pump just a little harder than
usual to see if a few more drops of intuition would squirt out. Some of the
zombies stirred as a result and I wondered if there might be a cure after all
for zombie-hood.
I started with the usual sort of brain-replacement yarn. Your neurons are
replaced one-by-one with silicon devices. That sort of thing. Young
zombies-in-training assume that nothing fundamental will have changed if they
are turned to silicon.
We then transferred our brains into software. Each neuron was now replaced by
a software expression and they all connected together functionally in the
same way as they did when they were mushy.
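Purely as an illustration of what "a software expression" per neuron might mean, here is a toy sketch; the SoftNeuron class and its two-cell wiring are my own invention, not a claim about real neurons or about anyone's actual simulation.

    # A toy rendering of "each neuron replaced by a software expression,"
    # assuming nothing about real neurons beyond weighted inputs and a threshold.
    from dataclasses import dataclass, field

    @dataclass
    class SoftNeuron:
        threshold: float
        inputs: list = field(default_factory=list)   # (neuron, weight) pairs
        firing: bool = False

        def step(self):
            drive = sum(weight * n.firing for n, weight in self.inputs)
            self.firing = drive >= self.threshold

    a = SoftNeuron(threshold=0.5)
    b = SoftNeuron(threshold=0.5, inputs=[(a, 1.0)])
    a.firing = True
    b.step()
    print(b.firing)   # True: the "mushy" wiring, re-expressed functionally
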
The zombies still felt at one with this proposed zombie-on-a-disk. It is worth
pausing for a moment and noting that accepting one's ontological equivalence
to some data on a disk does not necessarily banish the demons of vitalism.
Zombies might still imagine their data interacting with biological humans (as
we see in the Star Trek character "Data"). They might still turn to
the natural world for confirmation, relying on that old ritual of vitalism,
the Turing Test.
Harder core zombies are ready to leave all that behind and imagine living on
a disk in which they only interact with other minds and environmental
elements that also exist solely as software. It is here that we must ask a
question that seems obvious to me, but seems to shock zombies: What makes
this software exist? What makes the computer that it runs on exist?
There can be only one proper basis to judge the existence of computers and
software. We should be able to confirm their existence empirically, using the
same scientific method we use to study the rest of the natural world. As it
turns out, we cannot do that, for reasons that I will make clear later in
this paper. We are the only measure of the existence of computers. So the
assertion that computers and software exist is a stealthy conveyor of rampant
vitalism and mystical dualism.
How Far Can You Stretch a Zombie?
Back to my class of eager young zombies:
I asked them if it mattered to them what kind of computer their software
selves would run on. No, they replied, it doesn't matter. All computers are
considered to be equivalent by virtue of the Church-Turing Hypothesis6. If
they and their classmates were implemented on a vacuum tube computer, or on a
computer made of mechanically-linked Lego blocks, they would still feel the
occasional rush of adrenaline as a desired mate strolled by, and the agony of
a parental visit.
Zombies are obviously quite flexible. So let's build a truly unusual sort of
computer for them to inhabit. We first record a bunch of data from the
natural world, such as the trajectories in a meteor shower. We then rely on
massive quantities of computation and luck to construct just the right
computer that happens to read the meteor shower as a program, such that it
happens to be equivalent to a zombie's brain.
In other words: When a natural phenomenon, like a meteor shower, is measured,
it turns into a string of numbers. The program that runs a computer (the
object code7) is also a string of numbers, so we have two similar items. The
string of numbers that runs a particular computer has to perfectly follow the
rules of that computer or the computer will crash. But if you can find the
matching computer, any particular string of numbers can run as a program8. In
fact, for any string of numbers, you can in theory find or construct many
computers, each of which will run the same string of numbers as a different
program. So one computer might read the meteor shower and end up doing your
taxes as a result, while another might calculate racetrack odds from exactly
the same "object code". If your brain is functionally equivalent to
a computer program, there is no reason a meteor shower can't be that program,
if you take the trouble to find the right computer to run it.
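To make the "one string of numbers, many computers" point concrete, here is a minimal sketch in Python. Both machines are toy interpreters I invented for this illustration, not real instruction sets, and the number string stands in for any measured data, meteors included.

    # The same "object code" (a string of numbers) runs as a different
    # program on each of two hypothetical machines.
    object_code = [3, 1, 4, 1, 5, 9, 2, 6]   # could just as well be meteor data

    def tax_machine(code):
        """Reads each number as a deduction rule and 'does your taxes'."""
        income = 100_000
        for n in code:
            income -= n * 1_000               # each number names a deduction
        return f"taxable income: {income}"

    def racetrack_machine(code):
        """Reads the very same numbers as pairwise odds for horses."""
        return {f"horse {i}": f"{a}:{b}"
                for i, (a, b) in enumerate(zip(code[::2], code[1::2]))}

    print(tax_machine(object_code))        # one computer, one program
    print(racetrack_machine(object_code))  # another computer, another program
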
Does even the possibility9 of this computer give the meteor shower
consciousness, if only for a moment10?
Of course a zombie would probably object that this hypothetical computer
isn't functional at all. Even if it interprets the meteor shower as having
the functionality of a brain, that could only be true for a limited period of
time. Certainly after a very short while Newton and Einstein would take over again
and the brain would dissipate. Furthermore, there could be no functional
relationship with anything outside of the meteor shower. Is the meteor shower
not even a zombie?
If my meteor shower doesn't seem functional enough for you, widen your
search. Just gather more data, and find an even harder-to-design computer
that will interpret it as a mega-program that is equivalent to not only your
brain, but also to your brain's surroundings, including other brains. Widen
further to gain enough object code to implement a lifetime.
Zombies will sometimes remain unmoved by this idea because it might seem as
though we still have to do the work of building a computer in order for the
meteor shower to be considered as a program. Maybe something magical does
happen when a program is read by a real computer. Well, if you're patient
enough, you can even find a real computer hiding in a meteor shower.
Now we're going to conduct a slightly different kind of search; an even
harder search than the previous ones. This time we are looking for a computer
(which we'll call Ralph) such that, when you point it at our overworked
meteor shower, it implements an emulation of itself (which we'll call Fred),
as well as all the brains and their environment.
Once we're done there'll be two computers looking at the same meteor shower;
the one we've actually built (Ralph) and the emulation11 of it (Fred). From
an empirical point of view they are equally prime suspects for being the
"real", functional computer, and they will be observable for the
same period of time.
Time is only one aspect of the subjectivity of computers, but let's look at
it a little more closely. A Macintosh functions more-or-less predictably
during its useful life. Ralph and Fred last for less time, perhaps. But the
Macintosh will seem to be a more legitimate computer to us in part because it
matches our time frame better. We can borrow some fast moving trains from
Einstein to illustrate this idea with another image. Suppose you visit a
train yard where trains are moving at wildly different speeds. Some move
so quickly that they blur into continuity. If you start moving very quickly
to keep up with some of the fast trains, the stationary ones fall away into a
blur. Only those trains moving at about the same speed as you look like
trains. In the same way, a Macintosh user interface responds slowly enough
for us to catch the images, and the machine generally keeps running for a few
years, and this makes it sensible within a human time frame. Ralph and Fred
might be coherent for only a fraction of a second, and thus seem less like
computers to us, but if we could "move fast enough", the Macintosh
would disappear and Ralph and Fred would come into focus.
Zombies sometimes object that a computer that someone built is different from
one that is simply detected. I suppose they might believe in Ralph more than
Fred. It's a good sign when zombies think this; they are on their way to
being cured. This is a fine example of vitalism, and it warms my heart.
Let's suppose you actually did physically build Ralph, in order to bring into
focus the brains living in the meteor shower, as well as a Ralph clone (Fred)
looking at those brains. The meteor shower isn't affected if you turn off
Ralph12, so Ralph is a part of the system that is not functional. Since a
program can't tell if it's being read, it should exist equally well even if
the computer is only hypothetical, right?
I know I'm pushing the intuition pump very hard here, so let's leave Ralph
and Fred behind and approach this last point from another angle.
Let's suppose you run a more normal program (not a meteor shower) that
implements the functional equivalent of your brain, a bunch of other people's
brains, and the surrounding environment, so that you and the rest of the
brains can have lots of experiences together. (This is the condition in which
my test zombies thought that nothing fundamental would have changed; they'd
still experience themselves and each other as if they were flesh.) You save a
digital record, on the same disk that holds the program, of everything that
happens to all of you. Now the experiences "pre-exist" on the disk.
Take the disk out of the computer. Is this free-floating disk version of you
still having experiences? After all, the information is all there. Why is
this information sanctified into some higher state of being by having a
processor just look at it? After all, the experiences have already been
recorded, so the processor can do no new computation. A much simpler process
that just copied the disk would perform exactly the same function13 as
running your brain a second time.
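A small sketch of why that second run adds nothing, assuming only that the brain-plus-environment program is deterministic; the simulate function and its arithmetic are placeholders of my own, not a model of anything in particular.

    # Running the deterministic program a second time reproduces exactly the
    # record already saved to disk, so a plain copy performs the same function.
    def simulate(state, steps):
        """Placeholder for the brain-plus-environment program (deterministic)."""
        trace = []
        for _ in range(steps):
            state = (state * 6364136223846793005 + 1442695040888963407) % 2**64
            trace.append(state)
        return trace

    first_run  = simulate(state=42, steps=1000)   # the "lived" experiences
    disk_copy  = list(first_run)                  # copying the recorded disk
    second_run = simulate(state=42, steps=1000)   # running the brain again

    assert second_run == disk_copy   # no new computation was performed
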
The meteor shower arguments hint that computers might not be as objectively
present as zombies like to assume, but the question needs further
examination.
I claim that to a Martian, a Macintosh is the same sort of thing as a toaster
or a rock. In order to perceive information, you have to put it in a cultural
context, and that re-opens the can of worms that zombies have been trying to
solder shut. Could "information" just be a shell game that hides
the nut of old-style consciousness?
Can We Build a Computer Detection Machine?
Now, a zombie might object that there must be some method for the objective
analysis of a proposed computer-object that would confirm its computer-hood.
If you don't want to rely on a mushy brained human's intuition, you'll have
no choice but to bring a confirmed computer on stage to analyze your proposed
computer. (If you do rely on a mushy human, you've retreated to vitalism.) Even
if you somehow came up with a confirmed computer, you would have to have a
rigorously objective answer to the question: What would make an object a
non-computer?
If we want a computer detection test to be rigorous, we should assume that we
are attempting to discover a truly alien computer, or that an alien is trying
to discover our computers. Our first problem is that if we were presented
with an alien computer we might not be able to figure out where to stick our
probes, but even worse, we might not have a clue that we should try.
Any purported computer we study can be fully understood as a
non-computational complex phenomenon. Computer science is unnecessary to
explain the behavior of computers. Computers are simply pieces of the
physical universe obeying physical laws. Everything a specific, physical
computer can be observed to do14 can be understood without having to think of
it as a computer. What makes a computer a computer is our way of thinking
about its potential, not its observed actuality.
This is not a trivial point. Computers have been given the ontological kid
glove treatment. Relativity is necessary to explain the observed universe,
while computer science is not.15
If we designed a test that could detect an alien computer, then that test
could also find computers and their programs wherever we chose to look16
(even in a meteor shower), so long as we looked hard enough. This is not what
you call a useful detector.
If computers are to definitely exist, we should know that we could someday build
an instrument to find them. Scientific instruments can lack accuracy, but they
must be able to distinguish between phenomena. If there were no conceivable
device that could distinguish heat from other phenomena like gravity, for
example, heat would not be a useful concept, and science would instead pursue
some parameter that could be measured. No one
has thus far been able to define a notion like "complexity"
sufficiently well that we could someday hope to build a complexity detector.
Even if we got to that point, a computer detector would be unable (for the
reasons stated above) to distinguish proposed computers from other examples
of "ordered complexity" or "intricately bundled
causality"17.
What is amazing to me is that even zombies on the whole aren't quite willing
to drop the idea of something special called "consciousness". It's
just that they want zombie-think to own it. Such zombies start using the word
"emergent". They say that computers differ from one another in that
some of them have this consciousness thing "emerge". David Chalmers
argues that all action in the universe is at least a little computational,
and the right computations give rise to consciousness, so consciousness is
everywhere, but in varying degrees. I like the hippie-ish egalitarianism of
it; that even a thermometer gets to be a "little" conscious.
But where are the computers in Chalmers' universe? A computer-detector would
not find a single computer in a test location because it would always find an
infinite18 number of them. As soon as we cordoned off a piece of this
universe and called it a particular computer, we'd also have identified the
infinite superset of universe pieces that contained it, as well as the
possible subsets that were functionally equivalent (by some measure or other).
The conclusion has to be that the choice of where to cordon is arbitrary.
Furthermore, and most importantly, if we are interested in computers that are
similar but not identical to the one first cordoned, then we can always
identify an even huger swarm of computers that are co-present. If computation
gives rise to consciousness according to Chalmers' scheme, then each emerged
consciousness would in fact carry with it an infinite swarm of
consciousnesses that contained it, or almost contained it. All sense of
discreteness and locality for consciousness is lost if it is only a byproduct
of computation and computation is ubiquitous. Hypothesizing an infinite cloud
of slightly different consciousnesses floating around each person seems like
an ultimately severe violation of Occam's razor.
Other zombies might argue that computers are mathematical objects, so they
don't need to be confirmed empirically. I would argue that any particular
computer does need to be confirmed. Otherwise we are in a state in which all
possible computers always exist, even more of them than would exist in
Chalmers' universe.
There are other possible arguments19, perhaps, but I think ultimately an
honest zombie will have to accept that any particular observation of
computation is a non-confirmable human interpretation of events that can be
more objectively described in other ways.
When Zombies Get Weird
The sample argument above will not be unfamiliar to hard-core zombies and
their antagonists. What interests me most is the ultimate position that
zombies arrive at when this argument is driven to its conclusion. After
abolishing ontological distinctions based on human epistemological
difficulties, zombies invent new ontologies for the benefit of computers.
Inside every zombie is a weird new kind of dualist.
The new weird dualism can take a number of forms, distinguished by the choice
of meaningless code words, such as "emergent" or
"semantics". But the hallmark of zombie dualism is the belief in
the independent, objective existence of information and computers.
I am certainly not trying to convince zombies that they exist in some special
way, that they might have a sense of experience. By now I know better. What I
would like them to consider, rather, is that they are granting to the process
of computation not only a type of indisputable objective existence that it
probably doesn't have, but also a magical ability to confer ontological
properties onto yet other objects. I'd like zombies to consider that this
purported ability is even more bizarre and insupportable than the
phenomenological reportage of non-zombie experience.
Zombies believe in something called information, and also in the existence of
objects called computers. Zombies are so quick to criticize the notion of
old-style consciousness as being the worst sort of murky, sentimental
dualism, but they themselves are zagnetizing the universe with these new
ineffable concepts.20
If the universe were populated solely by zombies, there would be no
computers. Computers cannot make each other exist, because they cannot even
recognize each other.
There also would be no information. Information is another thing that only
exists by virtue of experience. (My old catch phrase: Information is
Alienated Experience.) Zombies owe us zagnets a great debt for making their
information exist.
Part Two: The Zombie Effect
If I'm Not a Zombie, What Kind of Science Do I Like?
Zombies probably think that I am a mystical dualist of some stripe. I can
accept that21, but I don't act like a mystical dualist. I am enthused by
progress in neuroscience. I don't foresee any brick wall that would prevent a
scientific examination of the minutiae of thought and behavior. In fact, I'm
thrilled to think about brains. I must appear to be a monstrous anti-zombie
to the zombies; someone who claims to have ineffable subjective experience
and yet acts just like them.
Let's imagine a society in the future in which neuroscience has gotten as
good as, say, quantum electrodynamics is today, that is to say essentially
complete within its framework. Would every educated person be a zombie? Would
the consciousness debate still exist? Would it have any practical
consequences?
This is an entertaining future to imagine. Suppose you could buy a brain
self-examination unit where you could stimulate any firing pattern in any set
of your own neurons at will. My working assumption is that I would experience
a complete correlation between the objective state of my brain and the
content of my experience. (I think this machine would be fun, when used
gingerly. I want one!22) And of course that means that inside every zagnet's
brain would be seen some little gizmo comprising the thoughts of
self-experience.
Even in this hard zombie future, any person who experiences him/herself as a
non-zombie would find that the philosophical issues remain unchanged. The
study of the brain might yield a perfect theory of what is experienced, but
not of experience itself. To a zombie, of course, the philosophical problem
didn't exist before and still will not.
So, if the consciousness problem has little consequence and will not yield to
further physical study, why do zagnets like me care about it? I might ask the
same question of some of the zombies.
Zombies delight in annoying us zagnets and seem to have an irrational need to
wield rationality as a prank. Titles like "The Astonishing
Hypothesis" and "Consciousness Explained" do not reflect the
falsifiable aspects of their authors' work, but rather are intended to assert
an authority over the reader's own intuition. Arrogance is always a bad
strategy in science. In philosophy I suppose it's fine23, but here we are
dealing with a combination of the two and we should work out some practical
etiquette.
Teaching Zombies Good Behavior
Zombies and zagnets often do not seem to speak the same language. They
stumble over each other like an anthropologist's case study in cultural
misunderstanding. Most zagnets in the arts and humanities cannot even imagine
the hard zombie positions taken by writers like Dennett. Zombies often cannot
even find a stated position in zagnet rhetoric. Marvin Minsky, a zombie I'm
very fond of, once said to me that he didn't like non-science fiction
literature, because nothing seemed to happen. One gets the same sense reading
Dennett's appraisals of thinkers like Searle and Nagel. He doesn't just
disagree, he doesn't even agree that anything's been said.
I can think of two consequences of the consciousness debate that matter to me
currently. One is that it would be pleasant for non-zombies to have a
philosophy that does not require that we ignore our own experience of
existence. The other is that zombies have come up with a batch of metaphors
that are radiating out in the world at large and are having an effect on
politics and culture.
An overly dry metaphysics
inevitably trickles down to a narrow reductionism in many practical
instances, even though in theory it need not. An example is found in the
design philosophy of computer systems. Convinced by zombies of the
ontological equivalence of people and computers, a generation of software
designers is asking users to shrink to the level of so-called
"intelligent agents"24. For another example, we have the narrow
application of Darwin,
as he's been zombified by Dawkins and Dennett, to human affairs in Robert
Wright's The Moral Animal, and even in a degenerate work like The Bell Curve.
Then there is the strange abrogation of human agency in favor of algorithms
that is found in some current political rhetoric. Newt Gingrich believes that
it is counterproductive to try to do anything about problems, such as funding
basic research or feeding poor children, because only the algorithm of
economic/technological evolution can achieve anything. The one exception he
is willing to propose is to give computers to the poor.
If indeed we could be certain that the zombies were right, we could endure
the bad metaphors that they are radiating, but as long as there is one zagnet
around (count me in!) there is uncertainty. Should philosophers be held
partly responsible for the damaged reflections of their ideas that are
hammered into an approximate fit with the practical world by people who have
to make practical choices? Absolutely. If not, then what kind of
responsibility for anything can philosophers ever accept?
Philosophy will take on an even more heightened importance if an era of
successful neuroscience comes into being. Definitions of personhood will be
required at every turn to guide the design of our medicine, our information
systems, and our laws. It is critical that this philosophy acknowledge the
whole of the human experience, and not just that of the zombies.
It is true that if zombies have a weakness for excessive reductionism,
zagnets are prone to succumb to superstitions which can be equally pernicious.
This is what zombies seem to fear most, that zagnets will insist on believing
in fantastic creatures like homunculi25. Some of us zagnets probably will,
but not all of us. A zagnet can accept the state of the brain as being the
same thing as the content of the mind and still wonder in awe about the
nature of experience itself.
There is a tightrope to be walked, between reductionism and superstition. I
propose that if we can keep on that tightrope in our philosophy, the
metaphors we radiate will be more useful and beautiful.
Part Three:
What Might Philosophy Be Like After the Aliens Arrive?
It is easier to criticize than to construct a new opinion worthy of defense.
But since we do seem to be epistemologically impoverished creatures prone to illusion,
criticism is important. Nonetheless, one of the rewards of my persistent
arguments with zombies has been a thumbnail sketch of a way to think about
consciousness that might perhaps be satisfying to both zombies and zagnets.
I'll state it here. It's a sort of a cross between Plato and a radio dial.
Despite my demonstrated skepticism about the existence of computers, I am
clearly able to recognize and use them. I am even using one to write this
paper. Something curious is going on with computers, and I think it is
exactly as curious as the existence of language.
Here's my thought: Consciousness is the choice of which abstractions we
experience, out of an infinite number of ways of slicing the continuity of
the universe. It's the feeling of existence that is the choice26.
A zombie might object that it is your neurons and what they're up to that
make this choice. My counter-argument has already been stated: Brains don't
exist on their own any more than computers do. It is layers of abstraction,
known sometimes as concepts, platonic forms, cultural context, or words,
which make a brain, or the thought-processes in it, exist.
So consciousness is like a radio with a dial that might be marked
"qualia" or "semantics", that selects from an infinity of
equally available "layers of abstraction". Without the cosmic
qualia dial, a brain, or a thought, is just another utterly arbitrary slice
of the continuous causality that is the universe.
Levels of abstraction, like language and computation, are only singled out
for existence because we experience them. Or rather, out of an infinity of
equally valid layers of abstraction that potentially co-exist, experience
illuminates a specific layer. Our brains and all their activity can be fully
understood using only the ideas of physics (unless something really weird and
mystical is going on). We don't need to be able to detect words or computers
to explain every little thing that people and machines do. One brain, or many
brains, can be understood as just another vector field27.
Importantly, this does not mean that any possible layer of abstraction can be
applied to any piece of the universe. If that were the case, then the
universe itself wouldn't have a function in this philosophy. The universe in
this theory is one that does have some particularity; an affinity for a
particular infinity of possible slicings. Without consciousness it wouldn't
be sliced, however; it would just be a continuity.
Even the neuronal activity that corresponds to our experience of abstractions
like "abstraction" and "computation" can be completely
understood without reference to those concepts28. We don't need chemistry,
biology, or psychology to "understand" a thought scientifically and
empirically, but we do need them to recognize a thought.
Nature doesn't have nouns, and indeed the more nouns we make use of in our
science, the less complete and accurate our science becomes. Current physical
theories like QED that conceive of the world as a near-continuity are the
ones that are most successfully verified by experiment. They also have the
greatest universality. Every time a noun is added, such as "atom",
"molecule", "cell", or "organism", science
becomes more convenient for us, but less generally applicable to the universe
(as well as generally less accurate when tested empirically). The reason for
this is that nouns, while necessary for science, are also arbitrary.
Words and Thoughts: Should Zagnets Discriminate?
Can I believe that thoughts are no more than patterns of activity in a brain
and still be a Zagnet? The "qualia dial" formulation presented here
is optimistic about the progress of neuroscience and doesn't assume that any
veil will indefinitely continue to shroud the mechanisms of thought. But it
is that very acceptance of the ontological equivalence of thoughts and
ordinary objects that forces us to ask why we experience these particular
thoughts and things and not some other slicing of the universe.
I once read a satirical piece in which aliens came to Earth and decided that cars
were intelligent life forms that made use of disposable guidance systems
called "people". The fable would seem to be, at this point,
obsolete. It joins the "Chinese Room" and many other zagnet thought
experiments in its anachronistic treatment of the human head as a black box.
Modern zagnets will need a new collection of thought experiments that assume
advanced neuroscience complete with instruments that render the inner
workings of the brain both visible and comprehensible. If we push this
framework far enough, it turns around to favor zagnets.
Let's join some alien scientists with superb measurement equipment that makes
the human brain appear to be transparent, with all neuronal states exposed.
Furthermore, our aliens have ultra-fast computers tied in to this equipment
that perform generalized pattern correlation algorithms on everything that is
measured.
Can these well-equipped imaginary
scientists detect human language? We think language is the most obvious
interpretation of what our brains do together because it's the interpretation
we experience. My claim is that if truly unbiased aliens were observing my
neuronal patterns as I write about language, they would lack the reference
point of my experience to choose from the infinity of interpretations
available. Even watching my neurons as I write these sentences wouldn't give
them a clue about which of the infinity of available layers of abstraction is
the one I am experiencing. So, for instance, they wouldn't necessarily slice
the world up into words and objects the way we do. They might think
vertically and sort all phenomena they came across in strange alien/Platonic
terms, according to connectedness and shape, for instance. In this case they
might fail to distinguish what we'd call a real chair from a neuronal thought
of a chair.
A thought of a chair and a real chair are both no more than temporary
perturbations in the vector field. To an alien they might have more in common
than most perturbations. Each might only be noticed (by an alien) in conjunction
with the other, and they reflect each other in a way that might place them in
the same Platonic class, if aliens happen to think that way. They are
distributed in different ways in space and are stable for different amounts
of time, but my aliens are pretty weird and these differences seem trivial to
them. It's a toss up which they would "single out" first; an
otherwise unconnected set of transient physical objects that we would call
"chairs", or the distributed correlation across many transient brain-objects
that constitutes the word "chair". And they might never detect
either because neither are special.
In general, whenever discreteness enters a definition of a thing, objectivity
is lost. The discretely existing version of what a chair is, the everyday
definition "chair", does not have objectivity. Neither do neurons.
The Zagnet Pitch
I've been framing this idea as I would in introducing it to zombies, because
I expect them to be the toughest customers29. I'd introduce the concept to a
zagnet in this way: Phenomenological experience has to have some specificity.
This usually leads zagnets into trouble because they end up imagining a
dualistic alternate plane of existence where the specificity is held. The
"qualia dial" avoids that problem because it adds no content at
all. All the particulars, including all the details about how a qualia choice
"feels" are held in the physical world, particularly in neuron
states. It is the dial, however, that has singled out a particular slicing of
the universe, in which we experience the neurons in our particular brains,
instead of their momentary coherence with a plate of pasta, for instance.
The qualia dial validates zagnets while still letting the universe exist
independently. Zagnets frequently end up having to deny the existence of the
objective universe in order to exist themselves. Sometimes, to get around
this problem, zagnets propose that consciousness is a part of the natural
world, just not the part that zombies are competent at observing. Taking this
approach, zagnets can run but they can't hide. Eventually, some grandson of
Dennett might be insulting Penrose-style zagnets with quantum measurement
devices and Searle-style zagnets with group-mind detectors even though today
we believe such devices to be impossible.
The qualia dial gives both subjectivity and objectivity their due.
Qualia Dial Conclusion
In brief: Abstractions are superfluous to the world but they are experienced
by zagnets anyway (I can confirm this). Experience is the mooring of the
floating, ephemeral world of words, brains, and computers.
I have found this philosophical framework to be helpful because it not only
acknowledges experience but gives it a function. At the same time it doesn't
propose that anything ineffable should prevent scientific inquiry about the
brain and mind. And, it provides a constant motivation to consider alternate
frameworks of thought, alternate layers of abstraction. It also avoids a
Cartesian trap door with homunculi peering out, because it conceives of
experience as an epistemological necessity that is mysterious, certainly, but
that does not require any additional mechanism inside it. It's a clean kind
of dualism that doesn't mystify the empirical world, ever. It doesn't grant
excessive primacy to language. And it conforms to the experience of being
alive.

Footnotes
1 At least I'm told it's a Navajo proverb by John Perry Barlow.
2 Zombies have no internal experience. They are unconscious, but give no
obvious externally measurable evidence of that fact. Zombies have played a
distinguished role as fodder in the rhetoric around the mind/body problem and
consciousness research. There has been much debate about whether a true
zombie could exist, or if internal subjective experience inevitably colors
either outward behavior or events in the brain in some way.
3 Daniel Dennett, author of Consciousness Explained and many other
zombie manifestos.
4 Dennett calls his thought experiments "intuition pumps".
5 The meaning of the term consciousness has been subjected to a tug-of-war
lately. It used to mean "subjective, ineffable experience", and now
it might mean "a part of a program that models other parts and can
exercise executive control". I like to use the word
"experience" to refer to the subjective experience of experience,
which is the thing that makes consciousness into a hard problem.
6 The term "computer" can mean a number of things. For the purpose
of this paper, "computer" will mean a practical object that can
exist, such as a Macintosh. The Church-Turing Hypothesis suggests that there
is a ceiling of capability above which no computer made of ordinary materials
can rise. This ceiling is defined by an "ideal computer", called a
Turing Machine, which is like an ordinary computer, but with infinite memory.
The hypothesis is treated, in general, as a truth by computer scientists.
Penrose and others are interested in quantum computation because it might
result in a computer that does more than a Turing Machine can. The
Church-Turing Hypothesis "trickles down" in most debates about
consciousness to a similar assertion that as finite computers become very
large, they should be treated increasingly as being functionally equivalent
to each other. This is why the brain is seen by many as a large computer. If
either the pure or trickled hypotheses turn out to be false, nothing in these
arguments really changes; the bar is simply raised to a new level
corresponding to the new ideal computer.
7 "Object code" is the kind of program that a computer operates
from, as opposed to source code, which is written by people. Source code has
to be converted into object code before a computer can do anything. Each
different kind of computer uses a different, incompatible type of object
code. DNA is understood by some biologists as a type of object code. A
computer without any object code to run is inert, as is a specimen of code
for which the proper computer cannot be found. I am here suggesting that any
arbitrary piece of nature might turn out to be object code for some possible computer.
8 Two different pieces of object code can have exactly the same effect, and
are said to be two different implementations of the same program. From a
functionalist viewpoint, the two are identical. For example, if two versions
of a program behave identically on Macintosh and Windows machines, there are
then two different pieces of object code that are functionally equivalent.
From a functionalist standpoint, there could be many different pieces of
object code, running on the same or different computers, that could be
equivalent to your brain. I am here suggesting a particular one, which
happens to be a meteor shower.
9 You could trivially construct a computer to treat any sufficiently large
data set as object code to emulate any program simply by including a big
"lookup table". A lookup table would simply map whatever data you
found into the data you want (which in this case would be object code to run
your brain). This feels like cheating. It seems as though it isn't magical
enough. To get magical, we'd need to construct a computer that doesn't have
any information built into it that reflects advance knowledge of your
brain. Fine. Then you have to search through the space of possible computers
(defined here as finite state machines) until you find the first one that
works properly. Finding such a computer is similar to cracking a very, very
large cryptographic code. It might not be practical, but it is theoretically
possible. If zombies wish to dispute this, they'll have to join ranks with
Penrose and seek a fabulous quantum element in the brain that would cause it
to evade such a search.
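Both constructions in this footnote can be sketched in a few lines of Python; the data, the target object code, and the family of candidate machines are all invented for illustration and carry no significance of their own.

    # Construction 1: the "cheating" lookup table. It is built with advance
    # knowledge of the target object code, which is exactly the problem.
    found_data   = [7, 7, 2, 9, 0, 3]        # e.g. digitized meteor positions
    desired_code = [1, 0, 1, 1, 0, 1]        # the object code we wish to "find"
    lookup_table = {(i, d): desired_code[i] for i, d in enumerate(found_data)}

    def lookup_computer(data):
        return [lookup_table[(i, d)] for i, d in enumerate(data)]

    assert lookup_computer(found_data) == desired_code

    # Construction 2: the honest, impractical version -- enumerate a family of
    # candidate machines with no built-in knowledge until one happens to read
    # the found data as a program with the wanted behavior.
    from itertools import count

    def candidate_machine(key, data):
        """The key selects one machine out of an unbounded family."""
        return sum(d * (key + i) for i, d in enumerate(data)) % 1000

    wanted_behavior = 193                    # stand-in for "acts like your brain"
    for key in count():
        if candidate_machine(key, found_data) == wanted_behavior:
            print(f"machine #{key} reads the meteor data as the wanted program")
            break
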
10 There could be many different computers that each interpret the same
meteor shower data as a different brain, thus giving this arbitrary bit of
nature a very rich inner life.
11 An emulation is a program running on a computer that simulates the
existence of another computer. For instance, Macintosh computers can run
emulators that seem to be IBM-type machines "living inside" the
physical Macintosh. A Macintosh could also certainly run a program that
simulates a copy of itself. There is a potential problem of an infinite
regress, of course, but that can be easily avoided in my example by having
the emulation be incomplete; it will not include a further interior emulation
of itself.
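A minimal sketch of that incomplete self-emulation, with the machine modeled, purely for illustration, as a Python function whose interior copy is told not to spawn a further copy:

    # "Ralph" reads the data and also runs an interior emulation, "Fred."
    # Fred is incomplete: he does not emulate a further machine inside himself,
    # which is how the infinite regress is cut off.
    def machine(data, emulate_interior=True):
        result = sum(data)                    # stand-in for "running the brains"
        if emulate_interior:
            fred = machine(data, emulate_interior=False)
            return {"ralph": result, "fred": fred}
        return result

    print(machine([3, 1, 4, 1, 5]))   # {'ralph': 14, 'fred': 14}
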
12 It might be "affected" if you turned off your measuring
instruments, but it will not be perturbed by the status of your computer that
runs the data as a program. Even Schrodinger's cat wouldn't be affected by
THAT computer.
13 If you try this argument on zombies this is the point at which they
suddenly renounce functionalism.
14 I'm assuming that our Martian's instruments can record the internal states
of the transistors in the computer's chips.
15 Other fields of science like chemistry and biology are also not needed to
explain the observed universe, but those frameworks of understanding are
recognized to function only within limited parameters. No one would claim
that chemistry alone can explain the Sun's source of energy, for instance.
Because of its limited scope, chemistry isn't ontologically challenging while
physics, being necessary, is. I believe that this is the reason so many
physicists end up as zagnets.
16 The secret to finding functioning computers (not just free-standing programs)
wherever you look is in choosing pieces of the universe which are exerting
influence on one another over time (this isn't hard). For instance, in my
large meteor shower, all the meteors exert some gravitational pull on each
other, so they are causally linked. If you searched hard enough, you could
find a computer which read the relative motions of the meteors over a
specific period of time as a record of the changing states of many minds, and
the process of communication between them. In this case, an alien could not
only find a computer program in a meteor shower, but a tangibly functional
one.
Zombies sometimes object to this argument by saying that a "real"
computer doesn't have to know what it will do in advance, while my
constructed computers do. This is a remarkable argument, because it asserts
that deterministic computers exercise a mystical kind of free will. Or
Zombies might argue that computers are different from meteor showers because
they have a special, practical relationship with their surroundings. These
kinds of arguments are touching because they are a rehashing of the most
sentimental old zagnet arguments for the specialness of people.
17 It should be pointed out that many objects which aren't needed for an
understanding of the universe can still be detected by instruments. An
example would be chemicals (since chemistry is a theoretically unneeded,
though immensely practical, layer of abstraction above physics). This
suggests an ontological spectrum reflecting the dispensability of things.
Chemicals could be said to "exist" more than computers do, and
energy to exist even more, since it is both measurable and a less dispensable
concept.
18 I've decided to use the word "infinity" in the vernacular sense
in parts of this essay to make it more accessible to non-technical readers.
It should be taken to mean "unbounded".
19 Zombies, changing the rules of the game, might suggest that we could
detect a computer objectively by redefining it as the "best
fitting" or "most efficient" finite state machine to explain
the behavior of a selected piece of the universe (in this case the piece we
call a computer). Accordingly, aliens would recognize our computers because
our interpretation is in fact the best one available, and all who seek will
arrive at approximately the same point of view. I think schemes such as this
are really only hiding some step in which human guidance would be needed (not
because humans have the best point of view, but because we don't). Obviously
there is the matter of selecting a piece of the universe, which in itself
might be the conveyor of "semantics". Even if the aliens can choose
the right slice without human assistance, this problem is different from the
meteor shower example above. In that problem, we searched for a particular
computer, but in this one we'd have to find the best possible computer that
included as much as possible of the meteor shower. In other words, we'd have
to look at all of the computers hiding in nature instead of just finding one
of them. This gives us an unbounded problem instead of a large finite one. It
is just like hoping for an algorithm that you could feed a bunch of data into
and then be rewarded with the best possible scientific theory to explain the
data.
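A sketch of what such a "most efficient machine" search might look like, assuming, as the zombies would have to, some fixed way of ranking candidate machines by size; the family of counter machines here is invented purely for illustration.

    # Rank candidate machines by a simple size measure and accept the first
    # that reproduces the observed behavior of the selected piece of universe.
    observed = [0, 1, 2, 0, 1, 2, 0, 1]      # behavior of the chosen slice

    def counter_machine(modulus):
        """A tiny family of candidate machines: count steps modulo `modulus`."""
        return [i % modulus for i in range(len(observed))]

    def best_fitting_machine(max_size=50):
        # Smaller modulus counts as "more efficient," by fiat.
        for modulus in range(1, max_size + 1):
            if counter_machine(modulus) == observed:
                return modulus
        return None

    print(best_fitting_machine())   # 3 -- but only because we chose this slice
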
20 I was both bolstered and disappointed (I wanted to publish this idea
first!) to learn that Searle has also argued that computation is not
intrinsic to nature, in "The Rediscovery of the Mind". Searle's
position is actually a little different from mine, in that he doesn't entirely
dismiss the idea that some kind of computer could have an objective existence
in the right context, and he doesn't view computers as being similar to other
phantoms like language.
21 And I would say they're even worse dualists. My dualism is cleanly defined
by the existence of two different epistemological channels, the empirical and
the subjective. Theirs is cloaked in weird fantasies of imaginary objects
like information with undefined properties like "semantics".
22 I am told by my friends who have experimented with psychedelics that they have
experienced this correlation, where every aspect of experience is radically
altered by changes in the physical brain. What is notable to me is that
experience itself continues during these radical "trips". This is,
once again, why I choose to use the word "experience" instead of
"consciousness". Consciousness is something which is said to exist
in altered states, whereas experience is a thing, as I understand it, without
state.
23 As this essay demonstrates.
24 See my essay "Agents of Alienation".
25 Or Penrose's quantum computations.
26 I'm not suggesting a "free-will" or conscious kind of choice. It
is rather an implicit choice that has been made in the act of perception.
27 Vector fields are the mathematical way of expressing the continuous aspect
of the universe.
28 Does subjectivity disappear if you're thorough enough? This is what some
zombies believe. If enough well-instrumented alien scientists studied enough
situations on Earth, would they eventually weed out, perhaps relying on an
evolutionary process, all of the possible but more awkward interpretations of
what's going on here? Would they eventually "parse" our world the
way we do, into people with brains using words to refer to objects, because
that interpretation is the easiest? This is similar to the idea addressed in
footnote 19. I would argue that aliens who learn to think like us must have
cheated and gotten a hint or two to find their way.
Some recent speculation concerned with "Complexity", coming from
Stuart Kauffman, Brian Goodwin, and others, suggests that forms in the universe
are limited to a far smaller variety than we might have thought, following
the contours of a new class of mathematical objects, such as the
"catastrophes". What I currently think is that even if this turns
out to be right, it doesn't mean that the number of possible "layers of
abstraction" would be similarly reduced. A limited variety of
territories does not imply a limited number of maps. In fact, this thinking
might be very compatible with the idea of the objective universe stated here
("an affinity for a particular infinity of possible slicings").
29 Zombies will probably ask whether there is one dial per person, or one for
the universe. I would reply that dials exist in "epistemological space"
not physical space, so that question is not sensible.