One of the most engaging after-lunch conversations of my life was when Robin Hanson sat me down and gave me the cryonics version of the Drake Equation. The Drake Equation multiplies seven variables together in order to calculate the number of civilizations in our galaxy with which communication is possible. The Hanson Equation, similarly, multiplies a bunch of factors together in order to calculate how many expected years of life you will gain by signing a contract to freeze your head when you die.
During his presentation, I noticed that Robin spent almost all of his time on various scientific sub-disciplines and the trajectory of their progress. On these matters, I was fairly willing to defer to his superior knowledge (with the caveat that perhaps his enthusiasm was carrying him away). What disturbed me was when I realized how low he set his threshold for success. Robin didn’t care about biological survival. He didn’t need his brain implanted in a cloned body. He just wanted his neurons preserved well enough to “upload himself” into a computer.
To my mind, it was ridiculously easy to prove that “uploading yourself” isn’t life extension. “An upload is merely a simulation. It wouldn’t be you,” I remarked. “It would if the simulation were accurate enough,” he told me.
I thought I had him trapped. “Suppose we uploaded you while you were still alive. Are you saying that if someone blew your biological head off with a shotgun, you’d still be alive?!” Robin didn’t even blink: “I’d say that I just got smaller.”
The more I furrowed my brow, the more earnestly he spoke. “It all depends on what you choose to define as you,” he finally declared. I said: “But that’s a circular definition. Illogical!” He didn’t much care.
Then I attacked him from a different angle. If I’m whatever I define as me, why bother with cryonics? Why not “define myself” as my Y-chromosome, or my writings, or the human race, or carbon? By Robin’s standard, all it takes to vastly extend your life is to identify yourself with something highly durable.
His reply: “There are limits to what you can choose to identify with.” I was dumbstruck at the time. But now I’d like to ask him, “OK, then why don’t you spend more time trying to overcome your limited ability to identify with durable things? Maybe psychiatric drugs or brain surgery would do the trick.”
I’d like to think that Robin’s an outlier among cryonics advocates, but in my experience, he’s perfectly typical. Fascination with technology crowds out not just philosophy of mind, but common sense. My latest cryonics encounter was especially memorable. When I repeated my standard objections, the advocate flatly replied, “Those aren’t interesting questions.” Not interesting questions?! They’re common sense, and they go to the heart of the cryonic dream.
Blog posts to giggle over; read the comments too.
Robin Hanson responds to Caplan:
Bryan, you are the sum of your parts and their relations. We know where you are and what you are made of; you are in your head, and you are made out of the signals that your brain cells send each other. Humans evolved to think differently about minds versus other stuff, and while that is a useful category of thought, really we can see that minds are made out of the same parts, just arranged differently. Yes, you “feel,” but that just tells you that stuff feels, it doesn’t say you are made of anything besides the stuff you see around and inside you.
The parts you are made of are constantly being swapped for those in the world around you, and we can even send in unusual parts, like odd isotopes. You usually don’t notice the difference when your parts are swapped, because your mind was not designed to notice most changes; your mind was only designed to notice a few changes, such as new outside sights and sounds and internal signals. Yes, you can feel some changed parts, such as certain drugs, but we see that those change how your cells talk to each other. (For some kinds of parts, such as electrons, there really is no sense in which you contain different electrons. All electrons are a pattern in the very same electron field.)
We could change your parts even more radically and your mind would still not notice. As long as the new parts sent the same signals to each other, preserving the patterns your mind was designed to notice, why should you care about this change any more than the other changes you now don’t notice? Perhaps minds could be built that are very sensitive to their parts, but you are not one of them; you are built not to notice or care about most of your part details.
Your mind is huge, composed of many, many parts. It is even composed of two halves, your right and left brain, which would continue to feel separately if we broke their connection. Both halves would also feel they are you. It is an illusion that there is only “one” of you in your head that feels; all your mind parts feel, and synchronize their feelings to create your useful illusion of being singular. We might be able to add even more synchronized parts and have you still feel singular.
We have taken apart people like you, Bryan, and seen what they are made of. We don’t understand the detailed significance of all signals your brain cells send each other, but we are pretty sure that is all that is going on in your head. There is no mysterious other stuff there. And even if we found such other stuff, it would still just be more stuff that could send signals to and from the stuff we see. You’d still just be feeling the signals sent, because that is the kind of mind you are.
Accept it and grab a precious chance to live longer, or reject it and die. Consider: if your “common sense” had been better trained via a hard science education, you’d be less likely to find this all “obviously” wrong. What does that tell you about how much you can trust your initial intuitions?
Caplan responds to Hanson:
If Robin’s right, then teaching me more hard science will reduce my confidence in common sense and dualist philosophy of mind. I dispute this. While I don’t know the details that Robin thinks I ought to know, I don’t think that learning more details would predictably change my mind. So here’s roughly the bet I would propose:
1. Robin tells me what to read.
2. I am honor-bound to report the effect on my confidence in my own position.
3. If my confidence goes down, I owe Robin the dollar value of the time he spent assembling my reading list.
4. If my confidence goes up, Robin owes me the dollar value of the time I spent reading the works on his list.
Since I’m a good Bayesian, Robin has a 50/50 chance of winning – though I’d be happy to make the stakes proportional to the magnitude of my probability revision.
With most people, admittedly, term #2 would require an unreasonably high level of trust. But I don’t think Robin can make that objection. We’re really good friends – so good, in fact, that he has seriously considered appointing me to enforce his cryonics contract! If he’s willing to trust me with his immortality, he should trust me to honestly report the effect of his readings on my beliefs.
I don’t think Robin will take my bet. Why not? Because ultimately he knows that our disagreement is about priors, not scientific literacy. Once he admits this, though, his own research implies that he should take seriously the fact that his position sounds ridiculous to lots of people – and drastically reduce his confidence in his own priors.
I’m sympathetic to Hanson’s response, and I think Caplan’s position is mostly voodoo in philosophy drag, but let’s be clear that there are a couple different things going on here when we ask about the transformations under which I should consider myself to have “survived.”
The first question is whether it’s somehow uniquely rational to identify your “self” with a particular unique physical brain and body. To dramatize it as Bryan does, cribbing from my old prof Derek Parfit: Suppose that via some kind of Star Trek replication or some combination of cloning, highly advanced brain scanning, and neuron-etching nanotech, scientists create a precise physical duplicate of you. Just as your duplicate is waking up—so let’s be clear, there are now two extremely similar but clearly distinct loci of conscious experience in the room—you’re told (ever so sorry) that as an unfortunate side-effect of the process, your original body (you’re assured you are the original) is about to die. Should you be alarmed, or should you consider your copy’s survival, in effect, a means by which you survive?
The gut intuition Bryan wants to work with—the crucial “common sense” move—is that, by stipulation, there are, after all, two of you who now have separate experiences, emotions, physical sensations, etc., and who could each survive and go on to live perfectly good (and very different) lives. And you could certainly lament that you won’t both get that chance. But I think it’s a serious mistake to imagine that this settles the questions about what we have, unfortunately, chosen to call “personal identity,” a property which even in more ordinary circumstances bears little resemblance to its logical homonym. There is ample reason to think that a single brain and body can, and perhaps routinely does, support multiple simultaneous streams of conscious experience, and as Robin points out, it’s not as though “your” physical body is composed of the same matter it was a decade ago.
In reality, our ordinary way of talking about this leads to a serious mistake that Robin implicitly points out: We imagine that there’s some deep, independent, and binary natural fact of the matter about whether “personal identity” is preserved—whether Julian(t1) is “the same person” as Julian(t2)—and then a separate normative question of how we feel about that fact. Moreover, we’re tempted to say that in a sci-fi hypothetical like Bryan’s, we can be sure identity is not preserved, because logical identity (whose constraints we selectively import) is by definition inconsistent with there being two, with different properties, at the same time. And this is just a mistake. The properties in virtue of which we say that I am “the same person” I was yesterday reflect no unitary natural fact; we assert identity as a shorthand that serves a set of pragmatic and moral purposes. Whether it’s true depends intrinsically on the concerns and purposes of the user. A chemist and a geologist will mean quite different things when they ask, pointing at a lake, “is that the same body of water we noted a decade ago?” The answer may be “yes” in one sense and “no” in another, because what they mean by “same” is implicitly indexed to their different concerns and purposes. Bryan’s flip reply—that one could thereby achieve immortality by “deciding” to identify with something permanent—misses the point: That there may be no independent fact of the matter about identity does not entail there are no facts about what’s worth caring about. The whole motive for arguing against his material-continuity standard is precisely that he has seized upon a criterion of intertemporal personal identity that does not really matter very much.
UPDATE: Will Wilson at PomoCon
UPDATE #2: Julian Sanchez