I, Me, Mine, I Me, Mine, I Me, Mine

Bryan Caplan:

One of the most engaging after-lunch conversations of my life was when Robin Hanson sat me down and gave me the cryonics version of the Drake Equation.  The Drake Equation multiplies seven variables together in order to calculate the number of civilizations in our galaxy with which communication is possible.  The Hanson Equation, similarly, multiplies a bunch of factors together in order to calculate how many expected years of life you will gain by signing a contract to freeze your head when you die.
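Hanson's Drake-style calculation lends itself to a back-of-the-envelope sketch. The factor names and probabilities below are purely illustrative assumptions, not Hanson's actual estimates:

```python
# A minimal sketch of a Drake-style "Hanson Equation" for cryonics.
# Every factor name and number here is an illustrative placeholder.

def expected_life_years(factors, payoff_years):
    """Multiply independent success probabilities, then scale by the payoff."""
    p = 1.0
    for name, prob in factors.items():
        p *= prob
    return p * payoff_years

factors = {
    "preservation_good_enough": 0.5,   # brain structure survives vitrification
    "organization_survives": 0.5,      # the cryonics firm lasts long enough
    "revival_tech_developed": 0.3,     # uploading/repair becomes feasible
    "society_chooses_to_revive": 0.8,  # someone actually bothers
}

years = expected_life_years(factors, payoff_years=1000)
print(f"{years:.1f} expected life-years")  # → 60.0 expected life-years
```

Because the factors multiply, a single pessimistic term drags the whole estimate down, which is why the argument tends to focus on whichever link a skeptic thinks is weakest.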

During his presentation, I noticed that Robin spent almost all of his time on various scientific sub-disciplines and the trajectory of their progress.  On these matters, I was fairly willing to defer to his superior knowledge (with the caveat that perhaps his enthusiasm was carrying him away).  What disturbed me was when I realized how low he set his threshold for success.  Robin didn’t care about biological survival.  He didn’t need his brain implanted in a cloned body.  He just wanted his neurons preserved well enough to “upload himself” into a computer.

To my mind, it was ridiculously easy to prove that “uploading yourself” isn’t life extension.  “An upload is merely a simulation.  It wouldn’t be you,” I remarked.  “It would if the simulation were accurate enough,” he told me.

I thought I had him trapped.  “Suppose we uploaded you while you were still alive.  Are you saying that if someone blew your biological head off with a shotgun, you’d still be alive?!”  Robin didn’t even blink: “I’d say that I just got smaller.”

The more I furrowed my brow, the more earnestly he spoke.  “It all depends on what you choose to define as you,” he finally declared.  I said: “But that’s a circular definition.  Illogical!”  He didn’t much care.

Then I attacked him from a different angle.  If I’m whatever I define as me, why bother with cryonics?  Why not “define myself” as my Y-chromosome, or my writings, or the human race, or carbon?  By Robin’s standard, all it takes to vastly extend your life is to identify yourself with something highly durable.

His reply: “There are limits to what you can choose to identify with.”  I was dumbstruck at the time.  But now I’d like to ask him, “OK, then why don’t you spend more time trying to overcome your limited ability to identify with durable things?  Maybe psychiatric drugs or brain surgery would do the trick.”

I’d like to think that Robin’s an outlier among cryonics advocates, but in my experience, he’s perfectly typical.  Fascination with technology crowds out not just philosophy of mind, but common sense.  My latest cryonics encounter was especially memorable.  When I repeated my standard objections, the advocate flatly replied, “Those aren’t interesting questions.”  Not interesting questions?! They’re common sense, and they go to the heart of the cryonic dream.

Tyler Cowen:

Blog posts to giggle over; read the comments too.

Robin Hanson responds to Caplan:

Bryan, you are the sum of your parts and their relations.  We know where you are and what you are made of; you are in your head, and you are made out of the signals that your brain cells send each other.  Humans evolved to think differently about minds versus other stuff, and while that is a useful category of thought, really we can see that minds are made out of the same parts, just arranged differently.  Yes, you “feel,” but that just tells you that stuff feels, it doesn’t say you are made of anything besides the stuff you see around and inside you.

The parts you are made of are constantly being swapped for those in the world around you, and we can even send in unusual parts, like odd isotopes.  You usually don’t notice the difference when your parts are swapped, because your mind was not designed to notice most changes; it was only designed to notice a few, such as new outside sights and sounds and internal signals.  Yes, you can feel some changed parts, such as certain drugs, but we see that those change how your cells talk to each other.  (For some kinds of parts, such as electrons, there really is no sense in which you contain different electrons.  All electrons are a pattern in the very same electron field.)

We could change your parts even more radically and your mind would still not notice.  As long as the new parts sent the same signals to each other, preserving the patterns your mind was designed to notice, why should you care about this change any more than the other changes you now don’t notice?  Perhaps minds could be built that are very sensitive to their parts, but you are not one of them; you are built not to notice or care about most of your part details.

Your mind is huge, composed of many, many parts.  It is even composed of two halves, your right and left brain, which would continue to feel separately if we broke their connection. Both halves would also feel that they are you.  It is an illusion that there is only “one” of you in your head that feels; all your mind parts feel, and synchronize their feelings to create your useful illusion of being singular.  We might be able to add even more synchronized parts and have you still feel singular.

[...]

We have taken apart people like you, Bryan, and seen what they are made of.  We don’t understand the detailed significance of all the signals your brain cells send each other, but we are pretty sure that is all that is going on in your head.  There is no mysterious other stuff there.  And even if we found such other stuff, it would still just be more stuff that could send signals to and from the stuff we see.  You’d still just be feeling the signals sent, because that is the kind of mind you are.

Accept it and grab a precious chance to live longer, or reject it and die.  Consider: if your “common sense” had been better trained via a hard science education, you’d be less likely to find this all “obviously” wrong.  What does that tell you about how much you can trust your initial intuitions?

Caplan responds to Hanson:

If Robin’s right, then teaching me more hard science will reduce my confidence in common sense and dualist philosophy of mind.  I dispute this.  While I don’t know the details that Robin thinks I ought to know, I don’t think that learning more details would predictably change my mind.  So here’s roughly the bet I would propose:

1. Robin tells me what to read.
2. I am honor-bound to report the effect on my confidence in my own position.
3. If my confidence goes down, I owe Robin the dollar value of the time he spent assembling my reading list.
4. If my confidence goes up, Robin owes me the dollar value of the time I spent reading the works on his list.

Since I’m a good Bayesian, Robin has a 50/50 chance of winning – though I’d be happy to make the stakes proportional to the magnitude of my probability revision.

With most people, admittedly, term #2 would require an unreasonably high level of trust.  But I don’t think Robin can make that objection.  We’re really good friends – so good, in fact, that he has seriously considered appointing me to enforce his cryonics contract!  If he’s willing to trust me with his immortality, he should trust me to honestly report the effect of his readings on my beliefs.

I don’t think Robin will take my bet.  Why not?  Because ultimately he knows that our disagreement is about priors, not scientific literacy.  Once he admits this, though, his own research implies that he should take seriously the fact that his position sounds ridiculous to lots of people – and drastically reduce his confidence in his own priors.

Julian Sanchez:

I’m sympathetic to Hanson’s response, and I think Caplan’s position is mostly voodoo in philosophy drag, but let’s be clear that there are a couple different things going on here when we ask about the transformations under which I should consider myself to have “survived.”

The first question is whether it’s somehow uniquely rational to identify your “self” with a particular unique physical brain and body. To dramatize it as Bryan does, cribbing from my old prof Derek Parfit: Suppose that via some kind of Star Trek replication or some combination of cloning, highly advanced brain scanning, and neuron-etching nanotech, scientists create a precise physical duplicate of you. Just as your duplicate is waking up—so let’s be clear, there are now two extremely similar but clearly distinct loci of conscious experience in the room—you’re told (ever so sorry) that as an unfortunate side-effect of the process, your original body (you’re assured you are the original) is about to die.  Should you be alarmed, or should you consider your copy’s survival, in effect, a means by which you survive?

The gut intuition Bryan wants to work with—the crucial “common sense” move—is that, by stipulation, there are, after all, two of you who now have separate experiences, emotions, physical sensations, etc., and who could each survive and go on to live perfectly good (and very different) lives.  And you could certainly lament that you won’t both get that chance.  But I think it’s a serious mistake to imagine that this settles the questions about what we have, unfortunately, chosen to call “personal identity,” a property which even in more ordinary circumstances bears little resemblance to its logical homonym. There is ample reason to think that a single brain and body can, and perhaps routinely does, support multiple simultaneous streams of conscious experience, and as Robin points out, it’s not as though “your” physical body is composed of the same matter it was a decade ago.

In reality, our ordinary way of talking about this leads to a serious mistake that Robin implicitly points out: We imagine that there’s some deep, independent, and binary natural fact of the matter about whether “personal identity” is preserved—whether Julian(t1) is “the same person” as Julian(t2)—and then a separate normative question of how we feel about that fact.  Moreover, we’re tempted to say that in a sci-fi hypothetical like Bryan’s, we can be sure identity is not preserved, because logical identity (whose constraints we selectively import) is by definition inconsistent with there being two, with different properties, at the same time. And this is just a mistake. The properties in virtue of which we say that I am “the same person” I was yesterday reflect no unitary natural fact; we assert identity as a shorthand that serves a set of pragmatic and moral purposes. Whether it’s true depends intrinsically on the concerns and purposes of the user. A chemist and a geologist will mean quite different things when they ask, pointing at a lake, “is that the same body of water we noted a decade ago?” The answer may be “yes” in one sense and “no” in another, because what they mean by “same” is implicitly indexed to their different concerns and purposes.

Bryan’s flip reply—that one could thereby achieve immortality by “deciding” to identify with something permanent—misses the point: That there may be no independent fact of the matter about identity does not entail there are no facts about what’s worth caring about. The whole motive for arguing against his material-continuity standard is precisely that he has seized upon a criterion of intertemporal personal identity that does not really matter very much.

UPDATE: Will Wilson at PomoCon

UPDATE #2: Julian Sanchez


3 Comments

Filed under Go Meta, Science, Technology


  1. A few points:

    (1) It is written: “Consider: if your “common sense” had been better trained via a hard science education, you’d be less likely to find this all “obviously” wrong. What does that tell you about how much you can trust your initial intuitions?” Do you have a statistical figure for the ratio of “believers in cryonics” over “people who have had a hard science education”?

    (2) Cryonics seems to me a lot like the lottery fallacy. The lottery has been called a tax on stupidity, because the odds that you’ll win are the same as or worse than the odds that you have an unknown relative who’ll bequeath you millions, or that you’ll be swept into a multibillion-dollar class action lawsuit that wins, and so on. So you don’t need to spend anything to maintain those kinds of odds of a big payoff.
    Similarly, in the case of cryonics: if a future civilization is advanced enough to reconstruct you from a temperature-“cracked” brain (especially given the Keystone Cops procedures currently used), then why not simply assume they could reconstruct you from a DNA sample (which some facilities do preserve separately)? But then, why not go even further and assume they have perfected time travel and will come back and pull you out of this 21st-century vale of tears, and on and on? Once you cross into fantasy, why draw an arbitrary line? If you can wish for a million dollars, why not ten million or a hundred million?

    (3) The emphasis on preserving the head/body, far from being evidence of advanced rational thinking, carries to my mind rather more than a whiff of Christian allegory.

    (4) Finally, nobody considers what the people of the future may have as their moral values and standards of entertainment. For all we know, they might get their rocks off by reanimating cryonicoids so that they can film and kill them in torture-porn snuff flicks (to be marketed in the 25th century as movies titled: Thaw 1, Thaw 2, … , Thaw 6!)
    Don’t tell me that advanced technology guarantees advanced moral values; look at the Nazis.

  2. Pingback: What We’ve Built This Weekend « Around The Sphere

  3. Luke

    Tabby Cat,

    1) So often “common sense” is really just the Dunning-Kruger effect talking. Unless you’ve researched all of the relevant technical topics, it is highly unlikely that your argument is anywhere near as good as you think it is, even if you are literate in the hard sciences. Cryonicists are not stupid people, as a general rule, and have probably thought of your criticism already.

    2) The possibility of “winning the cryonics lottery” does not have to be very high to justify investing in it, provided the value of success is sufficiently high. The problem with actual lotteries is not that the probability is low but that the payout is not high enough given the probability. There’s no magical “line” of fantasy versus fact to be crossed here just because big numbers are involved. Nonetheless, you have not stated how high the chances would have to be for the practice to be justifiable. It is certainly not reasonable to require them to be ~100%, given that the costs of cryonics are far less than the value of a human life. If a human life is worth five times the cost of cryonics, that implies that a 20% chance of success is acceptable.

    3) I am not aware of any Christian allegory involving reanimation of frozen remains. Christian resurrection is explicitly supernatural in nature, and indeed is held up as proof of God’s divinity because of the fact that it is impossible for humans to resurrect the dead. In fact cryonicists are generally against the notion of cryonics patients being labeled “dead”. Rather it could be said that the very use of the binary term “dead” to refer to patients whose neural connections are not completely obliterated is evidence of mystical and superstitious thinking.

    4) Immortality under good conditions is certainly far more likely than reanimation for torture purposes, given that technological and social progress are not entirely independent. (Note the effects of communications technology on exposing corruption in various governments, e.g.) Nonetheless the anticipated threat of being tortured upon reanimation, however weak, is a compelling motive for cryonicists to support social norms and laws against torture. The fact that it exists in addition to any other such motive actually argues in favor of everyone becoming a cryonicist as a deterrent of long-term societal degradation.
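The expected-value reasoning in point (2) of the reply above can be sketched numerically. The cost and valuation figures below are illustrative placeholders, not real prices:

```python
# Expected-value framing of the "cryonics lottery": signing up is worthwhile
# when p * V > C, i.e. when the success probability p exceeds C / V.
# C and V below are illustrative assumptions only.

def breakeven_probability(value_of_life, cost_of_cryonics):
    """Smallest success probability at which the expected payoff covers the cost."""
    return cost_of_cryonics / value_of_life

C = 100_000   # hypothetical all-in cost of a cryonics contract
V = 5 * C     # a life valued at five times that cost

p_min = breakeven_probability(V, C)
print(p_min)  # 0.2 — a 20% chance of revival is enough to break even
```

This is the commenter's point in miniature: the threshold probability scales inversely with how much the payoff is worth, so a low chance of success is not by itself disqualifying.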
