In considering the pros and cons of cryonics, has anyone addressed the possibility of being revived in an unpleasant future, for instance as a "torture the infidel" exhibit in a theme park of a theocratic state? I had some thoughts on the issue but figured I would see what else has been written previously.

 


This has been discussed here. Enoosti:

Yes, the paper clip reference wasn't the only point I was trying to make; it was just a (failed) cherry on top. I mainly took issue with being revived in the common dystopian vision: constant states of warfare, violence, and so on. It simply isn't possible, given that you need to keep refilling dewars with LN2 and so much more; in other words, the chain of care would be disrupted, and you would be dead long before they found a way to resuscitate you. And that leaves basically only a sudden "I Have No Mouth" scenario; i.e. one day it's sunny, Alcor is fondly taking care of your dewar, and then BAM! you've been resuscitated by that A.I. I guess I just find it unlikely that such an A.I. will say: "I will find Yvain, resuscitate him, and torture him." It just seems like a waste of energy.

This has also been discussed here. humpolec:

What you're describing is an evil AI, not just an unFriendly one - unFriendly AI doesn't care about your values. Wouldn't an evil AI be even harder to achieve than a Friendly one?

It has also been discussed here. Michael Vassar:

It would take quite a black swan tech to undo all the good from tech up to this point. UFAI probably wouldn't pass the test, since without tech humans would go extinct with a smaller total population of lives lived anyway. Hell worlds seem unlikely. 1984 or the Brave New World (roughly) are a bit more likely, but is it worse than extinction? I don't generally feel that way, though I'm not sure.

knb:

I've thought about this too. I was traumatized by reading I Have No Mouth, and I Must Scream as a kid, and I still shudder whenever I think of that story.

I know it is silly, and there is no plausible reason such an evil AI would come into existence. But even so, it really reinforces how awful a world with advanced technology can be (immortality + complete knowledge of psychology/neurology = eternal, perfect suffering). I find that I fear those hell scenarios a lot more than I appreciate the various eutopia scenarios I've seen described. If Omega offered me a ticket to Eutopia with a one-in-a-million chance of winding up in I Have No Mouth, I don't think I would take it.
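
A hedged aside on the arithmetic: rejecting that ticket implies a very lopsided utility ratio. Here is a minimal sketch of the implied bound; the normalization and all numbers are illustrative assumptions, not anything stated in the comment.

```python
# Sketch: what declining Omega's ticket implies about the utilities.
# The normalization below is an illustrative assumption, not an estimate.

p_hell = 1e-6        # chance the ticket ends in I Have No Mouth
u_eutopia = 1.0      # utility of Eutopia (normalized)
u_status_quo = 0.0   # utility of declining the ticket (normalized)

# Declining is consistent only if the ticket's expected utility is below
# the status quo: (1 - p_hell) * u_eutopia + p_hell * u_hell < u_status_quo.
# Solving for u_hell gives the implied bound:
u_hell_bound = (u_status_quo - (1 - p_hell) * u_eutopia) / p_hell
print(f"Declining implies u_hell < {u_hell_bound:,.0f}")  # roughly -1,000,000
```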

Maybe it's all the talk about Unfriendly AI here, but Ellison's story was also my first thought in answer to the question: what if it's a bad future?

This is like saying "Abortion is a good thing, because you could be killing the next Hitler". Unless you think that the expected value of additional lifespan is negative, it would only affect the cost-benefit tradeoffs, and even then not very much (unless your guesses about the future are wildly different from my own). Cost-benefits are important - in fact, they're my primary point of disagreement with the pro-cryonics consensus around here - but this is a very small lever to be trying to move a lot of weight with.

[anonymous]:

This is like saying "Abortion is a good thing, because you could be killing the next Hitler".

Perhaps this is more like asking: given that we know human lives include suffering and end in death, is it moral to bring new humans into being? If possible future suffering is a reason to stay dead, then guaranteed present suffering is a reason not to have children. This isn't an argument I'm advancing, but it is relevant. I'm not an anti-natalist, but I admire some writers who are anti-natalists.

If someone is arguing that life is a net bad, and that this means that cryonics is a poor investment, then I will at least concede that their arguments are consistent (even if their being alive to make those arguments is not). But I don't think that argument is being made. Arguing from a tiny possibility of a very bad outcome plays very obviously to human emotional biases, and the result almost always needs to be discounted significantly to account for the low probability.

I find it somewhat unlikely that a society will put the effort into resurrecting me just to do bad things to me. There are cheaper and easier ways to make humans. I suppose in the unlikely event that an evil AI accidentally killed off all of the humans it could torture, it might be cheaper to revive me to torture than to build a human from scratch, but it still seems somewhat dubious.

I suppose a UFAI with a goal like "Make all humans happy" might resurrect me, to increase net happiness. Still sounds better than being worm-food, overall.

I think it's more likely that the cryonics company will just lose funding/be outlawed/suffer a natural disaster than something like the above, but in those cases I won't be around to know the difference anyway.

If there is a big enough change in the species, I can see you being revived for experimental purposes.

If a theocratic state wanted to torture the infidels, I don't see how being a human popsicle is going to make the situation any worse.

But that the dread of something after death,
The undiscover’d country, from whose bourn
No traveller returns, puzzles the will,
And makes us rather bear those ills we have
Than fly to others that we know not of?

The problem is the chance of being revived into a life worse than death. If a life of torture is preferable to death for you, this is a risk you don't really care about; but if you're worried about it, you should take it into account.

Sure, take it into account, but weigh its probability as well. I tend to look on the dark side of things, but have recently taken to heart that the probability of The Bad Thing is usually quite low.

Maybe some crazy people will snatch me up in the night and torture me too, but I'm not going to lose sleep over the possibility because of its low probability.

The idea is that everyone who wasn't frozen got a chance to see it coming and convert, maybe two or three times as winds shifted?

Or maybe the frozen people won't have let their opinions slip as the winds shifted. They'll see the theocratic takeover as a fait accompli, won't be on the record as opposing it, and so will be able to declare their allegiance to the Great Ju Ju and avoid torture altogether.

Or maybe, being from the past will confer a special honor and status with the Great Ju Ju, so that it will be extra wonderful to be a thawed human popsicle.

We can play outlandish maybes till the cows come home. Averaging over my probabilities for all hypothetical futures, I'd rather be alive than not 500 years from now.
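
As a minimal illustration of that kind of averaging (every scenario, probability, and utility below is an invented placeholder, not the commenter's actual view):

```python
# Toy expected-utility average over hypothetical futures.
# All probabilities and utilities are invented placeholders.

scenarios = {
    "benign revival":           (0.20,   100.0),
    "revival under constraint": (0.05,    10.0),
    "hell world":               (0.00001, -1e6),
    "never revived":            (0.74999,   0.0),
}

# Probabilities sum to 1; the average weighs each future by its likelihood.
expected_utility = sum(p * u for p, u in scenarios.values())
print(f"Expected utility of being preserved: {expected_utility:.1f}")  # 10.5
```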

Too many arguments in the world take the form "but what if Horrible Scenario occurs?" If Horrible Scenario occurs, I'll be fucked. That's the answer. Can't deny it. But unless you have information to share that significantly increases my probabilities for Horrible Scenarios, merely identifying bad things that could happen is neither a productive nor a fun game.

The initial question was just meant to open the issue of future negatives, and having gotten some feedback on how the issue had been discussed before, I gave the bulk of my thoughts in a reply to my initial post.

What I consider much more realistic possibilities (more realistic than benign, enlightened resurrection) are being revived with little regard to brain damage and to serve the needs of an elite. I laid it out in my other response in this thread (I don't know how to link to a particular comment in a thread, but search for 'When I started this thread'.)

I've certainly seen this sort of idea discussed in personal conversation, but as far as I'm aware it has rarely come up in the literature on cryonics - then again, there's very little anti-cryo lit out there.

The argument is weak, since it requires a society that is both vindictive and extremely technologically advanced, which doesn't seem a likely combination. The more common version of this argument talks about reviving people for slave labor, and there the same essential problem is even more severe.

I don't recall seeing this issue brought up before, but it seems that given the sort of medical advances required for cryonics to work, it's more likely that you would be revived in a "good" future than a "bad" one. This intuition is just based on the general trend that societies with higher levels of technology tend to be better ones to live in.

Larry Niven's 1970s stories about "corpsicles" discuss a couple of situations in which cryonics patients might be stripped of legal rights and mistreated after death. Beware of fictional evidence, and all that.

My personal con re: cryonics actually comes from the opposite direction, though. I like living in part because I can do (small) things which improve the world and because my mind is relatively unique. But a world in which society has a combination of technology, wealth and altruism sufficient for reviving cryonics patients en masse is less likely to need me to help it or to bolster my mental demographic. Unless I come into unexpected wealth, I'd prefer to spend discretionary money on things that improve the odds of creating such a world (savings for emergencies and my kids' education, charities, "buying" free time for less-remunerated research, etc) rather than on buying a ticket just to enjoy it after it's here.

My bigger worry is more along the lines of "What if I am useless to the society in which I find myself and have no means to make myself useful?" That's not a problem in a society that will retrofit you with the appropriate augmentations/upload you etc., and I tend to think that is more likely than not. But what if, say, the Alcor trust gets us through a half-century-long freeze and we are revived, but things have moved more slowly than one might hope, yet fast enough to make any skill sets I have obsolete? Well, if the expected utility of living is sufficiently negative I could kill myself, and it would be as if I hadn't signed up for cryonics in the first place, so we can chalk that up as a (roughly) zero utility situation. So in order to really be an issue, I would have to be in a scenario where I am not allowed to kill myself or be re-frozen etc. Now, if I am not allowed to kill myself in a net negative utility situation (I Have No Mouth, and I Must Scream), that is a worst case scenario, and it seems exceedingly unlikely (though I'm not sure how you can get decent bounds for that).

So my quick calculation would be something like:

P(cryonics is not worth it | cryonics is successful) = P(expected utility of living is sufficiently negative upon waking up) × P(I can't kill myself | expected utility of living is sufficiently negative upon waking up)
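
A minimal numeric sketch of that calculation, with placeholder guesses for the two input probabilities (they are not estimates from this thread):

```python
# The quick calculation above, with placeholder inputs.

# P(expected utility of living is sufficiently negative upon waking up)
p_negative_on_waking = 0.05        # placeholder guess
# P(I can't kill myself | expected utility is sufficiently negative)
p_no_exit_given_negative = 0.10    # placeholder guess

# P(cryonics is not worth it | cryonics is successful)
p_not_worth_it = p_negative_on_waking * p_no_exit_given_negative
print(f"P(not worth it | revival succeeds) = {p_not_worth_it:.3f}")  # 0.005
```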

It's difficult to justify not signing up for cryonics if you accept that it is likely to work in an acceptable form (this is a separate calculation). AFAICT there are many more foreseeable net positive or (roughly) zero utility outcomes than foreseeable net negative utility outcomes.

Interesting. The referenced discussions often assume a post-singularity AI (which, for the record, I think very unlikely). The development of that technology is likely to be, if not exactly independent, only loosely correlated with the technology for cryonic revival, isn't it?

Certainly you have to allow for the possibility of cryonic revival without the post-singularity AI, and I think we can make better guesses about the possible configurations of those worlds than post-AI worlds.

I see the basic pro-cryonics argument as having the form of Pascal's wager. Although the probability of success might be on the low side (for the record, I think it is very low), the potential benefits are so great that it is worth it. The cost is paid in mere money. But is it?

In my main post I used the "torture by theocracy" example as an extreme, but I think there are many other cases to worry about.

Suppose that among a population of billions, there are a few hundred people who can be revived. The sort of society we all hope for might just revive them so they can go on to achieve their inherent potential as they see fit. But in societies that are just a bit more broken than our own, those with the power to cause revival may have self-interest very much in mind. You can imagine that the autonomy of those who are revived would be seriously constrained, and this by itself could make a post-revival life far from what people hope. The suicide option might be closed off to them entirely; if they came to regret their continued existence they might well be unable to end it.

Perhaps the resurrected will have to deal with the strange and upsetting limitations that today's brain damage patients face. Perhaps future society will be unable to find a way for revived people to overcome such problems, yet will keep them alive for hundreds of years -- they are just too valuable as experimental subjects.

Brain damage aside, what value will they have in a future society? They will have unique and direct knowledge of life in a bygone century, including its speech patterns and thought patterns. I think modern historians would be ecstatic at the prospect of being able to observe or interview pockets of people from various epochs in history, including ancient ones (ethical considerations aside).

Perhaps they will be valued as scientific subjects and carefully insulated from any contaminating knowledge of the future world as it has developed. That might be profoundly boring and frustrating.

Perhaps the revived will be confined in "living museums" where they face a thousand years re-enacting what life was like in 21st century America -- perhaps subject to coercion to do it in a way that pleases the masters.

If the revived people are set free, what then? Older people in every age typically shake their heads in dismay at changes in the world; this effect magnified manyfold might be profoundly unsettling -- downright depressing, in fact.

One can reasonably object that all of these are low-probability. But are they less probable than the positive high-payoff scenarios (in just, happy societies that value freedom, comfort, and the pursuit of knowledge)? Evidence? Are you keeping in mind optimism bias?

In deciding in favor of cryonic preservation, the tradeoff can't just be near costs weighed against scenarios of far happiness. There's far misery to consider as well.

But are they less probable than the positive high-payoff scenarios (in just, happy societies that value freedom, comfort, and the pursuit of knowledge)? Evidence? Are you keeping in mind optimism bias?

Adele_L in a comment in this thread:

based on the general trend that societies with higher levels of technology tend to be better ones to live in

While I admit that a theocratic torturing society seems less likely to develop the technology to revive people, I'm not at all sure that an enlightened one is more likely to do so than the one I assumed as the basis of my other examples. A society could be enlightened in various ways and still not think it a priority to revive frozen people for their own sake. But a society could be much more strongly motivated if it was reviving a precious commodity for the selfish ends of an elite. This might also imply that they would be less concerned about the risk of things like brain damage that would interfere with the revivee's happiness but still allow them to be useful for the reviver's purposes.

[anonymous]:

Yeah this has been brought up. Someone thought a bad (near-friendly) future was more likely than a friendly future, and that being frozen is asking to be tortured by a future supersadist.

I'm not sure what to think...

One thing is for sure: whether in a good way or a bad way, cryonics is your ticket to adventuretown! Books and movies agree.

When I started this thread, I wasn't quite sure where it was going to end up. But here's what I see as the most powerful argument:

An enlightened, benign future society might revive you to let you live life to your full potential, for your sake -- when it is convenient for them. But a future society that has morality in line with some pretty good present ones (not the very best) might see you as a precious commodity to revive for the ends of the elite. An enlightened society would not revive you if you were going to be miserable with serious brain damage, but a less enlightened society would have few qualms about that. Even if revived intact, you would still serve the ends of the elite and might well be prevented from taking your own life if you found it miserable.

I judge the latter scenario much more likely than the former. If so, cryonic preservation's appeal would be much less -- it might even be something you would pay to get out of!

Those of you who are cryonics enthusiasts and are also committed to the LW method should think about this. Maybe you will judge the probabilities of the future scenarios differently, but there are strong cognitive biases at work here against an accurate analysis.

Immortality is still possible. We might be subjects in an experiment, and when we croak our brains might be uploaded by the compassionate experimenters. Maybe the theists are right (there sure are a lot of them), and maybe the ones who preach universal salvation are right. You can still have hope, but it doesn't rest on spending large sums on freezing your brain.

Immortality is still possible. [...] You can still have hope, but it doesn't rest on spending large sums on freezing your brain.

How does this follow? Even your most powerful argument/worst-case scenario has immortality as its outcome, just not completely on your own terms. To what extent are we not "[serving] the ends of the elite" and "prevented from taking [our] own life if [we] found it miserable" even now?

Even your most powerful argument/worst-case scenario has immortality as its outcome

By "possible", I meant that we can imagine scenarios (however unlikely) where we will be immortal. Cryonics also relies on scenarios (admittedly not quite as unlikely) where we would at least have much longer lives, though not truly immortal. If being alive for a thousand years with serious brain damage still strikes you as much preferable to death, then I agree that my argument does not apply to you.

To what extent are we not "[serving] the ends of the elite" and "prevented from taking [our] own life if [we] found it miserable" even now?

In the US today, as a person of no particular import to the government, I feel I have considerable freedom to live as I want, and no one is going to stop me from killing myself if I choose. If on some construal I inevitably serve the elite today, I at least have a lot of freedom in how I do that. Revived people in a future world might be of enough interest that they could be supervised so carefully that personal choice would be severely limited and suicide would be impossible.