Resurrection through simulation: questions of feasibility, desirability and some implications

Could a future superintelligence bring back the already dead?  This discussion came up a while back (see also a somewhat related thread); I'd like to resurrect the topic because ... it's potentially quite important.

Algorithmic resurrection is a possibility if we accept the same computational patternist view of identity that suggests cryonics and uploading will work.  I see this as the only view consistent with my observations, but if you don't buy this argument/belief set then the rest may not be relevant.

The general implementation idea is to run a forward simulation over some portion of Earth's history, constrained to enforce compliance with all recovered historical evidence.  That evidence would consist mainly of scanned brains and the future internet.

The thesis is that to the extent that you can retrace historical reality complete with simulated historical people and their thoughts, memories, and emotions, to this same extent you actually recreate/resurrect the historical people.

So the questions are: is it feasible? is it desirable/ethical/utility-efficient?  And finally, why may this matter?

Simulation Feasibility

A few decades ago Pong was a technical achievement; now we have Avatar.  The trajectory suggests we are on track to photorealistic simulations fairly soon (decades).  Offline graphics for film are arguably already photoreal, real-time rendering is close behind, and the biggest remaining problem is the uncanny valley, which is really just the AI problem by another name.  Once we solve that (which we are assuming here), the Matrix follows.  Superintelligences could help.

There are some general theorems in computer graphics suggesting that simulating an observer-optimized world requires resources only in proportion to the observational power of the observers.  Video game and film renderers already rely heavily on this strategy.
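A minimal sketch of that scaling (everything here is a hypothetical toy, not any particular engine's API): per-object work is bounded by what the observer can actually resolve, so total cost tracks the observer's pixel budget rather than the scene's intrinsic complexity.

```python
import math

def visible_samples(object_size, distance, angular_resolution):
    """How many distinguishable samples of an object the observer can
    resolve: its angular size divided by the observer's resolving power."""
    angular_size = 2 * math.atan(object_size / (2 * distance))
    return max(1, int(angular_size / angular_resolution))

def render_cost(scene, observer):
    """Total rendering work, bounded per object by the observer's ability
    to resolve it -- not by how much detail the object 'really' contains."""
    return sum(
        min(visible_samples(o["size"], o["distance"], observer["resolution"]),
            observer["pixel_budget"])
        for o in scene
    )

# A person 10 m away and the Moon: wildly different intrinsic complexity,
# comparable (tiny) rendering cost for a human-resolution observer.
scene = [{"size": 1.8, "distance": 10.0},
         {"size": 3.5e6, "distance": 3.8e8}]
human_eye = {"resolution": 3e-4,      # ~1 arcminute, in radians
             "pixel_budget": 10**7}   # ~10 megapixels
print(render_cost(scene, human_eye))
```

Double the observer's resolving power and the cost roughly doubles; add more hidden microstate to the scene and the cost doesn't change.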

Criticism from Chaos:  We can't even simulate the weather more than a few weeks in advance.

Response: Simulating the exact future state of a specific chaotic system may be hard, but simulating chaotic systems in general is not.  And in this case we are not simulating the future state, but the past.  We already know something of the past state of the system, to some level of detail, and we can simulate the likely path (or multiple likely paths) within this configuration space, filling in detail.
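A toy illustration of that strategy (the map, bounds, and sample counts are arbitrary choices, not a serious model): instead of reversing the dynamics, forward-simulate an ensemble of candidate pasts and keep only the ones that comply with the recorded evidence.

```python
import random

def logistic(x, r=3.9):
    """A standard chaotic map: tiny differences in x grow exponentially."""
    return r * x * (1 - x)

def simulate(x0, steps=20):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# 'Historical evidence': coarse observations at a few moments in time,
# given as step -> (low, high) bounds on the state.
evidence = {0: (0.20, 0.30), 10: (0.50, 0.70), 20: (0.10, 0.40)}

def consistent(traj):
    return all(lo <= traj[t] <= hi for t, (lo, hi) in evidence.items())

# Rejection sampling: forward-simulate many candidate pasts and keep the
# ones that comply with every piece of evidence.  Survivors 'fill in' the
# unrecorded detail between observations.
reconstructions = [traj for traj in
                   (simulate(random.uniform(0.2, 0.3)) for _ in range(100_000))
                   if consistent(traj)]
print(f"{len(reconstructions)} consistent reconstructions out of 100000")
```

A real reconstruction would use far smarter inference than rejection sampling, but the point is structural: evidence constrains the ensemble, and nothing is ever run in reverse.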

Physical Reversibility Criticism: The AI would have to rewind time; it would have to know the exact state of every atom on Earth and every photon that has left Earth.

Response: Yes, the most straightforward brute-force way to infer the past state of Earth would be to compute the reverse of all physical interactions, and that would require ridiculously impractical amounts of information and computation.  But the best algorithm for a given problem is usually not brute force.  The data specifying a human mind is infinitesimal in comparison, and even a random-guessing algorithm would probably require fewer resources than fully reversing history.
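To put rough numbers on the comparison (standard order-of-magnitude estimates, not from the original post): a human connectome is on the order of $10^{14}$ synapses, while Earth contains roughly $10^{50}$ atoms, so

$$
I_{\text{mind}} \sim 10^{14}\text{--}10^{15}\ \text{bits} \ll I_{\text{Earth}} \gtrsim 10^{50}\ \text{bits}.
$$

The target to be recovered is some thirty-five orders of magnitude smaller than the state a full physical rewind would have to track, before even counting the photons that have left Earth.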

Constrained simulation converges much faster to perfectly accurate recovery than random guessing would, but by no means is full, perfect recovery even required for (partial) success.  The patternist view of identity is fluid and continuous.

If resurrecting a specific historical person is better than creating a hypothetical person, then creating a somewhat-historical person is also better, and the closer to history, the better.

Simulation Ethics

Humans appear to value other humans, but each human values some more than others.  In general, humans value themselves the most, then kin and family, followed by past contacts, tribal affiliations, and the vaguely similar.

We can generalize this as a valuation over person-space which peaks at the self's identity-pattern and then declines in some complex fashion as we move out to more distant locales and less related people.

If we extrapolate this to a future where humans have the power to create new humans and/or recreate past humans, we can infer that the distribution of created people may follow this self-centered valuation distribution.

Thus recreating specific ancestors or close relations is better than recreating vaguely historical people, which is better than creating non-specific people in general.

Suffering Criticism:  An ancestral simulation would recreate a huge amount of suffering.

Response: Humans suffer and live in a world that seems to suffer greatly, and yet very few humans prefer non-existence over their suffering.  Evolution culls existential pessimists.

Recreating a past human will recreate their suffering, but it could also grant them an afterlife filled with tremendous joy.  The relatively small, finite suffering may not add up to much in this consideration.  The initial suffering could even enhance the subsequent elevation to a joyful state by contrast, but this is speculative.

The utilitarian calculus seems to be: create non-suffering generic people whom we value somewhat less, versus recreate initially suffering specific historical people whom we value more.  In some cases (such as lost loved ones), the moral calculus weighs heavily in favor of recreating specific people.  Many other historicals may be brought along for the ride.

Closed Loops

The vast majority of the roughly one hundred billion humans who have ever lived share the singular misfortune of simply being born too early in Earth's history to be saved by cryonics and uploading.

Recreating history up to 2012 would require one hundred billion virtual brains.  Simulating history into the phase when uploading and virtual brains become common could vastly increase the simulation costs.

The simulations have the property that they become more accurate as time progresses.  If a person is cryonically preserved and then scanned and uploaded, the scan provides exact information, so the simulations will converge to perfect accuracy at that particular moment in time.  In addition, the cryonic brain will be unconscious and inactive for a stretch.

Thus the moment of biological death, even if the person is cryonically preserved, could be an opportune time to recycle simulation resources, as there is no loss of unique information (the threads have converged).

How would such a scenario affect the Simulation Argument?  It would seem to shift the probabilities such that more (most?) observer moments are in pre-uploading histories rather than in posthuman timelines.  I find this disquieting for some reason, even though I don't suspect it will affect my observational experience.

 

Comments


The primary problem and question is whether a pattern-identical version of you with different causal antecedents is the same person. Believing that uploading works, in virtue of continuing the same type of pattern from a single causal antecedent, does not commit you to believing that destruction, followed by a whole new causal process coincidentally producing the same pattern, is continuity of consciousness.

Many will no doubt reply saying that what I am questioning is meaningless, in which case, I suppose, it works as well as anything does; but for myself I do not dismiss such things as meaningless until I understand exactly how they are meaningless.

An upload will become a file, a string of bits. Said file could then be copied, or even irreversibly mixed if you prefer, into many such files, which all share the same causal antecedents. But we could also create an identical file through a purely random process, and the randomly-created file and the upload file are logically/physically/functionally identical. We could even mix and scramble them if desired, but it wouldn't really matter because these are just bits, and bits have no intrinsic history tags. You have spent some time dismantling zombie arguments, and it would seem there is an analog here: if there is no objective way, in practice/principle, to differentiate two minds (mindfiles), then they are the same in practice/principle.

On the other hand, I doubt that creating a human-complexity mindfile through a random process will be computationally tractable anytime soon, and so I agree that recreating the causal history is the likely path.

But if you or I die and then one or the other goes on to create an FAI which reproduces the causal history of Earth, it will not restore our patterns through mere coincidence alone.

Curious though, as I wouldn't have predicted that this would be your disagreement. Have you written something on your thoughts on this view of identity?

Interesting, thanks. He brings up some good points which I partly agree with, but he seems to be considering only highly exact recreations, which I would agree are infeasible. We don't need anything near exactness for success, though.

This leads to a first tentative argument against reconstruction based on external data: we are acquiring potentially personality-affecting information at a fairly high rate during our waking life, yet not revealing information at the same high rate. The ratio seems to be at least 1000:1.

True, but much of the point of our large sensory cortices is to compress all the incoming sensory information down into a tiny abstract symbolic stream appropriate for efficient prediction computations.

A central point would be the inner voice: our minds are constantly generating internal output sentences, only a fraction of which are externally verbalized. The information content of the inner voice is probably some of the most crucial defining information for reconstructing thoughts, and it is very compact.
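As a rough illustrative estimate (my own ballpark figures, not measured values): inner speech at ordinary speaking rates carries on the order of tens of bits per second, so even a lifetime of it is tiny:

$$
\underbrace{\sim 25\ \text{bits/s}}_{\text{inner voice}} \times \underbrace{\sim 1.5 \times 10^{9}\ \text{s}}_{\text{waking lifetime}} \approx 4 \times 10^{10}\ \text{bits} \approx 5\ \text{GB},
$$

versus something like $10^{6}$ to $10^{7}$ bits/s arriving at the optic nerve alone. The defining stream is compact even if the sensory stream is not.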

That's my short reply on short notice. I'll update on Anders' points and post a longer reply link here later.

I kind of wonder if there might be better ways of retrieval than simulation. There's a lot of interstellar dust out there, and an expanding sphere of light from the earth for every moment of history is interacting (very weakly) with that dust.

Thus if we were to set up something like a Dyson shell at 50 AU from the sun, designed to maximize computational energy collection and also act as an extremely powerful telescope, I have to wonder if we could collect enough data from interstellar space to reconstruct an accurate recording of human history. New data about each moment of history would constantly be coming in from ever more distant dust.

If a solar-system-centered version turns out to be too weak, that could be a motive (as if one is needed) for colonizing the galaxy. Perhaps converting each star of the galaxy into a hyper-efficient computer (which would take tens if not hundreds of thousands of years to reach them all) would enable us to effectively analyze dust particles from the intergalactic void. Or perhaps by targeting large stars further out on the spiral arms as power sources for telescopes, we could get a picture with reduced interference.

Could this all be converted into actual history (and thus recreatable minds) without basically instantiating the pain that happened? That's a bit hazy for me. The physics extrapolations would involve a complex set of heuristics for comparative analysis of diverse data streams, narrowing down the causal chain as new data becomes available. However, it wouldn't exactly be simulation in the sense we're used to thinking of it.

Obviously when resurrecting the minds, you would want to avoid creating traumatized and anti-social individuals. There would probably be an empirically validated approach to making humans come back "whole" and capable of integrating into the galactic culture while retaining their memories and experiences to sufficient degree that they are indisputably the same person.

Recreating accurate historical minds entails recreating accurate history, complete with traumatized and anti-social individuals. We should be able to 'repair' and integrate them into future posthuman galactic culture after they are no longer constrained to historical accuracy (i.e., after death), but not so much before. There may be some leeway, but each person's history is entwined with the global history. You can't really change too much without changing everything and thus getting largely different people.

This is perhaps related to my favoring the unorthodox 1/2 answer to the Sleeping Beauty problem, but is anyone else pretty sure that simulating a suffering person doesn't change the amount of suffering in the world? This is not an argument that "simulations don't have feelings" -- I just think that the number of copies of you doesn't have moral significance (so long as that number is at least 1). I'm pretty happy right now -- I don't think the world would be improved significantly if there were a server somewhere running a few hundred exact copies of my brain state and sensory input. I consider my identity to include all exactly similar simulations of me, and the quantity of those simulations in no way impacts my utility function (until you put us in a decision problem where the number of copies of me actually matters). I am not concerned about token persons; I'm concerned about the types. What people care about is that there be some future instantiation of themselves and that that instantiation be happy.

Historical suffering already happened and making copies of it doesn't make it worse (why would the time at which a program is run possibly matter morally?). Moreover, it's not clear why the fact that historical people no longer exist should make a bit of difference in our wanting to help them. In a timeless sense they will always be suffering-- what we can do is instantiate an experienced end to that suffering (a peaceful afterlife).

If you combine this with a Big World (e.g. eternal inflation) where all minds get instantiated, then nothing matters. But you would still care about what happens even if you believed this is a Big World.

Why shouldn't we be open to the possibility that a Big World renders all attempts at consequentially altruistic behavior meaningless?

Even if I'm wrong that single instantiation is all that matters, it seems plausible that what we should be concerned with is not the frequency with which happy minds are instantiated but the proportion of "futures" in which suffering has been relieved.

Hmm. I don't really disagree that qualia are duplicated; it's more that I'm not sure I care about qualia instantiations rather than types of qualia (confusing this, of course, is uncertainty about what is meant by qualia). His ethical arguments I find pretty unpersuasive, but the epistemological argument requires more unpacking.

Ahh thanks. I agree with your train of thought.

I thought that the same slippery slope argument for identity from patternism entails that details are unimportant, but that view is perhaps less common here than I would have thought.

Is "patternism" a private word that you use to refer to some constellation of philosophic tendencies you've personally observed, or is it a coherent doctrine described by others (preferably used by proponents to self-describe) in a relatively public way? It sounds like something you're using in a roughly descriptive way based on private insights, but google suggests a method in comparative religion or Goertzel's theory of that name...

I thought I first heard the term from Kurzweil in TSIN or his earlier work, but I've read or skimmed some of Goertzel's writing, so perhaps I picked it up from there. I'm realizing the term probably has little meaning in philosophy, but it suggests computationalism and/or functionalism.

For politico-philosophical stuff, I kind of like the idea of taking the name that people who half-understand a mindset apply from a distance, to distinguish it from all the other mindsets that they half-understand... in which case the best term I know is "cybernetic totalism".

However, in this case the discussion isn't a matter of general mindset but actually is a falsifiable scientific/engineering question from within the mindset: how substrate independent is the mind? My sense is that biologists willing to speculate publicly think the functionality of the mind is intimately tangled up with the packing arrangements of DNA, the precise position of receptors in membranes, and so on. I suspect that it's higher than that, but I also don't think enough people understand the pragmatics of substrate independence for there to be formal politico-philosophic labels for people who cherish one level of abstraction versus another.

I remember and loved Jaron Lanier's piece where he coined that term, and I considered myself a cybernetic totalist (and still do). It just doesn't exactly roll off the tongue.

At some point in college I found Principia Cybernetica, and I realized I had found my core philosophical belief set. I'm not sure what to call that worldview though; perhaps systemic evolutionary cyberneticism?

Patternist at least conveys that the fundamental concept is information patterns.

However, in this case the discussion isn't a matter of general mindset but actually is a falsifiable scientific/engineering question from within the mindset: how substrate independent is the mind?

Yes!

My sense is that biologists willing to speculate publicly think the functionality of the mind is intimately tangled up with ..

They may, and they may or may not be correct, but in doing so they would be speculating outside of their domain of expertise.

The question of which level of abstraction is relevant is also a scientific/engineering question, and computational neuroscience already has much to say on that, in terms of what it takes to create simulations and/or functional equivalents of brain components.

Suffering Criticism: An ancestral simulation would recreate a huge amount of suffering.

Response: Humans suffer and live in a world that seems to suffer greatly, and yet very few humans prefer non-existence over their suffering. Evolution culls existential pessimists.

Recreating a past human will recreate their suffering, but it could also grant them an afterlife filled with tremendous joy. The relatively small, finite suffering may not add up to much in this consideration. The initial suffering could even enhance the subsequent elevation to a joyful state by contrast, but this is speculative.

Even if the future joy of the recreated past human would outweigh that of the suffering (s)he endured while being recreated, all else being equal it would be even better to create entirely new kinds of people, who wouldn't need to suffer at all, from scratch.

The first people to become immortal and able to simulate others will want to simulate ("revive") their own loved ones who died just before immortality was developed.

These people, once resurrected and integrated into society, will themselves want to resurrect their own loved ones who died a little earlier than that.

And so on until most, if not all, of humanity is simulated.

Yes this.

An interesting consequence of this is historical drift: my recreation of my father would differ somewhat from reality, that of my grandfather more so, and so on. This wouldn't be a huge concern for any of us, though, as we wouldn't be able to tell the difference. As long as the reconstructions pass interpersonal Turing tests, all is good.

I know I prefer to exist now. I'd also like to survive for a very long time, indefinitely. I'm also not even sure the person I'll be 10 or 20 years from now will still be significantly "me". I'm not sure the closest projection of my self on a system incapable of suffering at all would still be me. Sure I'd prefer not to suffer, but over that, there's a certain amount of suffering I'm ready to endure if I have to in order to stay alive.

Then on the other side of this question you could consider creating new sentiences who couldn't suffer at all. But why would these have a priority over those who exist already? Also, what if we created people who could suffer, but who'd be happy with it? Would such a life be worthy? Is the fact that suffering is bad something universal, or a quirk of Terran animals' neurology? Pain is both sensory information and the way this information is used by our brain. Maybe we should distinguish between the information and the unpleasant sensation it brings to us. Eliminating the second may make sense, so long as you know chopping your leg off is most often not a good idea.

Then on the other side of this question you could consider creating new sentiences who couldn't suffer at all. But why would these have a priority over those who exist already?

From the point of view of those who'll actually create the minds, it's not a choice between somebody who exists already and a new mind. It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.

One might also invoke Big Universe considerations to say that even the "new" kind of mind has already existed in some corner of the universe (maybe as a Boltzmann brain), so they'll regardless be choosing between two kinds of minds that have existed once. Which just goes to show that the whole "this mind has existed once, so it should be given priority over one that hasn't" argument doesn't make a lot of sense.

Maybe we should distinguish between the information and the unpleasant sensation it brings to us. Eliminating the second may make sense, so long as you know chopping your leg off is most often not a good idea.

Yes. See also David Pearce's notion of beings who've replaced pain and pleasure with gradients of pleasure - instead of having suffering as a feedback mechanism, their feedback mechanism is a lack of pleasure.

Then on the other side of this question you could consider creating new sentiences who couldn't suffer at all. But why would these have a priority over those who exist already?

From the point of view of those who'll actually create the minds, it's not a choice between somebody who exists already and a new mind. It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.

I'm proposing to create these minds, if I survive. Many will want this. If we have FAI, it will help me, by its definition.

I would rather live in a future afterlife that has my grandparents in it than your 'better designs'. Better by whose evaluation? I'd also say that my sense of 'better' outweighs any other sense of 'better' - my terminal values are my own.

One might also invoke Big Universe considerations to say that even the "new" kind of mind has already existed in some corner of the universe

I could care less about some corner of the universe that is not causally connected to my corner. The big world stuff isn't very relevant: this is a decision between two versions of our local future, one with people we love in it and one without.

Those who will actually create the minds will want to rescue people in the past, so they can reasonably anticipate being rescued themselves. Or differently put, those who create the minds will want the right answer to "should I rescue people or create new people" to be "rescue people".

There's a big difference between recreating an intelligence that exists/existed large numbers of lightyears away due to sheer statistical chance, and creating one that verifiably existed with high probability in your own history. I suspect the latter are sufficiently more interesting that they would be created first. We might move on to creating the populations of interesting alternate histories, as well as randomly selected worlds and so forth, down the line.

Beings who only experience gradients of pleasure might be interesting, but since they already likely have access to immortality wherever they exist (being transhuman / posthuman and all) it seems like there is less utility to trying to resurrect them as it would only be a duplication. Naturally evolved beings lacking the capacity for extreme suffering could be interesting, but it's hard to say how common they would be throughout the universe -- thus it would seem unfair to give them a priority in resurrection compared to naturally evolved ones.

There's a big difference between recreating an intelligence that exists/existed large numbers of lightyears away due to sheer statistical chance, and creating one that verifiably existed with high probability in your own history.

What difference is that?

Beings who only experience gradients of pleasure might be interesting, but since they already likely have access to immortality wherever they exist (being transhuman / posthuman and all) it seems like there is less utility to trying to resurrect them as it would only be a duplication.

I don't understand what you mean by "only a duplication".

Naturally evolved beings lacking the capacity for extreme suffering could be interesting, but it's hard to say how common they would be throughout the universe -- thus it would seem unfair to give them a priority in resurrection compared to naturally evolved ones.

This doesn't make any sense to me.

Suppose that you were to have a biological child in the traditional way, but could select whether to give them genes predisposing them to extreme depression, hyperthymia, or anything in between. Would you say that you should make your choice based on how common each temperament was in the universe, and not based on the impact to the child's well-being?

What difference is that?

There's a causal connection in one case that is absent in the other, and a correspondingly higher distribution in the pasts of similar worlds.

I don't understand what you mean by "only a duplication".

Duplication of effort as well as effect with respect to other parts of the universe. Meaning you are increasing the numbers of immortals and not granting continued life to those who would otherwise be deprived of it.

Suppose that you were to have a biological child in the traditional way, but could select whether to give them genes predisposing them to extreme depression, hyperthymia, or anything in between. Would you say that you should make your choice based on how common each temperament was in the universe, and not based on the impact to the child's well-being?

We aren't talking about the creation of random new lives as a matter of reproduction, we're talking about the resurrection of people who have lived substantial lives already as part of the universe's natural existence. If you want to resurrect the most people (out of those who have actually existed and died) in order to grant them some redress against death, you are going to have to recreate people who, for physically plausible reasons, would have actually died.

I am disappointed that this has not spawned more principled objections. Morally speaking, creating people from scratch is far, far worse than resurrecting existing people, even if the existing people experience some suffering in the course of the resurrection.

Your entire argument seems to be based on the "Impersonal Total Principle" (ITP): an ethical principle that states that all that matters is the total amount of positive and negative experiences in the world; other factors, like the identity of the people having those experiences, are not ethically important. I consider this principle to be both wrong and gravely immoral, and will explain why in detail below.

When developing moral principles what we typically do is take certain moral intuitions we have, assume that they are being generated by some sort of overarching moral principle, then try to figure out what that principle is. If the principle is correct (or at least a step in the right direction) then other moral intuitions will probably also generate it, if it isn't then they probably won't.

The ITP was developed by Derek Parfit as a proposed solution to the Nonidentity Problem. It happens to give the intuitively correct answer to that problem, but generates so many wrong answers in so many other scenarios that I believe it is obviously wrong.

For example, the Nonidentity Problem has a version where one child's life will be better than the other's because the other has reduced capabilities. I came up with a version of the problem where the children have the same capabilities, but one has a worse life than the other because they have more ambitious preferences that are harder to satisfy. In that instance it doesn't seem obvious at all to me that we should choose the one with the better life. Plus, imagine an iteration of the NIP where the choice is unhealthy triplets or a healthy child. I think most people would agree that a woman who picks unhealthy triplets is doing something even worse than the woman who picks one unhealthy child in the original NIP. But according to the ITP she's done something better.

Then there are issues like the fact that the ITP suggests there's nothing wrong with someone dying if a new person is created to replace them who will have as good a life as they did. And of course, there is the Repugnant Conclusion.

But I think the nail in the coffin for the ITP is that people seem to accept the Sadistic Conclusion. People regularly harm themselves and others in order to avoid having more children, and they seem to regard this as a moral duty, not a selfish one.

So the ITP is wrong. What do I propose to replace it? Not average utilitarianism; that's just as crazy. Rather, I'd replace it with a principle that a small population with higher utility per person is generally better than a large population with lower utility per person, even if the total amount of utility is larger.

Now, I understand you're a personal identity skeptic. That's okay. I'm perfectly willing to translate this principle into phrasing that makes no mention of "persons" or people being "the same." Here goes: It is better to create sets of experiences that are linked in certain ways (i.e., memory, personality, etc.). It is better to create experiences that are linked in this way, even if the total amount of positive experiences is lower because of this. It may even be better to create some amount of negative experiences if doing so allows you to make sure more of the experience sets are linked in certain ways.

So there you have it. I completely totally reject the moral principle you base your argument on. It is a terrible principle that does not derive from human moral intuitions at all. Everyone should reject it.

I also want to respond to the other points you've made in this thread but this is getting long, so I'll reply to them separately.

Your entire argument seems to be based on the "Impersonal Total Principle" (ITP): an ethical principle that states that all that matters is the total amount of positive and negative experiences in the world; other factors, like the identity of the people having those experiences, are not ethically important.

Your wording suggests that I would assume the ITP, which would then imply rejecting the value of identity. But actually my reasoning goes in the other direction: since I don't find personal identity to correspond to anything fundamental, my rejection of it causes me to arrive at something ITP-like. But note that I would not say that my rejection of personal identity necessarily implies ITP: "the total amount of positive and negative experience is all that matters" is a much stronger claim than a mere "personal identity doesn't matter". I have only made the latter claim, not the former.

That said, I'm not necessarily rejecting the ITP either. It does seem like a relatively reasonable claim, but that's more because I'm skeptical about the alternatives for ITP than because ITP itself would feel that strongly convincing.

I came up with a version of the problem where the children have the same capabilities, but one has a worse life than the other because they have more ambitious preferences that are harder to satisfy. In that instance it doesn't seem obvious at all to me that we should choose the one with the better life.

To me, ambitious preferences sound like a possible good thing because they might lead to the world becoming better off on net. "The reasonable man adapts himself to his environment. The unreasonable man adapts his environment to himself. All progress is therefore dependent upon the unreasonable man." That does provide a possible reason to prefer the child with the more ambitious preferences, if the net outcome for the world as a whole could be expected to be positive. But if it can't, then it seems obvious to me that we should prefer creating the non-ambitious child.

Then there are issues like the fact that the ITP suggests there's nothing wrong with someone dying if a new person is created to replace them who will have as good a life as they did.

Even if we accepted the ITP, we would still have good reasons to prefer not killing existing people: namely that society works much better and with much lower levels of stress and fear if everyone has strong guarantees that society puts a high value on preserving their lives. Knowing that you might be killed at any moment doesn't do wonders for your mental health.

And of course, there is the Repugnant Conclusion.

I stopped considering the Repugnant Conclusion a problem after reading John Maxwell's, Michael Sullivan's and Eliezer's comments on your "Mere Cable Channel Addition Paradox" post. And even if I hadn't been convinced by those, I also lean strongly towards negative utilitarianism, which also avoids the Repugnant Conclusion.

Here goes: It is better to create sets of experiences that are linked in certain ways (i.e., memory, personality, etc.). It is better to create experiences that are linked in this way, even if the total amount of positive experiences is lower because of this. It may even be better to create some amount of negative experiences if doing so allows you to make sure more of the experience sets are linked in certain ways.

While this phrasing indeed doesn't make any mention of "persons", it still seems to me primarily motivated by a desire to create a moral theory based on persons. If not, demanding the "link" criterion seems like an arbitrary decision.

Your wording suggests that I would assume the ITP, which would then imply rejecting the value of identity. But actually my reasoning goes in the other direction: since I don't find personal identity to correspond to anything fundamental, my rejection of it causes me to arrive at something ITP-like. But note that I would not say that my rejection of personal identity necessarily implies ITP: "the total amount of positive and negative experience is all that matters" is a much stronger claim than a mere "personal identity doesn't matter". I have only made the latter claim, not the former.

I have the same reductionist views of personal identity as you. I completely agree that it isn't ontologically fundamental or anything like that. The difference between us is that when you concluded it wasn't ontologically fundamental you stopped caring about it. I, by contrast, just replaced the symbol with what it stood for. I figured out what it was that we meant by "personal identity" and concluded that that was what I had really cared about all along.

That does provide a possible reason to prefer the child with the more ambitious preferences, if the net outcome for the world as a whole could be expected to be positive. But if it can't, then it seems obvious to me that we should prefer creating the non-ambitious child.

I can't agree with this. If I had the choice between a wireheaded child who lived a life of perfect passive bliss and a child who spent their life scientifically studying nature (but lived a hermit-like existence so their discoveries wouldn't benefit others), I would pick the second child, even if they endured many hardships the wirehead would not. I would also prefer not to be wireheaded, even if the wireheaded me would have an easier life.

When considering creating people who have different life goals, my first objective is of course, making sure both of those people would live lives worth living. But if the answer is yes for both of them then my decision would be based primarily on whose life goals were more in line with my ideals about what humanity should try to be, rather than whose life would be easier.

I suppose I am advocating something like G.E. Moore's Ideal Utilitarianism, except instead of trying to maximize ideals directly I am advocating creating people who care about those ideals and then maximizing their utility.

Even if we accepted the ITP, we would still have good reasons to prefer not killing existing people: namely that society works much better and with much lower levels of stress and fear if everyone has strong guarantees that society puts a high value on preserving their lives.

I agree, but I also think killing and replacing is wrong in principle.

I stopped considering the Repugnant Conclusion a problem after reading John Maxwell's, Michael Sullivan's and Eliezer's comments on your "Mere Cable Channel Addition Paradox" post.

I did too, but then I realized I was making a mistake. The problem with the RC is in its premises, not its practicality. I ultimately realized that the Mere Addition Principle was false, and that that is what is wrong with the RC.

While this phrasing indeed doesn't make any mention of "persons", it still seems to me primarily motivated by a desire to create a moral theory based on persons.

No, it is motivated by a desire to create a moral theory that accurately maps what I morally value, and I consider the types of relationships we commonly refer to as "personal identity" to be more morally valuable than pretty much anything. Would you rather I devise a moral theory based on stuff I didn't consider morally valuable?

If not, demanding the "link" criterion seems like an arbitrary decision.

You can make absolutely anything sound arbitrary if you use the right rhetoric. All you have to do is take the thing that I care about, find a category it shares with things I don't care about nearly as much, and then ask me why I am arbitrarily caring for one thing over the other even though they are in the same category.

For instance, I could say "Pain and pleasure are both brain states. It's ridiculously arbitrary to care about one brain state over another, when they are all just states that occur in your brain. You should be more inclusive and less arbitrary. Now please climb into that iron maiden."

I believe personal identity is one of the cornerstones of morality, whether you call it by that name, or replace the name with the things it stands for. I don't consider it arbitrary at all.

No, it is motivated by a desire to create a moral theory that accurately maps what I morally value, and I consider the types of relationships we commonly refer to as "personal identity" to be more morally valuable than pretty much anything. Would you rather I devise a moral theory based on stuff I didn't consider morally valuable?

Of course you should devise a moral theory based on what you consider morally valuable; it just fails to be persuasive to me, since it appeals to moral intuitions that I do not share (and which thus strike me as arbitrary).

Continued debate in this thread doesn't seem very productive to me, since all of our disagreement seems to come down to differing sets of moral intuitions / terminal values. So there's not very much to be said beyond "I think that X is valuable" and "I disagree".