
Answer by Shamash

I think the simplest way to answer this is to introduce a new scenario. Let's call it Scenario 0. Scenario 0 is similar to Scenario 1, but in this case your body is not disintegrated. The result seems pretty clear: you are unaffected and continue living life on earth. Other yous may be living their own lives in space, but it isn't as if some metaphysical consciousness link connects you to them.

And so, in Scenarios 1 and 2, where the earth-you is disintegrated, well, you're dead. But not to worry! The normal downsides of death (pain, inability to experience new things, sadness of those left behind) do not apply! As far as the physical universe is concerned (i.e. as far as reality and logic are concerned) there are now two living beings that both perceive themselves as having once been you. Their connection to the original you is no less significant than the connection between the you that goes to sleep and the you that wakes up in the morning.

EDIT: I realize this does not actually answer Questions 3 and 4. I don't have time to respond to those right now but I will in a future edit.

EDIT 2: The approach I'd take with Q3 and Q4 would be to maximize the total wealth of all clones that don't get disintegrated.

Let X be how much money is in my wallet and let L be the ticket price. In Scenario 1, the total wealth is 2X without the lottery or 2(X-L)+100 with the lottery. We buy the lottery ticket if 2(X-L)+100 > 2X; this inequality simplifies to -2L + 100 > 0, i.e. L < 50. The ticket is only worth buying if it costs less than $50.

For Q4 we have a similar formula, but with three clones in the end rather than two: 3(X-L)+100 > 3X, which gives L < 100/3. I would only buy the ticket if it cost less than about $33.33.
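To make the arithmetic explicit, here is the break-even calculation as a tiny Python sketch. The function name and the $100 prize default are just my own illustrative labels for the quantities above:

```python
def max_ticket_price(n_clones: int, prize: float = 100.0) -> float:
    """Buy the ticket iff n*(X - L) + prize > n*X, which reduces to L < prize / n.

    Current wealth X cancels out, so only the clone count and prize matter.
    """
    return prize / n_clones

print(max_ticket_price(2))  # Q3, two surviving clones: buy below $50.00
print(max_ticket_price(3))  # Q4, three surviving clones: buy below ~$33.33
```

Note that X cancels out entirely, which is why only the clone count and the prize show up in the answer.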

As a whole, I find your intuition of a good future similar to my own, but I do think that once it's examined more closely there are a few holes worth considering. I'll start by listing the details I strongly agree with, then the ones I am unsure of, and then the ones I strongly disagree with.

Strongly Agree

  • It makes sense for humans to modify their memories and potentially even their cognitive abilities depending on the circumstance. The example provided of a worldbuilder sealing off their memories to properly enjoy their world from an inhabitant's perspective seems plausible. 
  • The majority of human experience is dominated by virtual/simulated worlds.

Unsure

  • It seems inefficient for this person to be disconnected from the rest of humanity and especially from "god". In fact, the AI seems like it's too small of an influence on the viewpoint character's life. 
  • The worlds with maximized pleasure settings sound a little dangerous and potentially wirehead-y. A properly aligned AGI probably would frown on wireheading.

Strongly Disagree

  • If you create a simulated world where simulated beings are real and have rights, that simulation becomes either less ethical or less optimized for your utility. Simulated beings should either be props without qualia or granted just as much power as the "real" beings if the universe is to be truly fair. 
  • Inefficiency like creating a planet where a simulation would do the same thing but better seems like an untenable waste of resources that could be used on more simulations. 
  • When simulated worlds are an option to this degree, it seems ridiculous to believe that abstaining from simulations altogether would be an optimal action in any circumstance. Couldn't you go to a simulation optimized for reading, a simulation optimized for hot chocolate, etc.? Partaking of such things in the real world also seems to be a waste of resources.

I might update this comment if anything else comes to mind. 

By the way, if you haven't already, I would recommend reading the Fun Theory sequence by Eliezer Yudkowsky. One of the ways you can access it is through this post:

https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence

"Seduced by Imagination" might be particularly relevant, if this sort of thing has been on your mind for a while. 

I've read this post three times through and I still find it confusing. Perhaps it would be most helpful to say the parts I do understand and agree with, then proceed from there. I agree that the information available to hirers about candidates is relatively small, and that the future in general is complicated and chaotic.

I suppose the root of my confusion is this: won't a long-term extrapolation of a candidate's performance just magnify any inaccuracies that the hirer has mistakenly inferred from what they already know about the candidate? Isn't the most accurate information about the candidate here in the present rather than a low-confidence guess about the future? 

It's also unclear what all the questions in each step are meant to be assisting with. Are you really saying that, based on a candidate's application and short interview(s), you can make meaningful predictions about questions like "Who, other than Jamie, paid the price for their struggle, and in what way?" or "Where are they moving to? Do they keep in touch?". From my point of view, trying to generate weighted probabilities for every possible outcome just seems a lot less practical than merely comparing an applicant's resume and interview directly against another applicant's.

While one's experiences and upbringing strongly shape one's current mental state, they are not unique in that regard. A great number of factors determine what someone is at a particular time, including their genetics, their birth conditions, the health of their mother during pregnancy, and so on. It seems to me that the claim that "everyone is the same but experiencing life from a different angle" is not really saying much at all, because the scope of the differences two "angles" may have is not bounded. You come to the same conclusion later on in your post, but you take a different path to get there, so I thought my own observation might be helpful.

On your next point, [Zombies! Zombies?](https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies) is an excellently written post on the subject that I agree with. I think it may change your opinion, especially on the claim that a p-zombie's brain and a conscious brain are physically identical. 

Your loose definition of consciousness -- "the ability to think and feel and live in the moment" -- clearly does not apply to inanimate objects, or at least not to every single inanimate object. Ultimately, yes, we are all made of particles; we are not in disagreement about that. But to say that everything is conscious renders the word "conscious" totally meaningless.

Sure, everyone and everything is constantly being changed and recycled, I don't disagree there. I do think, personally, that some patterns of matter are more important than others. 

I don't see how your takeaway follows from your claims. Are you saying that I should treat rocks with kindness, because rocks are essentially the same as me? And what does it mean to leave things better? In a different, more common context, I can generally agree with ideas like "treat people, including yourself, with kindness and empathy" or "leave the world better than you found it", but the reasons I believe in those ideas come from somewhere completely different.

Ultimately, if seeing the world this way helps you to be a happier, healthier person, then I can't say that you should or shouldn't keep seeing things this way. But I do think that you could find much more consistent and rational reasons to justify your morality. 

Consider the following thought experiment: You discover that you've just been placed into a simulation, and that every night at midnight you are copied and deleted instantaneously, and in the next instant your copy is created where the original once was. Existentially terrified, you go on an alcohol and sugary treat binge, not caring about the next day. After all, it's your copy who has to suffer the consequences, right? Eventually you fall asleep. 

The next day you wake up hungover as all hell. After a few hours of recuperation, you consider what has happened. This feels just like waking up hungover before you were put into the simulation. You confirm that the copy-and-deletion did indeed occur. Are you still the same person you were before?

You're right that it's like going to sleep and never waking up, but Algon was also right about it being like going to sleep and waking up in the morning, because from the perspective of "original" you those are both the same experience. 

Shamash

Shortly after the Dagger of Detect Evil became available to the public, Wiz's sales of the Dagger of Glowing Red skyrocketed.

There are a few ways to look at the question, but by my reasoning, none of them result in the answer "literally infinite."

From a deterministic point of view, the answer is zero degrees of freedom, because whatever choice the human "makes" is the only choice he/she could possibly make.

From the perspective of treating decision-making as a black box which issues commands to the body, the number of commands that the body can physically comply with is limited. Humans only have a certain, finite quantity of nerve cells to issue these commands with and through. Therefore, the set of commands that can be sent through these nerves at any given time must also be finite.
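If it helps, here is a back-of-the-envelope version of that finiteness argument in Python. The neuron count and the fire-or-silent model are simplifying assumptions of mine, not claims about real neuroanatomy:

```python
import math

# Toy upper bound: if each of N efferent (motor) neurons is either
# firing or silent at a given instant, the body can receive at most
# 2**N distinct commands at that instant. N is a made-up order of
# magnitude chosen purely for illustration.
N = 500_000
digits = int(N * math.log10(2)) + 1
print(f"at most 2^{N} commands: a {digits}-digit number, vast but finite")
```

The particular number doesn't matter; the point is that any such bound, however astronomically large, is still finite.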

Shamash

While I am not technically a "New User" in the context of the age of my account, I comment very infrequently, and I've never made a forum-level post. 

I would rate my own rationality skills and knowledge as slightly above those of the average person, but below those of the average active LessWrong member. While I am aware that I possess many habits and biases that reduce the quality of my written content, I have the sincere goal of becoming a better rationalist.

There are times when I am unsure whether an argument or claim that seems incorrect really is flawed, or whether it is my reasoning that is flawed. In such cases, it seems intuitive to write a critical comment which explicitly states what I perceive to be faulty about that claim or argument and what thought processes have led to this perception. If these criticisms are valid, the discussion of the subject is improved and those who read the comment will benefit. If they are not valid, then I may be corrected by a response that points out where my reasoning went wrong, helping me avoid such errors in the future.

Amateur rationalists like myself are probably going to make mistakes when criticizing other people's written content, even when we strive to follow community guidelines. My concern with your suggestions is that these changes may discourage users like me from creating the flawed posts and comments that help us grow as rationalists.
