
Is it rational to be religious? Simulations are required for an answer.

What must a sane person1 think regarding religion? The naive first approximation is "religion is crap". But let's consider the following:

Humans are imperfectly rational creatures. Among our faults is that we are not psychologically able to operate maximally according to our values; we can, for example, suffer burnout if we try to push ourselves too hard.

It is thus important for us to consider which psychological habits and choices contribute to our being able to work as diligently for our values as we want to (while staying mentally healthy). It is a theoretical possibility, a hypothesis that could be studied experimentally, that the optimal2 psychological choices include embracing some form of Faith, i.e. beliefs not resting on logical proof or material evidence.

In other words, it could be that our values require rejecting Occam's Razor (in some cases), since embracing Occam's Razor might mean missing out on opportunities to manipulate ourselves psychologically into being more of what we want to be.

To a person aware of The Simulation Argument, the above suggests interesting corollaries:

  1. Running ancestor simulations is the ultimate tool for finding out what form of Faith (if any) is most conducive to our being able to live according to our values.
  2. If there is a Creator and we are in fact currently in a simulation being run by that Creator, it would have been rather humorous of them to create our world such that the above method would yield "knowledge" of their existence.
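The experiment in corollary 1 can be caricatured in a few lines of code. This is a toy sketch, not a real ancestor sim: every number (the burnout risk, the epistemic cost of false beliefs, the candidate "doses" of Faith) is invented purely for illustration.

```python
import random

def simulate_life(faith, seed, days=365, burnout_risk=0.3, epistemic_cost=1.0):
    """One toy 'ancestor sim'. Each day the agent either burns out
    (Faith lowers the risk) or works; but work can be misdirected by
    the false beliefs that the Faith carries (its epistemic cost)."""
    rng = random.Random(seed)
    value = 0.0
    for _ in range(days):
        if rng.random() < burnout_risk - faith:
            continue  # burned out today: no progress
        if rng.random() < epistemic_cost * faith:
            continue  # worked hard, but on the wrong thing
        value += 1.0
    return value

def best_faith(candidates, n_sims=200):
    """Run many sims per candidate dose and keep the highest average."""
    def avg(f):
        return sum(simulate_life(f, seed) for seed in range(n_sims)) / n_sims
    return max(candidates, key=avg)

# With these made-up parameters, a moderate dose beats both extremes.
print(best_faith([0.0, 0.15, 0.3]))
```

Expected daily progress here is (0.7 + faith) × (1 − faith), so the toy model's optimum sits strictly between zero Faith and maximal Faith; whether anything like that holds for real humans is exactly the empirical question the post raises.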

 


1: Actually, what I've written here assumes we are talking about humans. Persons-in-general may be psychologically different, and theoretically capable of perfect rationality.

2: At least for some individuals, not necessarily all.

Comments


First I was like, "oh no, another one claims that faith could be rational without showing why it is". Then I parsed the simulation part and clicked "upvote" because it showed me a new and scary possibility.

In the year 2020, cousin_it downloads some software for running predictor sims. He wants to optimize his future conduct - test which particular minor mind-hack would be the most beneficial, given his individual psychological makeup. So he runs many ancestor sims of himself poisoned with different irrational mind-hacks to see who wins in the past. I am one such sim, poisoned in an unknown manner; what should I do?

Times like these I regret that I can't write good sci-fi in English because it's not my first language.

Times like these I regret that I can't write good sci-fi in English because it's not my first language.

I find that hard to believe; your non-native-speaker status is not apparent from your comments.

Clarification: I find it hard to believe that's a limitation on writing good sci-fi, not that English is not your first language.

Writing quality fiction takes more facility with language than writing quality nonfiction (like posts) does. (Not that nativeness is an absolute barrier: English was Nabokov's third language, IIRC.)

A cynic could observe that readers of genre fiction are comparatively less demanding in this respect, though.

Very off topic, but I've actually often wondered why there don't seem to be any non-native speakers writing commercial fiction in English, given how massively larger the English-speaking market is compared to that of a lot of the smaller European languages. Nabokov is literally the only example I can think of.

Well, fiction writing generally isn't the sort of thing one enters into for the wonderful market. You do it because you love it and are somehow good enough at it that it can pay the bills. Probably you write your first novel in your spare time with very low expectations of it ever being published.

So why write in anything but your favorite language? And while, proficiency-wise, people like Nabokov and Conrad exist, chances are that you aren't one of them. (That said, there are probably more non-native writers of note than you think. How many of my favorite authors have red hair? I have no idea.)

Thing is, I read almost no fiction in Finnish and quite a lot in English. There isn't much of a tradition of speculative fiction in Finnish that isn't just copying stuff already done in English. So if I were to write a SF or a fantasy story, I'd seriously consider whether I could do it in English, because for me those kinds of stories are written in English and then maybe poorly translated into Finnish.

I'm sure few people match up to Nabokov or Conrad (whose non-nativeness I didn't know about), but I find it odd that I don't know of any contemporary writers who are even trying to write in English without being native speakers. I'm sure there are ones I don't know about, so any examples of currently active published non-native English fiction writers are welcome.

because for me those kinds of stories are written in English and then maybe poorly translated into Finnish.

Sounds like an opportunity! I wonder if it would be more valuable to translate (with the royalties that implies) or to just rip off?

An opportunity for what? Genre literature is already translated into Finnish, and the publishers with something that isn't Harry Potter or Lord of the Rings are mostly able to stick around, but probably aren't making big profits. Finnish-written SF or fantasy is mostly crap and sinks without a trace. A good indication of the general level of quality is that for the first 20 years of the Atorox prize for the best Finnish SF short story, two outlier authors won ten of the 20 awards. No one has even been able to earn a full-time living writing SF in Finnish, not to mention growing rich.

Of course there's nothing preventing someone from writing stuff on par with Ted Chiang and Jeff VanderMeer in Finnish, but why bother? The book could be a cult classic but probably not a mainstream hit, and there aren't enough non-mainstream Finnish-speaking buyers for a book to earn someone a living.

I don't even know who buys all the translated Finnish SF books. I've read a bunch of those from the library, but almost all of the books I've bought have been in English. Why bother with translations that are both more expensive than the original paperbacks and have clunkier language?

Times like these I regret that I can't write good sci-fi in English because it's not my first language.

Why do you have to write it in English?

In the year 2020, cousin_it downloads some software for running predictor sims. He wants to optimize his future conduct -test which particular minor mind-hack would be the most beneficial, given his individual psychological makeup. So he runs many ancestor sims of himself poisoned with different irrational mind-hacks to see who wins in the past. I am one such sim, poisoned in an unknown manner; what should I do?

I have precommitted as strongly as I can to never run simulations of myself which are worse off than the version the simulation was based on. This might fall out as a consequence of UDT, but with a time-varying utility function I'm not really sure.

In general, self-copying and self-simulation require extreme care. They might be able to affect subjective experience in a way that goes back in time. The rules of subjective experience, if any, are non-transferrable (you can't learn them from someone else who's figured them out, even in principle) and might not be discoverable at all.

Humans can't easily precommit to anything at all, and even if they could, it'd be incredibly stupid to try without thinking about it for a very very long time. I'm surprised at how many people don't immediately see this.

I don't believe your decision follows from UDT. If you have a short past and a long future, knowledge gained from sims may improve your future enough to pay off the sims' suffering.
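The tradeoff in this reply reduces to an inequality: run the sims iff the improvement they buy over your remaining future outweighs the suffering inside them. A minimal sketch, with all quantities invented for illustration:

```python
def sims_pay_off(future_years, gain_per_year, n_sims, suffering_per_sim):
    """Crude expected-utility test: does the total future improvement
    bought by the sims exceed the total suffering inside them?"""
    return future_years * gain_per_year > n_sims * suffering_per_sim

# A long future amortizes the sims' suffering; a short one does not.
print(sims_pay_off(10**6, 0.01, 1000, 1.0))  # True
print(sims_pay_off(10, 0.01, 1000, 1.0))     # False
```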

This post does not seem to contribute much. As nawitus pointed out, the existing distinction between instrumental and epistemic rationality already covers this ground well enough.

While it seems obvious that in some cases, a false belief will have greater utility than a true one (I can set up a contrived example if you need one), it's a devil's bargain. Once you've infected yourself with a belief that cannot respond to evidence, you will (most likely) end up getting the wrong answer on Very Important Problems.

And if you've already had your awakening as a rationalist, I'd like to think it would be impossible to make yourself honestly believe something that you know to be false.

Yes, the irony in the last statement is intended.

I think the mistaken assumption here is that you can actually choose to have faith. Certainly you can choose to repeat the words "I have faith". You can even say those words inside your head. You can even attend religious services. That is not the same as actually believing in your religion.

I think this essentially is what Orwell called "Doublethink", and it seems to explain much of the religious behavior I personally have seen.

This post is based on the (very common) mistake of equating religious practice and religious faith. Religion is only incidentally about what you believe; the more important components are community and ritual practice. From that perspective, it is a lot easier to believe that religion can be beneficial. What you think about the Trinity, for instance, is less important than the fact that you go to Mass and see other members of your community there and engage in these bizarre activities together.

There is an enormous blindspot about society in the libertarian/rationalist community, of which the above is just one manifestation.

No, I very clearly am aware of those two things as separate things. (Though I could have been clearer about this in my post.)

It is not obvious that faith couldn't be psychologically useful, also separately from practice.

I know some individuals that I believe would be worse off if they were to have a crisis of faith and lose their religion. And while I can't be sure and have never run any tests to find out, I think that they really believe, not just with belief in belief. By the way, none of these are particularly intelligent people.

But I have a hard time imagining someone intelligent and rational who would be better off deceiving themself and gaining faith. Adopting a religion where you are allowed to fake it (like Risto suggests) would almost certainly be better. Sometimes I adopt foma to help me through the day, but I don't take them seriously.

Of course, it's easy to imagine situations where they would be better off mouthing faith, such as kidnap and interrogation by fundamentalist terrorists, or daily life in a lot of societies (past and present) where rationality is undervalued. But I don't think that this is what you mean.

According to these definitions, it could be instrumentally rational to be religious for some subset of people, but not epistemically rational.

I don't think simulations help. Once you start simulating yourself to arbitrary precision, that being would have the same thoughts as you, including "Hey, I should run a simulation", and then you're back to square one.

More generally, when you think about how to interact with other people, you are simulating them, in a crude sense, using your own mind as a shortcut. See empathic inference.

If you become superintelligent and have lots more computing resources, then your simulations of other minds themselves become minds, with experiences indistinguishable from yours, and make the same decisions, for the same reasons. What's worse, the simulations have the same moral weight! See EY's nonperson predicates.

(This has inspired me to consider "Virtualization Decision Theory", VDT, which says, "Act as though setting the output of yourself in a simulation run by beings deciding how to interact with a realer version of you that you care about more.")

Here are my earlier remarks on the simulated-world / religion parallel.

Belief in the concept of a time-continuous "self" might be an example of an article of Faith that is useful for humans.

(Most people believe in a time-continuous self anyway; they just don't realize that our current best physics offers no evidence for its existence.)

In what ways would the world look different to me if my TCS did/did not exist?

Can't think of any such way.

Similarly, the existence or non-existence of some sorts of inactive Gods doesn't affect your observations in any way.

Occam's Razor would eliminate both those Gods and a time-continuous self, though.

(But personally, I propose that we may have faith in a time-continuous self, if it is sufficiently useful psychologically. And that it's an open question whether there are other supernatural things that at least some should also have faith in.)

I didn't downvote this post, but I can't say I endorse seeing more posts like it. The concept of this post is one of the least interesting in a huge conceptspace of decision theory problems, especially decision theory problems in an ensemble universe. To focus on 'having faith' and 'rationality' in particular might seem clever, but it fails to illuminate in the same way that e.g. Nesov's counterfactual mugging does. When you start thinking about various things simulators might do, you're probably wrong about how much measure is going to be taken up by any given set of simulations. Especially so once you consider that a superintelligence is extremely likely to occur before brain emulations and that a superintelligence is almost assuredly not going to be running simulations of the kind you specify.

Instead of thinking "What kind of scenario involving simulations could I post to Less Wrong and still be relevant?", as it seems to me you did, it would be much better to ask the more purely curious question "What is the relative power of optimization processes that would cause universes that include agents in my observer moment reference class to find themselves in a universe that looks like this one instead of some other universe?" Asking this question has led me to some interesting insights, and I imagine it would interest others as well.

The basic idea is sound - If you really think religion is good/bad for X, the best proof would be to run a simulation and observe outcomes for X. I interpret the downvoting to -10 as a strong collective irrational bias against religion, even in a purely instrumental context.

The corollaries are distracting.

Don't some interpretations of neopagan magic have a bit of the same idea as the religion thing here? The idea is that there isn't assumed to be any supernatural woo involved, the magic rituals just exist as something that an observing human brain will really glom onto, and will end up doing stuff that it is able to but otherwise might not have done.

I think Eric S. Raymond and Alan Moore have written about magic from this outlook. Chaos magic with its concept of belief as a tool might also be relevant.

Yes; Anders Sandberg is a paragon of rationality, but is (or was, last time I asked him about it) also a neopagan, in the way just described.

I'm not convinced that the religious have any particular advantages wrt akrasia and such things.

A conversion is sure to give you a great deal of energy & willpower temporarily, but ultimately that's a finite resource.

The main advantage the religious have is supportive community. That is where rationalists really fall down, although I think LW is a step in the right direction.

I think the question "Is it rational to be religious?" is one that deserves critical attention and testing, but talk of ancestor simulations completely demolishes the point. Any entity capable of creating an actual ancestor simulation--a fully-modeled "Matrix" populated with genuinely human-equivalent sentient Sims--is an entity for whom the results of such a test would be irrelevant and obsolete. The premise, that some form of Faith might be useful or even necessary for rational humans to maximally act in accordance with their values, is not applicable for a posthuman being.

The technology for creating a real ancestor simulation would almost certainly exist in a context of other technologies that are comparably advanced within their fields. If the computer power exists to run a physics engine sufficient to simulate a whole planet and its environs, complete with several billion human-level consciousnesses, the beings who possess that power would almost certainly be able to enhance their own cognitive and psychological capacities to the point that Faith would no longer be necessary for them, even if it might be for us here and now, or for the Sims in the ancestor simulation. A creator of ancestor simulations would for all practical intents and purposes be God, even in relation to his/her own universe. With molecular nanotechnology, utility fogs, programmable matter, and technologies we can't even imagine, conjuring a burning bush or a talking snake or a Resurrection would be child's play.

Proposing ancestor simulations as a way to test the usefulness of Faith is like saying, "Let's use a TARDIS to go watch early space-age planets and see if rockets or solar sails are the best way for us to explore the universe!"

On the other hand, we do already possess computer platforms that are fairly good at emulating other human-level intelligences, and we routinely create plausible, though limited world-simulations. These are "human brains" and "stories," respectively. So one way to partially examine and test to determine whether or not it could be rational to be religious would be to write a story about a rational person who adopts a Faith and applies it to maximally operate according to his or her values.

Then, present the story to people who believe that Faith, and people who don't. Is the story itself believable? Do the other minds processing the simulation (story) judge that it accurately models reality? Unfortunately this method cannot simultaneously generate billions of fully-realized simulated lives so that a wide variety of Faiths and life-circumstances under which they are used can be examined. Instead, the author would have to generate what they consider to be a plausible scenario for a rational person adopting a Faith and write a genuinely believable story about it. To serve as an effective test, the story would have to include as many realistic circumstances adverse to the idea as possible, in the same way that the secret to passing the 2-4-6 Test is to look for number sets that produce a "no." It could not be written like a fictional Utopia in which the Utopia works only because everyone shares the author's beliefs and consistently follows them.

Eliezer's story Harry Potter and the Methods of Rationality does a mirror-opposite of this, providing a story-test for the question, "Would the Sequences still be applicable even under the extreme circumstance of being catapulted into the Harry Potter universe?" Some of the best moments in this story are where Harry's rationalist world-view is strained to the utmost, like when he sees Professor McGonagall turn into a cat. A reader who finds the story "believable" (assuming sufficient suspension-of-disbelief to let the magic slide) will come away accepting that, if the Sequences can work even in a world with flying broomsticks and shape-shifting witches, they'll probably work here in our rather more orderly and equation-modelable universe.

So, a "So-And-So and the Methods of Faith" story might, if well-written, be able to demonstrate that Faith could be a valid way of programming the non-rational parts of our brain into helping us maximally operate according to our values.

Another method of testing (perhaps a next step) would be to adopt the techniques of Chaos Magic and/or Neuro-Linguistic Programming and try out the utility of Faith (perhaps testing different Faiths over set periods of time) in one's own life. Or better still: get the funding for a proper scientific study with statistically sufficient sample sizes, control groups, double-blind protocols, etc.

This post definitely has problems, but given the fairly interesting discussion it appears to have prompted, does not seem to deserve being in the minus-double digits.

I think the central point is that the practical value of faith is more of an empirical question than a logical one. The central problem, of course, is that for a real person, accepting some propositions on faith requires a (likely significant) dent in their overall rationality. The question is not about the value of faith, but about the tradeoffs made to obtain it; a complexity which is not really addressed here, and which may prove so entwined as to be impossible to navigate deliberately.

In other words, once you've chosen a certain amount of rationality, many paths of faith are closed to you. Conversely, once you've chosen a certain amount of faith, some levels of rationality are closed to you.