Followup to: A Premature Word on AI, The Modesty Argument

Once, during the year I was working with Marcello, I passed by a math book he was reading, left open on the table.  One formula caught my eye (why?); and I thought for a moment and said, "This... doesn't look like it can be right..."

Then we had to prove it couldn't be right.

Why prove it?  It looked wrong; why take the time for proof?

Because it was in a math book.  By presumption, when someone publishes a book, they run it past some editors and double-check their own work; then all the readers get a chance to check it, too.  There might have been something we missed.

But in this case, there wasn't.  It was a misprinted standard formula, off by one.

I once found an error in Judea Pearl's Causality - not just a misprint, but an actual error invalidating a conclusion in the text.  I double-checked and triple-checked, as best I was able, and then sent an email to Pearl describing what I thought the error was, and what I thought was the correct answer.  Pearl confirmed the error, but said my answer wasn't right either, for reasons I didn't understand and would have had to go back and do some rereading and analysis to follow.  I had other stuff to do at the time, unfortunately, and couldn't expend the energy.  And by the time Pearl posted an expanded explanation to the website, I'd forgotten the original details of the problem...  Okay, so my improved answer was wrong.

Why take Pearl's word for it?  He'd gotten the original problem wrong, and I'd caught him on it - why trust his second thought over mine?

Because he was frikkin' Judea Pearl.  I mean, come on!  I might dare to write to Pearl about an error, when I could understand the error well enough that it would have seemed certain, if not for the disagreement.  But it didn't seem likely that Pearl would concentrate his alerted awareness on the problem, warned of the mistake, and get it wrong twice.  If I didn't understand Pearl's answer, that was my problem, not his.  Unless I chose to expend however much work was required to understand it, I had to assume he was right this time.  Not just as a matter of fairness, but of probability - that, in the real world, Pearl's answer really was right.

In IEEE Spectrum's sad little attempt at Singularity coverage, one bright spot is Paul Wallich's "Who's Who In The Singularity", which (a) actually mentions some of the real analysts like Nick Bostrom and myself and (b) correctly identifies me as an advocate of the "intelligence explosion", whereas e.g. Ray Kurzweil is designated as "technotopia - accelerating change".  I.e., Paul Wallich actually did his homework instead of making everything up as he went along.  Sad that it's just a little PDF chart.

Wallich's chart lists Daniel Dennett's position on the Singularity as:

Human-level AI may be inevitable, but don’t expect it anytime soon. "I don’t deny the possibility a priori; I just think it is vanishingly unlikely in the foreseeable future."

That surprised me.  "Vanishingly unlikely"?  Why would Dennett think that?  He has no obvious reason to share any of the standard prejudices.  I would be interested in knowing Dennett's reason for this opinion, and mildly disappointed if it turns out to be the usual, "We haven't succeeded in the last fifty years, therefore we definitely won't succeed in the next hundred years."

Also in IEEE Spectrum, Steven Pinker, author of The Blank Slate - a popular introduction to evolutionary psychology that includes topics like heuristics and biases - is quoted:

When machine consciousness will occur:  "In one sense—information routing—they already have. In the other sense—first-person experience—we'll never know."

Whoa, said I to myself, Steven Pinker is a mysterian?  "We'll never know"?  How bizarre - I just lost some of the respect I had for him.

I disagree with Dennett about Singularity time horizons, and with Pinker about machine consciousness.  Both of these are prestigious researchers whom I started out respecting about equally.  So why am I curious to hear Dennett's reasons; but outright dismissive of Pinker?

I would probably say something like, "There are many potential reasons to disagree about AI time horizons, and no respectable authority to correct you if you mess up.  But if you think consciousness is everlastingly mysterious, you have completely missed the lesson of history; and respectable minds will give you many good reasons to believe so.  Non-reductionism says something much deeper about your outlook on reality than AI timeframe skepticism; someone like Pinker really ought to have known better."

(But all this presumes that Pinker is the one who is wrong, and not me...)

Robert Aumann, Nobel laureate and original inventor of the no-disagreement-among-Bayesians theorem, is a believing Orthodox Jew.  (I know I keep saying this, but it deserves repeating, for the warning it carries.)  By the time I discovered this strange proclivity of Aumann's, I had long ago analyzed the issues.  Discovering that Aumann was Jewish did not cause me to revisit the issues even momentarily.  I did not consider for even a fraction of a second that this Nobel laureate and Bayesian might be right, and myself wrong.  I did draw the lesson, "You can teach people Bayesian math, but even if they're genuinely very good with the math, applying it to real life and real beliefs is a whole different story."

Scott Aaronson calls me a bullet-swallower; I disagree.  I am very choosy about which bullets I dodge, and which bullets I swallow.  Any view of disagreement that implies I should not disagree with Robert Aumann must be wrong.

Then there's the whole recent analysis of Many-Worlds.  I felt very guilty, writing about physics when I am not a physicist; but dammit, there are physicists out there talking complete nonsense about Occam's Razor, and they don't seem to feel guilty for using words like "falsifiable" without being able to do the math.

On the other hand, if, hypothetically, Scott Aaronson should say, "Eliezer, your question about why 'energy' in the Hamiltonian and 'energy' in General Relativity are the same quantity, is complete nonsense, it doesn't even have an answer, I can't explain why because you know too little," I would be like "Okay."

Nearly everyone I meet knows how to solve the problem of Friendly AI.  I don't hesitate to dismiss nearly all of these solutions out of hand; standard wrong patterns I dissected long since.

Nick Bostrom, however, once asked whether it would make sense to build an Oracle AI, one that only answered questions, and ask it our questions about Friendly AI.  I explained some of the theoretical reasons why this would be just as difficult as building a Friendly AI:  The Oracle AI still needs an internal goal system to allocate computing resources efficiently, and it has to have a goal of answering questions and updating your mind, so it's not harmless unless it knows what side effects shouldn't happen.  It also needs to implement or interpret a full meta-ethics before it can answer our questions about Friendly AI.  So the Oracle AI is not necessarily any simpler, theoretically, than a Friendly AI.

Nick didn't seem fully convinced of this.  I knew that Nick knew that I'd been thinking about the problem for years, so I knew he wasn't just disregarding me; his continued disagreement meant something.  And I also remembered that Nick had spotted the problem of Friendly AI itself, at least two years before I had (though I did not realize this until later, when I was going back and reading some of Nick's older work).  So I pondered Nick's idea further.  Maybe, whatever the theoretical arguments, an AI that was supposed to only answer questions, and designed to the full standards of Friendly AI without skipping any of the work, could end up a pragmatically safer starting point.  Every now and then I prod Nick's Oracle AI in my mind, to check the current status of the idea relative to any changes in my knowledge.  I remember Nick has been right on previous occasions where I doubted his rightness; and if I am an expert, so is he.

I was present at a gathering with Sebastian Thrun (leader of the team that won the 2005 DARPA Grand Challenge for autonomous vehicles).  Thrun introduced the two-envelopes problem - one envelope contains twice as much money as the other; you may open your chosen envelope and then decide whether to switch - and then asked:  "Can you find an algorithm that, regardless of how the envelope amounts are distributed, always has a higher probability of picking the envelope with more money?"

I thought and said, "No."

"No deterministic algorithm can do it," said Thrun, "but if you use a randomized algorithm, it is possible."

Now I was really skeptical; you cannot extract work from noise.

Thrun gave the solution:  Just pick any function from dollars onto probability that decreases monotonically and continuously from probability 1 at 0 dollars to probability 0 at infinity.  Then if you open the envelope and find that amount of money, roll a die and switch the envelope at that probability.  However much money was in both envelopes originally, and whatever the distribution, you will always be more likely to switch away from the envelope holding the lower amount of money than from the one holding more - and so you end up with the larger amount more than half the time.

I said, "That can't possibly work... you can't derive useful work from an arbitrary function and a random number... maybe it involves an improper prior..."

"No it doesn't," said Thrun; and it didn't.

So I went away and thought about it overnight and finally wrote an email in which I argued that the algorithm did make use of prior knowledge about the envelope distribution.  (As the density of the differential of the monotonic function, in the vicinity of the actual envelope contents, goes to zero, the expected benefit of the algorithm over random chance, goes to zero.)  Moreover, once you realized how you were using your prior knowledge, you could see a derandomized version of the algorithm which was superior, even though it didn't make the exact guarantee Thrun had made.
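To illustrate what a derandomized version might look like (a sketch only, not necessarily the exact version I had in mind; the fixed threshold and distribution below are stand-ins for whatever prior knowledge about the envelopes you actually have):

import random

def threshold_trial(t, draw_smaller):
    # Deterministic rule: switch if and only if the amount seen is below the threshold t.
    s = draw_smaller()
    held, other = (s, 2 * s) if random.random() < 0.5 else (2 * s, s)
    if held < t:
        held, other = other, held
    return held > other

# Ends with the larger envelope with probability 1/2 + (1/2) * P(s < t <= 2*s): never worse
# than chance, and strictly better whenever the prior puts mass on s < t <= 2*s.  Unlike
# Thrun's randomized rule, it makes no guarantee for distributions that avoid the threshold.
t, draw = 50.0, lambda: random.lognormvariate(3, 1)   # arbitrary example choices
trials = 200000
print(sum(threshold_trial(t, draw) for _ in range(trials)) / trials)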

But Thrun's solution did do what he said it did.
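Indeed, a quick simulation bears this out (a minimal sketch; the particular decreasing function and envelope distribution below are arbitrary stand-ins of mine, not part of Thrun's statement):

import random

def thrun_trial(f, draw_smaller):
    # One trial: return True if the randomized switching rule ends up holding the larger envelope.
    s = draw_smaller()                          # smaller amount; the other envelope holds 2*s
    held, other = (s, 2 * s) if random.random() < 0.5 else (2 * s, s)
    if random.random() < f(held):               # switch with probability f(amount seen)
        held, other = other, held
    return held > other

f = lambda y: 1.0 / (1.0 + y)                   # decreases from 1 at 0 dollars toward 0 at infinity
draw = lambda: random.lognormvariate(3, 1)      # arbitrary distribution over the smaller amount
trials = 200000
wins = sum(thrun_trial(f, draw) for _ in range(trials))
print(wins / trials)                            # comes out reliably above 0.5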

(In a remarkable coincidence, not too much later, Steve Omohundro presented me with an even more startling paradox.  "That can't work," I said.  "Yes it can," said Steve, and it could.  Later I perceived, after some thought, that the paradox was a more complex analogue of Thrun's algorithm.  "Why, this is analogous to Thrun's algorithm," I said, and explained Thrun's algorithm.  "That's not analogous," said Steve.  "Yes it is," I said, and it was.)

Why disagree with Thrun in the first place?  He was a prestigious AI researcher who had just won the DARPA Grand Challenge, crediting his Bayesian view of probability - a formidable warrior with modern arms and armor.  It wasn't a transhumanist question; I had no special expertise.

Because I had worked out, as a very general principle, that you ought not to be able to extract cognitive work from randomness; and Thrun's algorithm seemed to be defying that principle.

Okay, but what does that have to do with the disagreement?  Why presume that it was his algorithm that was at fault, and not my foolish belief that you couldn't extract cognitive work from randomness?

Well, in point of fact, neither of these was the problem.  The fault was in my notion that there was a conflict between Thrun's algorithm doing what he said it did, and the no-work-from-randomness principle.  So if I'd just assumed I was wrong, I would have been wrong.

Yet surely I could have done better, if I had simply presumed Thrun to be correct, and managed to break down the possibilities for error on my part into "The 'no work from randomness' principle is incorrect" and "My understanding of what Thrun meant is incorrect" and "My understanding of the algorithm is incomplete; there is no conflict between it and 'no work from randomness'."

Well, yes, on that occasion, this would have given me a better probability distribution, if I had assigned probability 0 to a possibility that turned out, in retrospect, to be wrong.

But probability 0 is a strawman; could I have done better by assigning a smaller probability that Thrun had said anything mathematically wrong?

Yes.  And if I meet Thrun again, or anyone who seems similar to Thrun, that's just what I'll do.

Just as I'll assign a slightly higher probability that I might be right, the next time I find what looks like an error in a famous math book.  In fact, one of the reasons why I lingered on what looked like a problem in Pearl's Causality, was that I'd previously found an acknowledged typo in Probability Theory: The Logic of Science.

My rhythm of disagreement is not a fixed rule, it seems.  A fixed rule would be beyond updating by experience.

I tried to explain why I disagreed with Roger Schank, and Robin said, "All else equal a younger person is more likely to be right in a disagreement?"

But all else wasn't equal.  That was the point.  Roger Schank is a partisan of what one might best describe as "old school" AI, i.e.,  suggestively named LISP tokens.

Is it good for the young to disagree with the old?  Sometimes.  Not all the time.  Just some of the time.  When?  Ah, that's the question!  Even in general, if you are disagreeing about the future course of AI with a famous old AI researcher, and the famous old AI researcher is of the school of suggestively named LISP tokens, and you yourself are 21 years old and have taken one undergraduate course taught with "Artificial Intelligence: A Modern Approach" that you thought was great... then I would tell you to go for it.  Probably both of you are wrong.  But if you forced me to bet money on one or the other, without hearing the specific argument, I'd go with the young upstart.  Then again, the young upstart is not me, so how do they know that rule?

It's hard enough to say what the rhythm of disagreement should be in my own case.  I would hesitate to offer general advice to others, save the obvious:  Be less ready to disagree with a supermajority than a mere majority; be less ready to disagree outside than inside your expertise; always pay close attention to the object-level arguments; never let the debate become about tribal status.


The pattern I see is tribal. You feel a strong commitment to certain points of view, like that old-style A.I. was all wrong, that there is nothing mysterious about consciousness, and that Jewish religion has no plausibility. When people disagree about these topics you lower your opinion about those people, not about the topics. But for people who have agreed with you about key topics and impressed you with their technical ability, you are more willing to take their disagreement seriously as more undermining your opinion on the topic, and less undermining your opinion of them.

blinks

Robin, how should I choose who to trust, if not by seeing their competence displayed in handling other issues? Yes, the algorithm is incestuous, but what's your alternative that you think works better in real life? E.g., your disagreement case studies with Cutler?

Tribal affiliation forces should logically operate to bind me far more strongly to people who think AGI is possible, than to people who think consciousness is non-mysterious. The former are my friends; I go with them to conferences; some of them pay my rent. But I lost respect for Pinker more than Dennett, because of what I thought their respective opinions implied about their general competence.

Robin, you seem to be implying EY's preferred views are as casually come by as a sports affiliation.

We see patterns of belief that on the surface might be explained as rationality or as tribalism. So we need to go beyond the surface, to analyze in more careful detail what each theory implies, so that we can distinguish them in the data. This is a holy quest! Or, at least one of my quests ...

Robin, I may have to call fundamental attribution error on this. You disagree with Cutler, and say:

I don't know what David thinks of me, but I accept that he is clearly objectively more expert than I on this topic, given his prestigious position and many more years of focus on the topic. But given the strong usual tendency to give medicine the benefit of the doubt, my impression that David gives medicine this benefit of doubt on other topics, and his inability to point to any concrete supporting evidence, I'm willing to attribute David's more positive assessment here to such wishful thinking, rather than to his superior intuition on this matter. How rational am I?

(I like that last touch, but I'm not giving you any modesty credit for humbly asking the question, only for actual shifts in opinion.)

Anyway, when it comes to your own disagreements, you attribute them to strictly situational factors - like a discount you apply for David's (dispositional) tendency to "give medicine the benefit of the doubt".

Then you look at my disagreements, and attribute them to persistent dispositional tendencies, like tribalism.

Perhaps you can see inside your own head to your oh-so-demanding specific reasons, but not see inside mine?

I really can't back your reading, looking over the cases. Pearl, Thrun, and Aumann are all three of them Bayesians and eminent, masters in their separate ways of my chosen art. In the cases of Pearl and Aumann, I had read their work and been impressed. In Thrun's case he had graciously agreed to present at the Singularity Summit. Pearl I humbly petitioned, and his verdict I accepted in both parts; Thrun I questioned, and came to terms with; Aumann I dismissed. Where is the tribal disposition? I do not see it, but I do see a situational difference between disagreeing with a math book, disagreeing with a math anecdote, and disagreeing with a religious profession.

Eliezer, I do not mean to set my disagreements up as a model of rationality, nor to express much confidence in any particular reading of your disagreements. I mainly mean to call attention to the fact that on the surface both of our disagreements might be read as rationality or as tribalism - we need to dig deeper to discern the difference. Only if you insist that for one of us one of these theories is just not a plausible explanation would I persist in arguing for that plausibility.

Whoa, said I to myself, Steven Pinker is a mysterian?

Well, he had already said as much in How the Mind Works. And also in this conversation with Robert Wright.

There's a big difference between two-way disagreements and one-way disagreements. In two-way disagreements, people interact but are unable to come to agreement. Eliezer's interaction with Nick Bostrom might be an example of this; it's not clear what the time scale was but it sounds like they may have left off, each respecting the other but unable to come to agreement on their question. However it sounds like in the aftermath their positions became very close.

One-way disagreements are where you read or hear that someone said something that seems wrong to you, and you have to decide what to do. Theoretically, and modulo some assumptions that not everyone accepts, Aumann's theorem says that two-way disagreements are impossible for rational, honest truth-seekers. However one-way disagreements are not affected by this result. Clearly you would not want to just accept everything you read or hear that someone said, so one-way disagreements are perfectly valid on theoretical grounds. The practical question is when to disagree.

Eliezer describes a number of cases where experts made claims near to their fields of expertise, but he disagreed with them. Although there are a couple of instances where his initial disagreement was at least partially wrong, in each case he is able to go back and trump the other person by improving their result in a manner they did not foresee. Eliezer FTW.

I don't think most people could apply Eliezer's strategy for one-way disagreements successfully. I don't see it as a particularly useful model. Most people, most of the time will probably do better to believe experts when they make claims in their areas of expertise.

Two-way disagreements are more interesting to me, because they so often violate Aumann's theorem. Robin and Eliezer often seem to dance around certain matters where they are not perhaps 100% in agreement, but I seldom see the issue fully joined. I have to admit that I often veer off when I find myself approaching open disagreement with someone I respect, falling back into cautious bepuzzlement. I am afraid that this may be an evasion of the reality of mutual disagreement.

Hal, I started this conversation with the dispute between A.I. old-timers and singularitarians(?) over plausible future rates of A.I. progress. These sides have had enough back and forth contact for the dispute to approach a disagreement. Also, don't forget we can't foresee to disagree at any point in a conversation.

Eliezer, why doesn't the difficulty of creating this AGI count as a reason to think it won't happen soon?

You've said it's extremely, incredibly difficult. Don't the chances of it happening soon go down the harder it is?

In the video link that komponisto gave, the relevant section of the video starts at 49:00 or so. He doesn't argue there, though, that consciousness -- he uses the term sentience -- will necessarily remain a mystery, only that this might turn out to be the case. He makes an analogy to trying to think about time before the Big Bang: there is some kind of conceptual category error going on there that gives the question its subjective feeling of being mysterious and unknowable, and he suggests the same may be the case with sentience/consciousness, free will, etc.

Then if you open the envelope and find that amount of money, roll a die and switch the envelope at that probability.

The probability of the die coming up == f(amount of dollars)? f being the probability function.

It seems to me that Eliezer's disagreements have this common factor: when they relate to one of his already considered opinions, he does not modify his opinion when confronted with disagreement. The case of Nick Bostrom is a slight exception to this, but only very slightly: Eliezer continued to maintain that an Oracle AI would have to be explicitly programmed as Friendly. (And this might very well not be true.)

The disagreements where he modified his opinion had to do with mathematical formulas where he did not have a previously considered opinion at all; for example, he had never considered Thrun's precise example.

This confirms my general position that Eliezer is overconfident: if he has considered a question and come to a conclusion, finding that others, even others whom he respected, disagree with it, this does not modify his opinion regarding the question, as Robin said, but his opinion of the other person.

if he has considered a question and come to a conclusion, finding that others, even others whom he respected, disagree with it, this does not modify his opinion regarding the question, as Robin said, but his opinion of the other person

When there is a disagreement between your opinion and that of another person, we need to carefully seek out evidence to tip the balance.

Problems arise when we start treating the fact that we hold opinions as evidence in favor of those opinions. It is extremely important to maintain the correct hierarchy of reason: no conclusion reached by an argument that takes certain premises for granted can be used in an argument in which those premises are the conclusion.

If you always presume that your every whimsy is correct, people will become tired of arguing with you, and your errors will never be rectified.

I gave insufficient support for my last statement, so I should correct this by illustrating the point in each case:

The math book: Eliezer hadn't seen that particular false claim in the past, so he took it into account sufficiently to try to prove its falsity, rather than just assuming it.

Judea Pearl: a similar case.

Daniel Dennett: Dennett contradicted a previous opinion, so Eliezer was surprised and wanted to know the reasons. But Eliezer did not indicate any shift whatsoever in his own opinion. He does not indicate, for example, that he now thinks that there is a 40% chance that there will not be human level AI in the next hundred years, or anything like this. As far as we can tell, he is still convinced that there is a high chance that it is definitely coming soon. He would like to know Dennett's reasons, but only so as to see where Dennett went wrong.

Pinker: again, this contradicts a previous opinion of Eliezer, so he does not shift his opinion. The particular way in which he changes his opinion about Pinker, relative to how he changed his opinion about Dennett, is not really relevant. In both cases, Eliezer's opinions about the matter at hand do not change.

Aumann: again, Eliezer refuses to change his opinion in the slightest, not even assigning a 0.001% chance that Aumann is right and he is wrong.

Many-Worlds: Eliezer has a determined opinion on this matter, and so it does not matter to him how expert in physics a disagreeing physicist might be.

Nick Bostrom: Here Eliezer came close to modifying his opinion, but to the degree that he continued to claim that it was necessary to directly program Friendliness, again, he did not do so. It does seem, though, that if anything can modify Eliezer's already formed opinions, it would be the influence of such a personal acquaintance.

Thrun: a mathematical formula that Eliezer hadn't previously considered, so he was willing to change his mind.

Roger Schank: Eliezer simply dismisses him as disagreeing with one of his fixed opinions.

Conclusion: once Eliezer has made up his mind, his mind is made up, and cannot be changed or modified to any degree whatsoever by the influence of someone else's opinion, no matter how expert the person may be.

Unknown: Well, maybe yeah, but so what? It's just practically impossible to completely re-evaluate every belief you hold whenever someone says something that asserts the belief to be wrong. That's nothing at all to do with "overconfidence", but it's everything to do with sanity. The time to re-evaluate your beliefs is when someone gives a possibly plausible argument about the belief itself, not just an assertion that it is wrong. Like e.g. whenever someone argues anything, and the argument is based on the assumption of a personal god, I dismiss it out of hand without thinking twice - sometimes I do not even take the time to hear them out! Why should I, when I know it's gonna be a waste of time? Overconfidence? No, sanity!

Frank: I do not suggest "completely re-evaluating" a belief when it is contradicted. And it is true that if you have a considered opinion on some matter, even though you know that many people disagree with it, you are unlikely to completely change your mind when you hear that some particular person disagrees with it.

However, if you are surprised to hear that some particular person disagrees with it, then you should update your opinion in such a way that there is a greater probability (than before) that the person holds an unreasonable opinion in the matter. But you should also update your opinion in such a way that there is a greater probability than before that you are wrong.

For example, since Eliezer was surprised to hear of Dennett's opinion, he should assign a greater probability than before to the possibility that human level AI will not be developed within the foreseeable future. Likewise, to take the more extreme case, assuming that he was surprised at Aumann's religion, he should assign a greater probability to the Jewish religion, even if only to a slight degree.

Of course this would be less necessary to the degree that you are unsurprised by his disagreeing opinion, just as if you knew about it completely when you originally formed your opinion, you would not need to update.

So, let's see, Unknown thinks I'm overconfident for not believing in collapse fairies, for believing in reductionism, for not shifting my opinion toward Robert Aumann's belief that an omnipotent deity took sides in ancient tribal quarrels, and... what else?

Foolish mortal, it is a hundred years too early for you to accuse me of arrogance.

Oh, and point me to any single comment on Overcoming Bias where you've changed your opinion. Ever. Googling on "you know, you're right" produced this and this from me. There've been other cases but I can't recall their Google keywords offhand.

Re: "This confirms my general position that Eliezer is overconfident."

Best to have evidence of him being wrong - rather than just disagreements, then. I don't mean to boost his confidence - but Eliezer has views on many subjects, and while a few strike me as being a bit odd, he seems to me to be right a lot more than most people are. This may even be a consequence of his deliberate efforts to act like a rational agent.

Eliezer, you should also update your opinion about whether I have ever changed my opinions.

See (I don't know how to do the links):

http://www.overcomingbias.com/2007/11/no-evolution-fo.html#comment-90291902

Here I partly changed my opinion.

See also:

http://www.overcomingbias.com/2008/01/something-to-pr.html#comment-99342014

This represented a rather dramatic change in my opinion.

Finally, although I admit that I never admitted it, you persuaded me that zombies are impossible (although you did not persuade me to assign this a 100% probability.)

@Unknown: Okay, updated.

Foolish mortal, it is a hundred years too early for you to accuse me of arrogance.

Posted without further comment.

@Unknown: Okay, updated.

Eliezer, if this whole 'Friendly AI' thing doesn't work out, a career writing comedy awaits.

Love this blog to bits.

Foolish mortal, it is a hundred years too early for you to accuse me of arrogance.

Please don't do that.

Eliezer, if this whole 'Friendly AI' thing doesn't work out, a career writing comedy awaits.

Huh? I cannot imagine what is comic about that exchange. Do some find it comically sincere or geeky?

[Unknown wrote:] [...] you should update your opinion [to] a greater probability [...] that the person holds an unreasonable opinion in the matter. But [also to] a greater probability [...] that you are wrong.

In principle, yes. But I see exceptions.

[Unknown wrote:] For example, since Eliezer was surprised to hear of Dennett's opinion, he should assign a greater probability than before to the possibility that human level AI will not be developed with the foreseeable future. Likewise, to take the more extreme case, assuming that he was surprised at Aumann's religion, he should assign a greater probability to the Jewish religion, even if only to a slight degree.

Well, admittedly, the Dennett quote depresses me a bit. If I were in Eliezer's shoes, I'd probably also choose to defend my stance - you can't dedicate your life to something with just half a heart!

About Aumann's religion: That's one of the cases where I refuse to adjust my assigned probability one iota. His belief about religion is the result of his prior alone. So is mine, but it is my considered opinion that my prior is better! =)

Also, if I may digress a bit, I am sceptical about Robin's hypothesis that humans in general update too little from other people's beliefs. My first intuition about this was that the opposite was the case (because of premature convergence and resistance to paradigm shifts). After having second thoughts, I believe the amount is probably just about right. Why? 1) Taking other people's beliefs as evidence is an evolved trait, and so is probably the approximate amount. 2) Evolution is smarter than I (and Robin, I presume).


"You can teach people Bayesian math, but even if they're genuinely very good with the math, applying it to real life and real beliefs is a whole different story."

The problem is that people do things, or believe things, or say they believe things for reasons other than instrumental truth.

Many people have done that throughout history. Homosexuals married and had children. Atheists attended church. While they may have had private disagreements that their closest relatives knew about, they were advised to keep quiet and not embarrass their family.

In fact, religion in America is something of an oddity. Here, you're actually expected to believe what you espouse. That is not the case throughout most of the world. People treat their religion more like an ethnicity or family tradition. They espouse things for social reasons.

In Europe, people call themselves Anglican or Lutheran or Catholic often without much thought to what that really means. When I, at the age of 12, told my European mother that I didn't believe in God, she responded, "What do you mean? You're Catholic!" As if that meant something other than the system of beliefs in question.

It's similar to the legend about the man in Northern Ireland who said he was an atheist. "But are you a Catholic Atheist or a Protestant Atheist?" was the reply.

In the Far East, the situation is even more complicated. Traditions like Buddhism and Confucianism are not mutually exclusive. And in Hinduism there are many gods (or many versions of one God, depending on who you ask). Each family "worships" a different God by invoking that God during various ceremonies. But if you attend the wedding of another family, you supplicate to that God, and you don't argue about it. You don't care. Your association as a Shiva-worshiping Hindu or a Vishnu-worshiping Hindu is ambiguous at best.

Likewise, I think that many intelligent and scholarly people who identify as Catholic (Ken Miller) or Jewish (Aumann) do so in just this (social) sense.

Frank, our belief tendencies did evolve, but selection pressures often prefer inaccurate beliefs.

"In IEEE Spectrum's sad little attempt at Singularity coverage, one bright spot is Paul Wallich's "Who's Who In The Singularity",..."

Brightness here being a relative quality... I am labeled green, meaning "true believer, thinks it will happen within 30 years." Yet I am quoted (correctly) as saying "I would... assign less than a 50% probability to superintelligence being developed by 2033." (I also don't endorse "once the singularity comes near, we will all be kicking ourselves for not having brought it about sooner", even though they attribute this to me as my "central argument".)

Reg Oracle AI, I'm not sure how much of a disagreement there exists between Eliezer and me. My position has not been that it is definitely the case that Oracle AI is the way to go. Rather, my position is something like "this seems to have at least something going for it; I have not yet been convinced by the arguments I've heard against it; it deserves some further consideration". (The basic rationale is this: While I agree that a utility function that is maximized by providing maximally correct and informative answers to our questions is clearly unworkable (since this could lead the SI to transform all Earth into more computational hardware so as to better calculate the answer), it might turn out to be substantially easier to specify the needed constraints to avoid such catastrophic side-effects of an Oracle AI than it is to solve the Friendliness problem in its general form--I'm not at all sure it is easier, but I haven't yet been persuaded it is not.)

Reg disagreement between Robin and Eliezer on singularity: They've discussed this many times, both here and on other mailinglists. But the discussion always seems to end prematurely. I think this would make for a great disagreement case study--topic is important, both are disagreement savvy, both know and respect one another, both have some subject matter expertise... I would like them to try once to get to the bottom of the issue, and continue discussion until they either cease disagreeing or at least agree exactly on what they disagree about, and why, and on how each person justifies the persistent disagreement.

Nick, I like where you're coming from on this topic. I'd like for you to have more urgent desperation though. Also, I'm curious where you stand on mass breeding/cloning our best existential risk minimizers. If you support it, I encourage you to start writing/blogging about it anonymously. We have such similar worldviews, that I'd be interested in knowing if you diverge from me on this topic, and why.

Nick:

Reg disagreement between Robin and Eliezer on singularity: They've discussed this many times, both here and on other mailinglists. But the discussion always seems to end prematurely. I think this would make for a great disagreement case study--topic is important, both are disagreement savvy, both know and respect one another, both have some subject matter expertise... I would like them to try once to get to the bottom of the issue, and continue discussion until they either cease disagreeing or at least agree exactly on what they disagree about, and why, and on how each person justifies the persistent disagreement.

I think that's the general idea, but I want to do as much foundational blogging as possible in advance of that point, so that I can refer back to it - otherwise it's duplicated effort, or hurried half-arguments.

Eliezer, may I ask, I'm having trouble figuring this out, what is your algorithm which is superior to Thrun's?

""" Moreover, once you realized how you were using your prior knowledge, you could see a derandomized version of the algorithm which was superior, even though it didn't make the exact guarantee Thrun had made. """

Nevermind. I actually started to think about it. You can just make a cutoff at the 50% cdf point in your probability distribution over infinity.


(As the density of the differential of the monotonic function, in the vicinity of the actual envelope contents, goes to zero, the expected benefit of the algorithm over random chance, goes to zero.)

I don't understand that statement. Let's define the function f(x) as "1 if x<3, otherwise 0" (add a suitable fix for the discontinuity at 3 if you're pedantic), and the distribution of envelopes as "put 2 dollars in one envelope and 4 in the other" (smear it around a bit if you don't like discrete distributions). I'm not sure what Eliezer means by "density of the differential", but df/dx=0 in the vicinity of both x=2 and x=4, yet the expected benefit of the algorithm is 1 dollar. Or maybe I'm missing something?

It seems that Eliezer's point is not best expressed in terms of differentials. The benefit of the algorithm depends on the difference in the monotonic function between the values of each of the envelopes. There is only a finite amount of difference available to distribute over the domain, of infinite measure, of the function, and for any epsilon greater than 0, for all but a finite measure of the domain, the difference in the function between possible pairs of values for the envelopes will be less than epsilon. So if you really have no information constraining the distribution of values for the envelopes to use to allocate the difference in function effectively, the expected benefit of the algorithm is 0.

Edit: Actually, that argument doesn't hold on its own, because the value of switching to the bigger envelope isn't constant, it depends on the size of the envelope. However, in the problem as stated, I don't think the value of switching grows fast enough. I need to think about it more.

Assume the envelope contents are distributed arbitrarily on [A,+infinity) where A is some large number. Let f(x)=1/x (the values for x<1 don't matter). Then the expected benefit of Thrun's algorithm is always 1/4, even though the difference in f(x) between the values of any two envelopes is less than 1/A. To convince yourself of that, work out the proof yourself or run a computational experiment:

from random import random

n = 100000
thrun = 0   # total winnings using Thrun's rule with f(x) = 1/x
chance = 0  # total winnings switching at random (50/50)
for i in range(n):
    smaller = 2 + 3*random()  # replace this line with anything
    selected, other = smaller, 2*smaller
    if random() > 0.5:        # randomize which envelope we are holding
        selected, other = other, selected
    chance += (selected if (random() > 0.5) else other)
    thrun += (selected if (random() > 1/selected) else other)  # keep with prob 1 - 1/selected

print((thrun - chance)/n)  # converges to 0.25

I tried to prove it, and I am getting an expected benefit of 1/4, and I am worried that I am making a mistake related to reordering the terms of a non-absolutely-convergent series.

If you are right, you could do better with f(x) = 2/x, and the values for x < 2 don't matter.

Sorry, I mistyped the comment, fixed it immediately but I guess you saw the old version. It's 1/4 of course.
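For reference, the 1/4 can also be checked directly (a sketch, assuming envelopes containing s and 2s with s >= 1, switching probability f(x) = 1/x, and taking "chance" to mean a 50/50 switch, which has the same expectation as never switching). Holding the smaller envelope (probability 1/2), you gain s by switching, which happens with probability f(s); holding the larger, you lose s with probability f(2s). So

$$\mathbb{E}[\text{Thrun}] - \mathbb{E}[\text{chance}] = \tfrac{1}{2}\,f(s)\,s - \tfrac{1}{2}\,f(2s)\,s = \tfrac{1}{2}\cdot 1 - \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{4},$$

independent of s.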

I'm not sure what remains of Eliezer's original point now...

Ah, that old confounder in the rhythm of disagreement: a smart person who appears to disagree with you might not have said what they meant to say. ;)

Replacing "3" with "10000" gets me varying results, mostly negative at a glance. What am I missing?

You're not missing anything, but the variance is quite high, you'll need many more samples. Or you could try writing a program that converges faster, I'm a total newbie at these things.

Read as: "as the slope goes to zero". That is, if we don't know where the dollar values are being selected from in the range from zero to infinity, we don't know where to put the part of the function that has a significant slope, and so the average benefit is about zero (for example, your function is useless if both numbers are greater than three or less than three, and most pairs of natural numbers are both greater than three). Doing better requires making assumptions about how much money is likely to be in the envelopes.

With all due respect, that's not quite the sort of answer I want... Eliezer made a sharp mathematical statement, I made another. Can you reformulate your comment as a sharp mathematical statement too, so I can see if it's a fair rephrasing of Eliezer's statement?

I apologize! My point is - you can't calculate the "expected benefit" of the algorithm after making an assumption about the distribution of the money. The expected benefit is the average benefit of the algorithm over all possible distributions (weighted by likelihood). Assuming we know nothing about the likelihood of specific distributions, the probability that there is a difference between f(x1) and f(x2) is ~0. By "density of the differential", I believe he is referring to the fact that in order for there to be a difference between those two numbers, f ' will have to be "dense" on some part of the interval between them. Since we don't know where that is, we can't decide where to concentrate the function's differential. Is this sufficiently formal / sensible?

That's... still not very precise, but I also apologize, because I can't really explain how to think precisely. I was kinda born this way and have no idea of the typical difficulties. Maybe you could look at my reply to JGWeissman as an example of what can be said precisely about the problem? It shows that Thrun's algorithm can give a large (constant) benefit even if values of f(x) have arbitrarily small differences. That seems to contradict your comment, but I can't really tell :-)

It's not that I don't know how to think precisely about math! I was attempting to translate a fairly vague concept. Though I think I was in error about the selection of dollar values: I had overlooked that one is always twice the other, and it sounded like Eliezer was implying they were close to each other. So I'm not certain that my interpretation of what Eliezer was saying is a correct analysis of the problem (in fact, no matter what he was actually saying, it has to have been an incorrect analysis of the problem, given that you have provided a counterexample to his conclusion, yes?), though it still looks to me as though he was saying that the integral of df/dx in some region about "the vicinity of the actual envelope contents" is expected to be low (though I, and I suspect he, would use the integral of df/dx on the interval from x to 2x).

The argument makes sense if you assign utility 1 to choosing correctly and utility 0 to choosing incorrectly, I think. And in fact, rereading the post, this seems to be the intent!

It also still looks to me that your statement "the expected benefit of the algorithm is 1 dollar" above uses an incorrect interpretation of "expected benefit" (I think the expected benefit of the algorithm using that function is 0).

I took a look at your reply to JGWeissman, and yep, it looks like the algorithm works, which is neat! I'd contest the description of 25 cents as a large constant benefit, but certainly there are ways to increase it (how high, I wonder?).

Thus - my interpretation of Eliezer's argument holds that you can't get above an expected 50% chance of picking the better envelope, but that example does show that you can get higher expected value than 3/2 x. Therefore, no contradiction?

The expected benefit is 25 cents if the minimum possible contents of an envelope is 1 dollar. More generally, if envelopes are guaranteed to contain more than 100 dollars, then using f(x)=100/x yields expected benefit of 25 dollars, and so on. Also, Eliezer is right that a randomized algorithm can't be Bayesian-optimal in this problem, so for any given prior there's a deterministic algorithm with even higher expected benefit, I guess you can work it out.

1/4 of the smallest possible amount you could win doesn't count as a large constant benefit in my view of things, but that's a bit of a nitpick. In any case, what do you think about the rest of the post?

Oh, sorry, just noticed the last part of your comment. It seems wrong, you can get a higher than 50% chance of picking the better envelope. The degenerate case is if you already know there's only one possibility, e.g. 1 dollar in one envelope and 2 dollars in the other. If you open the envelope and see 1 dollar, then you know you must switch, so you get the better envelope with probability 100%. You can get more fuzzy cases by sort of smearing out the distribution of envelopes continuously, starting from that degenerate case, and using the f(x) strategy. The chance of picking the better envelope will fall below 100%, but I think it will stay above 50%. Do you want my opinion on anything else? :-)

That's definitely cheating! We don't have access to the means by which X is generated. In the absence of a stated distribution, can we still do better than 50%?

Well, Thrun's algorithm does better than 50% for every distribution. But no matter what f(x) we choose, there will always be distributions that make the chance arbitrarily close to 50% (say, less than 50%+epsilon). To see why, note that for a given f(x) we can construct a distribution far enough from zero that all values of f(x) are less than epsilon, so the chance of switching prescribed by the algorithm is also less than epsilon.

The next question is whether we can find any other randomized algorithm that does better than 50%+epsilon on any distribution. The answer to that is also no.

1) Note that any randomized algorithm must decide whether to switch or not, based on the contents of the envelope and possibly a random number generator. In other words, it must be described by a function f(x) like in Thrun's algorithm. f(x) doesn't have to be monotonic, but must lie between 0 and 1 inclusive for every x.

2) For every such function f(x), we will construct a distribution of envelopes that makes it do worse than 50%+epsilon.

3) Let's consider for each number x the degenerate distribution D_x that always puts x dollars in one envelope and 2*x in the other.

4) To make the algorithm do worse than 50%+epsilon on distribution D_x, we need the chance of switching at 2*x to be not much lower than the chance of switching at x. Namely, we need the condition f(2*x)>f(x)-epsilon.

5) Now we only need to prove that there exists an x such that f(2*x)>f(x)-epsilon. We will prove that by reductio ad absurdum. If we had f(2*x)≤f(x)-epsilon for every x, we could iterate that and obtain f(x)≥f(x*2^n)+n*epsilon for every x and n, which would make f(x) greater than 1. Contradiction, QED.
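A quick numerical illustration of step 5's consequence (a sketch, using f(x) = 1/(1+x) as an example switching function): as the degenerate distribution D_s is pushed farther from zero, the edge over 50% shrinks toward nothing.

f = lambda x: 1.0 / (1.0 + x)   # an example monotonically decreasing switching function
for s in (1, 100, 10000):
    # Under D_s (envelopes hold s and 2*s), the chance of ending with the larger envelope
    # is exactly 1/2 + (f(s) - f(2*s)) / 2.
    print(s, 0.5 + (f(s) - f(2 * s)) / 2)
# prints roughly 0.583, 0.5025, 0.500025 - approaching 1/2 as s grows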

Yes, that all looks sensible. The point I'm trying to get at - the one I think Eliezer was gesturing towards - was that for any f and any epsilon, f(x) - f(2x) < epsilon for almost all x, in the formal sense. The next step is less straightforward - does it then follow that, prior to the selection of x, our expectation for getting the right answer is 50%? This seems to be Eliezer's implication. However, it seems also to rest on an infinite uniform random distribution, which I understand can be problematic. Or have I misunderstood?

That's called an improper prior. Eliezer mentions in the post that it was his first idea, but turned out to be irrelevant to the analysis.

So I guess we're back to square one, then.

I don't understand. Which part are you still confused about? To me the whole thing seems quite clear.

How did Eliezer determine that the expected benefit of the algorithm over random chance is zero?

He didn't say that, he said the benefit gets closer and closer to zero if you modify the setup in a certain way. I couldn't find an interpretation that makes his statement correct, but at least it's meaningful.

I don't get why it makes sense to say

the algorithm did make use of prior knowledge about the envelope distribution. (As the density of the differential of the monotonic function, in the vicinity of the actual envelope contents, goes to zero, the expected benefit of the algorithm over random chance, goes to zero.)

without meaning that the expected density of the differential does go to zero - or perhaps would go to zero barring some particular prior knowledge about the envelope distribution. And that doesn't sound like "modifying the setup" to me, that seems like it would make the statement irrelevant. What exactly is the "modification", and what did you decide his statement really means, if you don't mind?

Sorry, are you familiar with the mathematical concept of limit? Saying that "f(x) goes to zero as x goes to zero" does not imply the nonsensical belief that "x goes to zero".

Yes, I am familiar with limits. What I mean is - if you say "f(x) goes to zero as x goes to zero", then you are implying (in a non-mathematical sense) that we are evaluating f(x) in a region about zero - that is, we are interested in the behavior of f(x) close to x=0.

Edit: More to the point, if I say "g(f(x)) goes to zero as f(x) goes to infinity", then f(x) better not be (known to be) bounded above.

As for the deterministic variant: since you'd need some distribution from which the value of x is being selected, I'm not sure how best to calculate the EV of any particular scheme (whereas the nondeterministic algorithm sidesteps this by allowing a calculation of EV after X is selected). With a good prior, yeah, it'd be pretty simple, but without one it becomes GAI-complete, yeah?

Eliezer is right that a randomized algorithm can't be Bayesian-optimal in this problem, so for any given prior there's a deterministic algorithm with even higher expected benefit

I think the claim is that there must a deterministic algorithm that does at least as well as the randomized algorithm. The randomized algorithm is allowed to be optimal, just not uniquely optimal.

Oh. Sorry. You're right.


I lost track completely when I read about this Aumann.

Discovering that Aumann was Jewish did not cause me to revisit the issues even momentarily.

What? What are these issues, and why is Robert Aumann in the article? What has he said? What are you talking about? It seemed out of context. I mean, I did not understand what you meant by

Then if you open the envelope and find that amount of money, roll a die and switch the envelope at that probability.

either, but that I can look up. I don't feel I should need to investigate Aumann just because he was mentioned.