[LINK] Another "LessWrongers are crazy" article - this time on Slate

WARNING: Memetic hazard.

http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html?wpisrc=obnetwork

Is there anything we should do?

Comments


Is there anything we should do?

  • Meet 10 new people (over a moderately challenging, personally specific timeframe).
  • Express gratitude or appreciation.
  • Work close to where we live.
  • Have new experiences.
  • Get regular exercise.

I.e. no. Nothing about this article comes remotely close to changing the highest-expected-value actions for the majority of the class 'we'. If there is a person in that class for whom this opens an opportunity to create (more expected) value, it is comparatively unlikely that that person is the kind who would benefit from "we shoulding" exhortations.

I want this list posted in response to every "is there anything we should do" ever. Just all over the internet. I would give you more than one upvote just for that list if I could.

"So? What do you think I should do?"

"Hm. I think you should start with all computable universes weighted by simplicity, disregard the ones inconsistent with your experiences, and maximize expected utility over the rest."

"That's your answer to everything!"

(source)
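
(For anyone who wants the joke unpacked: the recipe being teased is essentially Solomonoff induction followed by expected-utility maximization. Below is a minimal Python sketch of that recipe; the function name, data structures, and numbers are all invented for illustration, and real Solomonoff induction is uncomputable, so this is a toy, not an implementation.)

    def decide(hypotheses, observations, actions, utility):
        """hypotheses: (predict_fn, complexity_in_bits) pairs;
        predict_fn(obs) is True iff the hypothesis allows that observation;
        utility(hypothesis, action) is the payoff if the hypothesis is true."""
        # 1. Start with all computable universes weighted by simplicity:
        #    the standard prior gives a K-bit hypothesis weight 2^-K.
        # 2. Disregard the ones inconsistent with your experiences.
        live = [(h, 2.0 ** -k) for h, k in hypotheses
                if all(h(obs) for obs in observations)]
        total = sum(w for _, w in live)
        # 3. Maximize expected utility over the rest.
        return max(actions, key=lambda a: sum(
            (w / total) * utility(h, a) for h, w in live))

    # Illustrative use with two made-up "universes":
    hyps = [(lambda obs: obs == "sky is blue", 10),  # simple, 10 bits
            (lambda obs: True, 25)]                  # complex, 25 bits
    print(decide(hyps, ["sky is blue"], ["go outside", "stay in"],
                 lambda h, a: 1.0 if a == "go outside" else 0.0))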

Is there anything we should do?

Laugh, as the entire concept (and especially the entire reaction to it by Eliezer and people who take the 'memetic hazard' thing seriously) is and always has been laughable. It's certainly given my ab muscles a workout every now and then over the last three years... maybe with more people getting to see it and getting that exercise it'll be a net good! God, the effort I had to go through to dig through comment threads and find that Google cache...

This is also such a delicious example of the Streisand effect...

This is also such a delicious example of the Streisand effect...

Yes, Eliezer's Streisanding is almost suspiciously delicious. One begins to wonder if he is in thrall to... well, perhaps it is best not to speculate here, lest we feed the Adversary.

I think this is also a delicious example of how easy it is to troll LessWrong readers. Do you want to have an LW article and a debate about you? Post an article about how LW is a cult or about Roko's basilisk. Success 100% guaranteed.

Think about the incentives this gives to people who make their money by displaying ads on their websites. The only way we could motivate them more would be to pay them directly for posting that shit.

This isn't a particularly noteworthy attribute of Less Wrong discussion; most groups below a certain size will find it interesting when a major media outlet talks about them. I'm sure that the excellent people over at http://www.clown-forum.com/ would be just as chatty if they got an article.

I suppose you could say that it gives journalists an incentive to write about the groups below that population threshold that are likely to generate general interest among the larger set of readers. But that's just the trivial result in which we have invented the human interest story.

Since LW is going to get a lot of visitors, someone should put an old post that would make an excellent first impression in a prominent position. I nominate How to Be Happy.

Or perhaps a publicity boost would be better utilized by directing traffic to effective altruism information, e.g. at GiveWell or 80,000 Hours.

According to the Slate article,

Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes who want to live forever.

Uh, no. Surprisingly few "rich dudes" have shown an interest in cryonics. I know quite a few cryonicists and I have helped to organize cryonics-themed conferences, and to the best of my knowledge no one on the Forbes 400 list has signed up.

Moreover, ordinary people can afford cryonics arrangements by using life insurance as the funding mechanism.

We can infer that rich people have avoided cryonics from the fact that the things rich people really care about tend to become status signals and attract adventuresses in search of rich husbands. Cryonics lacks this status; in reality it acts like "female Kryptonite." Just google the phrase "hostile wife phenomenon" to see what I mean. In other words, I tell straight men not to sign up for cryonics for the sake of their dating prospects.

This really is not a friendly civilization, is it?

7 ideas that might cause you eternal torture, click now

If Langford basilisks actually existed, Gawker would be the first site to use them.

I thought the article was quite good.

Yes, it pokes fun at LessWrong. That's to be expected. But it's well written and clearly conveys all the concepts in an easy-to-understand manner. The author understands LessWrong and our goals and ideas on a technical level, even if he doesn't agree with them. I was particularly impressed by how the author explained why TDT solves Newcomb's problem. I could give that explanation to my grandma and she'd understand it.

I don't generally believe that "any publicity is good publicity." However, this publicity is good publicity. Most people who read the article will forget it and only remember LessWrong as that kinda weird place that's really technical about decision stuff (which is frankly accurate). Those people who do want to learn more are exactly the people LessWrong wants to attract.

I'm not sure what people's expectations are for free publicity, but this is, IMO, the best-case scenario.

From a technical standpoint, this bit:

Even if the alien jeers at you, saying, “The computer said you’d take both boxes, so I left Box B empty! Nyah nyah!” and then opens Box B and shows you that it’s empty, you should still only take Box B and get bupkis. ... The rationale for this eludes easy summary, but the simplest argument is that you might be in the computer’s simulation. In order to make its prediction, the computer would have to simulate the universe itself.

Seems wrong. Omega wouldn't necessarily have to simulate the universe, although that's one option. If it did simulate the universe, showing sim-you an empty box B doesn't tell it much about whether real-you will take box B when you haven't seen that it's empty.

(Not an expert, and I haven't read Good and Real which this is supposedly from, but I do expect to understand this better than a Slate columnist.)

And I think the final two paragraphs go beyond "pokes fun at lesswrong".
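
(Tangentially: for anyone wondering why one-boxing wins in the standard, opaque version of the problem, the expected-value arithmetic is easy to check with a toy simulation like the Python sketch below. The 99% predictor accuracy and the $1,000/$1,000,000 payoffs are the usual illustrative numbers, not anything taken from the article or from Good and Real.)

    import random

    # Standard (opaque) Newcomb's problem, not the transparent variant
    # quoted above. Box A always holds $1,000; Box B holds $1,000,000
    # only if the predictor guessed you would one-box.
    def average_payoff(strategy, accuracy=0.99, trials=100_000):
        total = 0
        for _ in range(trials):
            # The predictor guesses your strategy correctly with
            # probability `accuracy`, otherwise guesses the opposite.
            correct = random.random() < accuracy
            prediction = strategy if correct else (
                "two-box" if strategy == "one-box" else "one-box")
            box_b = 1_000_000 if prediction == "one-box" else 0
            total += box_b if strategy == "one-box" else 1_000 + box_b
        return total / trials

    print(average_payoff("one-box"))  # roughly $990,000 per game
    print(average_payoff("two-box"))  # roughly $11,000 per game

However the predictor is implemented (simulation or otherwise), as long as its guess is strongly correlated with your actual choice, two-boxing forfeits the big payoff in expectation.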

From a technical standpoint, this bit:

It is wrong in about the same way that high-school chemistry is wrong. Not one of the statements is true, but the error seems to be one of not quite understanding the details rather than any overt misrepresentation. I.e. I'd cringe and say "more or less", since that's closer to getting Transparent Newcomb right than I could reasonably expect from most people.

Looks like a fairly standard parable about how we should laugh at academic theorists and eggheads because of all those wacky things they think. If only Less Wrong members had the common sense of the average Slate reader, then they would instantly see through such silly dilemmas.

Giving people the chance to show up and explain that this community is Obviously Wrong And Here's Why is a pretty good way to start conversations, human nature being what it is. It's an opportunity to have some interesting dialogues about the broader corpus.

That said, I am in the camp that finds the referenced 'memetic hazard' to be silly. If you are the sort of person who takes it seriously, this precise form of publicity might be more troubling for the obvious 'hazard' reasons. Out of curiosity, what is the fraction of LW posters that believes this is a genuine risk?

Out of curiosity, what is the fraction of LW posters that believes this is a genuine risk?

Vanishingly small - the post was deleted by Eliezer (was that what, a year ago? two?) because it gave some people he knew nightmares, but I don't remember anybody actually complaining about it. Most of the ensuing drama was about whether Eliezer was right in deleting it. The whole thing has been a waste of everybody's time and attention (as community drama over moderation almost always is).

Most of the ensuing drama was about whether Eliezer was right in deleting it. The whole thing has been a waste of everybody's time and attention (as community drama over moderation almost always is).

'Moderation' was precisely the opposite of the response that occurred. Hysterical verbal abuse is not the same thing as deleting a post, and mere censorship would not have created such a lasting negative impact. While 'moderator censorship' was technically involved, the incident is a decidedly non-central member of that class.

the post was deleted by Eliezer (was that what, a year ago? two?)

Nearly four years ago to the day, going by RationalWiki's chronology.

Talking about it presumably makes it feel like a newer, fresher issue than it is.

If only Less Wrong members had the common sense of the average Slate reader, then they would instantly see through such silly dilemmas.

Not sure I agree with your point. There's a standard LW idea that smart people can believe crazy things due to their environment. For "environment" you can substitute "non-LW" or "LW" as you wish.

Eliezer Yudkowsky's reasons for banning Roko's post have always been somewhat vague. But I don't think he did it solely because it could cause some people nightmares.

(1) In one of his original replies to Roko's post (please read the full comment; it is highly ambiguous) he states his reasons for banning Roko's post, and for writing his comment (emphasis mine):

I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)

…and further…

For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.

His comment indicates that he doesn't believe that this could currently work. Yet he also does not seem to dismiss the possibility of current or future danger. Why didn't he clearly state that there is nothing to worry about?

(2) The following comment by Mitchell Porter, to which Yudkowsky replies “This part is all correct AFAICT.”:

It’s clear that the basilisk was censored, not just to save unlucky susceptible people from the trauma of imagining that they were being acausally blackmailed, but because Eliezer judged that acausal blackmail might actually be possible. The thinking was: maybe it’s possible, maybe it’s not, but it’s bad enough and possible enough that the idea should be squelched, lest some of the readers actually stumble into an abusive acausal relationship with a distant evil AI.

If Yudkowsky really thought it was irrational to worry about any part of it, why didn't he allow people to discuss it on LessWrong, where he and others could debunk it?

"Doesn't work against a perfectly rational, informed agent" does not preclude "works quite well against naïve, stupid newbie LW'ers that haven't properly digested the sequences."

Memetic hazard is not a fancy word for cover-up. It means that the average person accessing the information is likely to reach dangerous conclusions. That says more about the average of humanity than about the information itself.

Good point. To build on that, here's something I thought of when trying (but most likely not succeeding) to model/steelman Eliezer's thoughts at the time of his decision:

This basilisk is clearly bullshit, but there's a small (and maybe not vanishingly small) chance that with enough discussion people can come up with a sequence of "improved" basilisks that suffer from less and less obvious flaws until we end up with one worth taking seriously. It's probably better to just nip this one in the bud. Also, creating and debunking all these basilisks would be a huge waste of time.

At least Eliezer's move has focused all attention on the current (and easily debunked) basilisk, and it has made it sufficiently low-status to try and think of a better one. So in this sense it could even be called a success.

I would not call it a success. Sufficiently small silver linings are not worth focusing on with large-enough clouds.

Is there anything we should do?

When reporters interviewed me about Bitcoin, I tried to point to LW as a potential source of stories and described it in a positive way. Several of them showed interest, but no stories came out. I wonder why it's so hard to get positive coverage for LW and so easy to get negative coverage, when in contrast Wired magazine gave Cypherpunks a highly positive cover story in 1993, when Cypherpunks just got started and hadn't done much yet except publish a few manifestos.

I wonder why it's so hard to get positive coverage for LW and so easy to get negative coverage, when in contrast Wired magazine gave Cypherpunks a highly positive cover story in 1993, when Cypherpunks just got started and hadn't done much yet except publish a few manifestos.

There's an easy answer and a hard answer.

The easy answer is that, for whatever reason, the media today is far more likely to run a negative story about the tech industry or associated demographics than to run a positive story about it. LW is close enough to the tech industry, and its assumed/stereotyped demographic pattern is close enough to that of the tech industry, that attacking it is a way to attack the tech industry.

Observe:

"highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality ... techno-futurism ... high-profile techies like Peter Thiel ... some very influential and wealthy scientists and techies believe it ... computing power ... computers ... computer ... mathematical geniuses Stanislaw Ulam and John von Neumann ... The ever accelerating progress of technology ... Futurists like science-fiction writer Vernor Vinge and engineer/author Kurzweil ... exponential increases in computing power ... Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes ... the machine equivalent of God ... rational action ... a smattering of parallel universes and quantum mechanics on the side ... supercomputer ... supercomputer ... supercomputer ... supercomputer ... autism ... Yudkowsky and other so-called transhumanists are attracting so much prestige and money for their projects, primarily from rich techies ... messianic ambitions, being convinced of your own infallibility, and a lot of cash"

Out of the many possible ways to frame the article, Slate chose to make it about "rich techies". Why postulate that Omega has a supercomputer? Why repeat the word 'supercomputer' four times in five sentences? The LW wiki doesn't mention computers of any sort, and the Wikipedia article only uses the word 'computer' twice. advancedatheist said above that the cryonics claim is false; assuming he's right, why include a lie?

It's clearly not a neutral explanation of the Basilisk -- and it fits into a pattern.

The hard answer would include an explanation of this pattern. (I'm not sure whether it would be a good idea to speculate about this in this particular thread, so: anyone who's tempted to do so, take five minutes and think over the wisdom of it beforehand.)

Personal contacts between the people employed at Wired magazine and a lot of hackers are quite strong. Wired actually had an intention of pushing projects like Cypherpunks, or in recent years the Quantified Self movement, which they essentially founded (Kevin Kelly and Gary Wolf are both Wired editors).

I don't think that LW is really the place that needs positive PR. I can't really think of a story about LW that I want to tell a reporter. I can think of stories about MIRI or about CFAR but LW itself doesn't need PR.

I can think of stories about MIRI or about CFAR but LW itself doesn't need PR.

That's a great point. LW is not MIRI. LW comments are not MIRI research. LW moderation policy is not FAI source code. Etc.

The proper response to basilisk would probably be: "So, tell me about the most controversial comment ever in your web discussions. You know, just so I can popularize it as the stuff your website is really about."

The basilisk seems to pretty much be the first thing outsiders associate with LW these days.

Well, for Charlie Stross it's practically professional interest :P

Anyhow: anecdote. Met an engineer on the train the other day. He asked me what I was reading on my computer, I said LW, he said he'd heard some vaguely positive things, I sent him a link to one of Yvain's posts, he liked it.