Open thread, Mar. 20 - Mar. 26, 2017

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments


Okay, so I recently made this joke about a future Wikipedia article about Less Wrong:

[article claiming that LW opposes feelings and supports neoreaction] will probably be used as a "reliable source" by Wikipedia. Explanations that LW didn't actually "urge its members to think like machines and strip away concern for other people's feelings" will be dismissed as "original research", and people who made such arguments will be banned. Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games. This Wikipedia article will be quoted by all journals, and your families will be horrified by what kind of a monster you have become. All LW members will be fired from their jobs.

A few days later I actually looked at the Wikipedia article about Less Wrong:

In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence. This idea came to be known as "Roko's basilisk," based on Roko's idea that merely hearing about the idea would give the hypothetical AI system stronger incentives to employ blackmail. Yudkowsky deleted Roko's posts on the topic, calling it "stupid". Discussion of Roko's basilisk was banned on LessWrong for several years before the ban was lifted in October 2015.

The majority of the LessWrong userbase identifies as atheist, consequentialist, white and male.

The neoreactionary movement is associated with LessWrong, attracted by discussions on the site of eugenics and evolutionary psychology. In the 2014 self-selected user survey, 29 users representing 1.9% of survey respondents identified as "neoreactionary". Yudkowsky has strongly repudiated neoreaction.

Well... technically, the article admits that at least Yudkowsky considers the basilisk stupid and disagrees with neoreaction. Connotationally, it suggests that the basilisk and neoreaction are 50% of what is worth mentioning about LW, because that's the fraction of the article these topics got.

Oh, and David Gerard is actively editing this page. Why am I so completely unsurprised? His contributions include:

  • making a link to a separate article for Roko's basilisk (link), which luckily didn't materialize;
  • removing suggested headers "Rationality", "Cognitive bias", "Heuristic", "Effective altruism", "Machine Intelligence Research Institute" (link) saying that "all of these are already in the body text"; but...
  • adding a header for Roko's basilisk (link);
  • shortening a paragraph on LW's connection to effective altruism (link) -- by the way, the paragraph is completely missing from the current version of the article;
  • an edit war emphasising that it is finally okay to talk on LW about the basilisk (link, link, link, link, link);
  • restoring the deleted section on basilisk (link) saying that it's "far and away the single thing it's most famous for";
  • adding neoreaction as one of the topics discussed on LW (link), later removing other topics competing for attention (link), and adding a quote that LW "attracted some readers and commenters affiliated with the alt-right and neoreaction, that broad cohort of neofascist, white nationalist and misogynist trolls" (link);

...in summary, removing or shortening mentions of cognitive biases and effective altruism, and adding or developing mentions of basilisk and neoreaction.

Sigh.

EDIT: So, looking back at my prediction that...

Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games.

...I'd say I was (1) right about the basilisk; (2) partially right about the white supremacism, which at this moment is not mentioned explicitly (yet! growth mindset), but the article says that the userbase is mostly white and male, and discusses eugenics; and (3) wrong about the computer games. 50% success rate!

Should we expect more anti-rationalism in the future? I believe that we should, but let me outline what actual observations I think we will make.

Firstly, what do I mean by 'anti-rationalism'? I don't mean specifically that people will criticize LessWrong. I mean it in the general sense of skepticism towards science and logical reasoning, skepticism towards technology, and hostility to rationalistic methods applied to policy, politics, economics, education, and the like.

And there are a few things I think we will observe first (some of which we are already observing) that will act as a catalyst for this. Number one, if economic inequality increases, I think a lot of the blame for it will be placed on the elite (as it always is), but in particular on the cognitive elite (which makes up an ever-increasing share of the elite). Whatever the views of the cognitive elite are will become the philosophy of evil from the perspective of the masses. Because the elite are increasingly made up of very high intelligence people, many of whom have a connection to technology or Silicon Valley, we should expect that the dominant worldview of that environment will increasingly contrast with the worldview of those who haven't benefited, or at least do not perceive themselves to benefit, from the growth and wealth driven by those people. What's worse, it seems that even if economic gains also benefit those at the very bottom, increasing inequality is the only thing that will get noticed.

The second issue is that as technology improves, our powers of inference increase, and privacy defenses become weaker. It's already the case that we can predict a person's behavior to some degree and use that knowledge to our advantage (to sell something to them, to grant or deny them a loan, to judge whether they would be a good employee, or to predict whether they will commit a crime). There's already a push-back against this, in the sense that certain variables correlate with things we don't want them to, like race. This implies that the standard definition of privacy, in the sense of simply not having access to specific variables, isn't strong enough. What's desired is not being able to infer the values of certain variables, either, which is a much, much stronger condition. This is a deep, non-trivial problem that is unlikely to be solved quickly - and it runs into the same issue as all problems concerning discrimination do: how to define 'bias'. Is reducing bias at the expense of truth even a worthy goal? This shifts the debate towards programmers, statisticians and data scientists, who are left with the burden of never making a mistake in this area. "Weapons of Math Destruction" is a good example of the way this issue gets treated.
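
To make the inference point concrete, here is a minimal sketch (Python with scikit-learn; the dataset, proxy features, and model choice are all invented purely for illustration) of how a sensitive variable that was deliberately withheld can still be recovered from correlated proxies:

```python
# Minimal sketch: dropping a sensitive column does not stop a model
# from inferring it, as long as other features correlate with it.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical sensitive attribute we are "not allowed" to use.
sensitive = rng.integers(0, 2, size=n)

# Innocuous-looking proxies (think zip code, purchase history)
# that happen to correlate with the sensitive attribute.
proxies = np.column_stack([
    sensitive + rng.normal(0.0, 0.8, size=n),
    0.5 * sensitive + rng.normal(0.0, 1.0, size=n),
])

X_train, X_test, y_train, y_test = train_test_split(
    proxies, sensitive, test_size=0.3, random_state=0)

# The model never sees the sensitive column itself...
model = LogisticRegression().fit(X_train, y_train)

# ...yet it recovers it well above the 50% chance baseline.
print(f"Accuracy inferring withheld attribute: {model.score(X_test, y_test):.2f}")
```

Blocking that inference would mean constraining every correlated feature at once, which is exactly the "much stronger condition" described above.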

We will also continue to observe a lot of ideas from postmodernism being adopted as part of the political ideology of the left. Postmodernism is basically the antithesis of rationalism, and is particularly worrying because it is a very adaptable and robust meme. An ideology that essentially claims that rationality and truth are not even possible to define, let alone discover, is particularly dangerous if it is adopted as the mainstream mode of thought. So if a lot of the above problems get worse, I think there is a chance that rationalism will get blamed, as it has been within the framework of postmodernism.

The summary of this is: As politics becomes warfare between worldviews rather than arguments for and against various beliefs, populist hostility gets directed towards what is perceived to be the worldview of the elite. The elite tend to be more rationalist, and so that hostility may get directed towards rationalism itself.

I think a lot more can be said about this, but maybe that's best left to a full post; I'm not sure. Let me know if this was too long or too short, or poorly worded.

(I thought the post was reasonably written.)

Can you say a word on whether (and how) this phenomenon you describe ("populist hostility gets directed towards what is perceived to be the worldview of the elite") is different from the past? It seems to me that this is a force that is always present and has often led to "problems" (e.g., the Luddite movement), but usually (though not always) the general population eventually came around to believing the same things as "the elites".

The "Discussion" link has disappeared from lesswrong.com. Is this a planned change, or is it just me?

An accidental CSS pull caused some unusual behavior. It's being worked on. Apologies.

Hello guys, I am currently writing my master's thesis on biases in the investment context. One sub-sample that I am studying is people who are educated about biases in a general context, but not in the investment context. I guess LW is the right place to find some of those, so I would be very happy if some of you would participate, since people who are aware of biases are hard to come by elsewhere. I also explicitly ask about activity in the LW community in the survey, so if enough LWers participate, I could analyse them as an individual subsample. It would be interesting to know how LWers perform compared to, for example, psychology students. Also, I think this is related enough to LW that I could post a link to the survey in Discussion, right? If so, I would be happy about some karma, because I just registered and can't post yet. The link to the survey is: https://survey.deadcrab.de/

Front page being reconfigured. For the moment, you can get to a page with the sidebar by going through the "read the sequences" link (not great, and if you can read this, you probably didn't need this message).

Maybe there could be some high-profile positive press for cryonics if it became standard policy to freeze the seeds or DNA of endangered species for later resurrection.

What is the steelmanned, not-nonsensical interpretation of the phrase "democratize AI"?

One possibility: Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

s/AI/capital/

Now, where have I heard this before...?

And your point is...?

From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead. First, they take most of the benefits of capital to themselves (think: all those communist leaders with golden watches and huge dachas). Second, as a side-effect of incompetent management (where signalling political loyalty trumps technical competence), even the capital that isn't stolen is used very inefficiently.

But on a smaller scale... companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone. Just not all the capital; and besides the more-or-less neutral taxation, the use of the capital is not micromanaged by people chosen for their political loyalty. So the costs to the economy are much smaller, and arguably the social benefits are larger (some libertarians may disagree).

Assuming that the hypothetical artificial superintelligence will be (1) smarter than humans, and (2) able to scale, e.g. to increase its cognitive powers thousandfold by creating 1000 copies of itself which will not immediately start feeding Moloch by fighting against each other, it should be able to not fuck up the whole economy, and could quite likely increase production, even without increasing the costs to the environment, simply by doing things smarter and removing inefficiencies. Unlike the communist bureaucrats, who (1) were not superintelligent, and sometimes not even of average intelligence, (2) each optimized for their own personal goals, and (3) routinely lied to each other and to their superiors to avoid irrational punishments, so that soon the whole system was running on completely fake data. Not being bound by ideology, if the AI found out that it is better to leave some work to humans (quite unlikely IMHO, but let's assume so for the sake of the argument), it would be free to do exactly that. Unlike a hypothetical enlightened communist bureaucrat, who after making the same observation would probably be shot as a traitor and replaced by a less enlightened one.

If the choice is between giving each human 1/7,000,000,000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve (because I don't think anyone would be able to get any job in a world where the scalable superintelligence is your direct competitor), the former option seems better to me, and I think even Elon Musk wouldn't mind... especially considering that going for the former option would make people much more willing to cooperate with him.

String substitution isn't truth-preserving; there are some analogies and some disanalogies there.
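
(A side note for readers unfamiliar with the notation upthread: "s/AI/capital/" is sed-style syntax for a literal text substitution. A trivial Python sketch of the mechanical move being joked about, applied to the quoted sentence:)

```python
# s/AI/capital/ as a literal, purely syntactic substitution.
claim = ("Ensure that the benefits of AI accrue to everyone generally, "
         "rather than exclusively to the teeny-tiny fraction of humanity "
         "who happen to own their own AI business.")
print(claim.replace("AI", "capital"))
# Prints the capital-flavored sentence from the exchange above. As noted,
# the substitution isn't truth-preserving: it swaps words, not arguments,
# and the analogy between AI and capital is only partial.
```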