Open thread, June 27 - July 3, 2016

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments


Rationality lessons from Overwatch, a multiplayer first-person shooter:

1) Learning when you're wrong: The killcam, which shows how I died from the viewpoint of the person who killed me, often corrects my misconception of how I died. Real life needs a killcam that shows you the actual causes of your mistakes. Too bad that telling someone why they are wrong is usually considered impolite.

2) You get what you measure: Overwatch's post-game scoring gives medals for teamwork activities such as healing and shots blocked, and this contributes to players' willingness to help their teammates.

3) Living in someone else's shoes: The game has several different classes of characters that have different strengths and weaknesses. Even if you rarely play a certain class, you get a lot from occasionally playing it to gain insight into how to cooperate with and defeat members of this class.

Addressing 1) "Learning when you're wrong" (in a more general sense):

Absolutely a good thing to do, but the problem is that you're still losing time making the mistakes. We're rationalists; we can do better.

I can't remember what book I read it in, but I read about a practice used in projects called a "pre-mortem." In contrast to a post-mortem, in which the cause of death is found after the death, a pre-mortem assumes that the project/effort/whatever has already failed, and forces the people involved to think about why.

Taking it as a given that the project has failed forces people to be realistic about the possible causes of failures. I think.

In any case, this struck me as a really good idea.

Overwatch example: If you know the enemy team is running a McCree, stay away from him to begin with. That flashbang is dangerous.

Real life example: Assume that you haven't met your goal of writing x pages or amassing y wealth or reaching z people with your message. Why didn't you?

I read about pre-mortem-like questions in a book called Decisive: How to Make Better Choices in Life and Work by Chip Heath and Dan Heath.

That's probably it; I read it recently. Thanks!

Real life needs a killcam

Goes into the "shit LW people say" bin :-D

On a tiny bit more serious note, I'm not sure the killcam is as useful as you say. It shows you how you died, but not necessarily why. The "why" reasons look like "lost tactical awareness", "lingered a bit too long in a sniper's field of view", "dived in without team support", etc. and on that level you should know why you died even without a killcam.

Other lessons from Overwatch: if a cute small British girl blinks past you, shoot her in the face first :-D

"Other lessons from Overwatch: if a cute small British girl blinks past you, shoot her in the face first :-D"

Pfft

Rationalists play Reaper. Shoot EVERYONE IN ALL THE FACES.

Reaper gets relatively little value from cooperating with teammates, so I hope that rationalists don't find Reaper to be the best fit for them.

Cooperation is not a terminal goal. Winning the game is.

If I don't see my team's Reaper (or Tracer) ever, but the rear ranks of the enemy team mysteriously drop dead on a regular basis, that's perfectly fine.

Cooperation is not a terminal goal

Agreed, but if a virtue and comparative advantage of rationalists is cooperating, then our path to victory won't often involve us using Reaper or Tracer.

Do you play on the Xbox?

I'm a bit mystified by how cooperation became a "virtue and comparative advantage of rationalists". I understand why culturally, but if you start from first principles, it doesn't follow. In a consequentialist framework there is no such thing as virtue; the concept just doesn't exist. And cooperation should theoretically be just one of the many tools of a rationalist who is trying to win. In situations where it's advantageous she'll cooperate, and where it isn't she won't.

Nope, I play on a PC.

Rationality is systematized winning. If failure to cooperate keeps people like us from winning then we should make cooperation a virtue and practice it when we can. (I'm literally playing Overwatch while I answer this.)

The situation is symmetrical: if eagerness to cooperate keeps people like us from winning then we should make non-cooperation a virtue and practice it when we can.

My multitasking isn't as good :-)

I was wrong. Reaper and Mei can greatly benefit from cooperation.

Pfft

Rationalists play whatever class at the moment is convenient for shooting everyone in the face in the most speedy and efficient manner :-P

I am trying to outline the main trends in AI safety this year. May I ask for advice on what I should add to or remove from the following list?

1. Elon Musk became the main player in the AI field with his OpenAI program. But the idea of AI openness is now opposed by his mentor Nick Bostrom, who is writing an article questioning the safety of openness in the field of AI: http://www.nickbostrom.com/papers/openness.pdf Personally, I think that here we see an example of the arrogance of a billionaire. He intuitively came to an idea which looks nice and appealing and may work in some contexts. But to show that it will actually work, we need rigorous proof.

  2. Google seems to be one of the main AI companies, and its AlphaGo beat the human champion at Go. After the score reached 3:0, Yudkowsky predicted that AlphaGo had attained superhuman ability at Go and left humans forever behind, but AlphaGo lost the next game. This led Yudkowsky to say that it points to one more risk of AI: the risk of uneven AI development, where a system is sometimes superhuman and sometimes fails.

  3. The number of technical articles in the field of AI control has grown exponentially, and it is not easy to read them all.

  4. There have been many impressive achievements in the field of neural nets and deep learning. Deep learning was the Cinderella of AI for many years, but now (starting from 2012) it is the princess. This was unexpected from the point of view of the AI safety community; MIRI only recently updated its research agenda to add the study of safety for neural-net-based AI.

  5. The doubling time on some deep learning benchmarks seems to be about one year.

  6. Media overhype AI achievements.

  7. Many new AI safety projects have started, though some concentrate on the safety of self-driving cars (even the Russian truck maker KAMAZ is investigating AI ethics).

  8. A lot of new investment is going into AI research, and salaries in the field are rising.

  9. Militaries are increasingly interested in implementing AI in warfare.

  10. Google has an AI ethics board, but what it is doing is unclear.

  11. AI safety and implementation seem to be lagging behind actual AI development.

Say you are a strong believer and advocate for the Silicon Valley startup tech culture, but you want to be able to pass an Ideological Turing Test to show that you are not irrational or biased. In other words, you need to write some essays along the lines of "Startups are Dumb" or "Why You Should Stay at Your Big Company Job". What kind of arguments would you use?

This comment got 6+ responses, but none that actually attempted to answer the question. My goal of Socratically prompting contrarian thinking, without being explicitly contrarian myself, apparently failed. So here is my version:

  • Most startups are gimmicky and derivative, even or especially the ones that get funded.
  • Working for a startup is like buying a lottery ticket: a small chance of a big payoff. But since humans are by nature risk-averse, this is a bad strategy from a utility standpoint.
  • Startups typically do not create new technology; instead they create new technology-dependent business models.
  • Even if startups are a good idea in theory, currently they are massively overhyped, so on the margin people should be encouraged to avoid them.
  • Early startup employees (not founders) don't make more than large company employees.
  • The vast majority of value from startups comes from the top 1% of firms, like Facebook, Amazon, Google, Microsoft, and Apple. All of those firms were founded by young white males in their early 20s. VCs are driven by the goal of funding the next Facebook, and they know about the demographic skew, even if they don't talk about it. So if you don't fit the profile of a megahit founder, you probably won't get much attention from the VC world.
  • There is a group of people (called VCs) whose livelihood depends on having a supply of bright young people who want to jump into the startup world. These people act as professional activists in favor of startup culture. This would be fine, except there is no countervailing force of professional critics. This creates a bias in our collective evaluation of the culture.

Argument thread!

You should probably stay at your big company job because the people who are currently startup founders are self-selected for, on average, different things than you're selecting yourself for by trying to jump on a popular trend, and so their success is only a weak predictor of your success.

Startups often cash out by generating hype and getting bought for ridiculous amounts of money by a big company. But they are very, very often, in more sober analysis, not worth this money. From a societal perspective this is bad because it's not properly aligning incentives with wealth creation, and from a new-entrant perspective this is bad because you likely fail if the bubble pops before you can sell.

This comment got 6+ responses, but none that actually attempted to answer the question.

Likely because the comment called for an ITT but provided no questions for the ITT.

Startups typically do not create new technology; instead they create new technology-dependent business models.

There is a group of people (called VCs) whose livelihood depends on having a supply of bright young people who want to jump into the startup world. These people act as professional activists in favor of startup culture. This would be fine, except there is no countervailing force of professional critics. This creates a bias in our collective evaluation of the culture.

Both of those seem to me like failing the Ideological Turing Test. I would have a hard time believing that the average person who works at a big company would make those arguments.

You never explained what you mean by "startup culture," nor "good."

One can infer something from your arguments. But different arguments definitely appeal to different definitions of "good." In particular: good for the founder, good for the startup employee, good for the VC, and good for society.

There is no reason to believe that it should be good for all of them. In particular, a belief that equity is valuable to startup employees is good for founders and VCs, but if it is false, it is bad for startup employees. If startups are good for society, it may be good for society for the employees to be deceived. But if startups are good for society, it may be a largely win-win for startups to be considered virtuous and everyone involved in startups to receive status. Isn't that the kind of thing "culture" does, rather than promulgate specific beliefs?

By "startup culture" you seem to mean anything that promotes startups. Do these form a natural category? If they are all VC propaganda, then I guess that's a natural category, but it probably isn't a coherent culture. Perhaps there is a pro-startup culture that confabulates specific claims when asked. But are the details actually motivating people, or is it really the amorphous sense of virtue or status?

Sometimes I see people using "startup culture" in a completely different way. They endorse the claim that startups are good for society, but condemn the current culture as unproductive.

What exactly is the thesis in question? "Startup culture is a valuable piece of a large economy", for example, is not the same thing as "I should go and create a startup, it's gonna be great!".

Not to disagree with this exercise, but I think that the name ITT is overused and should not be applied here. Why not just ask "What are some good arguments against startups?" If you want a LW buzzword for this exercise, how about hypothetical apostasy or premortem?

I think that ITT should be reserved for the narrow situation where there is a specific set of opponents and you want to prove that you are paying attention to their arguments. Even when the conventional wisdom is correct, it is quite common that the majority has no idea what the minority is saying and falsely claims to have rebutted their arguments. ITT is a way of testing this.

(Not that I know a thing about the subject, but are you sure this angle is exactly how an 'unbiased re: startups" person would think about it? Why not something more like, "Startups are simply irrelevant, if we get down to it"?)

Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X. So this should be impossible, except by deliberately including arguments that are, to the best of your knowledge, flawed. I might be able to imitate a homeopath, but I can't imitate a rational, educated, homeopath, because if I thought there was such a thing I would be a homeopath.

Yes, a lot of people extol the virtues of doing this. But a lot of people aren't rational, and don't believe X on the basis of arguments in the first place. If so, then producing good arguments against X is logically possible, and may even be helpful.

(There's another possibility: where you are weighing things and the other side weighs them differently from you. But that's technically just a subcase--you still think the other side's weights are incorrect--and I still couldn't use it to imitate a creationist or flat-earther.)

Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X.

Huh? You are proposing a very stark, black-and-white, all-or-nothing position. Recall that for a rationalist a belief has a probability associated with it. It doesn't have to be anywhere near 1. Moreover, a rationalist can "believe" (say, with probability > 90%) something against which good arguments exist. It just so happens that the arguments pro are better and more numerous than the arguments con. That does not mean that the arguments con are not good or do not exist.

And, of course, you should not think yourself omniscient. One of the benefits of steelmanning is that it acquaints you with the counterarguments. Would you know what they are if you didn't look?

I might be able to imitate a homeopath, but I can't imitate a rational, educated, homeopath, because if I thought there was such a thing I would be a homeopath.

Great point!

I guess the point of ITT is that even when you disagree with your opponents, you have the ability to see their (wrong) model of the world exactly as they have it, as opposed to a strawman.

For example, if your opponent believes that 2+2=5, you pass ITT by saying "2+2=5", but you fail it by saying "2+2=7". From your perspective, both results are "equally wrong", but from their perspective, the former is correct, while the latter is plainly wrong.

In other words, the goal of ITT isn't to develop a "different, but equally correct" map of the territory (because if you would believe in correctness of the opponent's map, it would also become your map), but to develop a correct map of your opponent's map (as opposed to an incorrect map of your opponent's map).

So, on some level, while you pass an ITT, you know you are saying something false or misleading; even if just by taking correct arguments and assigning incorrect weights to them. But the goal isn't to derive a correct "alternative truth"; it is to have a good model of your opponent's mind.

No good arguments, or the weight of the arguments for X are greater than the weight of the arguments against X?

Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X.

No, http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/

In high level debating at the debating world championship the participants are generally able to give good arguments for both sides of every issue.

The Einstein Toolkit Consortium is developing and supporting open software for relativistic astrophysics

This is a core product to which you can attach modules for the specific models you want to run. It is able to handle GR on a cosmological scale!

http://einsteintoolkit.org/

I tried to follow the link, but the whole framework (ETK + Cactus + Loni and so on...) is so scattered and so poorly documented that it discouraged me.
I have the idea that only those who already use Cactus intensively will know how to use the toolkit.

I didn't realize that the biggest supporter of UBI in the US is the ex-leader of the Service Employees Union. Guess I will have to read that book next. I have Agar's 'Humanity's End' to tackle next...

http://www.alternet.org/economy/universal-basic-income-solves-robots-taking-jobs

and a write-up on why the elites didn't get the Brexit drama right...

http://www.bloomberg.com/view/articles/2016-06-24/-citizens-of-the-world-nice-thought-but

They failed, because as the blogger Epicurean Dealmaker pointed out on Twitter, “Markets distill the biases, opinions, & convictions of elites,” which makes them “Structurally less able to predict populist movements.”

That seems to be way off. Prediction markets reflect the opinions of those who enter the market. AFAIK there's no barrier to the lower income strata of the population. Polls also failed to predict the result, so I would say that it was not a structural failure of the markets.

Prediction markets reflect the opinions of those who enter the market. AFAIK there's no barrier to the lower income strata of the population.

The thing is, the markets reflect committed-capital-weighted opinions of market participants. This is not an egalitarian democracy.

Given that market participants insure against risks with the prediction market, and that the event of Brexit does carry risk for some businesses, I'm not sure that's empirically the case.

Possibly we (meaning I vs. Epicurean Dealmaker) have very different notions of 'elite'.
I imagine the elite as the 10% (or 5%, or 1%, depending on your Pareto distribution) which has enough capital to hedge against market fluctuations (or enough to create them entirely); as far as I understand, ED instead means by 'elite' anyone who has enough money to invest in a market.

I don't think this is the issue. If you invest $10m into some market position, your "opinion" literally has one million times the impact of someone who invested $10. It's not just "people who invest" vs "people who do not invest". Even among those who invest, the more capital you apply, the more your opinion matters.

Markets are inherently capital-weighted and their opinion necessarily reflects the positions of the rich to a much greater degree.
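The capital-weighting point lends itself to a toy calculation (the numbers below are made up purely for illustration, and a real prediction market sets prices through trading rather than by averaging reported beliefs):

```python
# Toy illustration: capital-weighted vs. unweighted "market opinion".
# Each participant reports a probability for some event (say, Brexit)
# together with the capital committed behind that view.
positions = [
    (0.30, 10_000_000),  # one large investor: event unlikely
    (0.70, 10),          # three small bettors: event likely
    (0.70, 10),
    (0.70, 10),
]

# One-person-one-vote average of the reported probabilities.
unweighted = sum(p for p, _ in positions) / len(positions)

# Capital-weighted average: each opinion counts in proportion to
# the money behind it.
weighted = sum(p * c for p, c in positions) / sum(c for _, c in positions)

print(f"unweighted average: {unweighted:.3f}")  # → 0.600, set by the majority
print(f"capital-weighted:   {weighted:.3f}")    # → 0.300, set by the big position
```

The single $10m position drags the capital-weighted figure to essentially its own estimate, even though three of the four participants disagree with it, which is the sense in which a market "opinion" is not an egalitarian poll.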

Are the EU regulations on algorithmic decision-making and a “right to explanation” positive for our future? Do they make a world with UFAI less likely?

Room for improvement in Australia’s overseas development aid

Poor countries typically receive aid from many donors. In Vietnam, Australia is one of 51 multilateral and bilateral donors (Vietnam Ministry of Planning 2010). Interactions between a large number of donors and a single recipient government can have a cumulative and damaging impact. For example, in 2005, the Tanzanian government produced about 2,400 reports for the more than 50 donors operating in the country (TASOET 2005: 1). In the Pacific Islands, some senior government officials are so busy meeting donor–financed consultants and producing reports for donors that they have little time for the business of governing (AusAID 2008a: 21).

  • quoted in the Australian Government Independent Review of Aid Effectiveness chapter 1-3

Perhaps we need a common OECD project committee or other multilateral aid review committees, so that only one report is needed rather than multiple reports. Should we focus on fewer big, ambitious projects rather than many small-impact projects?

The EA community for historical reasons doesn't do much analysis of government aid (actually, no one does), even though this is a fundamentally public activity in democratic countries. And that's reasonable; it's extremely complex to analyse incumbent donors. It's easier to think on the margins, and from the perspectives of individuals. To get started, I read through the Australian Government's Independent Review of Aid Effectiveness to identify the counter-intuitive takeaways.

What's the current scope of Australia's aid operations?

'Australians are generous supporters of this cause. Each year the Australian people contribute $800 million to NGOs for aid work. Australia has some of the most active NGOs in the field and many Australians also volunteer their time and skills overseas. Additionally, on behalf of the people, the government provides $4 billion a year, and runs a substantial aid operation around the world.'

Why is this a timely issue?

'The total volume of aid has grown dramatically, driven by: large increases in aid from traditional donors (basically the Western industrialised countries); the emergence of new non–government donors (such as the Bill and Melinda Gates Foundation) and global funds (for example the Global Fund to Fight AIDS, Tuberculosis and Malaria); and the rapid growth in aid from non–traditional donors such as China and Brazil'

Not to mention the emergence of history's pre-eminent aid-effectiveness-focused civic community: effective altruists.

Effective Development Group:

“While we believe the aid program should aim at contributing to development and poverty reduction efforts overseas, we need to recognise its limited capacity to yield results, and even sometimes its potential counterproductive effect over the longer term, given the sheer complexity of the dynamics at play and the many factors/actors that contribute to them.”

- quoted in the Australian Government Independent Review of Aid Effectiveness, Chapters 1-3

----Policy proposals----

Multilateral aid consolidation

The Australian Government's Independent Review of Aid Effectiveness identified that the principal operating procedure for Australian foreign aid should be value for money. Those multilateral organisations recently found, or in the future found, to have a poor or worse overall assessment of value for money should be stripped of their funding, which is probably in the hundreds of millions and possibly into the billions.

References: see part 3 of Independent Review of Aid Effectiveness

Independence from aid

To ensure Australia's aid partners don't become dependent on Australian foreign aid, destabilising foreign economies' stability and self-reliance (e.g. undercutting farmers' produce at the markets, depriving them of incentives to produce, making them more dependent and creating less surplus, and thus greater deprivation and poverty over the long term and greater costs to our aid budget):

Scale down aid, or halt the expansion of aid, in geographic areas identified by the review where there is both a weak case for expansion and high reliance on bilateral delivery channels.

References: see part 3 of Independent Review of Aid Effectiveness

Defragmentation

(see print screen of page 39 in chapters 1-3 of the report)

To put it simply, there are too many small ineffective programs and these are costing wellbeing and Australian dollars.

'Evidence of the problems of fragmentation, and recommendations to help reduce it, are a recurring theme through this Report. It needs a sustained effort to consolidate and tighten political and bureaucratic discipline in the future.'

-chapters 1-3

Public communication

Aid budget given to communicating effectiveness or otherwise:

'The government does not have an effective communications strategy for the aid program. This contrasts for example with the proactive communications practices of the Australian Defence Force. The contrast was particularly stark following the Padang earthquake in Indonesia, when the ADF actively promoted publicly the work it had done, whereas there was very limited coverage of the AusAID effort. A generally ad hoc and reactive approach may have been viable for a small program, but it will not work given the increased scrutiny the program will face as it grows. AusAID's leadership has been more forthcoming and more publicly available than in the past, and this is a welcome development.'

'The Review Panel does not advocate a public relations strategy which is merely self–congratulatory. The issue is, rather, ensuring that the Australian public are able to obtain an accurate and full account of the resources which are being devoted to aid, both the accomplishments and the difficulties. Fostering more informed public debate about the program is healthy and appropriate. The Australian people have a right to know why Australia gives aid and what is being achieved with their money. But the requirement goes beyond public information. It is also desirable that there should be a greater sense of public engagement with the aid program. The Review Panel makes a number of recommendations in this regard.'

Seconded recommendations that are obvious

'Recommendation 37: A Transparency Charter should be developed, committing the aid program to publishing documents and data in a way that is comprehensive, accessible and timely.'

National interest scepticism

One problem with the objective of the program as it is presently stated is that it is unclear and ambiguous in relation to how the national interest should figure in the program. The Review Panel believes that this issue should be brought out into the open and addressed squarely. Those responsible for managing the transition to the much increased aid program of the future need clarity and guidance.

In the first place, Australia’s interests are served by a world of prosperity and opportunity, rather than one plagued by poverty.

In my quest to optimize my sleep, I have found over the last few days that I relax a lot more than usual. I sleep on my side, but I put a cushion between my back and the wall so that part of my weight rests on my back and part rests on the mattress of the bed.

Are there any real reasons why standard beds are flat? Or is it just a cultural custom, like our standard toilet design, that exists for stupid reasons?

Is post-rationalism dead? I'm following some trails, and the most recent material is at least three years old.
If so, good riddance?

If I put the phrase into Google, one of the results is http://thefutureprimaeval.net/postrationalism/, which was written in 2015, so the phrase got used more recently than three years ago.

In general the term isn't important to many of the people that Scott put under that label when he wrote his map. http://www.ribbonfarm.com/ is still alive and well. David Chapman also still writes.

That was my starting point too, but I noticed that most new content linked there specifically about PR seems to have been written pre-2015. If those authors still write, I get the impression that they are not writing about PR anymore.
That makes me suspect that postrationalism was never a 'thing'.

That makes me suspect that postrationalism was never a 'thing'.

Scott used the term when he drew his map, and a few people thought that it described a cluster, but most of the people involved don't care for the term.

It's similar to a term like Darwinism that wasn't primarily about self-labeling.