Open Thread, Jul. 27 - Aug 02, 2015

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments


The excellent intro to AI risk by the Computerphile people (mentioned in the last Open Thread) has an even better continuation: AI Self Improvement. It is quite obviously inspired by the Sequences (down to the simile of playing Kasparov at chess), but explained with remarkable brevity and lucidity.

Apparently, NASA is testing an EM Drive, a reactionless drive which, if it works, would falsify the law of conservation of momentum. As good Bayesians, we know that we should have a strong prior that the law of conservation of momentum is correct, so that even if EM Drive supporters produce substantial evidence, we should still think they are almost certainly wrong, especially given how common error and fraud are in science. But my question is: how confident should we be that the law of conservation of momentum is correct? Is it, say, closer to .9999 or to 1-1/10^20?

If it breaks conservation of momentum and also produces a constant thrust, it breaks conservation of energy, since kinetic energy goes up quadratically with time while input energy goes up only linearly.

If it doesn't break conservation of energy, there will be a privileged reference frame in which it produces maximum thrust per joule, breaking the relativity of reference frames.

Adjust probability estimates accordingly.
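
A quick numeric sketch of the constant-thrust argument above (a toy calculation; the thrust, mass, and power figures are made-up assumptions, not measured EM Drive values):

    # Constant thrust F on mass m gives v = (F/m)*t, so kinetic energy
    # KE = F^2 * t^2 / (2*m) grows quadratically with time, while input
    # energy E_in = P*t grows only linearly. Past t = 2*m*P / F^2 the
    # drive would have produced more kinetic energy than it consumed.
    F = 1e-3   # thrust in newtons (assumption)
    m = 100.0  # spacecraft mass in kg (assumption)
    P = 1e3    # input electrical power in watts (assumption)

    t_breakeven = 2 * m * P / F**2
    print(f"KE exceeds input energy after {t_breakeven:.2e} seconds")

Past that point you would have a free-energy machine, which is why the constant-thrust reading is so suspect.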

Conservation laws occasionally turn out to be false. That said, momentum is a pretty big one, since conservation of linear and angular momentum corresponds to translation and rotation invariance respectively, and those symmetries intuitively seem very likely to hold. But then there was

https://en.wikipedia.org/wiki/CP_violation

I would give at least .00001 probability to the following: momentum per se is not conserved, but instead some related quantity, call it zomentum, is conserved, and momentum is almost exactly equal to zomentum under the vast majority of normal conditions.

In general, since we can only do experiments in the vicinity of Earth, we should always be wondering if our laws of physics are just good linearized approximations, highly accurate in our zone of spacetime, of real physics.

This seems much more like a "We know he broke some part of the Federal Aviation Act, and as soon as we decide which part it is, some type of charge will be filed" situation. The person who invented it doesn't think it's reactionless, if thrust is generated it's almost certainly not reactionless, but what's going on is unclear.

Shawyer has said he thinks it doesn't violate conservation of momentum because it interacts with the "quantum vacuum virtual plasma." I don't really find that reassuring. The current effect size is very small, with no sign yet of scaling.

Obligatory links to John Baez on the topic: 1, 2.

Published 4 hours ago as of Monday 27 July 2015 20.18 AEST:

Musk, Wozniak and Hawking urge ban on AI and autonomous weapons: Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.

Link from Reddit front page

Link from The Guardian

Does anybody know of a way to feed myself data about the current time and which way is north? I've noticed that I really dislike not knowing the time or which direction I'm facing, but pulling out a phone to check is too inconvenient. I know there's North Paw, but it would be too awkward to actually wear.

Something with magnets under the skin, maybe?

Sometimes I will be talking to a student, and be perfectly happy to talk with her until a minute before my next class starts, but I'm uncertain of the time. If I make any visible effort to look at the time, however, she will take it as a sign that I want to immediately end our conversation, so I could use your described device.

While I'm sure you've thought of setting silent alarms on your phone, a slightly less obvious idea would be to get a watch that has a vibrating alarm capability.

Instead of real-time directional data, could you improve your sense of direction with training? Something like: estimate north, pull out your phone and check, score your estimate, iterate. I imagine this could rapidly be mastered for your typical locations, such that you no longer need to pull out your phone at all.
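
If you wanted to score those estimates, the only slightly fiddly part is the wrap-around at 360 degrees; a trivial sketch (the function name and degrees convention are my own, just for illustration):

    def angular_error(estimate_deg, true_deg):
        """Smallest angle, in degrees, between your guess at north and the compass reading."""
        diff = abs(estimate_deg - true_deg) % 360
        return min(diff, 360 - diff)

    # e.g. you guessed 350 degrees and the phone compass says 10 degrees:
    print(angular_error(350, 10))  # -> 20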

Do you know about this thing? It actually gets introduced at 11:00. It was originally intended to let deaf people hear again, but later on he shows that you can use any kind of data as input. It's (a) probably overkill and (b) not commercially available, but depending on how much time and how many resources you want to invest, I imagine it shouldn't be too hard to make one with just three pads or so.

There has been far less writing on improving rationality here on LW during the last few years. Has everything important been said about the subject, or have you just given up on trying to improve your rationality? Are there diminishing returns on improving rationality? Is it because it's very hard to get rid of most cognitive biases, no matter how hard you try to focus on them? Or have people moved these discussions to other forums, or to real life?

Or, as Yvain said in the 2014 survey results:

It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions. To give an example of how horrendous, people who were 50% sure of their answers to question 10 got it right only 13% of the time; people who were 100% sure only got it right 44% of the time. Obviously those numbers should be 50% and 100% respectively.

This builds upon results from previous surveys in which your calibration was also horrible. This is not a human universal - people who put even a small amount of training into calibration can become very well calibrated very quickly. This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.

LW's strongest, most dedicated writers all seem to have moved on to other projects or venues, as has the better part of its commentariat.

In some ways, this is a good thing. There is now, for example, a wider rationalist blogosphere, including interesting people who were previously put off by idiosyncrasies of Less Wrong. In other ways, it's less good; LW is no longer a focal point for this sort of material. I'm not sure if such a focal point exists any more.

Where, exactly? All I've noticed is that there's less interesting material to read, and I don't know where to go for more.

Okay, SSC. That's about it.

About that survey... Suppose I ask you to guess the result of a biased coin which comes up heads 80% of the time. I ask you to guess 100 times, of which ~80 times the right answer is "heads" (these are the "easy" or "obvious" questions) and ~20 times the right answer is "tails" (these are the "hard" or "surprising" questions). Then the correct guess, if you aren't told whether a given question is "easy" or "hard", is to guess heads with 80% confidence, for every question. Then you're underconfident on the "easy" questions, because you guessed heads with 80% confidence but heads came up 100% of the time. And you're overconfident on the "hard" questions, because you guessed heads with 80% confidence but got heads 0% of the time.

So you can get apparent under/overconfidence on easy/hard questions respectively, even if you're perfectly calibrated, if you aren't told in advance whether a question is easy or hard. Maybe the effect Yvain is describing does exist, but his post does not demonstrate it.
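
Here is a minimal simulation of the coin example, for anyone who wants to see the effect concretely (the 80% bias and the trial count are just the hypothetical numbers from above):

    import random

    random.seed(0)
    N = 100_000
    P_HEADS = 0.8  # the coin's bias, as in the example above

    # A perfectly calibrated guesser answers "heads, 80% confident" every time.
    answers = ["heads" if random.random() < P_HEADS else "tails" for _ in range(N)]
    correct = [a == "heads" for a in answers]

    easy = [c for a, c in zip(answers, correct) if a == "heads"]  # answer was heads
    hard = [c for a, c in zip(answers, correct) if a == "tails"]  # answer was tails

    print(f"overall: {sum(correct)/N:.1%} right at 80% confidence -> well calibrated")
    print(f"easy:    {sum(easy)/len(easy):.0%} right at 80% confidence -> looks underconfident")
    print(f"hard:    {sum(hard)/len(hard):.0%} right at 80% confidence -> looks overconfident")

Splitting by the answer after the fact manufactures the apparent miscalibration out of nothing.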

Wow, that's a great point. We can't measure anyone's "true" calibration by asking them a specific set of questions, because we're not drawing questions from the same distribution as nature! That's up there with the obvious-in-retrospect point that the placebo effect gets stronger or weaker depending on the size of the placebo group in the experiment. Good work :-)

I re-analyzed the calibration data, looking at all 10 questions averaged together (which I think is a better approach than going question-by-question, for roughly the reasons that D_Malik gives), and found that veterans did better than newbies (and even newbies were pretty well calibrated). I also found similar results for other biases on the 2012 LW survey.

A lot of this has moved to blogs. See malcolmocean.com, mindingourway.com, themindsui.com, agentyduck.blogspot.com, and slatestarcodex.com for more of this discussion.

That being said, I think writing/reading about rationality is very different from becoming good at it. I think someone who did a weekend at CFAR, or the Hubbard Research AIE level 2 workshop, would rank much higher on rationality than someone who spent months reading through all the Sequences.

1) There are diminishing returns on talking about improving rationality.

2) Becoming more rational could make you spend less time online, including on LessWrong. (The time you would have spent in the past writing beautiful and highly upvoted blog articles is now spent making money or doing science.) Note: This argument is not true if building a stronger rationalist community would generate more good than whatever you are doing alone instead. However, there may be a problem with capturing the generated value. (Eliezer indirectly gets paid for having published on LessWrong. But most of the others don't.)

I've just finished a solid first draft of a post that I'm planning on submitting to Main, and I'm looking for someone analytical to look over a few of my calculations. I'm pretty sensitive, so I'd be embarrassed if I posted something with a huge mistake in it to LW. The post is about the extent to which castration performed at various ages extends life expectancy in men, and was mainly written to inform people interested in life extension about said topic, though it might also be of interest to MtF trans people.

All of my calculations are in an Excel spreadsheet, so I'll email you the text of the post, as well as the Excel file, if you're interested in looking over my work. I'm mainly focused on big-picture advice right now, so I'm not really looking for someone to, say, look for typos. The only thing I'm really worried about is that I may have done something mathematically unsavory when crudely using mean age-at-death actuarial data from a subset of the population that existed in the past to estimate how long members of that same subset might live today.

Being able to use math to build the backbone of a scientific paper might be a useful skill for any volunteers to have, though I don't expect that any advanced knowledge of statistics is necessary. Thanks!

I can take a look; you know my email.

All of my calculations are in an Excel spreadsheet, so I'll email you the text of the post, as well as the Excel file, if you're interested in looking over my work.

One of the trends I've seen happening that I'm a fan of is writing posts/papers/etc. in R, so that the analysis can be trivially reproduced or altered. In general, spreadsheets are notoriously prone to calculation errors because the underlying code is hidden and decentralized; it's much easier to look at a Python or R script and check its consistency than an Excel table.

(It's better to finish this project as is than to delay this project until you know enough Python or R to reproduce the analysis, but something to think about for future projects / something to do if you already know enough Python or R.)
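
As a trivial illustration of the scripted-analysis style (in Python rather than R, and with invented placeholder numbers, not the actual actuarial data):

    # Every number quoted in the write-up is computed here in the open,
    # instead of living in hidden spreadsheet cells. These figures are
    # invented placeholders, not real cohort data.
    mean_age_at_death = {
        "castrated": 69.3,  # hypothetical
        "intact":    55.7,  # hypothetical
    }

    gain = mean_age_at_death["castrated"] - mean_age_at_death["intact"]
    print(f"estimated gain in mean age at death: {gain:.1f} years")

Re-running the one file regenerates every figure, so a reviewer can change an input and immediately see what breaks.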

"The Games of Entropy", which premiered at the European Less Wrong Community Weekend 2015, chapter two of the science and rationality promoting art project Seven Secular Sermons, is now available on YouTube. The first chapter, "Adrift in Space and Time" is also there, re-recorded with better audio and video quality. Enjoy!

I travelled to a different city for a few days and realised I should actively avoid gathering geographical information (beyond a rough sense of the layout) to free up brain space for more important things. Then I realised I should do that near home as well.

Two-part question:

  1. What do you outsource that is common and uncommon among people that you know?
  2. What should you be avoiding keeping in your brain that you currently are? (some examples might be birthdays, what day of the week it is, city-map-location, schedules/calendars, task lists, shopping lists)

And while we are at it: What automated systems have you set up?

I was under the impression that "brain space" was unlimited for all practical intents and purposes, and that having more stuff in your brain might actually make further learning easier - e.g. I've often heard it said that a person loses fluid intelligence as they age, but that this is compensated for by the larger store of knowledge they can connect new things with. Do you know of studies to the contrary?

What do you outsource that is common and uncommon among people that you know?

A lot of little facts (of the kind that people on LW use Anki decks to memorize). I outsource them to Google.

I barely remember any phone numbers nowadays and that seems to be common.

What should you be avoiding keeping in your brain that you currently are?

Schedules / to-do lists. I really should outsource them to some GTD app, but can't bring myself to use one consistently.

Has anyone been working on the basics of rationality or on summarizing the Sequences? I think it would be helpful if someone created a sequence that covers the Less Wrong core concepts concisely, as well as giving practical advice on how to apply the rationality skills related to these concepts at the 5-second level.

A useful format for the posts might be: an overview of the concept, an example in which people frequently fail at being rational because they don't innately follow the concept, and then advice on how to apply it. Another format might be: a principle underlying multiple Less Wrong concepts, examples in which people fail at being rational because they don't follow those concepts, and then advice on how to internalize the principle and become more rational.

I think all these posts should be summed up with, or contain, practical methods for improving rationality skills, plus ways to quantify and measure those improvements. The results of CFAR workshops could probably provide a basis for these methods.

Lots of links to the related less wrong posts or wikis would also be useful.

Donating now vs. saving up for a high passive income

Is there any sort of consensus on whether it is generally better to (a) directly donate excess money you earn or (b) save and invest it until you have a high enough passive income to be financially independent? And does the question break down to: is the long-term expected return for donated money (e.g. in terms of QALYs) higher than for invested money (donated at a later point)? If it is higher for invested money, there is a general problem of when to start donating, because in theory, the longer you wait, the higher the impact of that donated money. If the expected return for invested money is higher at the moment, I expect there will however come a point in time where this is no longer the case.

If the expected return is higher for immediately donated money, are there additional benefits of having a high passive income that can justify actively saving money? E.g. not needing to worry about job security too much...
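
One toy way to frame the comparison (every rate below is an invented assumption, not an estimate): donating now lets the charity's social return compound, while waiting lets the market return compound, so waiting only wins if the investment rate beats the charity's rate over the waiting period.

    # Toy model: value at year T of (a) donating now, where the good done
    # compounds at the charity's social rate of return, versus (b) investing
    # and donating the proceeds at year T. All numbers are assumptions.
    r_invest  = 0.05   # annual market return (assumption)
    r_charity = 0.10   # annual social return on a marginal donation (assumption)
    T = 20             # years of waiting
    donation = 10_000  # dollars

    donate_now   = donation * (1 + r_charity) ** T  # good compounds for T years
    invest_first = donation * (1 + r_invest) ** T   # lump sum donated at year T

    print(f"donate now:            {donate_now:,.0f}")
    print(f"invest, donate later:  {invest_first:,.0f}")

Under these made-up rates donating now wins comfortably; flip the inequality between the two rates and waiting wins, which is exactly the open empirical question.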

https://nnaisense.com/

NNAISENSE leverages the 25-year proven track record of one of the leading research teams in AI to build large-scale neural network solutions for superhuman perception and intelligent automation, with the ultimate goal of marketing general-purpose neural network-based Artificial Intelligences.

An AI startup created by Jürgen Schmidhuber.