Open thread, Feb. 06 - Feb. 12, 2017

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments


From "A Bitter Ending":

At a conference back in the early 1970s, Danny [Kahneman] was introduced to a prominent philosopher named Max Black and tried to explain to the great man his work with Amos [Tversky]. "I’m not interested in the psychology of stupid people," said Black, and walked away.

Danny and Amos didn’t think of their work as the psychology of stupid people. Their very first experiments, dramatizing the weakness of people’s statistical intuitions, had been conducted on professional statisticians. For every simple problem that fooled undergraduates, Danny and Amos could come up with a more complicated version to fool professors. At least a few professors didn’t like the idea of that. "Give people a visual illusion and they say, ‘It’s only my eyes,’ " said the Princeton psychologist Eldar Shafir. "Give them a linguistic illusion. They’re fooled, but they say, ‘No big deal.’ Then you give them one of Amos and Danny’s examples and they say, ‘Now you’re insulting me.’ "

In late 1970, after reading early drafts of Amos and Danny’s papers on human judgment, Edwards [former teacher of Amos] wrote to complain. In what would be the first of many agitated letters, he adopted the tone of a wise and indulgent master speaking to his naïve pupils. How could Amos and Danny possibly believe that there was anything to learn from putting silly questions to undergraduates? "I think your data collection methods are such that I don’t take seriously a single ‘experimental’ finding you present," wrote Edwards. These students they had turned into their lab rats were "careless and inattentive. And if they are confused and inattentive, they are much less likely to behave more like competent intuitive statisticians." For every supposed limitation of the human mind Danny and Amos had uncovered, Edwards had an explanation. The gambler’s fallacy, for instance. If people thought that a coin, after landing on heads five times in a row, was more likely, on the sixth toss, to land on tails, it wasn’t because they misunderstood randomness. It was because "people get bored doing the same thing all the time."

An Oxford philosopher named L. Jonathan Cohen raised a small philosophy-sized ruckus with a series of attacks in books and journals. He found alien the idea that you might learn something about the human mind by putting questions to people. He argued that because man had created the concept of rationality, he must, by definition, be rational. "Rational" was whatever most people did. Or, as Danny put it in a letter that he reluctantly sent in response to one of Cohen’s articles, "Any error that attracts a sufficient number of votes is not an error at all."

He argued that because man had created the concept of rationality, he must, by definition, be rational.

Oh my.

Or, as Danny put it in a letter that he reluctantly sent in response to one of Cohen’s articles, "Any error that attracts a sufficient number of votes is not an error at all."

Wondering how many computation cycles humanity has wasted since the beginning of time debating words will give me nightmares. Have we, in four thousand years of history, accumulated a month of creative, uninterrupted thought about truth that wasn't about definitions?

We don't have an open quotes thread on the main page, but this made me chuckle:

"mathematician thinks in numbers, a lawyer in laws, and an idiot thinks in words." from Nassim Taleb in

http://www.thehindu.com/books/%E2%80%98Trump-makes-sense-to-a-grocery-store-owner%E2%80%99/article17109351.ece

I came here looking for a Rationality Quotes thread to quote that in. :)

I'm especially sensitive to it because I spent a lot of time last year reading postmodernist literary theory, which rejects logic in favor of rhetoric. They support theories that have impressive-sounding words because postmodernist theory says the point of theory is to have fun rather than to understand things.

postmodernist theory says the point of theory is to have fun rather than to understand things.

Do they really admit they are just trolling? :O

Is it currently legal to run a for-money prediction market in Canada? I assume the answer is "no," but I was surprisingly unable to find a clear ruling anywhere on the Internet. All I can find is this article which suggests that binary options (which probably includes prediction markets) exist in a legally nebulous state right now.

I think you are misreading the article. I think it is saying that betting on financial markets is heavily regulated. The whole reason those sites exist is to avoid claiming to give access to financial markets. At one point the SEC(?) negotiated a deal with Intrade under which it could have American customers in return for not offering financial bets. Sports betting websites are much more lightly regulated, and I'm pretty sure political prediction markets are legal.

Why do you ask? What does it matter where a market is located? As that article shows, the Canadian government, unlike USG, doesn't try hard to block Canadians from using off-shore services that would not be legal in Canada. Prediction markets are definitely legal in Ireland. What more do you need? It might be dangerous to live in Canada and run a sketchy betting site nominally located in the Caribbean, but Ireland? If you follow Irish law, no problem. Again, I put odds at 90% that it is legal in Canada. The problem with prediction markets is lack of demand and lack of access to Americans.

Added: this says that sports betting is a gray area and even these sites are not based in Canada. Also, I checked the example I had in mind and it was not registered in Canada. So probably I was wrong about Canada, but, again, Ireland is all you need.

I think this paper implies that rare harmful genetic mutations explain much of the variation in human intelligence. Since it will soon be easy for CRISPR to eliminate such mutations in embryos, I think this paper's results, if true, mean that genetic engineering for super-intelligence will be relatively easy.

I don't think that this shows anything different from earlier studies, which it shouldn't because it doesn't use different methods.* I guess it simultaneously measures that 25% of the variance is due to common variants and another 25% is due to rare variants in LD with common variants, while previously those results came from separate studies.

I have only skimmed the paper, but I reject the claim in the abstract that these "rare" variants are due to mutation-selection balance (aka mutational load). That is the claim you are talking about, right? They are rarer than the SNPs on the chip, but I think they are too common to be purely deleterious. I don't see how they could measure such rare variants without sequencing at least some of the subjects.

* One method that is different is that it doesn't throw out close relatives. This is how it distinguishes common variants from those merely in LD with them. Potentially this could detect the effect of variants rarer than in the earlier article, but it did not.


Having a choice between targeting common variants and rare variants can only make things easier than not having a choice, but why do you think rare variants are easier?

There are tradeoffs between several difficulties. I see three axes. (1) Knowing what variants to target; (2) Cost of edit: (2a) cost of CRISPR and (2b) deleterious effects of the edit; (3) Benefit per gene.

Mutational load likely eliminates (2b). Proofreading the genome allows us to skip (1), but at the cost of doing a tremendous number of edits (2a). Do you really expect that to be easy soon? Alternately, we could try to identify which super-rare variants affect intelligence, paying cost (1), but I think this will be extremely difficult. I expect the effect size (3) to be smaller for rarer variants, although I'm not sure I have a good reason for this.

I have only skimmed the paper, but I reject the claim in the abstract that these "rare" variants are due to mutation-selection balance (aka mutational load). That is the claim you are talking about, right?

Yes

Having a choice between targeting common variants and rare variants can only make things easier than not having a choice, but why do you think rare variants are easier?

Eliminating rare variants almost certainly won't have negative side effects. If X and Y reduce my intelligence and X is a rare variant and Y a common one, then there is a much bigger chance that Y does something good for me than that X does.

Do you really expect that to be easy soon?

If we have solid evidence that CRISPR editing could result in super-geniuses I expect lots of resources (perhaps in China) to be devoted to the relevant practical problems.

Request for programmers: I have developed a new programming trick that I want to package up and release as open-source. The trick gives you two nice benefits: it auto-generates a flow-chart diagram description of the algorithm, and it gives you steppable debugging from the command line without an IDE.

The main use case I can see is when you have some code that is used infrequently (maybe once every 3 months), and by default you need to spend an hour reviewing how the code works every time you run it. Or maybe you want to make it easier for coworkers to get a high-level understanding of the program, without having to dig into the actual source code. In these scenarios, the autogenerated flow diagram becomes quite useful. Conceptually, it is also nice to be able to look at the algorithm states and control flow as you are developing it, to clarify your own thinking.
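(The post doesn't say how the trick is implemented, so the following is not the author's method; it is only a minimal, generic Python sketch of one way to get both artifacts from a single description: write the algorithm as named steps with explicit transitions, then derive a Graphviz DOT flow chart and a command-line stepper from that same structure. All names here are hypothetical.)

    # Hypothetical sketch: the algorithm is a table of named steps; each entry is
    # (action, possible successor steps). The same table yields a flow chart and
    # a command-line stepper.

    def to_dot(steps):
        """Emit a Graphviz DOT flow chart of the step graph."""
        lines = ["digraph algorithm {"]
        for name, (_, successors) in steps.items():
            for nxt in successors:
                lines.append('    "%s" -> "%s";' % (name, nxt))
        lines.append("}")
        return "\n".join(lines)

    def run_stepwise(steps, start, state):
        """Execute one step at a time, pausing for Enter between steps."""
        current = start
        while current is not None:
            print("[step] %s  state=%s" % (current, state))
            input("press Enter to continue...")
            action, _ = steps[current]
            current = action(state)  # each action returns the name of the next step
        return state

    # Toy example: clip negative numbers to zero and sum a list.
    def read_item(state):
        if state["i"] >= len(state["data"]):
            return None  # done
        state["x"] = state["data"][state["i"]]
        state["i"] += 1
        return "clip" if state["x"] < 0 else "accumulate"

    def clip(state):
        state["x"] = 0
        return "accumulate"

    def accumulate(state):
        state["total"] += state["x"]
        return "read_item"

    STEPS = {
        "read_item":  (read_item,  ["clip", "accumulate"]),
        "clip":       (clip,       ["accumulate"]),
        "accumulate": (accumulate, ["read_item"]),
    }

    if __name__ == "__main__":
        print(to_dot(STEPS))  # paste into Graphviz to see the flow chart
        final = run_stepwise(STEPS, "read_item",
                             {"data": [3, -1, 4], "i": 0, "total": 0})
        print("total =", final["total"])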

Before releasing the tool I want to code up some examples that showcase how the technique works. I was hoping people could help me out by contributing some ideas for good test problems. The ideal problem, in my mind, is one where the difficulty comes not from any deep conceptual requirements, but rather from the presence of many different program states, options, subroutines, or special cases that interact in a way that is hard to remember or reason about without assistance.

Converting local time to UTC and back. Time zones, daylight saving time, etc. are very messy.
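A minimal sketch of that round trip in Python, using the standard-library zoneinfo module (Python 3.9 or later); the zone name and dates are only illustrative:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # standard library in Python 3.9+

    local_tz = ZoneInfo("America/Toronto")

    # A local wall-clock time during daylight saving time (EDT, UTC-4).
    local = datetime(2017, 7, 1, 14, 30, tzinfo=local_tz)

    # Local -> UTC
    as_utc = local.astimezone(timezone.utc)
    print(as_utc)          # 2017-07-01 18:30:00+00:00

    # UTC -> local (round trip)
    back = as_utc.astimezone(local_tz)
    print(back == local)   # True

    # Gotcha: when clocks fall back, some wall-clock times occur twice; the
    # `fold` attribute picks the first or second occurrence.
    ambiguous = datetime(2017, 11, 5, 1, 30, tzinfo=local_tz)
    print(ambiguous.utcoffset())                  # first occurrence (EDT)
    print(ambiguous.replace(fold=1).utcoffset())  # second occurrence (EST)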

(some previous discussion of predictionbook.com here)

[disclaimer: I have only been using the site seriously for around 5 months]

I was looking at the growth of predictionbook.com recently, and there has been a pretty stable addition of about 5 new public predictions per day since 2012 (that is counting only new predictions, not including additional wagers on existing predictions). I was curious why the site does not seem to be growing, and why it is mentioned or linked to so little on lesswrong and related blogs.

(sidebar: Total predictions (based on the IDs of the public predictions) are growing at about double that rate although there was huge growth around 2015 (graph) that I assume was either a script generating automated predictions, or just testing by the devs maybe -- does anyone know what caused this?)

Personally I find predictionbook to be very useful for

  • reducing hindsight bias
  • revealing planning fallacy
  • making me more objective, reducing effects of narrative fallacy
  • forcing me to think through questions more thoroughly by considering base rates, what the world would need to look like now for the prediction to come to pass, noticing composite predictions and considering each part individually, etc.
  • making me more aware of other people's failures at prediction, or of when they are careful to make hard-to-verify predictions.
  • making me more wary of post-hoc rationalization of events I would not have predicted
  • fun

Gwern covers many other benefits of making and tracking predictions here

I would expect predictionbook to be more popular, since I am not aware of any similar services, and I find predictions to be so useful. I was therefore wondering:

  • who on lesswrong tracks their predictions outside of predictionbook, and their thoughts on that method
  • who is not tracking their predictions at all, and why they made that decision

I use Metaculus a lot, and have made predictions on the /r/SpaceX subreddit which I need to go back and make a calibration graph for.

(They regularly bet donations of reddit gold, and have occasional prediction threads, like just before large SpaceX announcements. They would make an excellent target audience for better prediction tools.)
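For what it's worth, once the predictions are exported as (stated probability, outcome) pairs, the calibration graph itself is only a few lines of Python; a rough sketch with made-up data (the binning choice is arbitrary):

    import numpy as np
    import matplotlib.pyplot as plt

    # (stated probability, did it happen) pairs -- made-up example data.
    predictions = [(0.9, True), (0.7, True), (0.7, False), (0.55, True),
                   (0.3, False), (0.8, True), (0.6, False), (0.95, True)]

    probs = np.array([p for p, _ in predictions])
    outcomes = np.array([o for _, o in predictions], dtype=float)

    # Bucket predictions by stated probability and compare to observed frequency.
    bins = np.linspace(0.0, 1.0, 6)  # 0-20%, 20-40%, ...
    idx = np.digitize(probs, bins) - 1
    centers, freqs = [], []
    for b in range(len(bins) - 1):
        mask = idx == b
        if mask.any():
            centers.append(probs[mask].mean())
            freqs.append(outcomes[mask].mean())

    plt.plot([0, 1], [0, 1], "k--", label="perfect calibration")
    plt.plot(centers, freqs, "o-", label="observed")
    plt.xlabel("stated probability")
    plt.ylabel("observed frequency")
    plt.legend()
    plt.show()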

I've toyed with the idea of making a bot which searched for keywords on Reddit/LW, and tracked people's predictions for them. However, since LW is moving away from the reddit code base, I'm not sure if building such a bot would make sense right now.
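As a rough illustration of the parsing half of such a bot (just the keyword matching, independent of which site API it would run against), here is a sketch; the phrasings it recognizes and the regex itself are made up:

    import re

    # Hypothetical matcher for prediction-like sentences such as
    # "I predict Falcon Heavy flies this year (70%)".
    PREDICTION_RE = re.compile(
        r"(?:I predict|I bet|confident that|odds of)\b(?P<claim>[^.!?]*?)"
        r"\(?(?P<prob>\d{1,3})\s*%\)?",
        re.IGNORECASE,
    )

    def extract_predictions(comment_body):
        """Return (claim, probability) pairs found in a comment."""
        found = []
        for m in PREDICTION_RE.finditer(comment_body):
            found.append((m.group("claim").strip(" ,:"), int(m.group("prob")) / 100))
        return found

    print(extract_predictions(
        "I predict Falcon Heavy flies this year (70%). Unrelated sentence."
    ))
    # [('Falcon Heavy flies this year', 0.7)]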

I don't think the devs have touched predictionbook in quite a while. In general, discovery of interesting public predictions doesn't work well because it's not easy to search and there are no tags.

There's https://www.metaculus.com and https://www.gjopen.com/ for curated public predictions. For private prediction there's a new Android App (still very much in Beta and in development): https://play.google.com/store/apps/details?id=squirrelinhell.lwpredictions

I also did a bunch of prediction tracking in less structured ways. Our LW dojo had, for a while, a shared Workflowy for predictions.

Looks like the 'RECENT ON RATIONALITY BLOGS' section on the sidebar is still broken.

Is this a difficult fix?

What advice would you give to a 12-year-old boy who wants to become great at drawing and painting?

(Let's assume that "becoming great at drawing and painting" is a given, so please no advice like "do X instead".)

My thoughts: There is the general advice about spending "10 000 hours", for example by allocating a fixed space in your schedule (e.g. each day between 4AM and 5AM, whether you feel like doing it or not). And the time is best spent learning and practicing new-ish stuff, as opposed to repeating what you are already comfortable with over and over again. So for example, you could decide to spend one lesson trying to get the shadows right, another lesson trying to get the perspective right, etc.

Related things you should study: perspective, anatomy.

You should probably try different tools, e.g. acrylic paint, watercolor, chalk; or different styles, e.g. realistic or cartoon; if only to get outside of your comfort zone once in a while.

I suppose there are some great books to read, and useful online websites for beginning painters, but I am not familiar with this area. A list with a short description would be appreciated.

Make a habit of including a lot of pictures in any notes for school. See Dan Roam's "The Back of the Napkin".

The ability to translate ideas into good pictures is commercially valuable.

I don't see the point of exploring many different kinds of 2D painting. I would expect a digital pen to beat most other tools, especially in the future as technology advances.

It might be worth looking into 3D virtual reality painting. It's a new medium and thus valuable.

I don't see the point of exploring many different kinds of 2D painting. I would expect a digital pen to beat most other tools, especially in the future as technology advances.

There are a lot of people who say that piano is the most versatile instrument, and they're right about that on a superficial level. You can do polyphonic things with a piano that you can't do with a clarinet or a trumpet. And like a digital pen, a digital piano can simulate a lot of other instruments, especially if you hook it up to flashy synthesis software that knows all the different articulations for those instruments.

But using a digital piano doesn't feel very much like using those instruments, and you won't express yourself the same way you would if you had one.

A calligraphy brush is really fun and you can't replicate the feeling of using it without the physical tools. Many of them have nice texture and you can feel their shape when you rotate them in your hands -- they're also lightweight, so if you're not holding one to the page it feels more like a pencil than like a paintbrush.

A lot of my friends do art, and I do art too when they ask me to try it with them. Different art tools feel different, and for some people, some tools are more fun than others. I think it's really important to try these things out before you make a decision about them.

There are a lot of people who say that piano is the most versatile instrument, and they're right about that on a superficial level.

Actually the piano is one of the least versatile instruments. You get control of which key you press, a chosen velocity, and the sustain pedal. A piano performance can be MIDI-recorded using these three values and reproduced at an extremely high level of detail. If you try to do the same with a truly versatile instrument, like for example the violin, you will find that it is impossible. Not difficult, impossible.

In my opinion, the most important advice:

  • Find a teacher that can demonstrably draw/paint to a high level.
  • Observe the first few lessons to make sure that he is also good at transmitting the information and is passionate about the art.

Art has a rational component which we can call the 'technique' or 'craft' of the field. In drawing this would be light/shade, perspective, texture, drawing materials, etc., but it also has an intuitive component that results from developing an aesthetic sense. It is important to realise that art is not learned just from painting but by seeing the world through a painter's eyes: observing the world and understanding the way the eye perceives it and the mind reproduces it.

I once went to a workshop on Sumi-e painting at the local Japanese cultural centre, and it changed how I look at paintings. So I'd recommend taking a Sumi-e class, or these days, I suppose watching Sumi-e tutorials on Youtube might do.

In general, getting an idea of how different cultures look at visual arts can be eye-opening. In addition to learning by doing, going to different museums and galleries can be a way to learn about art from many different time periods and cultures in different mediums.

Another thing that changed my perspective is a book called An Eye For Fractals by Michael McGuire. It taught me to break down things into different types of shapes when looking at them, and to appreciate a different kind of beauty than is usually taught to children. It is an exploration of Benoit Mandelbrot's famous quote

"Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line." - Benoit Mandelbrot, The Fractal Geometry of Nature, 1983.

I am "not terrible" at various forms of art / media so I might be able to give some adviceL

In my case, I spent a bunch of time drawing human figures from only a few specific angles, and this hindered me a lot. So definitely focus on getting the kid drawing lots of different things. As a general note, focus more on the general shape of things than on specific details (e.g. having the whole body's anatomy roughly right is better than just a nice-looking face).

Other than that, I'm unsure whether there are generally accepted "core books" in art education, compared to other subjects. I think this may also be because art/painting is a large subject.

So I'd recommend doing these obvious things (happy to chat more via PM if you want to get into more detail):

  • Focus on letting the kid do the stuff he enjoys. When I was forced to attend drawing class, that took a lot of the fun out of it.

  • The first point being said, if you can find a class/teacher that specifically teaches the sorts of things the kid is interested in (i.e. they enjoy going to the class), this is a pretty good idea.

  • Practice. Obviously the more you draw the better you'll get.

  • Google "best books / resources for X" where X is whatever things / medium the kid is interested in.

  • Nice materials. It's surprising how much a good sketchbook and high-quality pens can make the whole process feel better. I'm not suggesting you shell out several hundred for some huge Copic set, but some nice Canson paper and Prismacolor pens can go a long way.

I am not "great" at drawing and have never put any time into painting. That said, the workbook Drawing on the Right Side of the Brain caused a quantum leap in my sketching ability. (There is a separate, much longer long-form book with the same title, but the workbook consolidates all the exercises and I found it more practically useful.)

Thanks, I was thinking about this, but I didn't know there was a separate workbook. Now I have... ahem... purchased both.

Has there been any discussion or thought of modifying the posting of links to support a couple paragraphs of description? I often think that the title alone is not enough to motivate or describe a link. There are also situations where the connection of the link content to rationality may not be immediately obvious and a description here could help clarify the motivation in posting. Additionally, it could be used to point readers to the most valuable portions of sometimes long and meandering content.

Does anyone have a backup of that one scifi short story from Raikoth about future AGI and acausal trade with simulated hypothetical alien AGI? The link is broken. http://www.raikoth.net/Stuff/story1.html

"Why Boltzmann Brains Are Bad" by Sean M. Carroll https://arxiv.org/pdf/1702.00850.pdf

Two excerpts: "The data that an observer just like us has access to includes not only our physical environment, but all of the (purported) memories and knowledge in our brains. In a randomly-fluctuating scenario, there’s no reason for this “knowledge” to have any correlation whatsoever with the world outside our immediate sensory reach. In particular, it’s overwhelmingly likely that everything we think we know about the laws of physics, and the cosmological model we have constructed that predicts we are likely to be random fluctuations, has randomly fluctuated into our heads. There is certainly no reason to trust that our knowledge is accurate, or that we have correctly deduced the predictions of this cosmological model."

"If we discover that a certain otherwise innocuous cosmological model doesn’t allow us to have a reasonable degree of confidence in science and the empirical method, it makes sense to reject that model, if only on pragmatic grounds”

My opinion: I agree with the idea that a BB can't know whether it is a BB or not, and I wrote about it on LessWrong, but that is not a basis for concluding that the BB theory has zero probability. We can't assign zero probability to theories just because we don't like them; that would be a great way to start ignoring any cognitive biases.

My position: There is no problem with being a BB:

1) If nothing else exists, different BB states are connected with each other like numbers in the set of natural numbers, and this way of connecting them creates an almost normal world, which may have some testable predictions. (Dust theory)

2) If a special type of BB, call them BB-AIs, exists and dominates the landscape, such BB-AIs create simulations which are full of human minds, so we are probably in one of them. (The idea is that superintelligent computers are more probable than messy human minds and so are a more common type of BB; or that any BB-AI creates more simulated human minds than there are randomly appearing human BBs.)

3) If a real world exists and BBs exist, each BB corresponds to some state in the real world. Since any observer should think of itself as the whole set of similar observers under UDT, this means that I can't be just a BB; rather, I am a number of BBs plus some real me. And I can ignore the BB-part of me, because some form of “quantum immortality” transfers dead BBs into the “real me” every second. In short: “Big world immortality” completely neutralises the BB problem.