Open Thread, January 4-10, 2016

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments

Lessons from teaching a neural network...

Grandma teaches our baby that a pink toy cat is "meow".
Baby calls the pink cat "meow".
Parents celebrate. (It's her first word!)

Later Barbara notices that the baby also calls another pink toy non-cat "meow".
The celebration stops; the parents are concerned.
Viliam: "We need to teach her that this other pink toy is... uhm... actually, what is this thing? Is that a pig or a pink bear or what? I have no idea. Why do people create such horribly unrealistic toys for the innocent little children?"
Barbara shrugs.
Viliam: "I guess if we don't know, it's okay if the baby doesn't know either. The toys are kinda similar. Let's ignore this, so we neither correct her nor reward her for calling this toy 'meow'."

Barbara: "I noticed that the baby also calls the pink fish 'meow'."
Viliam: "Okay... I think now the problem is obvious... and so is the solution."
Viliam brings a white toy cat and teaches the baby that this toy is also "meow".
Baby initially seems incredulous, but gradually accepts.

A week later, the baby calls every toy and grandma "meow".

So the child was generalizing along the wrong dimension, and your solution was to train an increase in the generalization of the word "meow", which is exactly what you got. You need to teach discrimination, not generalization. One method: present the pink cat and the pink fish sequentially, rewarding the "meow" response in the presence of the cat and rewarding fish-appropriate responses in the presence of the fish. Eventually, "meow" responses to the fish should extinguish.
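To put it in the neural-network framing of the title, here's a toy sketch in Python. Everything in it (the two features, the four examples, the label scheme) is invented for illustration; the point is just that a learner rewarded for "meow" on every example it sees, with no rewarded competing response, has nothing to discriminate on.

```python
# Toy logistic "learner". If every presented example is rewarded for "meow"
# (labels all 1), it generalizes "meow" to everything, including grandma;
# discrimination training (rewarded non-"meow" responses) fixes this.
# All features and data are made up for illustration.
import numpy as np

# Features: [is_pink, is_cat_shaped]; label 1 = "meow" is the rewarded response.
X = np.array([[1, 1],   # pink toy cat
              [0, 1],   # white toy cat
              [1, 0],   # pink toy fish
              [0, 0]])  # grandma
y_reward_only = np.array([1, 1, 1, 1])  # "meow" rewarded on every example
y_discrim     = np.array([1, 1, 0, 0])  # fish and grandma explicitly not "meow"

def train(X, y, steps=2000, lr=0.5):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))    # P("meow" | features)
        w -= lr * X.T @ (p - y) / len(y)      # logistic-loss gradient step
        b -= lr * (p - y).mean()
    return lambda x: 1 / (1 + np.exp(-(x @ w + b)))

print(train(X, y_reward_only)(X))  # ~[1 1 1 1]: everything is "meow"
print(train(X, y_discrim)(X))      # high for the cats, low for fish/grandma
```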

I've gotten around to doing a cost-benefit analysis for vitamin D: http://www.gwern.net/Longevity#vitamin-d

Why too much evidence can be a bad thing

(Phys.org)—Under ancient Jewish law, if a suspect on trial was unanimously found guilty by all judges, then the suspect was acquitted. This reasoning sounds counterintuitive, but the legislators of the time had noticed that unanimous agreement often indicates the presence of systemic error in the judicial process, even if the exact nature of the error is yet to be discovered. They intuitively reasoned that when something seems too good to be true, most likely a mistake was made.

In a new paper to be published in The Proceedings of The Royal Society A, a team of researchers, Lachlan J. Gunn, et al., from Australia and France has further investigated this idea, which they call the "paradox of unanimity."

"If many independent witnesses unanimously testify to the identity of a suspect of a crime, we assume they cannot all be wrong," coauthor Derek Abbott, a physicist and electronic engineer at The University of Adelaide, Australia, told Phys.org. "Unanimity is often assumed to be reliable. However, it turns out that the probability of a large number of people all agreeing is small, so our confidence in unanimity is ill-founded. This 'paradox of unanimity' shows that often we are far less certain than we think."

The researchers demonstrated the paradox in the case of a modern-day police line-up, in which witnesses try to identify the suspect out of a line-up of several people. The researchers showed that, as the group of unanimously agreeing witnesses increases, the chance of them being correct decreases until it is no better than a random guess.

In police line-ups, the systemic error may be any kind of bias, such as how the line-up is presented to the witnesses or a personal bias held by the witnesses themselves. Importantly, the researchers showed that even a tiny bit of bias can have a very large impact on the results overall. Specifically, they show that when only 1% of the line-ups exhibit a bias toward a particular suspect, the probability that the witnesses are correct begins to decrease after only three unanimous identifications. Counterintuitively, if one of the many witnesses were to identify a different suspect, then the probability that the other witnesses were correct would substantially increase.

The mathematical reason for why this happens is found using Bayesian analysis, which can be understood in a simplistic way by looking at a biased coin. If a biased coin is designed to land on heads 55% of the time, then you would be able to tell after recording enough coin tosses that heads comes up more often than tails. The results would not indicate that the laws of probability for a binary system have changed, but that this particular system has failed. In a similar way, getting a large group of unanimous witnesses is so unlikely, according to the laws of probability, that it's more likely that the system is unreliable.

This isn't "more evidence can be bad", but "seemingly-stronger evidence can be weaker". If you do the math right, more evidence will make you more likely to get the right answer. If more evidence lowers your conviction rate, then your conviction rate was too high.

Briefly, I think what's going on is that a 'yes' presents N bits of evidence for 'guilty' and M bits of evidence for 'the process is biased', where M>N. The probability of bias is initially low, but lots of yeses make it shoot up. So you have four hypotheses (bias yes/no crossed with guilty yes/no); the two bias hypotheses come to dominate, and under them the odds of guilt are the same as when you started.
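A minimal numerical sketch of that four-hypothesis model, with made-up parameters (a 1% chance of a biased line-up, a 90% hit rate and 10% false-alarm rate for unbiased witnesses, and a 50% prior of guilt; none of these numbers are from the paper):

```python
# P(guilty | n unanimous "yes" identifications) under four hypotheses:
# (biased? yes/no) x (guilty? yes/no). A biased line-up yields "yes"
# with probability ~1 regardless of guilt.
p_bias, p_guilt = 0.01, 0.5
p_hit, p_fa = 0.9, 0.1   # P("yes") for unbiased witnesses: guilty / innocent

def p_guilty_given_unanimous(n):
    like_guilty   = p_bias * 1.0 + (1 - p_bias) * p_hit ** n
    like_innocent = p_bias * 1.0 + (1 - p_bias) * p_fa ** n
    num = p_guilt * like_guilty
    return num / (num + (1 - p_guilt) * like_innocent)

for n in (1, 3, 5, 10, 30, 100):
    print(n, round(p_guilty_given_unanimous(n), 3))
# Rises at first (~0.89 at n=1, ~0.99 at n=3), then falls back toward the
# 0.5 prior as "biased" swallows the evidence (~0.84 at n=30, ~0.50 at n=100).
```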

A side note.

My mother is a psychologist, father - an applied physicist, aunt 1 - a former morgue cytologist, aunt 2 - a practicing ultrasound specialist, father-in-law - a general practitioner, husband - a biochemist, my friends (c. 5) are biologists, and most of my immediate coworkers teach either chemistry or biology. (Occasionally I talk to other people, too.) I'm mentioning this to describe the scope of my experience with how they come to terms with the 'animal part' of the human being; when I started reading LW I felt immediately that people here come from different backgrounds. It felt implied that 'rationality' was a culture of either hacking humanity, or patching together the best practices accumulated in the past (or even just adopting the past), because clearly, we are held back by social constraints - if we weren't, we'd be able to fully realize our winning potential. (I'm strawmanning a bit, yes.) For a while I ignored the voice in the back of my mind that kept mumbling 'inferential distances between the dreams of these people and the underlying wetware are too great for you to estimate', or some such, but I don't want to anymore.

To put it simply, there is a marked difference among biologists in how reverently they view the gross (and fine) human anatomy, and in how easily they accept that a body is just a thing, composed of matter, with charges and insulation and stuff - just a system of tubes, but still not a car in which you can individually tweak the axles and the windshield (probably). (This is why I think Peter Watts is so popular on LW - the idea that you can just tinker with circuitry and upgrade people.)

Psychologists are the most 'gentle'; they and the doctors have too much 'social responsibility' baked in to comfortably discuss people as walking meat. Botanists (like me) don't have enough knowledge to do it, but at least we are aware of this. Biochemists are narrow-minded by necessity (too many pathways). Vertebrate zoologists are best (Steinbeck, I think, described it in his book about the Sea of Cortez), in that you can count on them to be brutally consistent. Physicists - at least the one I know - like to talk about 'open systems' and such, but they (he) could just as plausibly be speaking about some totally contrived aliens.

I know it is unfair to ask LW-ers to spend time studying human anatomy in particular, but even a thorough look at a skeleton should give you a feel for how defined human bodies are. There are ridges on the bones. There are seams. Try to draw them, to internalize the feeling.

I'm sorry for the cavalier assuming of ignorance, but I think at least some of you can benefit from my words.

I am not sure what exactly you wanted to say. All I got from reading it is: "human anatomy is complicated, non-biologists hugely underestimate this, modifying the anatomy of human brain would be incredibly difficult".

I am not sure what the relation is to the following part (which doesn't speak about modifying the anatomy of the human brain):

It felt implied that 'rationality' was a culture of either hacking humanity, or patching together the best practices accumulated in the past

Are you suggesting that for increasing rationality, using "best practices" will not be enough, and changes in the anatomy of the human brain will be required (and we underestimate how difficult that will be)? Or something else?

I am not sure what exactly you wanted to say.

I read Romashka as saying that the clean separation between the hardware and the software does not work for humans. Humans are wetware which is both.

That, and that those changes in the brain might lead to other changes not associated with intelligence at all. Like sleep requirements, haemorrhages or fluctuations in blood pressure in the skull, food cravings, etc. Things that belong to physiology and are freely discussed by a much narrower circle of people, in part because even among biologists many people don't like the organismal level of discussion, and doctors are too concerned with not doing harm to consider radical transformations.

Currently, 'rationality' is seen (by me) as a mix of nurturing one's ability to act given the current limitations AND counting on vastly lessened limitations in the future, with some vague hopes of adapting the brain to perform better, but the basis of the hopes seems (to me) unestablished.

Would anyone actually be interested if I prepared a post about the recent "correlation explanation" approach to latent-model learning, the "multivariate mutual information"/"total correlation" metric it's all based on, supervenience in analytical philosophy, and implications for cognitive science and AI, including FAI?

Because I promise I didn't write that last sentence by picking buzzwords out of a bag.
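For anyone who wants a concrete handle on the metric before deciding: total correlation is just the sum of marginal entropies minus the joint entropy. A small self-contained sketch with a made-up joint distribution (not taken from the papers in question):

```python
# Total correlation TC(X1..Xn) = sum_i H(Xi) - H(X1..Xn): zero iff the
# variables are independent, maximal when they are fully redundant.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def total_correlation(joint):
    axes = range(joint.ndim)
    marginals = [joint.sum(axis=tuple(j for j in axes if j != i)) for i in axes]
    return sum(entropy(m) for m in marginals) - entropy(joint.ravel())

independent = np.full((2, 2, 2), 1 / 8)        # three independent fair bits
redundant = np.zeros((2, 2, 2))
redundant[0, 0, 0] = redundant[1, 1, 1] = 0.5  # three copies of one fair bit

print(total_correlation(independent))  # 0.0 bits
print(total_correlation(redundant))    # 2.0 bits
```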

Any LessWrongers in Taipei? I am there for a while, PM me and I will buy you a beer.

So I think I've genuinely finished http://gwern.net/Mail%20delivery now. It should be an interesting read for LWers: it's a fully Bayesian decision-theoretic analysis of when it is optimal to check my mail for deliveries. I learned a tremendous amount working my way through it: how to use JAGS much more effectively, how to do Bayesian model comparison & averaging, loss functions and EVSI and EVPI for decision-theory purposes, and even some dabbling in reinforcement learning with Thompson sampling/probability-matching.

I thought it was done earlier, but then I realized I had messed up my Thompson sampling implementation and also vectorspace alien pointed out that my algorithm for deciding what datapoint to sample for maximizing information gain was incorrect & how to fix it, and I have made a lot of other small improvements like more images.
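For anyone curious what the Thompson-sampling part looks like mechanically, here is a minimal Beta-Bernoulli sketch (my own toy framing, not gwern's actual model): treat each candidate mail-check time as a bandit arm, sample a plausible delivery rate for each arm from its posterior, and check at the time whose sample is highest.

```python
import random

# Beta-Bernoulli Thompson sampling / probability-matching, toy version.
# Arms = hypothetical candidate times to check the mail; reward = a
# delivery was waiting. true_p is the simulator's hidden ground truth.
true_p = [0.2, 0.5, 0.7]
alpha = [1.0] * len(true_p)  # Beta posterior: successes + 1
beta  = [1.0] * len(true_p)  # Beta posterior: failures + 1

for t in range(2000):
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(len(true_p))]
    i = samples.index(max(samples))          # act on the best sampled rate
    reward = 1 if random.random() < true_p[i] else 0
    alpha[i] += reward
    beta[i]  += 1 - reward

# Posterior means concentrate near true_p for the arms it keeps pulling.
print([round(a / (a + b), 2) for a, b in zip(alpha, beta)])
```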

Why does E. Yudkowsky voice such strong priors, e.g. with respect to the laws of physics (the many-worlds interpretation), when much weaker priors seem sufficient for most of his beliefs (e.g. weak computationalism/computational monism) and wouldn't make him so vulnerable? (By 'vulnerable' I mean that his work often gets ripped apart as cultish pseudoscience.)

You seem to assume that MWI makes the Sequences more vulnerable; i.e. that there are people who feel okay with the rest of the Sequences, but MWI makes them dismiss it as pseudoscience.

I think there are other things that rub people the wrong way (that EY in general talks about some topics more than appropriate for his status, whether it's about science, philosophy, politics, or religion) and MWI is merely the most convenient point of attack (at least among those people who don't care about religion). Without MWI, something else would be "the most controversial topic which EY should not have added because it antagonizes people for no good reason", and people would speculate about the dark reasons that made EY write about that.

For context, I will quote the part that Yvain quoted from the Sequences:

Everyone should be aware that, even though I’m not going to discuss the issue at first, there is a sizable community of scientists who dispute the realist perspective on QM. Myself, I don’t think it’s worth figuring both ways; I’m a pure realist, for reasons that will become apparent. But if you read my introduction, you are getting my view. It is not only my view. It is probably the majority view among theoretical physicists, if that counts for anything (though I will argue the matter separately from opinion polls). Still, it is not the only view that exists in the modern physics community. I do not feel obliged to present the other views right away, but I feel obliged to warn my readers that there are other views, which I will not be presenting during the initial stages of the introduction.

Everyone please make your own opinion about whether this is how cult leaders usually speak (because that seems to be the undertone of some comments in this thread).

Because he was building a tribe. (He's done now.)

edit: This should actually worry people a lot more than it seems to.

My model of him has him having an attitude of "if I think that there's a reason to be highly confident of X, then I'm not going to hide what's true just for the sake of playing social games".

Maybe it's just the particular links I have been following (acausal trade and blackmail, AI boxes you, the Magnum Innominandum), but I keep coming across the idea that the self should care about the well-being (it seems to always come back to torture) of one or of a googolplex of simulated selves. I can't find a single argument or proof of why this should be so. I accept that perfectly simulated sentient beings can be seen as morally equal in value to meat sentient beings (or, if we accept Bostrom's reasoning, that beings in a simulation other than our own can be seen as morally equal to us). But why value the simulated self over the simulated other? I accept that I can care in a blackmail situation where I might unknowingly be one of the simulations (à la Dr Evil, or the AI that boxes me), but that's not the same as inherently caring about (or having nightmares about) what may happen to a simulated version of me in the past, present, or future.

Any thoughts on why thou shalt love thy simulation as thyself?

While browsing the Intelligence Squared upcoming debates, I noticed two things that may be of interest to LW readers.

The first is a debate titled "Lifespans are long enough", with Aubrey de Grey and Brian Kennedy of the Buck Institute for Research on Aging arguing against Paul Root Wolpe from the Emory Center for Ethics and another panelist TBA. The debate is taking place in early February.

The second, and of potentially more interest to the LW community, is taking place on March 9th and is titled "Artificial Intelligence: The risks outweigh the rewards". All 4 speakers for and against the motion are presently unannounced.

I am a long time watcher of Intelligence Squared debates and recommend them highly. I believe others in the LW community have referred to specific debates in the past. The moderator is quite talented and encourages interesting discourse, and is often successful in steering parties away from stringing series of applause lights together.

Both the moderator and the founder of the debates have indicated previously that commentary and suggestions from the public have influenced the questions asked and the experts brought on to argue. I have also had positive responses to previous suggestions I made to IQ in relation to other debates. I have emailed them already with some suggestions about who I think would provide interesting commentary and perspectives on the debate, along with links to some useful 'background briefing' documents that they may wish to add to the resources attached to the debate. I suggest that others doing the same might increase the quality of discourse in a debate that is likely to rank highly in people's Google and YouTube searches in the future.

Generally speaking, the videos from Intelligence Squared are uploaded to their YouTube account fairly soon after the live stream.

Buck Kennedy

Brian Kennedy. Note that he's on the "Against" side with Aubrey, as makes sense given the Buck Institute's goal to "extend help towards the problems of the aged."

PSA: I had a hard drive die on me. Recovered all my data, with about 25 hours of work in total for two people working together.

Looking back on it, I doubt many things could have convinced me to improve my backup systems. Short of working in the cloud, even my best possible backups would probably have lost at least the last two weeks of work.

I am taking suggestions for best practices, but also a shout-out to backups: given it's now a new year, you might want to back up everything from before 2016 right now, then work on a solid backup system.

(Either that, or always keep 25 hours on hand to manually run a ddrescue process on separate sectors of a drive, unplugging and replugging it between each read until you get as much data out as possible, staying up until 5am for a few nights trying to scrape the entropy back from the bits...) I firmly believe that with the right automated system it would take far less than 25 hours of effort to maintain.

bonus question: what would convince you to make a backup of your data?

Use a backup system that automatically backs up your data, and then nags at you if the backup fails. Test to make sure that it works.
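A minimal sketch of what that could look like if you roll your own (the rsync source/destination paths are placeholders, and printing to stderr relies on cron emailing you job output; substitute whatever notifier you actually use):

```python
#!/usr/bin/env python3
# Minimal "back up automatically, nag on failure" sketch: run from cron.
# SRC/DST are placeholders; rsync -a --delete mirrors SRC to DST.
import subprocess
import sys

SRC = "/home/me/work/"              # hypothetical directory to protect
DST = "backup-host:/backups/work/"  # hypothetical remote destination

result = subprocess.run(["rsync", "-a", "--delete", SRC, DST])
if result.returncode != 0:
    # Nag: cron mails anything on stderr; swap in any other notifier here.
    print("BACKUP FAILED: rsync exit %d" % result.returncode, file=sys.stderr)
    sys.exit(1)
```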

For people who don't want to or can't run their own, I've found that Crashplan is a decent one. It's free if you only back up to other computers you own (or other people's computers); in my case I've got one server in Norway and one in Ireland. There have, however, been some doubts about Crashplan's correctness in the past.

There are also about half a dozen other good ones.