Open thread, 7-14 July 2014

Previous thread

 

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments


Guardian: Scientists threaten to boycott €1.2bn Human Brain Project:

The European commission launched the €1.2bn (£950m) Human Brain Project (HBP) last year with the ambitious goal of turning the latest knowledge in neuroscience into a supercomputer simulation of the human brain. More than 80 European and international research institutions signed up to the 10-year project.

But it proved controversial from the start. Many researchers refused to join on the grounds that it was far too premature to attempt a simulation of the entire human brain in a computer. Now some claim the project is taking the wrong approach, wastes money and risks a backlash against neuroscience if it fails to deliver.

In an open letter to the European commission on Monday, more than 130 leaders of scientific groups around the world, including researchers at Oxford, Cambridge, Edinburgh and UCL, warn they will boycott the project and urge others to join them unless major changes are made to the initiative.

[...] "The main apparent goal of building the capacity to construct a larger-scale simulation of the human brain is radically premature," Peter Dayan, director of the computational neuroscience unit at UCL, told the Guardian.

Open message to the European Commission concerning the Human Brain Project now with 234 signatories.

Finally, scientists speaking up against sensationalistic promises and project titles...

I have tried some online lessons from Udacity and Coursera, and this is my impression so far:

Udacity's system is great, but there is little content. The content made by founder Sebastian Thrun is great, but the content made by other authors is sometimes much less impressive.

For example, some authors don't even read the feedback on their lessons. Sometimes they make a mistake in a lesson or in a test, the mistake is debated in the forum, and... a year later, the mistake is still there. They wouldn't even need to change the lesson video; a single paragraph of text below the video would be enough. (In one programming lesson, you had to pass a unit test which sometimes mysteriously crashed. The crash wasn't caused by anything logical, like using too much time or memory; it was a bug in the test itself. In the forum, students gave each other advice on how to avoid the bug. It could probably have been fixed in five minutes, but the author didn't care.) The lesson is that you can't treat online education as "fire and forget", but some authors apparently underestimate this.

Coursera is the opposite: it has a lot of content, on almost any topic, but the system feels irritating to me. It doesn't fully use interactivity, which in my experience helps with paying attention. For example, on Coursera you get five videos of 15 to 30 minutes each, followed by some homework (depending on the course). On Udacity, the videos are interrupted every two or three minutes to ask you a simple question.

Some of the lessons require peer assessment, which means you write your answers in plain text and then grade the answers of other users. This is a waste of time: because it requires some redundancy, you have to complete the test, wait a week, then read the peer-assessment guidelines and rate five random tests by other people... even though in most cases the grading could be done automatically (by choosing an option, entering a number, or entering a string that is matched against a regexp). Very annoying. It also means you have to take the class at the same time as everyone else; if you try it a few months later, you don't get the full experience.
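The "could be done automatically" idea above can be sketched in a few lines. This is just an illustration, not any platform's actual grader; the question data and answer formats are made up:

```python
import re

# Hypothetical auto-grader: each question is checked either by matching a
# multiple-choice option, comparing a number within a tolerance, or matching
# the free-text answer against a regular expression.
QUESTIONS = [
    {"type": "choice", "answer": "b"},
    {"type": "number", "answer": 42.0, "tolerance": 0.01},
    {"type": "regexp", "answer": re.compile(r"^\s*O\(n\s*log\s*n\)\s*$", re.I)},
]

def grade(question, response):
    """Return True if the response is correct for this question."""
    if question["type"] == "choice":
        return response.strip().lower() == question["answer"]
    if question["type"] == "number":
        try:
            return abs(float(response) - question["answer"]) <= question["tolerance"]
        except ValueError:
            return False
    if question["type"] == "regexp":
        return question["answer"].match(response) is not None
    return False

responses = ["b", "42", "O(n log n)"]
score = sum(grade(q, r) for q, r in zip(QUESTIONS, responses))
print(score)  # 3
```

Nothing here needs a human in the loop or a one-week waiting period.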

Both sites provide free and paid certificates. With the paid certificate, you have some Skype exams to prove it was really you who did the lessons; the free certificate just means you did the exercises and receive a PDF. On Udacity, you can get the free certificate anytime. On Coursera, you get the free certificate only if you do the lesson at the same time as everyone else. So if you are interested in a topic and the lesson ran a year ago, you can still do it... but you won't even get the free certificate. I know the free certificates are only symbolic, but still: on Udacity I can get them while learning at my own pace; on Coursera there is a lot of lost purpose involved.

Thus... I wish all the content from Coursera were ported to Udacity. Or that Coursera switched to the system Udacity uses. Or that someone else combined the best aspects of both.

In a weird dance of references, I found myself briefly researching the "Sun Miracle" of Fatima.
From the point of view of a mildly skeptical rationalist, it's already bad that almost everything written that we have comes from a single biased source (the writings of De Marchi), and also bad that some witnesses, believers and non-believers alike, reported not having seen any miracle. But what aroused my curiosity is something else: if you skim the witness accounts, they report the most diverse things. If you OR the accounts together, what comes out is a real freak show: the sun revolving, emitting strobe lights, dancing in the sky, coming close to the earth and drying the soaking-wet attendees.
If you instead AND the accounts, the only consistent element is this: the 'sun' was spinning. To which I say: what? How can something with rotational symmetry be seen spinning? The only possible answer is that some optical element broke the symmetry, but I have been unable to find out what that element was. Does anyone know anything about it?
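The OR/AND metaphor can be made literal by treating each account as a set of claimed phenomena; OR is then set union and AND is set intersection. The claims below are paraphrased from the description above, not exact witness data:

```python
# Each witness account as a set of claimed phenomena (illustrative only).
accounts = [
    {"spinning", "strobe lights", "dancing in the sky"},
    {"spinning", "approaching the earth", "drying clothes"},
    {"spinning", "dancing in the sky"},
]

# OR of the accounts: everything anyone reported (the "freak show").
union = set.union(*accounts)

# AND of the accounts: only what every account agrees on.
intersection = set.intersection(*accounts)

print(intersection)  # {'spinning'}
```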

The human brain is capable of registering "X is moving" without being able to point to "X was over here and is now over there". This can happen visually with the rotating snakes illusion, or acoustically with Shepard tones, for instance. It's also pretty common on some psychedelic drugs.

This is the outline of a conversation that took place no fewer than 14 times on Friday just past, between me and a number of close friends.

"Life is like an RPG. Often, a wise, kind, and and deeply important character (hand gesture to myself) gives a quest item to a lowly, unsuspecting, otherwise plain character (hand gesture to friend). As a result of this, this young character goes on to be a great hero in an important quest.

Now, here with me today, I have a quest item.

For you.

But I can only give it to you if you shake on the following oath; that, once you have finished with this item, when you have taken what you require from it, that then, you too shall find someone for whom this will be of great utility, and pass it along. They must also shake on this oath."

"I will."

Handshake occurs.

"Here is your physical copy of the first 16 and a half chapters of 'Harry Potter and the Methods of Rationality'."

Spoilers: after a tedious chain of deals, your friend's going to end up with half an oyster shell sitting in their inventory and no idea what to do with it.

Abstract: It is frequently believed that autism is characterized by a lack of social or emotional reciprocity. In this article, I question that assumption by demonstrating how many professionals—researchers and clinicians—and likewise many parents, have neglected the true meaning of reciprocity. Reciprocity is “a relation of mutual dependence or action or influence,” or “a mode of exchange in which transactions take place between individuals who are symmetrically placed.” Assumptions by clinicians and researchers suggest that they have forgotten that reciprocity needs to be mutual and symmetrical—that reciprocity is a two-way street. Research is reviewed to illustrate that when professionals, peers, and parents are taught to act reciprocally, autistic children become more responsive. In one randomized clinical trial of “reciprocity training” to parents, their autistic children’s language developed rapidly and their social engagement increased markedly. Other demonstrations of how parents and professionals can increase their behavior of reciprocity are provided.

— Morton Ann Gernsbacher, "Towards a Behavior of Reciprocity"

The paper cites several examples of improvements to autistic children's social development when non-autistic peers, parents, or teachers are trained to behave reciprocally towards them. This one particularly caught my eye (emphases added):

In 1986 researchers taught four typically developing preschoolers to either initiate interaction with three autistic preschoolers or to respond to the interaction that the three autistic preschoolers initiated, in other words, to be reciprocal (Odom & Strain, 1986). Which intervention had the more lasting influence on the autistic preschoolers’ social interaction? When the typically developing preschoolers were taught to respond to the interaction that the autistic preschoolers initiated, the autistic preschoolers responded more frequently. In other words, when the typically developing preschoolers behaved reciprocally, the autistic preschoolers responded more positively.

Quantified-self biohacker-types: what wearable fitness tracker do I want? Most will meet my basic needs (sleep, #steps, Android-friendly), but are there any on the market with clever APIs that I can abuse for my own sick purposes?

I think the main thing the Facebook emotional-contagion experiment highlights is that our standard for corporate ethics is overwhelmingly lower than our standard for scientific ethics. Facebook performed an A/B test, just as it and similar companies do all the time, but because this one was done in the name of science, we recognized that it was not up to the usual ethical standards. By comparison, there is no review board for the ethics of advertisements and products. If something is too dangerous, it will result in lawsuits. If it is offensive, it will be censored. But something that would be unethical in science, like devoting millions of dollars and millions of experimental-subject-hours to engineering a sugar-coated, money-sucking Skinner box, won't make anyone bat an eye.

I think the core issue is a lack of understanding of how modern technology works. Facebook performed an A/B test, and no one who knows how the internet works should be surprised.

On the other hand, there are a bunch of people who don't realize that web companies run thousands of A/B tests. Those people were surprised to read about the study.
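For anyone in the surprised group, the mechanics are mundane. A minimal sketch of how a site might deterministically assign users to an A/B test (this is a generic pattern, not Facebook's actual implementation; all names are illustrative):

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, buckets=("A", "B")) -> str:
    """Deterministically assign a user to a test bucket.

    Hashing the user id together with the experiment name means each user
    always sees the same variant of a given experiment, while different
    experiments split users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return buckets[int(digest, 16) % len(buckets)]

# The same user always lands in the same bucket for a given experiment:
variant = assign_bucket("user42", "feed_ranking")
print(variant)
```

Once users are bucketed, the site serves each bucket a different version and compares engagement metrics; the emotion study differed only in what it varied and in being published.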

Hey guys, so, I'm dumb and am continuing to attempt to write fiction. I figured I would post an excerpt first this time so people can point out glaring problems before I post anything to Discussion. I've changed some of the premise (as can be seen most obviously in the title); I'm moving away from LessWrong parody and toward self-parody, mostly because Eliezer's followers are really whiny and it was distracting from the actual ideas I was trying to convey. The premise is now less disingenuous about basically being a self-insert fic. Also, I've tried to incorporate some of the implicit suggestions I received, especially complaints that the first chapter was too in-jokey, pseudo-clever, and insufficiently substantive. This isn't the whole chapter, just the first part of a first draft. Criticism appreciated!

Harry Potter-Newsome and the Methods of Postrationality: Chapter Two: Analyzing the Fuck out of an Owl: Excerpt

Harry let out a long sigh and addressed the owl with mocking eyes.

"So, owl. About this 'Hogwarts'. Are there other magical schools out there that I might attend?"

The owl cocked its head. "Why are you asking me? I'm an owl," said the owl in a voice that sounded like an impossibly rapid sequence of hoots.

"Oh come on. We both know you're needed for the exposition."

The owl hooted regretfully. "Fine. Yes, there are other schools. But you should really be asking more interesting questions. Or perhaps I should lead. How did you know to talk to me?"

Harry flashed a look of disappointment. "Although it pains me to say it, I just figured this is the sort of story with talking animals."

"Pray tell, Mr. Potter, why do you think this is a story in the first place? Most humans who think so are what we owls like to call 'batshit insane'."

Harry sighed. This owl is stupid or a troll or both; nonetheless, for the sake of the story, I should probably just go along with it, he thought. "Let's start with the basics. Riddle me this: how on Earth does someone get a lightning-bolt-shaped scar? Have you ever seen a utensil with a suitably shaped prong? Does an otherwise sane mother decide one day that lightning bolt tattoos are just too expensive and so she should carve her infant son's forehead with a kitchen knife?"

The owl glanced at Harry's forehead, and for the first time appeared to be intrigued. "Maybe a neo-Inglorious-Basterd took you as genetically inclined toward Zeus worship and decided they wouldn't let you hide your depraved Paganism so easily."

"I hadn't thought of that," admitted Harry.

"Or perhaps your parents just read way too much Harry Potter."

Harry was distraught. "Harry Potter? What, am I a book now?"

The owl paused for a long moment, somehow grimaced, looked downwards, and placed the tip of its wing on its forehead.

[...]

I'd recommend writing five or so chapters and then posting a link. The fic as you're posting it just feels meta for the sake of meta (charitably, because your narrative is still winding up). I'd be more likely to read/upvote if plot were already happening.

That makes sense; to be honest, I generally don't have a high opinion of narratives and mostly view them as excuses for authors to write about characters and settings and spew insights and jokes. (I also mean this in the metaphorical post-structuralist sense.) This might be why my fiction is so much worse than my nonfiction writing.

Merging traditional Western occultism with Bayesian ideas seems to produce some interesting parallels, which may be useful psychologically/motivationally. Anyone care to riff on the theme?

Eg: "The Great Work" is the Most Important Thing that you can possibly be doing.

Eg, tests to pass and gates to go through in which a student has to realize certain things for themselves, as opposed to simply being taught them: from pre-membership requirements like learning basic arithmetic and physics, to the initial initiation of joining the Bayesian Conspiracy, to an early gate of becoming a full atheist, to a higher gate of, say, making arrangements to be brought back from the dead. (Possibly the highest level would be making arrangements to be brought back from the dead /without/ anyone else's help...)

That makes a bit of sense. The occultists fancied themselves scientists, back when that wasn't as clearly defined a term as it is now, and they rummaged through lots of traditions looking for bits to incorporate into their new (claimed-to-be-old) culture. But computer game design had all the same sources to draw from, greater manpower, and vastly more cultural impact. I would expect "almost any" useful innovation the occultists came up with to be contained in computer games.

This is true for both of your examples: "winning the game" and skill trees, respectively. And skill trees are better than initiation paths, because they aren't fully linear while still creating motivation to go further.

Compare the rules of how to play more like a PC, less like an NPC.

I say "almost any" because an exception may be fully immersed, bodily ritual stuff. Maybe that can hammer things down into system 1 that you simply don't "get" the same way when you just read them.

I spoke with someone recently who asserted that they would prefer a 100% chance of getting a dollar to a 99% chance of getting $1,000,000. Now, I don't think they would actually do this if the situation were real; i.e., if they had $1,000,000 and there was a 1-in-100 chance that it would be lost, they wouldn't pay someone $999,999 to do away with that probability and thereby guarantee themselves the $1. But they think they would do that. I'm interested in what could cause someone to think that. I actually have a little more information from asking a few more questions, but I'd like to see what others think without knowing the answer.

My own thoughts: this may be related to the Allais paradox. It also trivially implies two-boxing in Newcomb's problem.

Some more questions raised:

What arguments might I make to change this person's mind?

Would it be ethical, if I had to make this choice for them, to choose the $1,000,000? What about an AI making choices for a human with this utility function?
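For reference, the gap between the two gambles is enormous even under strong risk aversion. A quick check of expected value and, as an illustration, expected log utility (the $1,000 baseline wealth is an arbitrary assumption for the utility curve, not something from the conversation):

```python
import math

# The two gambles: a sure $1 versus a 99% chance of $1,000,000.
baseline = 1_000  # assumed existing wealth, needed so log utility is defined

ev_sure = 1.00 * 1
ev_gamble = 0.99 * 1_000_000  # 990,000

# Expected log utility of final wealth under each choice:
eu_sure = math.log(baseline + 1)
eu_gamble = 0.99 * math.log(baseline + 1_000_000) + 0.01 * math.log(baseline)

assert ev_gamble > ev_sure
assert eu_gamble > eu_sure  # even a risk-averse log utility prefers the gamble
```

So no standard concave utility function over money rationalizes the stated preference; whatever is going on is something other than ordinary risk aversion.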

I spoke with someone recently who asserted that they would prefer a 100% chance of getting a dollar to a 99% chance of getting $1,000,000. Now, I don't think they would actually do this if the situation were real; i.e., if they had $1,000,000 and there was a 1-in-100 chance that it would be lost, they wouldn't pay someone $999,999 to do away with that probability and thereby guarantee themselves the $1

Losing money and gaining money are not the same thing. Most humans use heuristics that treat the two cases differently. If you want to understand someone, you shouldn't equate the cases even if they look the same in your utilitarian assessment.

What happened to Will Newsome's drunken HPMOR send-up? Did it get downvoted into oblivion?

I checked Will Newsome's page. There seems to have been a failed effort to move it to Main.

Has anyone read The Artificial Intelligence Revolution by Louis Del Monte?

Suppose someone's life plan was to devote themselves largely to making money until they were in, say, the top 10% in cumulative income. They also did not plan to save money to any unusual extent.

Then, after that was accomplished, they would switch goals and devote themselves to altruism.

Given that the person today is able to make the money and resolves to do this, I wonder what people here think the chance is of their following through. For example, fluid intelligence declines over time. So by the time you're 60 years old, have made your money, and have kids, will you really be smart enough to change direction diametrically and have much impact? Maybe Bill Gates has enough brain cells, but his IQ might be 160. And maybe you'll just forget about altruism and learn to enjoy nice cars more.

It doesn't seem that unusual for rich people to become more charitable as they get older, though perhaps I'm just hearing about the famous ones. I assume a large part of it is feeling as though one has solved the money-making game, and now it's time to do something new. (Rich people getting into politics is probably similar.)

Is anything known about how to maintain fluid intelligence?