Better late than never, a new open thread. Even with the discussion section, there are ideas or questions too short or inchoate to be worth a post.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.


Oh, I've been aching to announce to people who wouldn't find it absolutely insane or unthinkable!

After being convinced that it isn't just something insane rich people do out of hubris, debating a bunch of my friends, reading all the documentation I could, listening to the horror stories from This American Life, and doing oodles of paperwork, I am now officially one of the potentially immortal. I am a pre-cryonaut.

Hooray! I can't wait for the post-cryo meetups - though of course plan A is to live long enough to live forever...

I can't wait for the post-cryo meetups

Well, I suppose you don't have to... or at least, you don't have to experience waiting... but I rather wish you would.

Yes, plan A is definitely to wait as long as possible. :)

Some years ago (at least 2 or 3), I read a long article somewhere in which a study or two looked at prominent figures - politicians, Hollywood stars, that sort of thing - as well as people in the arts and sciences, and mentioned that a lot of them had childhoods filled with neglect or abuse. I think the author also suggested that this was not mere correlation but causation, running from the abuse to the later prominence.

Unfortunately, I can't seem to remember where I read this; I don't find it in my Evernote, and Google searches don't help either. I thought I might've read it in The Atlantic, but looking through a few hundred hits there, I didn't find anything.

If this rings any bells for anyone, I'd appreciate a pointer. (I was going to write a Hansonesque article arguing for systematized child abuse/neglect of smart kids.)

EDIT: Also tried asking the Sociology subreddit.

Talk about creating perverse incentives!

Recently, an acquaintance asked me whether I believed in destiny. I told her I didn't, and she told me a long story boiling down to this: someone she knows was prevented, by a series of improbable accidents, from getting on a plane. The plane then crashed, killing this person's entire family.

"How do you explain that?" she asked. "I don't really see the need for an explanation," I said.

I relayed the cached wisdom that there are billions of people on Earth, all living eventful lives, and therefore we can expect one-in-a-billion experiences to occur daily.

It just occurred to me that people in this sort of situation are subject to something very similar to the red/green paradox first discussed here. Suppose 10^9 people have a commonly agreed-on model of the universe (each person assigns it a probability of about 10^-3 of being false). This model says that, each time you wake up, there is precisely a 10^-9 chance of discovering you have been transformed into a giant insect, with the chance being independent for each person for each day. If we read in the news that someone was transformed, we don't update against the model being true--the model predicts that about one person should be transformed each day.

On the other hand, if you yourself wake up to find yourself transformed into a giant insect, it is tempting to say that you should update against the model, since it is more likely that the model underestimates the chances of this happening than that you have experienced a 1 in 10^9 event. Indeed, if someone within 2 degrees of separation from you is transformed, it seems you should update.
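To make the asymmetry concrete, here is a quick Bayes calculation (a rough sketch in Python; the 10^-6 "true rate if the model is wrong" is my own illustrative assumption, not part of the setup above):

import math

p_wrong = 1e-3      # prior that the shared model is wrong
rate_true = 1e-9    # per-person, per-day transformation chance if the model is right
rate_wrong = 1e-6   # assumed per-person, per-day chance if the model is wrong (illustrative)
n = 10**9           # population size

# Reading in the news that at least one person was transformed today:
p_news_true = 1 - math.exp(-n * rate_true)    # ~0.63
p_news_wrong = 1 - math.exp(-n * rate_wrong)  # ~1.0
post_news = p_wrong * p_news_wrong / (p_wrong * p_news_wrong + (1 - p_wrong) * p_news_true)
# post_news ~ 0.0016: the news report barely moves you off the 10^-3 prior.

# Waking up transformed yourself:
post_self = p_wrong * rate_wrong / (p_wrong * rate_wrong + (1 - p_wrong) * rate_true)
# post_self ~ 0.5: roughly a 500-fold update against the model.

On these (assumed) numbers, the news report is nearly useless as evidence, while the first-person observation takes you from 10^-3 to about even odds - which is exactly the asymmetry that produces the skeptics below.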

Such a population could experience a long period of statistical adherence to the model, yet contain a growing population of skeptics who believe that a lot of transformations are unreported or covered up.

Is this, generalized, the situation we actually find ourselves in with respect to what's usually called "belief in the supernatural"?

Was re-reading Think Like Reality and came upon the following quote from Eliezer - emphasis mine.

The same optimization process that built your retina backward and then routed the optic cable through your field of vision, also designed your visual system to process persistent objects bouncing around in 3 spatial dimensions because that's what it took to chase down tigers. But "tigers" are leaky surface generalizations - tigers came into existence gradually over evolutionary time, and they are not all absolutely similar to each other. When you go down to the fundamental level, the level on which the laws are stable, global, and exception-free, there aren't any tigers. In fact there aren't any persistent objects bouncing around in 3 spatial dimensions. Deal with it.


For those who don't get the reference: http://knowyourmeme.com/memes/deal-with-it

(Though if you didn't already know of that and/or don't care for silly internet meme humour, then reading the linked page will probably not cause you to find the above comment funny. Deal with it.)

I've reviewed Jason Rosenhouse's book on the Monty Hall problem. Overall summary: The book is very good and does an interesting job of discussing not just the original problem but variations of it, as well as what reactions to the problem can teach us about how humans estimate probabilities.

My university recently held an event involving a presentation and book signing by Ray Kurzweil and a screening of The Singularity is Near. Fangirling about meeting Ray and getting my book signed aside, I thought I would give a short description of the movie and my opinion of it.

The film is mostly made of interviews with big names, so most of the time you will have somebody's head on-screen talking, whether it's Ray's, Aubrey de Grey's, Eric Drexler's, or Eliezer Yudkowsky's. The movie covers a wide variety of singularitarian and transhumanist topics (including a small debate on life extension), and although the material is a bit basic for someone who has already spent a lot of time reading sites like Less Wrong and The Transhumanist Wiki, I quite enjoyed it.

Some of the scenes make use of spiffy CGI, including the opening and a couple of illustrations of nanobots. The biggest use of CGI by far occurs in the B-Plot, though, which consists of several interspersed short scenes dealing with an artificial intelligence named Ramona who goes from being a mechanical paper doll to a Second Life bot to a wholly sentient virtual being as the decades pass. I thought this plot thread was the weakest part of the movie; while it was entertaining and it illustrated ideas like editing one's own thinking process (Ramona cures herself of her fear of mice), several of the scenes were rather narmful (WARNING: TVTropes) and the overall sci-fi feel may have hurt the movie's credibility with people who would have otherwise taken it seriously.

So, to conclude, I asked a friend what she thought of the movie, and she said that it had been "too flashy" for her taste.

Can anyone link a comment or quote somewhere on LW saying something like "there are things that we can't imagine, but that we can imagine imagining, and we confuse that with actually imagining them"? Possible examples would be philosophical zombies, actual infinities, uncomputable physics, mathematical inconsistency, and certain forms of objective morality; I vaguely remember the original thread having to do with theology.

Question inspired by the Reddit thread containing this comment, although I probably won't end up posting there.

I can't find it there, and I remember it being a comment rather than a top-level post. (In hindsight I should've asked you for the specific line in the first place rather than searching through an enormous post for it myself, but that's hindsight.)

Audience member video of Watson taking on Jennings and another at Jeopardy: http://www.youtube.com/watch?v=hR528D64rpM&feature=player_embedded#!

So if 14 years ago it was chess, and approximately now it is Jeopardy, what's an example from the class of challenging high profile targets 14 years from now? Turing Test?

Here's a better quality video. And this is another video in which the people at IBM talk about developing Watson and why they chose to do Jeopardy.

Watson apparently refines its notion of the kinds of answers that are expected under a given category as it accumulates previous answers. The human contestants could exploit this by starting with the higher-dollar questions. I'll be curious to see if they do.

There's a detailed chart of the performance over time of the system here.

Proving novel theorems?

Significantly increases my confidence that I will win my bet with Eliezer.

I still have my doubts about cryonics. I believe people here are a bit too optimistic about the future. How confident are you that the “molecular nanotechnology” necessary to repair cells will be developed within 100 or 200 years? If Alcor had been founded in 1800, would it have survived the industrial revolution and both world wars?

As for neuropreservation, is it really that easy to grow a new body? There is a big difference between fixing some broken cells and creating a whole body from scratch. Even if it's possible, it'll probably be much more expensive (and thus you'll be less likely to get revived). And unless the new body is exactly like the old one, your motor system will be screwed up.

And you need rejuvenation technology too. Alcor claims that "By the time it becomes possible to revive cryonics patients, especially today's cryonics patients, biological aging as we know it today will not exist". I don't know how likely that is, but there is a difference between stopping aging and rejuvenating. What if they find a simple DNA mutation that stops aging, but it can only be applied before birth? In the worst case you'll wake up and die again a few weeks later. You may be lucky and only have to spend a few decades in a 90-year-old body.

The Big Short is a really fun book about the financial crisis of last decade. It's a good followup to Eliezer's Just Lose Hope Already.

I was asked what the mainstream thinks of AI Risk. My understanding is that the only comment on the subject from "mainstream" AI research is a conference report that says something like "Some people think there might be a risk from powerful AI, but there isn't." This was discussed here on LW, but obviously searching for it given only that information is pretty much impossible, so help would be much appreciated - thanks!

Hanson, and then you, posted a link to the AAAI Panel on Long-term AI Futures (also discussed here).

From "Interim Report" (Aug 2009):

The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes, sharing the rationale for the overall comfort of scientists in this realm, and for the need to educate people outside the AI research community about the promise of AI for enhancing the quality of human life in numerous ways, coupled with a re-focusing of attention on actionable, shorter-term challenges.

Pursuing this further, I emailed focus group chair Professor David McAllester to ask if there had been any progress in "sharing the rationale". He replied:

The wording you mention in the report was supported by many people. However, I personally think the possibility of an AI chain reaction in the next few decades should not be dismissed. I am trying my very hardest to make it happen.

(I have his permission to share that.)

AAAI ex-president Eric Horvitz seems ambivalent here:

Horvitz doubts that one of these virtual receptionists could ever lead to something that takes over the world. He says that's like expecting a kite to evolve into a 747 on its own.

So does that mean he thinks the singularity is ridiculous?

Mr. HORVITZ: Well, no. I think there's been a mix of views, and I have to say that I have mixed feelings myself.

Impressed - how did you find this? I'm also impressed I managed to forget something I myself re-posted. Thanks!

Why robots won't rule. See also the links here.

Alon Halevy, a faculty member in the University of Washington's computer science department and an editor at the Journal of Artificial Intelligence Research, said he's not worried about friendliness.

"As a practical matter, I'm not concerned at all about AI being friendly or not," Halevy said. "The challenges we face are so enormous to even get to the point where we can call a system reasonably intelligent, that whether they are friendly or not will be an issue that is relatively easy to solve."

"There's certainly a finite chance that the whole process will go wrong - and the robots will eat us." - Hans Moravec, here.

I've had an idea for a "What I've learned from PUA" post bouncing around in my head for some time now. I would talk about what I've learned about the psychological differences between men and women and what that means for the dating market, NOT specific tips/tricks or anything like that. Would this be too controversial? I didn't participate in the PUA debates before - they're what inspired me to learn about PUA, to be honest - so I don't know if I would be crossing a line.

There are enough examples of people making broad general claims about how men and women think and behave that aren't particularly supported by data that such a post faces, I expect, something of an uphill climb.

Put another way: for some readers (myself included), the prior probability that any given post on the subject is just another attempt to support pre-existing social preconceptions with pragmatic-sounding but ultimately ungrounded assertions is pretty high, so overcoming that prior with evidence is important.

That said, IMHO a post that is demonstrably grounded in actual data, genuinely relevant to questions of how people think and behave, and at least somewhat novel ought to garner more approval than disapproval.

Here are some LessWrong RSS feeds aggregated on a single page:

http://www.netvibes.com/lesswrong

Scripts for LW I'd like to see someone write: something that blocks the appearance of the four recent-items sidebars, in the spirit of reducing shiny distraction.

Firefox's Adblock Plus add-on has a supporting add-on called Element Hiding Helper that is helpful for situations such as these. You just press Ctrl+Shift+K and you can block pieces of websites as you see fit, or you can directly add the following filters under My Element Hiding Rules:

lesswrong.com###side-comments
lesswrong.com###side-posts
lesswrong.com###recent-wiki-edits

I haven't found a way to block the "New on Overcoming Bias" section without removing the right bar completely, though.

Just set it up on my laptop and it seems to work; thanks. (I think Firefox 4 changes the combination from Ctrl+Shift+K to Ctrl+Shift+S, though.)

The latest Scenes from a Multiverse (the comic plus the news post below) discusses rationality.

Help request: is it possible to draw tables in top-level posts?

More generally, is it possible to get any sort of formatting in top-level posts other than that which is available from the toolbar? What sort of markup is used for top-level posts?

If the answer is no, I can probably manage, but I think it would make my life easier if the answer is yes.

Top-level posts use HTML, which supported tables last time I checked.
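For instance, something like this minimal table markup ought to work in the HTML editor (a sketch with placeholder content, not tested against LW's current editor):

<table border="1">
  <tr><th>Heading A</th><th>Heading B</th></tr>
  <tr><td>cell 1</td><td>cell 2</td></tr>
</table>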

Yes, thanks, the problem I was actually having was that I hadn't spotted the "edit HTML" button in the toolbar for editing top-level posts.


Andrew Gelman's book on Bayesian Statistics has been banned in China. Apparently Bayes' theorem is "politically sensitive material." Gelman notes that that last sentence might not be true, but I really hope it is.