Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]
http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html
Very surprised no one has linked to this yet.
TL;DR: AI is a very underfunded existential risk.
Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm very pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm toward the cause. It's also too bad this ran in the Huffington Post rather than somewhere more respectable. With more thought, I think we could have made the list of authors more inclusive and found a better publication; still, I think this is pretty huge.
This article was pretty lacking in actual argument. I feel like if I hadn't already been concerned about AI risk, reading it wouldn't have changed my mind. Still, the fact that the authors are high-powered authority figures makes it somewhat significant.
The argument is simply an argument from authority. But what more could you reasonably expect in six paragraphs, given the enormous inferential distance between physicists/computer scientists and the readers of the Huffington Post?