Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]
http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html
Very surprised no one has linked to this yet:
TL;DR: AI is a very underfunded existential risk.
Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm greatly pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm to the cause. Also, too bad this ran in the Huffington Post rather than somewhere more respectable. With a bit more thought we could've made the list of signatories more inclusive and found a better publication; still, I think this is pretty huge.
This argument is not terribly convincing by itself. For example, a Neanderthal is a much better optimizer than a fruit fly, but both are almost equally powerless against an H-bomb.
Hmm, what about the following idea. The FAI can threaten to somehow consume (waste) a large portion of the free energy in the solar system. Assuming the 2nd law of thermodynamics is watertight, it then becomes profitable for the aliens to leave us a significant fraction (1/2?) of that portion rather than lose it entirely. Essentially it's the Ultimatum game. The negotiation can be done acausally, assuming each side has sufficient information about the other.
Thus we remain a small civilization but survive for a long time.
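The bargaining logic above can be sketched as a toy Ultimatum game with a destruction threat. All the numbers here are hypothetical (resources normalized to 1, arbitrary burn fraction); the point is only that, since burned free energy is unrecoverable under the 2nd law, the aliens rationally accept any demand up to the fraction the FAI can credibly destroy:

```python
def alien_best_response(offer_to_humans: float, burn_fraction: float) -> str:
    """Toy model: the solar system's free energy is normalized to 1.

    offer_to_humans: share the FAI demands be left to humanity.
    burn_fraction:   share the FAI credibly threatens to waste if refused.
    """
    # If the aliens accept, they get everything except humanity's share.
    accept_payoff = 1.0 - offer_to_humans
    # If they refuse, the FAI burns its threatened fraction first,
    # and (by the 2nd law) that free energy is gone for good.
    reject_payoff = 1.0 - burn_fraction
    return "accept" if accept_payoff >= reject_payoff else "reject"

# A demand smaller than the credible threat gets accepted...
print(alien_best_response(offer_to_humans=0.3, burn_fraction=0.5))  # accept
# ...while a demand exceeding it gets refused.
print(alien_best_response(offer_to_humans=0.6, burn_fraction=0.5))  # reject
```

So in this toy version the FAI can extract at most the share it can credibly threaten to destroy, which is why the surviving human slice is "a significant portion of that portion" rather than the whole system.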
It's not obvious that surviving for a long time is preferable. For example, optimizing a large amount of resources for a short time might be better than optimizing a small amount of resources for a long time. Whichever is preferable, that's the trade a FAI might be in a position to facilitate.