Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]
http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html
Very surprised no one has linked to this yet:
TL;DR: AI is a very underfunded existential risk.
Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm very pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm on the cause. It's also too bad this ran in the Huffington Post rather than somewhere more respectable. With more thought I think we could have made the list more inclusive and found a better publication; still, I think this is pretty huge.
It is unlikely that the FAI would be able to deal with the aliens. The aliens would have (or be) their own "FAIs," much older and therefore more powerful.
Regarding probes to extremely distant galaxies: theoretically this might work, depending on the economics of space colonization. We would survive, at the cost of losing most of the potential colonization space. Neat.