This time it's by "The Editors" of Bloomberg View (a byline that carries significant weight in the news world). The content is a reasonable explanation of AI concerns, though not novel to this audience.
http://www.bloombergview.com/articles/2014-08-10/intelligent-machines-scare-smart-people
Directionally this is definitely positive, though I'm not quite sure how to build on it. Does anyone have ideas? Perhaps one of the orgs (MIRI, FHI, CSER, FLI) could reach out and say hello to the editors?
Okay, fair enough. To explain briefly:
I disagree with (3) because the Löbian obstacle (sketched below) is just an obstacle to a certain kind of stable self-modification in a particular toy model; it says nothing about what kinds of safety guarantees you can have for superintelligences in general.
I disagree with (4) because MIRI hasn't shown that there are ways to make a superintelligence 90% or more likely (in a subjective Bayesian sense) to be stably friendly, nor do I expect us to have shown that in another 20 years, and plausibly not ever.
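For readers who haven't seen it, here is a minimal statement of the theorem behind (3), in LaTeX notation; this sketch assumes a theory T extending Peano Arithmetic, with \Box\varphi abbreviating "T proves \varphi":

% Löb's theorem: if T proves that provability of \varphi implies \varphi,
% then T already proves \varphi outright.
T \vdash \Box\varphi \rightarrow \varphi
    \quad\Longrightarrow\quad
    T \vdash \varphi

So a consistent T cannot endorse the reflection schema \Box\varphi \rightarrow \varphi for all \varphi (it would then prove every sentence), which is why an agent reasoning in T cannot straightforwardly certify successors that reason in T itself. That's the toy-model obstacle, and nothing more.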
Thanks! I guess I was unduly optimistic. Comes with being a hopeful but ultimately clueless bystander.