I recently wrote an essay about AI risk, targeted at other academics:
Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems
I think it might be interesting to some of you, so I am sharing it here. I would appreciate any feedback, especially from others who do AI / machine learning research.
Isn't the arms race itself a safeguard? If multiple AIs of similar intelligence are competing, it is difficult for any one of them to completely outsmart all the others and take over the world.