I recently wrote an essay about AI risk, targeted at other academics:
Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems
I think it might be interesting to some of you, so I am sharing it here. I would appreciate any feedback, especially from others who do AI / machine learning research.
That's a big assumption.
Nobody desires extinction, and nobody is better off if extinction comes from their own AI project rather than from somebody else's, hence there is no tragedy-of-the-commons scenario.
People are not going to make an AI capable of causing major disasters without being reasonably sure that they can control it.