I recently wrote an essay about AI risk, targeted at other academics:
Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems
I think it might be interesting to some of you, so I am sharing it here. I would appreciate any feedback any of you have, especially from others who do AI / machine learning research.
I'm not sure what point you are trying to make.
Yes, private organizations and national governments make decisions that are less socially optimal than those of a super-competent world government ruled by a benevolent dictator who has somehow solved the interpersonal preference comparison problem. That's not a motte I will try to attack.
But it seems to me that you are actually trying to defend the bailey: that private organizations or national governments will engage in an arms race to launch a potentially dangerous AI as soon as they can, disregarding reasonable safety concerns. This position seems less defensible.
Expect government regulation.
Also note that the same argument can be made for nuclear power, nuclear weapons, chemical weapons or biological weapons.
In principle, individuals or small groups could build them, and there has been perhaps one instance of a bioweapon attack (the 2001 anthrax mail attacks in the US) and a few instances of chemical attacks. But all of them were inefficient and ultimately caused little damage. In practice, it seems that the actual expertise and organizational capabilities required to pull off such attacks at a significant scale are non-trivial.
AI may be quite similar in this regard: even without malicious intent, going from research papers and proof-of-concept systems to a fully operational system capable of causing major damage will probably require significant engineering effort.