I recently wrote an essay about AI risk, targeted at other academics:
Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems
I think it might be interesting to some of you, so I am sharing it here. I would appreciate any feedback, especially from those who do AI / machine learning research.
That's a big assumption.
Nobody desires extinction, and nobody is better off if extinction comes from their own AI project rather than from somebody else's, so there is no tragedy-of-the-commons scenario.
People are not going to build an AI capable of causing major disasters without being reasonably sure that they can control it.
1.) I was drawing from the book, and that reading is the only exposure I have to that particular dynamic of the intelligence explosion. Moderate takeoff predictions range from months to years; slow would be decades or centuries.
2.) I agree with you, and furthermore it seems all parties leading the field are taking steps to ensure that this sort of thing doesn't happen.
Deviation point: when you mentioned arms races, I suppose I imagined groups that were secretive about their progress and competing with one another. Though I suppose this issue isn't likely to be an either/or case of collaboration vs. competition.
My comment was fueled by a perception that group dynamics, like arms races, should factor into how safety is developed, rather than being assumed to arise only after the control problem has been dealt with. I'm not implying that is what you indeed assumed.