Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time"
Steven Levy: Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?
Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.
"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21
The way AI is going, our aim is to reach general intelligence or to mimic the human brain at some point. I just want to differentiate that from the AI we know today. If we assume that, then there are two end points we might reach. One is that we are not as smart as we think and we have made an "intelligent" being (by that I mean a stupid one), and that stupid being has the tools it needs to destroy us and can harm us at any time. The second option is that we are really smart and we create the intelligent being we have always dreamed about. Think about it: the system we have built would surely be so complex that the smallest change could trigger a big chain reaction. We might start building robots, and one robot might have a malfunction, just like the malfunctions the car industry has faced. Now think of the consequences the world might face. The AI we have built surely outsmarts us, and if it can think evil, who is to say it won't treat us like we treat ants? Is there a guarantee? No, surely, would be the answer, and I don't think we should pursue it, because either way we go the result is deadly.