Google may be trying to take over the world

So I know we've already seen them buying a bunch of ML and robotics companies, but now they're purchasing Shane Legg's AGI startup, DeepMind. This is after they've acquired Boston Dynamics and several smaller robotics and ML firms, and started their own life-extension firm.

Is it just me, or are they trying to make Accelerando or something closely related actually happen?  Given that they're buying up real experts and not just "AI is inevitable" prediction geeks (who shall remain politely unnamed out of respect for their real, original expertise in machine learning), has someone had a polite word with them about not killing all humans by sheer accident?

Comments


Buying up AGI startups and then putting the relevant programmers to work on smart cars seems to me like quite a good move to stall UFAI.

...has someone had a polite word with them about not killing all humans by sheer accident?

Shane Legg is familiar with AI risks. So is Jaan Tallinn, a top MIRI donor who is also associated with DeepMind. I suppose they will raise their concerns with Google.

Actually, there does seem to have been a very quiet press release about this acquisition resulting in a DeepMind ethics board.

So that's a relief.

Not to mention of course the Google employees that post on LW.

Not to mention of course the Google employees that post on LW.

I didn't know there were any. My guess is that you have to be pretty high in the hierarchy to actually steer Google in a direction that would suit MIRI (under the assumption that people who agree with MIRI are in the minority).

Plus cousin_it and at least 2-3 others. Also, Ctrl+F for "Google" here: http://intelligence.org/team/. I think Moshe Looks might be one of Google's AGI people.

I didn't know there were any.

Greetings from Dublin! You're right that the average employee is unlikely to matter, though.

Eliezer specifically mentioned Google in his Intelligence Explosion Microeconomics paper as the only named organization that could potentially start an intelligence explosion.

Larry Page has publicly said that he is specifically interested in “real AI” (Artificial General Intelligence), and some of the researchers in the field are funded by Google. So far as I know, this is still at the level of blue-sky work on basic algorithms and not an attempt to birth The Google in the next five years, but it still seems worth mentioning Google specifically.

In interviews Larry Page gave years ago, he repeatedly said that he wanted Google to become "the ultimate search engine", one that would be able to understand all the information in the world. And to do that, Page said, it would need to be 'true' artificial intelligence (he didn't say 'true', but it becomes clear from the context what he means).

Here's a quote by Larry Page from the year 2007:

We have some people at Google who are really trying to build artificial intelligence and to do it on a large scale and so on, and in fact, to make search better, to do the perfect job of search you could ask any query and it would give you the perfect answer and that would be artificial intelligence based on everything being on the web, which is a pretty close approximation. We're lucky enough to be working incrementally closer to that, but again, very, very few people are working on this, and I don't think it's as far off as people think.

I doubt it would be very Friendly by MIRI's definition, but it doesn't seem like they have something 'evil' in mind. Peter Norvig is the co-author of AI: A Modern Approach, which is currently the dominant textbook in the field. The 3rd edition has several mentions of AGI and Friendly AI, so at least some people at Google have heard about this Friendliness thing and paid attention to it. But the projects run by Google X are quite secretive, so it's hard to know exactly how seriously they take the dangers of AGI and how much effort they put into these matters. It could be, as lukeprog said in October 2012, that Google doesn't even have "an AGI team".

It could be, as lukeprog said in October 2012, that Google doesn't even have "an AGI team".

Not that I know of, anyway. Kurzweil's team is probably part of Page's long-term AGI ambitions, but right now they're focusing on NLP (last I heard). And DeepMind, which also has long-term AGI ambitions, has been working on game AI as an intermediate step. Then again, that kind of work is probably more relevant progress toward AGI than, say, OpenCog.

IIRC the DeepMind folks were considering setting up an ethics board before Google acquired them, so the Google ethics board may be a carryover from that. FHI spoke to DeepMind about safety standards a while back, so they're not totally closed to taking Friendliness seriously. I haven't spoken to the ethics board, so I don't know how serious they are.

Update: "DeepMind reportedly insisted on the board’s establishment before reaching a deal."

Update: DeepMind will work under Jeff Dean on Google's search team.

And, predictably:

“Things like the ethics board smack of the kind of self-aggrandizement that we are so worried about,” one machine learning researcher told Re/code. “We’re a hell of a long way from needing to worry about the ethics of AI.”

...despite the fact that AI systems already fly planes, drive trains, and pilot Hellfire-carrying aerial drones.

NYTimes also links to LessWrong.

Quote:

Mr. Legg noted in a 2011 Q&A with the LessWrong blog that technology and artificial intelligence could have negative consequences for humanity.

despite the fact that AI systems already fly planes, drive trains, and pilot Hellfire-carrying aerial drones.

It would be quite a reach to insist that we need to worry about the ethics of the control boards that calculate how to move elevons or how far to open a throttle in order to maintain a certain course or speed. Autonomous UAVs able to open fire without a human in the loop are much more worrying.

I imagine that some of the issues the ethics board might have to deal with eventually would be related to self-agentizing tools, in Karnofsky-style terminology. For example, if a future search engine receives queries whose answers depend on other simultaneous queries, it may have to solve game-theoretic problems, like optimizing traffic flows. These may someday include life-critical decisions, like whether to direct drivers to a more congested route in order to let emergency vehicles pass unimpeded.
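To make that concrete, here's a minimal toy sketch of the kind of decision I mean (all route names, delays, and the penalty weight are made up for illustration; this is obviously not how any real routing system works): the router picks the cheapest route for each driver, but a large penalty term diverts traffic away from a corridor reserved for an ambulance, even when that corridor would otherwise be the fastest choice.

```python
# Toy illustration (hypothetical routes, delays, and penalty weight) of a
# routing decision that trades total driver delay off against keeping a
# corridor clear for an emergency vehicle.

ROUTES = {
    "highway":   {"base_delay_min": 10, "capacity": 100},
    "main_st":   {"base_delay_min": 12, "capacity": 60},
    "back_road": {"base_delay_min": 25, "capacity": 40},
}

# Effective extra cost (in minutes) of sending one more car onto a corridor
# currently reserved for an emergency vehicle -- a soft constraint.
EMERGENCY_PENALTY_MIN = 120


def route_cost(route, current_load, emergency_corridor=None):
    """Estimated delay for one more car on `route`, given its current load."""
    info = ROUTES[route]
    congestion = current_load / info["capacity"]  # simple linear congestion model
    cost = info["base_delay_min"] * (1.0 + congestion)
    if route == emergency_corridor:
        cost += EMERGENCY_PENALTY_MIN  # strongly discourage blocking the corridor
    return cost


def assign_route(loads, emergency_corridor=None):
    """Greedily pick the cheapest route for the next driver."""
    return min(ROUTES, key=lambda r: route_cost(r, loads[r], emergency_corridor))


loads = {"highway": 90, "main_st": 20, "back_road": 5}
print(assign_route(loads))                                # -> "main_st" (16.0 min)
print(assign_route(loads, emergency_corridor="main_st"))  # -> "highway" (19.0 min)
```

The interesting question for an ethics board in this toy version is who sets that penalty weight, and on what grounds: too low and the ambulance gets blocked, too high and you impose arbitrary delays on thousands of drivers.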

So, to summarize, Google wants to build a potentially dangerous AI, but they believe they can keep it as an Oracle AI which will answer questions but not act independently. They also apparently believe (not without some grounding) that true AI is so computationally expensive in terms of both speed and training data that we will probably maintain an advantage of sheer physical violence over a potentially threatening unboxed oracle for a long time.

Except that they are also blatant ideological Singularitarians, so they're working to close that gap.

has someone had a polite word with them about not killing all humans by sheer accident?

Why do you think you have a better idea of the risks and solutions involved than they do, anyway? Superior AI expertise? Some superior expert-choosing talent of yours?

My suggestion to Google is to free up their brightest minds and have them talk to MIRI for two weeks, full-time. After the two weeks are over, let each of them write a report on whether Google should e.g. give them more time to talk to MIRI, accept MIRI's position and possibly hire them, or ignore them. MIRI should be able to comment on a draft of each report.

I think this could finally settle the issue, if not for MIRI itself then at least for outsiders like me.

Well, that's sort of like having the brightest minds at CERN spend two weeks full-time talking to some random "autodidact" who claims that the LHC is going to create a black hole that will devour the Earth. Society can't work this way.

Does that mean there is a terrible ignored risk? No: when there is a real risk, the brightest people of extreme and diverse intellectual accomplishment are the ones most likely to be concerned about it (and various "autodidacts" are the ones most likely to fail to notice the risk).

Well, that's sort of like having the brightest minds at CERN spend two weeks full-time talking to some random "autodidact" who claims that the LHC is going to create a black hole that will devour the Earth.

This is an unusual situation, though. We have a lot of smart people who believe MIRI (they are not idiots, you have to grant them that). And neither you nor I are ever going to change their minds, and they are hardly going to convince us. But if a bunch of independent top-notch people were to accept MIRI's position, then that would certainly make me assign a high probability to the possibility that I simply don't get it and that they are right after all.

Society can't work this way.

In the case of the LHC, independent safety reviews have been conducted. I wish this were the case for the kinds of AI risk scenarios imagined by MIRI.

We have a lot of smart people who believe MIRI (they are not idiots, you have to grant them that).

If you pitch something stupid to a large enough number of smart people, some small fraction will believe it.

In the case of the LHC, independent safety reviews have been conducted.

Not for every crackpot claim. Edit: and since they got an ethics review board, that's your equivalent of what was conducted...

I wish this were the case for the kinds of AI risk scenarios imagined by MIRI.

There's a threshold. Some successful trading software, or a popular programming language, or an AI project that does something world-level notable (plays some game really well, for example) puts one above the threshold. Convincing some small fraction of smart people does not. Shane Legg's startup is evidently above the threshold.

As for the risks, why would you think that Google's research is a greater risk to mankind than, say, MIRI's? (assuming that the latter is not irrelevant, for the sake of the argument)

Comment by Juergen Schmidhuber:

Our former PhD student Shane Legg is co-founder of deepmind (with Demis Hassabis and Mustafa Suleyman), just acquired by Google for ~$500m. Several additional ex-members of the Swiss AI Lab IDSIA have joined deepmind, including Daan Wierstra, Tom Schaul, Alex Graves.

Yes, or in other words, these are the competent AGI researchers.

I'm quite happy to hear that, but it's not very useful advice. I'm not an AIXI agent, so I can't deduce what's being praised solely from the fact that it is praised.

Peter Norvig is at least in principle aware of some of the issues; see e.g. this article about the current edition of Norvig & Russell's AIMA (which mentions a few distinct ways in which AI could have very bad consequences and cites Yudkowsky and Omohundro).

I don't know what Google's attitude is to these things, but if it's bad then either they aren't listening to Peter Norvig or they have what they think are strong counterarguments, and in either case an outsider having a polite word is unlikely to make a big difference.

Peter Norvig was a resident at Hacker School while I was there, and we had a brief discussion about existential risks from AI. He basically told me that he predicts AI won't surpass humans in intelligence by so much that we'd be unable to coerce it into not ruining everything. It was pretty surprising, if that is what he actually believes.

I don't know what Google's attitude is to these things, but if it's bad then either they aren't listening to Peter Norvig or they have what they think are strong counterarguments...

My guess is that most people at Google who are working on AI take those risks somewhat seriously (i.e. less seriously than MIRI does, but they still acknowledge them), but think that the best way to mitigate the risks associated with AGI is to research AGI itself, because the problems are intertwined.

Microsoft seems to focus on AI as well:

Q: You are in charge of more than 1000 research labs around the world. What kind of thing are you focusing on?

Microsoft: A big focus right now, really on point for this segment, is artificial intelligence. We have been very focused. It is our largest investment area right now.