The first AI probably won't be very smart

Claim: The first human-level AIs are not likely to undergo an intelligence explosion.

1) Brains have a ton of computational power: ~86 billion neurons and trillions of connections between them. Unless there's a "shortcut" to intelligence, we won't be able to efficiently simulate a brain for a long time. http://io9.com/this-computer-took-40-minutes-to-simulate-one-second-of-1043288954 describes one of the largest computers in the world simulating 1 second of brain activity in 40 minutes (i.e. this "AI" would think 2400 times slower than you or me). The first AIs are not likely to be fast thinkers.

2) Being able to read your own source code does not mean you can self-modify. You know that you're made of DNA. You can even get your own "source code" for a few thousand dollars. No humans have successfully self-modified into an intelligence explosion; the idea seems laughable.

3) Self-improvement is not like compound interest: if an AI comes up with an idea to modify its source code to make it smarter, that doesn't automatically mean it will have a new idea tomorrow. In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.

Comments


Lots has already been said on this topic, e.g. at http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate

I can try to summarize some relevant points for you, but you should know that you're being somewhat intellectually rude by not familiarizing yourself with what's already been said.

1) Brains have a ton of computational power: ~86 billion neurons and trillions of connections between them. Unless there's a "shortcut" to intelligence, we won't be able to efficiently simulate a brain for a long time. http://io9.com/this-computer-took-40-minutes-to-simulate-one-second-of-1043288954 describes one of the largest computers in the world simulating 1 second of brain activity in 40 minutes (i.e. this "AI" would think 2400 times slower than you or me). The first AIs are not likely to be fast thinkers.

It's common in computer science for some algorithms to be radically more efficient than others for accomplishing the same task. Thinking algorithms may be the same way. Evolution moves incrementally, and it's likely that there exist intelligence algorithms way better than the ones our brains run that evolution didn't happen to discover for whatever reason. For example, even given the massive amount of computational power at our brains' disposal, it takes us on the order of minutes to do relatively trivial computations like 3967274 * 18574819. And the sort of thinking that we associate with technological progress is pushing at the boundaries of what our brains are designed for. Most humans aren't capable of making technological breakthroughs, and the ones who are have to work hard at it. So it's possible that you could have an AGI that could do things like hack computers and discover physics way better and faster than humans using much less computational power.
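For contrast, here is a trivial illustration (my addition, not part of the original comment) of how cheap that same computation is for a machine; the numbers are just the ones quoted above:

    # A computation that takes a human minutes is effectively instantaneous
    # for a computer (timing details will of course vary by machine).
    import time

    a, b = 3967274, 18574819
    start = time.perf_counter()
    product = a * b
    elapsed = time.perf_counter() - start

    print(product)                   # 73691396473406
    print(f"{elapsed:.1e} seconds")  # typically well under a microsecond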

2) Being able to read your own source code does not mean you can self-modify. You know that you're made of DNA. You can even get your own "source code" for a few thousand dollars. No humans have successfully self-modified into an intelligence explosion; the idea seems laughable.

In programming, I think it's often useful to think in terms of a "debugging cycle": once you think you know how to fix a bug, how long does it take you to verify that your fix is going to work? This is a critical input into your productivity as a programmer. The debugging cycle for DNA is very long; it would take on the order of years to see if flipping a few base pairs resulted in a more intelligent human. The debugging cycle for software is often much shorter. Compiling an executable is much quicker than raising a child.

Also, DNA is really bad source code--even though we've managed to get ahold of it, biologists have found it to be almost completely unreadable :) For humans, reading human-designed computer code is way easier than reading DNA, and the same is most likely true for computers.

3) Self-improvement is not like compound interest: if an AI comes up with an idea to modify its source code to make it smarter, that doesn't automatically mean it will have a new idea tomorrow. In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.

This is your best objection, in my opinion; it's also something I discussed in my essay on this topic. I think it's hard to say much one way or the other. In general, I think people are too certain about whether AI will foom or not.

I'm also skeptical that foom will happen, but I don't think arguments 1 or 2 are especially strong.

Evolution moves incrementally and it's likely that there exist intelligence algorithms way better than the ones our brains run that evolution didn't happen to discover for whatever reason.

Maybe, but that doesn't mean we can find them. Brain emulation and machine learning seem like the most viable approaches, and they both require tons of distributed computing power.

1) "AI" is a fuzzy term. We have some pretty smart programs already. What counts?

I'm fairly sure the term you're looking for here is AGI (Artificial General Intelligence).

Assuming the same sort of incremental advance in AI that we've seen for decades, this is borderline tautological. The first AGIs will likely be significantly dumber than humans. I would be hard-pressed to imagine a world where we make a superhuman AGI before we make a chimp-level AGI.

Note that this doesn't disprove an intelligence explosion; it merely implies that it won't happen over a weekend. IMO, it'll certainly take years, probably decades. (I know that's not the prevailing thought around here, but I think that's because the LW crowd is a bit too enamoured with the idea of working on The Most Important Problem In The World, and gives insufficient respect to the fact that a computer is not merely a piece of software that can self-modify billions of times a second, but is also hardware, and will likely have that incredible processing speed already fully tapped in order to create the human-level intelligence in the first place.)

I think you underestimate the degree to which a comparatively slow FOOM (years) is considered plausible around here.

wrt the Most Important Problem In The World, the arguments for UFAI are not dependent on a fast intelligence explosion - in fact, many of the key players actually working on the problem are very uncertain about the speed of FOOM, more so than, say, they were when the Sequences were written.

1) Yes, brains have lots of computational power, but you've already accounted for that when you said "human-level AI" in your claim. A human-level AI will, with high probability, run at 2x human speed in 18 months, due to Moore's law, even if we can't find any optimizations. This speedup by itself is probably sufficient to get a (slow-moving) intelligence explosion.

2) It's not read access that makes a major difference, it's write access. Biological humans probably will never have write access to biological brains. Simulated brains or AGIs probably will have or be able to get write access to their own brain. Also, DNA is not the source code to your brain, it's the source code to the robot that builds your brain. It's probably not the best tool for understanding the algorithms that make the brain function.

3) As said elsewhere, the question is whether the speed at which you can pick the low hanging fruit dominates the speed at which increased intelligence makes additional fruit low-hanging. I don't think this has an obviously correct answer either way.
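To make that race concrete, here's a purely illustrative toy model (my construction, with made-up parameters, not anything anyone in this thread has proposed): each new idea multiplies the AI's capability by a constant factor, while the effort needed to find the n-th idea grows geometrically. Which growth rate wins determines whether you get a foom or a fizzle.

    # Toy model (illustrative only, arbitrary parameters).
    # Each improvement multiplies capability by 1.1; finding the n-th
    # improvement costs cost_growth**n units of work, performed at a rate
    # proportional to current capability.
    def time_for_improvements(n_improvements: int, cost_growth: float) -> float:
        capability, total_time = 1.0, 0.0
        for n in range(1, n_improvements + 1):
            total_time += cost_growth ** n / capability
            capability *= 1.1
        return total_time

    print(time_for_improvements(100, cost_growth=1.05))  # ideas outpace difficulty: progress stays quick
    print(time_for_improvements(100, cost_growth=1.20))  # difficulty outpaces ideas: later steps take forever

Nothing in this toy tells you which regime the real world is in; that is exactly the open question.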

I love the idea of an intelligence explosion but I think you have hit on a very strong point here:

In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.

In fact, we can see from both history and paleontology that when a breakthrough is made in "biological technology" (like the homeobox genes, or whatever triggered the Precambrian explosion of diversity) and self-modification becomes easier, there is apparently an explosion of explorers, of bloodlines, into the newly accessible areas of design space. (Here a 'self' isn't one meat body; it's a clade of genes that sail through time and configuration space together - think of a current of bloodlines in spacetime that we might call a species, genus, or family. And the development of modern-style morphogenesis was, at some level, like developing a toolkit for modifying the body plan.)

But the explosion eventually ended. After the diaspora into over a hundred phyla of critters hard enough to leave fossils, the expansion into new phyla stopped. Some sort of new frontier was reached within tens of millions of years, and the next six hundred million years or so were spent slowly whittling improvements within phyla. Most phyla died out, in fact, while a few like Arthropoda took over many roles and niches.

We see very similar incidents throughout human history; look at the way languages develop, or technologies. For an example perhaps familiar to many readers, look at the history of algorithms. For thousands of years we see slow development in this field, from Babylonian algorithms for finding the area of a triangle, to the Sieve of Eratosthenes, to (after a lot of development) medieval Italian merchants writing down how to do double-entry bookkeeping.

Then in the later part of the Renaissance there is some kind of phase change, and the mathematical community begins compiling books of algorithms quite consciously. This has happened before: in Sumer and Egypt to start, in Babylon and Greece, in Asia several times, and most notably in the House of Wisdom in Baghdad in the ninth century. But always there are these rising and falling cycles where people compile knowledge, then it is lost, and others have to rebuild; often the new cycle is helped by the rediscovery or re-appreciation of a few surviving texts from a prior cycle.

But around 1350 there begins a new cycle (which of course draws on surviving data from prior cycles) in which people accumulate formally expressed algorithms, and it is unique in that it has lasted to this day. Much of what we call the mathematical literature consists of these collections, and in the 1930s people (Church, Turing, many others) finally develop what we might now call the classical theory of algorithms. Judging by the progress of various other disciplines, you would expect little more progress in this field, relative to such a capstone achievement, for a long time.

(One might note that this seven-century surge of progress might well be due, not to human mathematicians somehow becoming more intelligent in some biological way, but to the development of printing and associated arts and customs that led to the widespread dissemination of information in the form of journals and books with many copies of each edition. The custom of open-sourcing your potentially extremely valuable algorithms was probably as important as the technology of printing here; remember that medieval and ancient bankers and so on all had little trade secrets for handling numbers and doing maths in a formulaic way, but we don't retain in the general body of algorithmic lore any of their secret tricks unless they published or chance preserved some record of their methods.)

Now, we'd have expected Turing's 1930s work to be the high point in this field for centuries to come (and maybe it was; let history be the judge), but between the development of the /theory/ of a general computing machine, progress in other fields such as electronics, and a leg up from the intellectual legacy left by predecessors such as George Boole, the 1940s somehow put together (under enormous pressure of circumstances) a new sort of engine that could run algorithmic calculations without direct human intervention. (Note that here I say 'run', not 'design' - I mean that the new engines could execute algorithms on demand.)

The new computing engines, electro-mechanical as well as purely electronic, were very fast compared to their human predecessors. This led to something in algorithm space that looks to me a lot like the Precambrian explosion, with many wonderful critters like LISP and FORTRAN and BASIC evolving to bridge the gap between human minds and assembly language, which itself was a bridge to the level of machine instructions, and so on. Layers and layers developed, and then in the 1960s giants wrought mighty texts of computer science no modern professor can match; we can only stare in awe at their achievements, in some sense.

And then... although Moore's law worked on and on tirelessly, relatively little fundamental progress in computer science happened for the next forty years. There was a huge explosion in available computing power, but just as jpaulson suspects, merely adding computing power didn't cause a vast change in our ability to 'do computer science'. Some problems may /just be exponentially hard/ and an exponential increase in capability starts to look like a 'linear increase' by 'the important measure'.
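A small sketch of that last point (my gloss, with made-up numbers, not the commenter's): if a problem of size n costs roughly 2^n operations, then hardware that doubles every 18 months only buys you a linearly growing n.

    # Illustrative only: exponential compute vs. exponentially hard problems.
    import math

    def largest_solvable_n(years: float, base_ops: float = 1e9,
                           doubling_time: float = 1.5) -> int:
        """Largest n with 2**n <= available operations after `years` of doubling."""
        ops = base_ops * 2 ** (years / doubling_time)
        return int(math.log2(ops))

    for years in (0, 15, 30, 45):
        print(years, largest_solvable_n(years))
    # n creeps up by ~10 every 15 years: exponential hardware, linear progress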

It may well be that people will just... adapt... to exponentially increasing intellectual capacity by dismissing the 'easy' problems as unimportant, and by thinking of things that go on beyond the capacity of the human mind to grasp as "nonexistent" or "also unimportant". Right now, computers are executing many, many algorithms too complex for any one human mind to follow - and maybe too tedious for any but the most dedicated humans to follow, even in teams - and we still don't think they are 'intelligent'. If we can't recognize an intelligence explosion when we see one under our noses, it is entirely possible we won't even /notice/ the Singularity when it comes.

If it comes, that is - as jpaulson indicates, there might be a never-ending series of 'tiers' where we think "Oh, past here it's just clear sailing up to the level of the Infinite Mind of Omega, we'll be there soon!" but when we actually get to the next tier, we might always find that there is a new kind of problem, hyperexponentially difficult to solve, before we can ascend further.

If it were all that easy, I would expect that whatever gave us self-reproducing wet nanomachines four billion years ago would have solved it - the ocean has been full of protists and free-swimming viruses, exchanging genetic instructions and evolving freely, for a very long time. This system certainly has a great deal of raw computing power, perhaps even more than it would appear on the surface. If she (the living ocean system as a whole) isn't wiser than the average individual human, I would be very surprised, and she apparently either couldn't create such a runaway explosion of intelligence, or decided it would be unwise to do so any faster than the intelligence explosion we've been watching unfold around us.

To be more precise, it was 40 minutes to simulate 1 second of activity in 1% of the neocortex.

Using Moore's law, we can postulate that it takes 17 years to increase computational power a thousandfold and 34 years to increase it a millionfold. That should give you a better intuition for what that 1% actually means. In the course of a couple of decades, it would take 4 minutes to simulate 1 second of an entire neocortex (not the entire brain).
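Spelled out as a back-of-the-envelope (my arithmetic, simply taking the figures quoted above at face value):

    # Rough arithmetic only, using the numbers quoted in this thread.
    minutes_for_1_percent = 40                              # 40 min per simulated second, 1% of neocortex
    minutes_full_neocortex = minutes_for_1_percent * 100    # ~4000 min per simulated second today

    print(minutes_full_neocortex / 1_000)       # ~4 min/sec after a ~1000x speedup (~17 years)
    print(minutes_full_neocortex / 1_000_000)   # ~0.004 min, i.e. ~0.24 sec/sec, after ~1,000,000x (~34 years)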

That doesn't sound too impressive either, but bear in mind that the human brain is not the same thing as a strong AI. We are talking here about a physics model of the human brain, not the software architecture of an actual AI. We could make it a million times more efficient if we trim the fat and keep the essence.

Our brains aren't the ultimate authority on intelligence. Computers already are much better at arithmetic, memory and data transmission.

This isn't considered to be intelligence by itself, but it amplifies the abilities of any AI on a much larger scale. For instance, Watson isn't all that smart, given that he had to read all of Wikipedia and a lot of other sources before he could beat people at Jeopardy. But... he did read all of Wikipedia, which is something no human has ever done.

Using Moore's law, we can postulate that it takes 17 years to increase computational power a thousandfold and 34 years to increase it a millionfold.

You are extrapolating Moore's law out almost as far as it's been in existence!

We could make it a million times more efficient if we trim the fat and keep the essence.

It's nice to think that, but no one understands the brain well enough to make claims like that yet.

You are extrapolating Moore's law out almost as far as it's been in existence!

Yeah.

Transistor densities can't increase much further due to fundamental physical limits. The chip makers all predict that they will not be able to continue at the same rate (and have been predicting that for ages).

Interestingly, the feature sizes are roughly the same order of magnitude for brains and chips now (don't compare neuron sizes directly, by the way; a neuron does far, far more than a transistor).

What we can do is build chips in multiple layers, but because making a layer is the bottleneck (not the physical volume), that won't help a whole lot with costs. Transistors are also faster, but they produce more heat, and efficiency-wise they are not much ahead (if at all).

The bottom line is that, even without the simulation penalty, it's a long way off.

In the near term, we can probably hack together some smaller neural network (or homologous graph-based thing), hard-wired to interface with some language libraries, and have it fool people into superficially believing it's not a complete idiot. It could also be very useful when connected to something like Mathematica.

But walking around in the world and figuring out that a stone can be chipped to be sharper, or that it can be attached to a stick - the action space in which such inventions lie is utterly enormous. Keep in mind that we humans are not merely intelligent; we are intelligent enough to overcome the starting hurdle while terribly inbred, full of parasites, and constantly losing knowledge. (Picking a good action out of an enormous action space is the kind of thing that requires a lot of computational power.) A far simpler intelligence could do great things as part of human society, where many of the existing problems have already had their solution spaces trimmed to a much more manageable size.

No one understands the brain well enough to actually do it, but I'd be astonished if this simulation weren't doing a lot of redundant, unnecessary computations.