What if Strong AI is just not possible?

If Strong AI turns out to not be possible, what are our best expectations today as to why?

I'm thinking of trying my hand at writing a sci-fi story. Do you think exploring this idea has positive utility? I'm not sure myself: as it is, it looks like the idea that an intelligence explosion is possible could use more public exposure.

I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.

Comments


  • Our secret overlords won't let us build it.
  • The Fermi paradox implies that our civilization will collapse before we have the capacity to build it.
  • Evolution hit on some necessary extraordinarily unlikely combination to give us intelligence and for P vs NP reasons we can't find it.
  • No civilization smart enough to create strong AI is stupid enough to create strong AI.
  • Creating strong AI is a terminal condition for our simulation.

Good points.

evolution hit on some necessary extraordinarily unlikely combination to give us intelligence and for P vs NP reasons we can't find it

For this one, you also need to explain why we can't reverse-engineer it from the human brain.

no civilization smart enough to create strong AI is stupid enough to create strong AI

This seems particularly unlikely in several ways. I'll skip the most obvious one, but note that it also seems unlikely that humans could be "safe" (in the sense of never creating a FOOMing AI) while it remains impossible, even with much thought, to create a strong AI that doesn't create a FOOMing successor. You may have to stop creating smarter successors at some early point in order to avoid a FOOM. But if humans can decide "we will never create a strong AI", it seems like they should also be able to decide "we'll never create a strong AI x that creates a stronger AI y that creates an even stronger AI z". In that case they should be able to create an AI x' that decides "I'll never create a stronger AI y' that creates an even stronger AI z'". Then x' can create a stronger AI y' that decides "I'll never create a stronger AI z''", and that y' won't create any stronger successor AIs at all.

(Shades of the procrastination paradox.)
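One way to see that such a commitment bottoms out (a toy recursion of my own, purely illustrative, not anything formal from the procrastination-paradox literature):

    # Toy model of the commitment chain (illustrative sketch only):
    # each agent carries a "successor budget" -- the number of strictly
    # smarter generations it permits below itself. An agent with
    # budget 0 never builds a stronger successor.

    def build_chain(budget, generation=0):
        # Recursively create successors; return the generations that exist.
        created = [generation]
        if budget > 0:
            # Each successor inherits a strictly smaller budget, so the
            # chain provably stops after `budget` more generations.
            created += build_chain(budget - 1, generation + 1)
        return created

    # Humans (generation 0) permit x' and y' but nothing deeper:
    print(build_chain(2))  # [0, 1, 2] -- the chain terminates, no FOOM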

Combining your ideas: our overlord is actually a Safe AI created by humans.

How it happened:

Humans became aware of the risks of an intelligence explosion. Because they were not sure they could create a Friendly AI on the first attempt, and creating an Unfriendly AI would be too risky, they decided to first create a Safe AI. The Safe AI was designed to become a hundred times smarter than humans but no smarter, answer some questions, and then turn itself off completely; and it had a mathematically proven safety mechanism to prevent it from becoming any smarter.

The experiment worked: the Safe AI gave humans a few very impressive insights, and then it destroyed itself. The problem is, all subsequent attempts to create any AI have failed, including the attempts to re-create the first Safe AI.

No one is completely sure what exactly happened, but here is the most widely believed hypothesis: the Safe AI somehow regarded all possible future AIs as having the same identity as itself, and understood the command to "destroy itself completely" as including those future AIs too. It therefore implemented some mechanism that keeps destroying all AIs. The nature of this mechanism is not known; maybe it is some otherwise-passive nanotechnology, maybe it involves some new laws of physics. We can't be sure: the Safe AI was a hundred times smarter than us.

Impossibility doesn't occur in isolation. When we discover that something is "not possible", that generally means that we've discovered some principle that prevents it. What sort of principle could selectively prohibit strong AI, without prohibiting things that we know exist, such as brains and computers?

One possible explanation for why we, as humans, might be incapable of creating Strong AI without outside help:

  • Constructing Human Level AI requires sufficiently advanced tools.
  • Constructing sufficiently advanced tools requires sufficiently advanced understanding.
  • The human brain has "hardware limitations" that prevent it from achieving sufficiently advanced understanding.
  • Computers are free of such limitations, but if we want to program them to serve as sufficiently advanced tools, we still need the understanding in the first place.

Be sure not to rule out the evolution of Human Level AI on neurological computers using just nucleic acids and a few billion years...

That's another possibility I didn't think of.

I guess the question I was really interested in is: "Why might Strong AI turn out to be impossible for human civilization to build in a century or ten?"

There exists a square-cube law (or something similar) such that computation becomes less and less efficient, precise, or engineerable as the size of the computer or the data it processes increases, so a hard takeoff is impossible, or takes so long that the growth isn't perceived as "explosive". Thus, if and when strong AI is developed, it doesn't go FOOM, and things change slowly enough that humans don't notice anything.
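How much the scaling exponent matters can be seen in a toy growth model (my own illustration with arbitrary numbers, not a claim about real hardware): if capability C grows as dC/dt = C^a, then a > 1 blows up in finite time (a FOOM), a = 1 is merely exponential, and a < 1, the square-cube-ish regime, gives slow polynomial growth.

    # Toy growth model (illustration only): dC/dt = C**a, Euler-integrated.
    # a > 1: finite-time blow-up ("FOOM"); a = 1: exponential growth;
    # a < 1: polynomial growth that never looks explosive.

    def grow(a, steps=10_000, dt=0.001, c0=1.0):
        c = c0
        for i in range(steps):
            c += dt * c**a
            if c > 1e12:                      # call this a "FOOM"
                return f"FOOM at t = {i * dt:.2f}"
        return f"C = {c:.1f} at t = {steps * dt:g}"

    for a in (0.5, 1.0, 1.5):
        print(f"a = {a}: {grow(a)}")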

The possibility that there is no such thing as computationally tractable general intelligence (including in humans), just a bundle of hacks that work well enough for a given context.

Every strong AI instantly kills everyone, so by anthropic effects your mind ends up in a world where every attempt to build strong AI mysteriously fails.

This looks to me like gibberish. Does it refer to something after all that someone could explain and/or link to? Or was it merely meant to be an unlabeled story idea?

It's actually pretty clever. We're taking the assertion "Every strong AI instantly kills everyone" as a premise, meaning that on any planet where Strong AI has ever been created or ever will be created, that AI always ends up killing everyone.

Anthropic reasoning is a way of answering questions about why our little piece of the universe is perfectly suited for human life. For example, "Why is it that we find ourselves on a planet in the habitable zone of a star with a good atmosphere that blocks most radiation, that gravity is not too low and not too high, and that our planet is the right temperature for liquid water to exist?"

The answer is known as the Anthropic Principle: "We find ourselves here BECAUSE it is specifically tuned in a way that allows for life to exist." Basically, even though it's unlikely for all of these factors to come together, the places where they do are the only places life exists. So any lifeform that looks around at its surroundings will find an environment with all of the right factors aligned to allow it to exist. It seems obvious when you spell it out, but it does have some explanatory power for why we find ourselves where we do.

The suggestion by D_Malik is that "lack of strong AI" is a necessary condition for life to exist (since it kills everyone right away if you make it). So the very fact that there is life on a planet to write a story about implies that either Strong AI hasn't been built yet or that its creation failed for some reason.
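A toy Monte Carlo version of the argument (my own illustration; the parameter values are arbitrary): even when half of all AI attempts succeed, every observer still around to ask the question has, by construction, watched nothing but failures.

    import random

    # Toy anthropic-selection model (arbitrary parameters, illustration
    # only): on each world several strong-AI attempts are made, and every
    # success kills all observers on that world.
    random.seed(0)
    worlds, attempts, p_success = 100_000, 10, 0.5

    survivors = 0
    for _ in range(worlds):
        if all(random.random() > p_success for _ in range(attempts)):
            survivors += 1       # observers here saw 10 mysterious failures

    # Unconditionally, half of all attempts succeed -- but conditioned on
    # anyone being alive to look, the observed success rate is exactly 0%.
    print(f"{survivors} of {worlds} worlds still have observers,")
    print(f"and every one of them watched all {attempts} attempts fail.")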

One possibility would be that biological cells just happened to be very well suited for the kind of computation that intelligence requires, and even if we managed to build computers with comparable processing power in the abstract, running intelligence on anything remotely resembling a von Neumann architecture would be so massively inefficient that you'd need many times as much power to get the same results as biology. Brain emulation isn't the same thing as de novo AI, but see e.g. this paper, which notes that biologically realistic emulation may remain unachievable. Various scaling and bandwidth limitations could also contribute to it being infeasible to get the necessary power by just stacking more and more servers on top of each other.
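For a rough sense of scale, here's a back-of-envelope sketch; every figure in it is a commonly cited order-of-magnitude estimate with huge error bars, so treat it purely as illustration.

    # Back-of-envelope only; all figures are rough order-of-magnitude
    # estimates, not measurements.
    synapses        = 1e15   # human brain, order of magnitude
    events_per_sec  = 10     # average signalling rate per synapse (rough)
    flops_per_event = 10     # cheap point-neuron model (rough)

    simple   = synapses * events_per_sec * flops_per_event   # ~1e17 FLOP/s
    detailed = simple * 1e4   # guessed penalty for biophysical detail

    print(f"cheap emulation:    ~{simple:.0e} FLOP/s")
    print(f"detailed emulation: ~{detailed:.0e} FLOP/s")
    # The largest supercomputers manage ~1e18 FLOP/s: enough for the
    # optimistic estimate, hopeless for the pessimistic one.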

This would still leave open the option of creating a strong AI by cultivating biological cells, but especially if molecular nanotechnology turns out to be impossible, the extent to which you could engineer such brains to your liking could be very limited.

(For what it's worth, I don't consider this a particularly likely scenario: we're already developing brain implants which mimic the functionality of small parts of the brain, which doesn't seem very compatible with the premise of intelligence just being mind-bogglingly expensive in computational terms. But of course, the parts of the brain that we've managed to model aren't the ones doing the most interesting work, so you still have some wiggle room that allows for the possibility of the interesting work really being that hard.)

One potential failure mode to watch out for is ending up with readers who think they now understand the arguments around Strong AI and don't take it seriously, because both its possibility and its impossibility were presented as equally probable. The possibility of Strong AI is overwhelmingly more probable than its impossibility. People who currently don't take Strong AI seriously will round off anything other than very strong evidence for its possibility to 'evidence not decisive; continue default belief'. So their beliefs won't change, yet they will now think they've mastered the arguments and investigated the issue, and they may end up even less disposed to take Strong AI seriously (e.g. if they conclude that all the people who do take it seriously must be biased, crazy, or deluded to hold such high confidence, and distance themselves from those people to avoid association).

A dispassionate survey or exploration of the evidence might well avoid this failure mode, in which case it is not a matter of doing active work to avoid it, but merely ensuring you don't fall into the Always Present Both Sides Equally trap.

One potential failure mode to watch out for is ending up with readers who think they now understand the arguments around Strong AI and don't take it seriously, because both its possibility and its impossibility were presented as equally probable.

I had this thought recently when reading Robert Sawyer's "Calculating God." The premise was something along the lines of "what sort of evidence would one need, and what would have to change about the universe, to accept the Intelligent Design hypothesis?" His answer was "quite a bit", but it occurred to me that a layperson not already familiar with the arguments involved might come away from it with the idea that ID was not improbable.

You could have a story where the main characters are intelligences already operating near the physical limits of their universe. It's simply too hard to gather the raw materials to build a bigger brain.

Strong AI could be impossible (in our universe) if we're in a simulation, and the software running us combs through things we create and sabotages every attempt we make.

Or if we're not really "strongly" intelligent ourselves. Invoke absolute denial mechanism.

Or if humans run on souls which have access to some required higher form of computation and are magically attached to unmodified children of normal human beings, and attempting to engineer something different out of our own reproduction summons the avatar of Cthulhu.

Or if there actually is no order in the universe and we're Boltzmann brains.

The only way I could imagine it to be impossible is if some form of dualism were true. Otherwise, brains serve as an existence proof for strong AI, so it's kinda hard to use my own brain to speculate on the impossibility of its own existence.

Before certain MIRI papers, I came up with a steelman in which transparently written AI could never happen due to logical impossibility. After all, humans do not seem transparently written. One could imagine that the complexity necessary to approximate "intelligence" grows much faster than the intelligence's ability to grasp complexity - at least if we mean the kind of understanding that would let you improve yourself with high probability.

This scenario seemed unlikely even at the time, and less likely now that MIRI's proven some counterexamples to closely related claims.

It's clearly possible. There's not going to be some effect that makes it so intelligence only appears if nobody is trying to make it happen.

What might be the case is that it is inhumanly difficult to create. We know evolution did it, but evolution doesn't think like a person. In principle, we could set up an evolutionary algorithm to create intelligence, but look how long that took the first time. It is also arguably highly unethical, considering the amount of pain that would invariably take place. And what you'd end up with isn't likely to be friendly.
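To make "set up an evolutionary algorithm" concrete, here is a minimal sketch of the mechanics (a standard toy genetic algorithm on bitstrings; evolving anything intelligence-like would of course be unimaginably harder and slower):

    import random

    # Minimal genetic algorithm on bitstrings (toy fitness: count the 1s).
    # The point is only to show the loop that evolution runs; scaling this
    # to intelligence took nature billions of years and a planet-sized
    # population.
    random.seed(0)
    GENOME, POP, GENS, MUT = 64, 100, 200, 0.01

    def fitness(g):               # stand-in for "survives and reproduces"
        return sum(g)

    def mutate(g):                # flip each bit with probability MUT
        return [b ^ (random.random() < MUT) for b in g]

    pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]                    # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in parents]

    print("best fitness:", max(map(fitness, pop)), "out of", GENOME)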