Central claim: Measured objectively, GPT-4 is arguably way past human intelligence already, perhaps even after taking generality into account.

Central implication: If the reason we're worried AGI will wipe us out is tied to an objective notion of intelligence--such as the idea that it starts to reflect on its values or learn planning just as it crosses a threshold for cognitive power around human level--we should already update on the fact that we're still alive.

I don't yet have a principled way of measuring "generality",[1] so my intuition defaults to imagining it as "competence at a wide range of tasks in the mammal domain". That strikes me as about as anthropomorphic as the notion of intelligence people had back when they thought birds were dumb.

When GPT-2 was introduced, it had already achieved superhuman performance on next-token prediction. We could only hope to out-predict it on a narrow set of tokens heavily prefiltered for precisely what we care most about. For instance, when a human reads a sentence like...

"It was a rainy day in Nairobi, the capital of  _"

...it's obvious to us (for cultural reasons!) that the next word is an exceptionally salient piece of knowledge. So those are the things we base our AI benchmarks on. However, GPT cares equally about predicting 'capital' after 'the', and 'rainy' after 'It was a'. Its loss function does not discriminate.[2]
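To make that concrete, here is a minimal sketch of a standard next-token pretraining objective (assumed shapes and a hypothetical helper name, not GPT's actual training code): every position's prediction contributes to the loss with exactly the same weight, whether the token is a capital city or the word "the".

```python
import torch
import torch.nn.functional as F

def pretraining_loss(logits, targets):
    """Standard language-modelling loss: per-position cross-entropy,
    averaged uniformly over positions. The token after 'the' and the
    token naming the capital city are weighted exactly the same."""
    # logits: (seq_len, vocab_size), targets: (seq_len,)
    return F.cross_entropy(logits, targets, reduction="mean")

# Toy usage with random values (GPT-2-sized vocabulary)
logits = torch.randn(8, 50257)
targets = torch.randint(0, 50257, (8,))
print(pretraining_loss(logits, targets))
```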

Consider in combination that a) GPT-4 has a non-discriminating loss function, and b) it rivals us even at the subset of tasks we optimise the hardest. What does this imply?

It's akin to a science fiction author whose only objective is to write better stories, yet who ends up rivalling top scientists in every field as an instrumental side quest.

Make no mistake, next-token prediction is an immensely rich domain, and the sub-problems could be more complex than we know. Human-centric benchmarks vastly underestimate both the objective intelligence and generality of GPTs, unless I'm just confused.

  1. ^

    Incidentally, please share if you know any good definitions of generality e.g. from information theory or something.

  2. ^

    At least during pretraining afaik.

Comments

GPT-4 is arguably way past human intelligence already

Well, it's only been a month. But if it were that brilliant, I think we'd hear about it performing feats that are recognisably works of genius. 

The main superhuman advantages of GPTs discernible to me are speed of thought and breadth of knowledge.

In my own experiments, the single most impressive cognitive act might have been a suggestion from Bing regarding how to combine a number of alignment strategies: it ingeniously noticed that each alignment method pertained to a different level of cognitive organization, opening the possibility that they could be applied simultaneously.

I think that to a great extent, genius relies on repeated examples of such ingenious creativity, combined with successful analysis. But this was just an isolated flash of creativity, not one step in a larger process. 

If GPT-4-based agents can begin to produce that kind of creativity regularly from their GPT-4 component, and harness it in intricate purposeful activity, then they'll truly be on their way to higher intelligence.

Meanwhile, AI psychometricians might want to consider models of intelligence that distinguish between many types of cognitive skill (if they aren't doing that already). 

My point is that we couldn't tell if it were a genius. If it's incredibly smart in domains we don't understand or care about, its feats wouldn't be recognisable as works of genius.

Thanks for the link! Doing factor analysis is a step above just eyeballing it, but even that is anthropomorphic if the factors are derived from performance on very human tasks. The more objective (but fuzzy) notion of intelligence I have in mind is something about efficiently halving some mathematical term for "weighted size of search space".
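For what it's worth, here's a rough sketch of one way to cash that out, in the spirit of measuring optimisation power in bits; the function name and the uniform-mass framing are illustrative assumptions, not a settled definition.

```python
import math

def optimisation_bits(mass_at_least_as_good, total_mass=1.0):
    """How many times the (weighted) search space was halved to reach
    an outcome this good: log2(total mass / mass of outcomes at least
    as good as the one achieved)."""
    return math.log2(total_mass / mass_at_least_as_good)

# Hitting an outcome in the best 1/1024 of the space corresponds to ~10 halvings.
print(optimisation_bits(1 / 1024))  # 10.0
```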

I don't think the question is whether intelligence is objective, but whether it's linear and one-dimensional. I suspect the orthogonality thesis is getting some evidence from GPTs, in that they seem intelligent along many dimensions, but their goals are alien (or perhaps nonexistent).

Yes, but none of the potential readers of this post will think intelligence is one-dimensional, so pointing it out wouldn't educate anyone. I disagree with the notion that "good writing" is about convincing the reader that I'm a good reasoner. The reader should be asking "is there something interesting I can learn from this post?", but usually there's a lot of "does this author demonstrate sufficient epistemic virtue for me to feel ok admitting to myself that I've learned something?"

Good writing means not worrying about justifying yourself, and efficient reading means caring only about what you can learn, not about what you aren't learning.

"Rule Thinkers In, Not Out" => "Rule Ideas In And Don't Judge Them By Association"