We ran into a hardware shortage during a period of time where there was no pause, which is evidence that the hardware manufacturer was behaving conservatively.

 

Alternative hypothesis: there are physical limits on how fast you can build things.

Also, NVIDIA currently has a monopoly on "decent AI accelerators you can actually buy".  Part of the "shortage" is just the standard economic result that a monopoly produces less of something in order to increase profits.

This monopoly will not last forever, so in that sense we are currently in a hardware "underhang".
 

This and the rest of your comment seem to have ignored the rest of my post (see: multiple inputs to progress, all of which seem sensitive to "demand").

 

NVIDIA doesn't just make AGI accelerators.  They are a video game graphics card company.

And even if we pause large training runs, demand for inference of existing models will continue to increase.

If you think my model of how inputs to capabilities progress are sensitive to demand for those inputs from AGI labs is wrong, then please argue so directly, or explain how your proposed scenario is compatible with it.

 

This is me arguing directly.  

The model "all demand for hardware is driven by a handful of labs training cutting edge models" is completely implausible. It doesn't explain how we got the hardware in the first place (video games) and it ignores the fact that there exist uses for AI acceleration hardware other than training cutting-edge models.

To me, the recent hardware shortage is very strong evidence that we will not be surprised by a sharp jump in capabilities after a pause, as a result of the pause creating an overhang that eliminates all or nearly all bottlenecks to reaching ASI.

 

I don't follow the reasoning here.  Shouldn't a hardware shortage be evidence we will see a spike after a pause?

For example, suppose we pause now for 3 years, and during that time NVIDIA releases the RTX 5090, 6090, and 7090, produced on TSMC's 3nm, 2nm, and 10a processes.  Then the amount of compute available at the end of the three-year pause will be dramatically higher than it is today.  (For reference, the 4090 is 4x better at inference than the 3090.)  Roughly speaking, then, at 4x per generation over three generations, after your 3-year pause a billion-dollar investment will buy 64x as much compute (more than the difference between GPT-3 and GPT-4).

Also, a "pause" would most likely only be a cap on the largest training runs.  It is unlikely that we're going to pause all research on current LLM capabilities.  Consider that a large part of the "algorithmic progress" in LLM inference speed is driven not by SOTA models, but by hobbyists trying to get LLMs to run faster on their own devices.

This means that in addition to the 64x hardware improvement, we would also get algorithmic improvement (which has historically been faster than hardware improvement).

That means at the end of a 3-year pause, an equal-cost run would be not 64x but 4096x larger (64x from hardware times an at-least-matching 64x from algorithms).

Finally, LLMs have already reached the point where they can reasonably be expected to speed up economic growth.  Given that their economic value will become more obvious over time, the longer we pause, the more the largest actors will be willing to spend on a single run.  It's hard to put an estimate on this, but consider that historically the largest runs have been increasing at 3x/year.  Even if we conservatively estimate 2x per year, that gives us an additional 8x over the 3-year pause, bringing the total to a factor of roughly 32,000.
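
For concreteness, a minimal sketch of the compounding arithmetic, taking the figures above (4x per hardware generation, algorithmic progress that at least keeps pace, and 2x/year spending growth) as given:

```python
# Rough compounding of the three factors argued above (illustrative; the
# multipliers are the stated assumptions, not measured values).

hardware_per_generation = 4      # 4090-vs-3090 inference gain, assumed to repeat
generations = 3                  # e.g. 5090, 6090, 7090 over a 3-year pause
hardware_gain = hardware_per_generation ** generations        # 64x

algorithmic_gain = hardware_gain                              # assume algorithms at least keep pace

spend_growth_per_year = 2        # conservative vs the historical ~3x/year
years = 3
spend_gain = spend_growth_per_year ** years                   # 8x

total = hardware_gain * algorithmic_gain * spend_gain
print(hardware_gain, algorithmic_gain, spend_gain, total)     # 64 64 8 32768
```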

Even if you don't buy that "Most alignment progress will happen from studying closer-to-superhuman models", surely you believe that "large discontinuous changes are risky", and a factor of 32,000x is a "large discontinuous change".

It's not trying to address present harms, it's trying to address future harms, which are the important ones. 

 

A real AI system that kills literally everyone will do so by gaining power/resources over a period of time.  Most likely it will do so the same way existing bad agents accumulate power and resources.

Unless you're explicitly committing to the Diamondoid bacteria thing, stopping hacking is stopping AI from taking over the world.

Point taken.  "$$$" was not the correct framing (if we're specifically talking about the Gwern story).  I will edit to say "it accumulates 'resources'".

 

The Gwern story has faster takeoff than I would expect (especially if we're talking a ~GPT4.5 autoGPT agent), but the focus on money vs just hacking stuff is not the point of my essay.

  1. What plateau?  Why pause now (vs say 10 years ago)?  Why not wait until after the singularity and impose a "long reflection", when we will be in an exponentially better place to consider such questions?
  2. Singularity 5-10 years from now vs 15-20 years from now determines whether or not some people I personally know and care about will be alive.
  3. Every second we delay the singularity leads to a "cosmic waste" as millions more galaxies move permanently behind the event horizon defined by the expanding universe.
  4. Slower is not prima facie safer.  To the contrary, the primary mechanism for slowing down AGI is "concentrate power in the hands of a small number of decision makers," which in my current best guess increases risk.
  5. There is no bright line for how much slower we should go. If we accept without evidence that we should slow down AGI by 10 years, why not 50?  Why not 5000?

Sam Atis, a superforecaster, had a piece arguing against The Case Against Education.

 

If it's this piece, I would be interested to know why you found it convincing.  He doesn't address (or seem to have even read) any of Bryan's arguments.  His argument basically boils down to "but so many people who work for universities think it's good".

then that's just irrelevant. You don't need to evaluate millions of positions to backtrack (unless you think humans don't backtrack) or play chess. 

 

Humans are not transformers. The "context window" for a human is literally their entire life.

Setting up the architecture that would allow a pretrained LLM to trial and error whatever you want is relatively trivial.

 

I agree.  Or at least, I don't see any reason why not.

My point was not that "a relatively simple architecture that contains a Transformer as the core" cannot solve problems via trial and error (in fact I think it's likely such an architecture exists).  My point was that transformers alone cannot do so.

You can call it a "gut claim" if that makes you feel better.  But the actual reason is that I did some very simple math (about the window size that would be required, given the quadratic scaling of transformer attention) and concluded that, practically speaking, it was impossible.
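
A back-of-the-envelope version of that kind of calculation (the per-game token count and the number of practice games below are illustrative assumptions of mine, not the numbers originally used):

```python
# Illustrative only: how vanilla self-attention scales if the context window
# had to hold an entire trial-and-error learning history.

tokens_per_game = 8_000        # assumption: ~4,000 words of moves plus reasoning per game
games_needed = 10_000          # assumption: games of practice to get competent at chess
context_tokens = tokens_per_game * games_needed   # 8e7 tokens of history

reference_context = 8_192      # a typical 2023-era context window
# Attention cost grows with the square of the sequence length.
relative_cost = (context_tokens / reference_context) ** 2

print(f"context needed: ~{context_tokens:.0e} tokens")
print(f"attention cost vs an 8k window: ~{relative_cost:.0e}x per layer")
# Roughly 1e8 times the attention cost of an 8k-context pass: "just widen the
# window until it can learn by trial and error" is not practical for a plain
# transformer.
```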

Also, importantly, we don't know what that "relatively simple" architecture looks like.  If you look at the various efforts to "extend" transformers into general learning machines, there are a bunch of different approaches: AlphaGeometry, diffusion transformers, BabyAGI, Voyager, Dreamer, chain-of-thought, RAG, continuous fine-tuning, V-JEPA.  Practically speaking, we have no idea which of these techniques is the "correct" one (if any of them are).

In my opinion saying "Transformers are AGI" is a bit like saying "Deep learning is AGI".  While it is extremely possible that an architecture that heavily relies on Transformers and is AGI exists, we don't actually know what that architecture is.

Personally, my bet is either on a sort of generalized AlphaGeometry approach (where the transformer generates hypotheses and then GOFAI is used to evaluate them) or on Diffusion Transformers (where we iteratively de-noise a solution to a problem).  But I wouldn't be at all surprised if, a few years from now, it is universally agreed that some key insight we're currently missing marks the dividing line between Transformers and AGI.

Ok? That's how you teach anybody anything. 

 

Have you never figured out something by yourself?  The way I learned to do Sudoku was: I was given a book of Sudoku puzzles and told "have fun".

you said it would be impossible to train a chess playing model this century.

I didn't say it was impossible to train an LLM to play chess.  I said it was impossible for an LLM to teach itself to play a game of similar difficulty to chess if that game is not in its training data.

These are two wildly different things.

Obviously LLMs can learn things that are in their training data.  That's what they do.  Obviously if you give LLMs detailed step-by-step instructions for a procedure that is small enough to fit in its attention window, LLMs can follow that procedure.  Again, that is what LLMs do.

What they do not do is teach themselves things that aren't in their training data via trial-and-error.  Which is the primary way humans learn things.

Sure: 4,000 words (~8,000 tokens) to play a 9-state, 9-turn game, with the entire strategy written out by a human.  Now extrapolate that to chess, Go, or any serious game.
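
To make that extrapolation concrete, a rough sketch (assuming the 9-state, 9-turn game here is tic-tac-toe; the exponents are commonly cited order-of-magnitude complexity figures):

```python
# Commonly cited game-complexity figures (order-of-magnitude estimates), showing
# how quickly "write the whole strategy into the prompt" stops working once you
# move past a 9-square game.

games = {
    # name: (log10 of legal positions, log10 of game-tree size)
    "tic-tac-toe": (3.7, 5.4),   # ~5,478 positions, ~255,168 possible games
    "chess":       (43, 120),    # Shannon's classic estimates
    "go (19x19)":  (170, 360),
}

base = games["tic-tac-toe"][0]
for name, (log_positions, log_tree) in games.items():
    print(f"{name:12s} positions ~1e{log_positions:g}, game tree ~1e{log_tree:g}, "
          f"state space ~1e{log_positions - base:.0f} times tic-tac-toe's")
```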

And this doesn't address at all my actual point, which is that Transformers cannot teach themselves to play a game.
