This is a special post for quick takes by CronoDAS.



My father thinks that ASI is going to be impractical to achieve with silicon CMOS chips, because Moore's law is eventually going to hit fundamental limits - such as the thickness of individual atoms - and the hardware required to create it would end up "requiring a supercomputer the size of the Empire State Building and consuming as much electricity as all of New York City".

Needless to say, he has very long timelines for generally superhuman AGI. He doesn't rule out that another computing technology could replace silicon CMOS; he just doesn't think ASI would be practical unless that happens.

My father is usually a very smart and rational person (he's a retired professor of electrical engineering) who loves arguing, but I suspect he is seriously overestimating the computing hardware it would take to match a human brain. Would anyone here be interested in talking to him about it? Let me know and I'll put you in touch.

Update: My father later backpedaled and said he was mostly making educated guesses from limited information, that he really doesn't know very much about current AI, and that he isn't interested enough to talk to strangers online - he's in his 70s, and figures that if AI does eventually destroy the world, it probably won't be in his own lifetime. :/

You can mention Portia, which can emulate the behavior of mammalian predators using a much smaller brain.

which? https://en.wikipedia.org/wiki/Portia

Portia spiders.

I mean spiders.

This report by Joe Carlsmith, "How Much Computational Power Does It Take to Match the Human Brain?", seems relevant.

I think this is a sufficient crux; e.g., his views imply disagreement with this report.

The main issue with this report is that it doesn't seriously take into account memory bandwidth constraints (from my recollection), but I doubt this affects the bottom line that much.
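For concreteness, here is a minimal back-of-envelope sketch in Python. It assumes Carlsmith's central estimate of roughly 1e15 FLOP/s for brain-matching compute (his report puts wide error bars around this) and ballpark figures for a single modern datacenter GPU - about 1e15 dense FP16 FLOP/s and ~3 TB/s of memory bandwidth for an H100-class part. The bandwidth figure is where the memory-constraint caveat above bites.

```python
# Back-of-envelope comparison of brain-matching compute vs. one modern GPU.
# All figures are loose assumptions for illustration, not measurements.

brain_flops = 1e15        # FLOP/s to match a human brain (Carlsmith's central estimate; wide error bars)
gpu_flops = 1e15          # dense FP16 FLOP/s of one H100-class GPU (approximate)
gpu_bandwidth = 3e12      # bytes/s of HBM bandwidth on that GPU (approximate)

gpus_needed = brain_flops / gpu_flops
print(f"GPUs needed at the central estimate: {gpus_needed:.0f}")

# Roofline-style caveat: the GPU only reaches its peak FLOP/s if the workload
# performs at least this many FLOP per byte moved from memory; below that,
# memory bandwidth (not arithmetic) is the binding constraint.
ridge_point = gpu_flops / gpu_bandwidth
print(f"Arithmetic intensity needed to stay compute-bound: ~{ridge_point:.0f} FLOP/byte")
```

Under these assumptions, even if memory bandwidth inflates the effective GPU count by an order of magnitude or two, the answer is a rack of hardware rather than a skyscraper - which is exactly the crux identified above.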

"requiring a supercomputer the size of the Empire State Building and consuming as much electricity as all of New York City"

Why does he think that is unlikely to occur? Such things seem on the table; existing big supercomputers are very, very big already. I've asked several search engines and AIs, and none seem to be able to get to the point about exactly how big a datacenter housing one of these would be, but Claude estimates:
 

Frontier: 5,000-8,000 square feet (70% confidence)
Eagle: 6,000-9,000 square feet (70% confidence)
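As a rough sanity check on the scale being discussed, here is a hedged power-budget sketch in Python. The NYC average load (~5 GW), the per-GPU power including cooling (~1 kW), and the per-GPU throughput (~1e15 FLOP/s) are all loose round-number assumptions, not measured figures.

```python
# Rough scale of a machine that "consumes as much electricity as all of New York City".
# Every input below is a loose assumption chosen for illustration.

nyc_average_load_w = 5e9     # ~5 GW average electrical load for NYC (assumption)
watts_per_gpu = 1_000        # one GPU plus cooling/overhead (assumption)
flops_per_gpu = 1e15         # dense FP16 throughput of one modern GPU (approximate)
brain_flops = 1e15           # Carlsmith's central brain-matching estimate

gpus_powered = nyc_average_load_w / watts_per_gpu
total_flops = gpus_powered * flops_per_gpu
brain_equivalents = total_flops / brain_flops

print(f"GPUs powered: {gpus_powered:,.0f}")          # ~5,000,000
print(f"Total compute: {total_flops:.1e} FLOP/s")    # ~5e21
print(f"Brain-equivalents at the central estimate: {brain_equivalents:,.0f}")
```

Under these assumptions, a city-scale power budget corresponds to millions of brain-equivalents of raw compute, so the disagreement is really about how much compute one brain-equivalent needs, not about whether a facility of that size could exist.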

I’d be delighted to talk about this. I am of the opinion that existing frontier models are within an order of magnitude of a human mind, with existing hardware. It will be interesting to see how a sensible person gets to a different conclusion.

I am also trained as an electrical engineer, so we’re already thinking from a common point of view.

I brought it up with him again, and he backpedaled: he said he was mostly making educated guesses from limited information, that he really doesn't know very much about current AI, and that he isn't interested enough to talk to strangers online - he's in his 70s, and figures that if AI does eventually destroy the world, it probably won't be in his own lifetime. :/

He might also argue, "Even if you can match a human brain with a billion-dollar supercomputer, it still takes a billion-dollar supercomputer to run your AI, and you can make, train, and hire an awful lot of humans for a billion dollars."
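That cost comparison can also be put in rough numbers. The amortization period and the fully loaded cost per employee below are illustrative assumptions, not figures from the thread.

```python
# The "you can hire a lot of humans for a billion dollars" argument in rough numbers.
# Amortization period and per-employee cost are illustrative assumptions.

supercomputer_cost = 1e9       # one-time hardware cost from the argument above ($1B)
amortization_years = 5         # assumed useful life of the hardware
cost_per_employee_year = 2e5   # assumed fully loaded cost of one skilled employee per year

machine_cost_per_year = supercomputer_cost / amortization_years
equivalent_headcount = machine_cost_per_year / cost_per_employee_year

print(f"Machine cost per year: ${machine_cost_per_year:,.0f}")       # $200,000,000
print(f"Equivalent head-count: {equivalent_headcount:,.0f} people")  # ~1,000
```

Under these assumptions the machine costs roughly as much per year as a thousand employees, so the argument hinges on how many brain-equivalents, and at what speed, that machine actually runs - which is the estimate in dispute above.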