The question: Within the last 5-10 years, is there any person or group that has openly increased their AGI timelines?
Ideally, they would have at least two different estimates (years apart?), with the most recent estimate showing that they think AGI is further into the future than the prior estimate(s).  

Background: Whenever I see posts about AGI timelines, they all seem to be decreasing (or staying the same, with methodological differences making some comparisons difficult). I wondered if I'm missing some subset of people or forecasters who have looked at recent developments and concluded that AGI will come later, not sooner. Another framing: am I wrong if I say, "Almost everyone is decreasing their timelines and no one is increasing them"?


4 Answers

lalaithion


No public estimates, but the difficulty of self-driving cars definitely pushed my AGI timelines back. In 2018 I predicted full self-driving by 2023; now that's looking unlikely. Yes, text and image understanding and generation have improved a lot since 2018, but instead of shortening my estimates, that's merely rotated which capabilities will come online earlier and which will wait until AGI.

However, I expect some crazy TAI in the next few years. I fully expect "solve all the Millennium Problems" to be doable without AGI, as well as much of coding/design/engineering work. I also think it's likely that text models will be able to do the work of a paralegal/research assistant/copywriter without AGI.

trevor


My timelines oscillated back and forth as I gained new information about last year's global crackdown on the tech industry, because I kept encountering unexpected new ways that governments could crack down on all these AI-relevant companies, even though they had been on a pedestal as the future of the economy for years. Losing their untouchable status definitely lengthened my AI timelines.

But later on, I started to see some of the ways that the crackdowns refused to even gently disturb tech industries, particularly those related to AI, and that made me wonder whether certain kinds of crackdowns are even remotely possible. Ultimately I had to leave it ambiguous and defer to people more familiar with day-to-day industry affairs than I am (I mostly work on a pretty niche issue).

Global crackdown on the tech industry?

jacob_cannell


In 2015, when I wrote my universal learning machine post, I roughly expected AGI in about 10 years, so 2025. My latest, more detailed estimate has a median around 2030.

If I then hard meta-update (with the simplest model) on those prediction updates, I should predict I'll update to 2032.5 in 2029 and finally to 2033.75 in 2032.
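To spell out that arithmetic (taking the "simplest model" to mean each successive update is half the size of the previous jump, which matches the numbers above: 5 years, then 2.5, then 1.25), the predictions form a geometric series:

\[ 2025 + 5\sum_{k=0}^{\infty}\left(\tfrac{1}{2}\right)^k = 2025 + 10 = 2035 \]

so the sequence 2025 → 2030 → 2032.5 → 2033.75 → … converges to a limiting prediction of 2035.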

Metaculus was created in 2015 and the AGI questions didn't come until later, but it's pretty clear that my 2015 prediction of AGI in 2025 would have been wildly shorter than the median had the question existed then.

My current prediction of an AGI median of 2030, or perhaps 2034 depending on which specific definition one uses, is closer to the current Metaculus consensus.

Garak


My intuition about the most promising path to AGI puts me in the Gary Marcus/Ben Goertzel neuro-symbolic camp.

As the years passed, so much money went to the pure-learning, neural-networks-only, scaling-hypothesis branch of AI, and imo far too little went to improving foundational methods, so I grew more sceptical of rapid progress towards AGI.

About four years ago there was DARPA funding for the short-term (2 yrs?) discovery of "next generation" AI methods, with a high budget (maybe 1.5 billion USD). I don't know of anything with substance that came from it.
Also, things like Hinton's capsule networks seemed interesting to me, but again nothing much came of them.
Imo the same goes for research at the big players like DeepMind and OpenAI: there was a time, maybe around 2018, when every other week or so some apparent breakthrough was published. What, again, were the numerous, big, original breakthroughs of the past 2 yrs? They did show some remarkable scaling/engineering successes, but that looks rather underwhelming to me, especially considering how much they increased their staff and funding.
All in all, it seems to me that over the past 5-7 yrs there has been some very interesting basic exploration of AI techniques and applications, but no/little more ambitious project (Gato might be considered a prominent exception) to bring these techniques together into one architecture for (proto) AGI. So it's still mostly individual piecemeal demos.
What would, to me, be indicative of goal-directed progress towards AGI is an architecture for a proto-AGI to which multiple parties could contribute and iteratively improve on (like SingularityNET and OpenCog). But the big players have imo not come forth with anything like that so far, and the interesting small-scale initiatives like OpenCog operate on a shoestring budget.

To end on a positive note: the timeline pushbacks and outright cancellations of self-driving projects were surely disappointing. The only good that might come from them (it's more wishful thinking than expectation) is that the self-driving industry could develop general-purpose methods that would benefit many AI projects (see Tesla Optimus). And because this industry is well funded, it could progress rapidly.
 

7 comments
[anonymous]

I increased my AI timelines substantially. Back in 2016 I felt AGI was so near that I founded a startup aiming for safe AGI before 2030. After I spent a couple of years looking deeply into this topic, I realized I was wrong and shut down the project.

Relative to my expectations in 2018, AI progress has been even more underwhelming since then. Now I see AGI as so far away that I can no longer come up with a model for sensible timeline estimates.

What are your reasons for AGI being so far away?

I'd love to see a post with your reasoning.

Certainly most people who predicted AGI before 2022 have since increased their timelines.

Nah... I still believe that the future AGI will invent a time machine and then invent itself before 2022.

Hahaha. With enough creativity, one never has to change their mind ;)

GPT-3 has been an AGI the entire time, and it's frustrating that people move the goalposts; it should have been obvious that AGI comes before HLMI (human-level machine intelligence), because generality is obviously easier than human level, and we have had proof for a while. But anyway, whatever, yes, we don't have superhuman-at-everything AI yet, because that was obviously going to take a lot longer. My expectation was that we'd be able to reach human level this year, and afaict the only reason we haven't is that DeepMind hasn't made a Gato-PaLM-matmul-speedup Rainbow agent yet.

Though of course, causality is hard, and even human level isn't superintelligence. Robotics is a lot harder than mere "do everything a human does" generality: you also have to be able to do everything a human does in a physical context, which is a heavily training-data-limited problem, and requires human-level generality of from-scratch learning, not merely human-level generality times human-level capability.

[This comment is no longer endorsed by its author]