Today, the Machine Intelligence Research Institute is launching a new forum for research discussion: the Intelligent Agent Foundations Forum! It's already been seeded with a bunch of new work on MIRI topics from the last few months.
We've covered most of the what, why, and how on the forum's new welcome post and the How to Contribute page, but this post is an easy place to comment if you have further questions (or if, maths forbid, there are technical issues with the forum instead of on it).
But before that, go ahead and check it out!
(Major thanks to Benja Fallenstein, Alice Monday, and Elliott Jin for their work on the forum code, and to all the contributors so far!)
EDIT 3/22: Jessica Taylor, Benja Fallenstein, and I wrote forum digest posts summarizing and linking to recent work (on the IAFF and elsewhere) on reflective oracle machines, on corrigibility, utility indifference, and related control ideas, and on updateless decision theory and the logic of provability, respectively! These are pretty excellent resources for reading up on those topics, in my biased opinion.
Seriously?
Experience tells us to discount predictions of imminent AGI, to the point where only the strongest of reasons can overcome this. If AIXI represented a large enough increase in understanding of what we're even talking about, that could be part of a strong argument. But as I said in the great-grandparent, it doesn't.
Past predictive accuracy of expert opinion on the subject of AI superintelligence tells us nothing about what to infer from current predictions. If superintelligent AI were to actually arrive tomorrow, or 50 years from now, or 150 years from now, there would be no discernible difference in present expert opinion. On subjects like this, expert opinion is totally uncorrelated with reality. So no, experience tells us nothing about predictions of imminent or non-imminent AGI. We can thank our own Stuart Armstrong for this contribution.
But hey, let's take 2070 at face value. That'd be great news! We could completely forget about the existential threat due to unfriendly AI. After all, it'd be decades after even pessimistic estimates for whole-brain emulation[1] enabling the first uploaded human intelligences. And a decade or so after atomically precise manufacturing[2] gives us the tools to do in-vivo[3] intelligence enhancement. By 2070 we'd already be in a world of human-derived superintelligences, so thankfully we needn't fret over our own biological limitations preventing us from keeping pace with superintelligent AI.
Or is that not the future you imagined?