Today, the Machine Intelligence Research Institute is launching a new forum for research discussion: the Intelligent Agent Foundations Forum! It's already been seeded with a bunch of new work on MIRI topics from the last few months.
We've covered most of the what, why, and how on the forum's new welcome post and the How to Contribute page, but this post is an easy place to comment if you have further questions (or if, maths forbid, there are technical issues with the forum instead of on it).
But before that, go ahead and check it out!
(Major thanks to Benja Fallenstein, Alice Monday, and Elliott Jin for their work on the forum code, and to all the contributors so far!)
EDIT 3/22: Jessica Taylor, Benja Fallenstein, and I wrote forum digest posts summarizing and linking to recent work (on the IAFF and elsewhere) on reflective oracle machines, on corrigibility, utility indifference, and related control ideas, and on updateless decision theory and the logic of provability, respectively! These are pretty excellent resources for reading up on those topics, in my biased opinion.
Sorry you feel that way, but it's kind of essential that the forum not be about the latest AI techniques, but about groundwork for the kind of safety research that could stand up to smarter-than-human AI. There are plenty of great places on the Internet for discussing those other topics!
The problem is that you think those are two separate things: that the safety research which could stand up to smarter-than-human artificial intelligence will arise separately from the work currently being done on artificial intelligence.
And for what it's worth, there really isn't a place to discuss practical safety in artificial intelligence.