The Open Philanthropy Project recently bought a seat on the board of the billion-dollar nonprofit AI research organization OpenAI for $30 million. Some people have said that this was surprisingly cheap, because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.
To the contrary, this seat on OpenAI's board is very expensive, not because the nominal price is high, but precisely because it is so low.
If OpenAI hasn’t extracted a meaningful-to-it amount of money, then it follows that it is getting something other than money out of the deal. The obvious thing it is getting is buy-in for OpenAI as an AI safety and capacity venture. In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project. The important thing is mutual validation, and a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.
By my count, the grant is larger than all the Open Philanthropy Project's other AI safety grants combined.
(Cross-posted at my personal blog.)
Holden stated his reasons at length in http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/
True, but 2012 was long enough ago that many of the concerns he had then may no longer apply. In addition, based on my understanding of MIRI's current approach and their arguments for it, many of his concerns either rest on fundamental misunderstandings or target viewpoints that have since changed significantly within MIRI. For example, I have a hard time wrapping my head around this objection:
This seems to be precisely the same concern expressed by MIRI, and one of the fundamental arguments on which their Agent Foundations approach is based; in particular, it is what they call the Value Specification problem. And I believe Yudkowsky has used this as a primary argument for AI safety for quite a while, very likely since before 2012.
There is also the "tool/agent" distinction cited as objection 2, which I think is well addressed in MIRI's publications as well as in Bostrom's Superintelligence, where it's argued that the distinction is not so clean (and blurs further the more intelligent the "tool AI" becomes).
Given that MIRI has had years to refine its views and arguments, and has since gone through a restructuring and hired quite a few new researchers, how likely is it that Holden still holds the objections he stated in the 2012 review?