The Open Philanthropy Project recently bought a seat on the board of the billion-dollar nonprofit AI research organization OpenAI for $30 million. Some people have said that this was surprisingly cheap, since $30 million is only 3% of OpenAI's eventual endowment.
To the contrary, this seat on OpenAI's board is very expensive, not because the nominal price is high, but precisely because it is so low.
If OpenAI hasn't extracted an amount of money that is meaningful to it, then it follows that it is getting something other than money out of the deal. The obvious thing it is getting is buy-in for OpenAI as an AI safety and capacity venture. In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, taking the position of a material supporter of the project. The important thing is mutual validation, and a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.
By my count, the grant is larger than all the Open Philanthropy Project's other AI safety grants combined.
(Cross-posted at my personal blog.)
I don't think there is consensus among technical advisors on what directions are most promising. Paul has written substantially about his preferred approach (see here for instance), and I've started to do the same, although so far I've been mostly talking about obstacles rather than positive approaches. You can see some of my writing here and here, and my thoughts in slide form here, although those slides are aimed at ML experts.
I haven't seen that either your approach or Paul's necessarily conflicts with MIRI's. There may be some difference of opinion on which is more likely to be feasible, but given that Paul works closely with MIRI researchers and they seem to have a favorable opinion of him, I would be surprised if it were really true that OpenPhil's technical advisors were that pessimistic about MIRI's prospects. If they aren't that pessimistic, then it would imply either that Holden is acting somewhat against the advice of his advisors, or that he has strong priors against MIRI that were not overcome by the information he was receiving from them.