Dave Orr

Google AI PM; Foundation board member

Comments

Dave Orr · 13d

This is a great post. I knew that at the top end of the income distribution in the US people have more kids, but didn't understand how robust the relationship seems to be.

I think the standard evbio explanation here would ride on status -- people at the top of the tribe can afford to expend more resources on kids, and also have more access to opportunities to have kids. That would predict that we wouldn't see a radical change as everyone got richer -- the curve would slide right and the top end of the distribution would have more kids, but not necessarily everyone else.

But the GDP per capita graphs, I think, are evidence against that view. It looks like the curve is a lot flatter when fertility is rising than when it's dropping, but if this holds into the future I really don't worry too much. We're on the cusp of all getting a lot richer, or else AI will kill us all anyway.

Heh, that's why I put "strong" in there!

One big one is that the first big spreading event happened at a wet market where people and animals are in close proximity. You could check densely populated places within some proximity of the lab to figure out how surprising it is that it happened at a wet market, but certainly animal spillover is much more likely where there are animals.

Edit: also it's honestly kind of a bad sign that you aren't aware of evidence that tends against your favored explanation, since that mostly happens during motivated reasoning.

We're here to test the so-called Tower of Babel theory. What if, due to some bizarre happenstance, humanity had thousands of languages that change all the time, instead of a single universal language like all known intelligent species?

Answer by Dave Orr · Dec 30, 2023

You should ignore EY-style "no future" takes when thinking about your future. This is because if the world is about to end, nothing you do will matter much. But if the world isn't about to end, what you do might matter quite a bit -- so you should focus on the latter.

One quick question to ask yourself is: are you more likely to have an impact on technology, or on policy? Either one is useful. (If neither seems great, then consider earning to give, or just find a way to add value in society in other ways.)

Once you figure that out, the next step is almost certainly building relevant skills, knowledge, and networks. Connect with senior folks in relevant roles, ask around and otherwise try to figure out what skills are useful, and try to get some experience by working or volunteering with great people or organizations.

Do that for a while and I bet some gaps and opportunities will become pretty clear. 😀

Dave Orr · 4mo

I agree that it's bad to raise a child in an environment of extreme anxiety. Don't do that.

Also try to avoid being very doomy and anxious in general; it's not a healthy state to be in. (Easier said than done, I realize.)

Answer by Dave Orr · Dec 25, 2023

I think you should have a kid if you would have wanted one without recent AI progress. Timelines are still very uncertain, and strong AGI could still be decades away. Parenthood is strongly value-creating and extremely rewarding (if hard at times), and that's true in many, many worlds.

In fact it's hard to find probable worlds where having kids is a really bad idea, IMO. If we solve alignment and end up in AI utopia, having kids is great! If we don't solve alignment and EY is right about what happens in a fast takeoff world, it doesn't really matter if you have kids or not.

In that sense, it's basically a freeroll, though of course there are intermediate outcomes. I don't immediately see any strong argument in favor of not having kids if you would otherwise want them.

Answer by Dave Orr · Nov 21, 2023

The thing you're missing is called instruction tuning. You gather a set of prompt/response pairs and fine-tune the model on that data. Do it right and you have a chatty model.
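A minimal sketch of what that looks like, assuming the Hugging Face transformers library and a small causal LM ("gpt2" here as a stand-in); the toy prompt/response pairs and hyperparameters are illustrative, not a recipe:

```python
# Instruction-tuning sketch: fine-tune a causal LM on prompt/response pairs.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy prompt/response pairs; a real instruction-tuning set has many thousands.
pairs = [
    ("Summarize: The cat sat on the mat.", "A cat sat on a mat."),
    ("Translate to French: Hello", "Bonjour"),
]

class PairDataset(torch.utils.data.Dataset):
    def __init__(self, pairs):
        # Concatenate each prompt and response into one training sequence.
        texts = [f"{p}\n{r}{tokenizer.eos_token}" for p, r in pairs]
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=256, return_tensors="pt")

    def __len__(self):
        return self.enc["input_ids"].shape[0]

    def __getitem__(self, i):
        ids = self.enc["input_ids"][i]
        labels = ids.clone()
        labels[self.enc["attention_mask"][i] == 0] = -100  # ignore padding in the loss
        return {"input_ids": ids,
                "attention_mask": self.enc["attention_mask"][i],
                "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-demo", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=PairDataset(pairs),
)
trainer.train()
```

In practice people usually also mask out the prompt tokens from the loss (so the model is only trained to produce the response) and train on far more data, but the basic shape is just this.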

Dave Orr · 5mo

Thanks, Zvi; these roundups are always interesting.

I have one small suggestion, which is that you limit yourself to one Patrick link per post. He's an interesting guy, but his area is quite niche, and if people want his fun stories about banking systems they can just follow him. I suspect that people who care about those things already follow him, and people who don't aren't that interested in reading four items from him here.

Dave Orr · 6mo

I feel like a lot of the issues in this post come down to the published RSPs not being very detailed, with most of the work to flesh them out not yet done. E.g. the comparison to other risk policies highlights a lack of detail in various ways.

I think it takes a lot of time and work to build out something with lots of analysis and detail -- years of work, potentially, to really do it right. And yes, much of that work hasn't happened yet.

But I would rather see labs post the work they are doing as they do it, so people can give feedback and input. If labs do so, the frameworks will necessarily be much less detailed than they would be if we waited until they were complete.

So it seems to me that we are in a messy process that's still very early days. Feedback about what is missing and what a good final product would look like is super valuable, thank you for your work doing that. I hope the policy folks pay close attention.

But I think your view that RSPs are the wrong direction is misguided, or at least I don't find your reasons to be persuasive -- there's much more work to be done before they're good and useful, but that doesn't mean they're not valuable. Honestly I can't think of anything much better that could have been reasonably done given the limited time and resources we all have.

I think your comments on the name are well taken. I think your ideas about disclaimers and such are basically impossible for a modern corporation, unfortunately. I think your suggestion about pushing for risk management in policy is the clear next step, and it's only enabled by the existence of an RSP in the first place.

Thanks for the detailed and thoughtful effortpost about RSPs!
