habryka

Running Lightcone Infrastructure, which runs LessWrong. You can reach me at habryka@lesswrong.com

Sequences

A Moderate Update to your Artificial Priors
A Moderate Update to your Organic Priors
Concepts in formal epistemology

Comments

I don't think this essay is commenting on AI optimists in general. It is commenting on some specific arguments that I have seen around, but I don't really see how it relates to the recent stuff that Quintin, Nora, or you have been writing (and I would be reasonably surprised if Eliezer intended it to apply to that).

You can also leave it up to the reader to decide whether and when the analogy discussed here applies. I could spend a few hours digging up people engaging in reasoning very close to what is discussed in this article, though by default I am not going to.

They still make a lot less than they would if they optimized for profit (that said, I think most "safety researchers" at big labs are safety researchers in name only, and I don't think anyone would philanthropically pay for their labor; even if they did, they would still make the world worse according to my model, though others of course disagree with this).


I think people who give up large amounts of salary to work in jobs that other people are willing to pay for from an impact perspective should totally consider themselves to have done good comparable to donating the difference between their market salary and their actual salary. This applies to approximately all safety researchers. 

Seems like the thing to do is to have a program that happens after MATS, not to extend MATS. I think in general you want sequential filters for talent, and ideally the early stages are as short as possible (my guess is indeed that MATS should be a bit shorter).

I... really don't see any clickbait here. If anything these titles feel bland to me (and indeed I think LW users could do much better at making titles that are more exciting, or that more clearly highlight a good value proposition for the reader, though karma makes up for a lot).

Like, for god's sake, the top title here is "Social status part 1/2: negotiations over object-level preferences". I feel like that title is at the very bottom of potential clickbaitiness, given the subject matter.

It's really hard to get any kind of baseline here, and my guess is it differs hugely between different populations, but my guess (based on doing informal Fermi estimates here a bunch of times over the years) would be a lot lower than the average for the population, at least because of demographic factors, and then probably some extra.

I was talking about research scientists here (though my sense is that 5 years of being a research engineer is still comparably good for gaining research skills as most PhDs, and probably somewhat better). I also had a vague sense that being a research engineer at Deepmind was particularly bad for gaining research skills (compared to the same role at OpenAI or Anthropic).

Yes. Besides Deepmind, none of the industry labs require PhDs, and I think the Deepmind requirement has also been loosening a bit.

"and academia is the only system for producing high quality researchers that is going to exist at scale over the next few years"

To be clear, I am not happy about this, but I would take bets that industry labs will produce and train many more AI alignment researchers than academia, so this statement seems relatively straightforwardly wrong (and of course we can quibble over the quality of researchers produced by different institutions, but my guess is that the industry-trained researchers will perform well at least by your standards, if not mine).

I don't think this essay is intended to make generalizations to all "Empiricists", scientists, and "Epistemologists". It's just using those names as shorthand for three types of people (whose existence seems clear to me, though of course their character does not reflect everyone who might identify under those labels).
