mesaoptimizer

https://mesaoptimizer.com

learn math or hardware

Comments

I love the score of this comment as of this writing: -1 karma points, 23 agree points.

I think it is useful for someone to tap me on the shoulder and say, "Hey, this information you're consuming is from <this source that you don't entirely trust and have a complex causal model of>."

Enforcing social norms to prevent scapegoating also destroys information that is valuable for accurate credit assignment and for causally modelling reality. I haven't yet found a third alternative, and until I do, I'd recommend that people encourage and help others in their community not to scapegoat or lose their minds to 'tribal instincts' (as you put it), while also not throwing away valuable information.

You can care about people while also seeing their flaws and noticing how they are hurting you and others you care about.

Similarly, governmental institutions have institutional memories with the problems of major historical fuckups, in a way that new startups very much don’t.

On the other hand, institutional scars can cause what are effectively institutional trauma responses: ones that block the ability to explore, experiment, and attempt non-incremental changes or improvements to the status quo, whether to the system that makes up the institution or to the system the institution is embedded in.

There's a real and concrete issue with the number of roadblocks that seem to be in place to prevent people from making gigantic changes to the status quo. Here's a simple example: would it be possible to get a nuclear plant set up in the United States within the next decade, barring financial constraints? Seems pretty unlikely to me. What about the FDA's response to the COVID crisis? That sure seemed like a concrete example of how 'institutional memories' serve as gigantic roadblocks to our civilization's ability to orient and act fast enough to deal with the sort of issues we are facing, and will face, this century.

In the end, capital flows toward AGI companies for one reason: it is the least bottlenecked and least regulated way to multiply capital, and it seems to have the highest upside for investors. If you could modulate this flow, you wouldn't need to worry as much about the incentives and culture of these startups.

I had the impression that SPAR was focused on UC Berkeley undergrads and had therefore dismissed the idea of being a SPAR mentor or mentee. I only looked at the website recently, when someone mentioned wanting to learn from a particular SPAR mentor, and SPAR now seems to focus on the same niche as AI Safety Camp.

Did SPAR pivot in the past six months, or did I just misinterpret SPAR when I first encountered it?

Somewhat off-topic, so feel free to move this comment elsewhere.

I'm quite surprised to see that you have just shipped an MSc thesis, because I didn't expect you to be doing an MSc (or anything in traditional academia). I didn't think you needed one: you have enough career capital to work indefinitely on the things you want to work on, and to get paid well for it. I also assumed you might find academia somewhat of a waste of your time compared to doing the things you want to do.

Perhaps you could help clarify what I'm missing?

fiber at Tata Industries in Mumbai

Could you elaborate on how Tata Industries is relevant here? Based on a DDG search, the only news I can find involving Tata and AI infrastructure is that a subsidiary, TCS, is supposedly getting into the generative AI gold rush.

My thought is that I don’t see why a pivotal act needs to be that.

Okay. Why do you think Eliezer proposed that, then?

Note that I agree with your sentiment here, although my concrete argument is basically what LawrenceC wrote as a reply to this post.

Ryan, this is something of a side note, but I notice that you have a very Paul-like approach to arguments and replies on LW.

Two things stand out:

  1. You have a tendency to reply to certain posts or comments with "I don't quite understand what is being said here, and I disagree with it," or "It doesn't track with my views," or equivalent replies that seem not very useful for understanding your object-level arguments. (Although I notice that in your recent comments, you usually follow this with some elaboration on your model.)
  2. In the comment I'm replying to, you use a strategy of black-box-like abstract modeling of a situation to argue for a conclusion, one that usually involves numbers such as multipliers or percentages. (I have the impression that Paul uses this a lot; one concrete example that comes to mind is the takeoff speeds essay.) I usually consider such arguments invalid when they seem to throw away information we already have, or when they use a set of abstractions that don't feel appropriate to the information I believe we have.

I just found this interesting and plausible enough to highlight to you. It would be a moderate investment of my time to dig through your comment history and collect all these instances, but writing this comment still seemed valuable.

This is a really well-written response. I'm pretty impressed by it.
