Steven Byrnes

I'm an AGI safety / AI alignment researcher in Boston with a particular focus on brain algorithms. Research Fellow at Astera. See https://sjbyrnes.com/agi.html for a summary of my research and sorted list of writing. Physicist by training. Email: steven.byrnes@gmail.com. Leave me anonymous feedback here. I’m also at: RSS feed, Twitter, Mastodon, Threads, Bluesky, GitHub, Wikipedia, Physics-StackExchange, LinkedIn

Sequences

Valence
Intro to Brain-Like-AGI Safety

Comments

@nostalgebraist bites that bullet here:

“Reinforcement learning” (RL) is not a technique.  It’s a problem statement, i.e. a way of framing a task as an optimization problem, so you can hand it over to a mechanical optimizer.

What’s more, even calling it a problem statement is misleading, because it’s (almost) the most general problem statement possible for any arbitrary task.  If you try to formalize a concept like “doing a task well,” or even “being an entity that acts freely and wants things,” in the most generic terms with no constraints whatsoever, you end up writing down “reinforcement learning.”

and so does Russell & Norvig (3rd edition):

Reinforcement learning might be considered to encompass all of AI.

…That’s not my perspective though.

For my part, there’s a stereotypical core of things-I-call-RL which entails:

  • (A) There’s a notion of the AI’s outputs being better or worse
  • (B) But we don’t have ground truth (even after-the-fact) about what any particular output should ideally have been
  • (C) Therefore the system needs to do some kind of explore-exploit.

By this (loose) definition, both model-based and model-free RL are central examples of “reinforcement learning”, whereas LLM self-supervised base models are not reinforcement learning (cf. (B), (C)), nor are ConvNet classifiers trained on ImageNet (ditto), nor are clustering algorithms (cf. (A), (C)), nor is A* or any other exhaustive search within a simple deterministic domain (cf. (C)), nor are VAEs (cf. (C)), etc.

(A,B,C) is generally the situation you face if you want an AI to win at videogames or board games, control bodies while adapting to unpredictable injuries or terrain, write down math proofs, design chips, found companies, and so on.
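
For concreteness, here’s a minimal toy sketch of the (A,B,C) situation: an epsilon-greedy multi-armed bandit. The names (`pull_arm` etc.) are made up for illustration. Each action gets a scalar reward (A), the agent never learns what the best action would have been (B), so it has to balance exploring against exploiting (C).

```python
import random

def pull_arm(arm: int) -> float:
    """Stand-in environment: noisy scalar reward whose mean depends on the arm."""
    return random.gauss([0.1, 0.5, 0.9][arm], 1.0)

def epsilon_greedy_bandit(n_arms: int = 3, steps: int = 1000, eps: float = 0.1):
    counts = [0] * n_arms
    values = [0.0] * n_arms  # running mean reward per arm
    for _ in range(steps):
        if random.random() < eps:          # (C) sometimes explore a random arm...
            arm = random.randrange(n_arms)
        else:                              # ...otherwise exploit the best-looking arm
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = pull_arm(arm)             # (A) outputs are better or worse (scalar reward)
        counts[arm] += 1                   # (B) no ground-truth label for the ideal arm,
        values[arm] += (reward - values[arm]) / counts[arm]  # just the observed reward
    return values

print(epsilon_greedy_bandit())
```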

This (loose) definition of RL connects to AGI safety because (B) and (C) make it harder to predict the outputs of an RL system. E.g. we can plausibly guess that an LLM base model, given internet-text-like prompts, will continue in an internet-text-typical way. Granted, given OOD prompts, it’s harder to say things a priori about the output. But that’s nothing compared to e.g. AlphaZero or AlphaStar, where we’re almost completely in the dark about what the trained model will do in any nontrivial game-state whatsoever. (…Then extrapolate the latter to human-level AGIs acting in the real world!)

(That’s not an argument that “we’re doomed if AGI is based on RL”, but I do think that a very RL-centric AGI would need tailored approaches to thinking about safety and alignment that wouldn’t apply to LLMs; and I likewise think that a massive increase in the scope and centrality of LLM-related RL (beyond the RLHF status quo) would raise new (and concerning) alignment issues, different from the ones we’re used to with LLMs today.)

Yeah, most of the time I’ll open my to-do list and just look at one of the couple leftmost columns, and the column has maybe 3 items, and then I’ll pick one and do it (or pick a few and schedule them for that same day).

Occasionally I’ll look at a column farther to the right and see if any of its items ought to be moved left or right. The farther right the column, the less often I check it.

short lists fit much better into working memory

IMO the main point of a to-do list is to not have the to-do list in working memory. The only thing that should be in working memory is the one thing you're actually supposed to be focusing on and doing, right now. Right?

Or if you're instead in the mode of deciding what to do next, or making a schedule for your day, etc., then that's different, but working memory is still kinda irrelevant because presumably you have your to-do list open on your computer, right in front of your eyes, while you do that, right?

easier and more comprehensive approach would be to use a text editor with proper version control and diff features, and then to name particular versions before making major changes.

Is that what you do? It's not a good fit to my typical workflow. But I'm definitely in favor of trying different things and seeing what works best for you. :)

It’s not just the title but also (what I took to be) the thesis statement “Comparisons are often useful, but in this case, I think the disanalogies are much more compelling than the analogies”, and also “While I think the disanalogies are compelling…” and such.

For comparison:

Question: Are the analogies between me (Steve) and you (David) more compelling, or less compelling, than the disanalogies between me and you?

I think the correct answer to that question is “Huh? What are you talking about?” For example:

  • If this is a conversation about gross anatomy or cardiovascular physiology across the animal kingdom, then you and I are generally extremely similar.
  • If this is a conversation about movie preferences or exercise routines, then I bet you and I are pretty different.

So at the end of the day, that question above is just meaningless, right? We shouldn’t be answering it at all.

I don’t think disanalogies between A and B can be “compelling” or “uncompelling”, any more than mathematical operations can be moist or dry. I think a disanalogy between A and B may invalidate a certain point that someone is trying to make by analogizing A to B, or it may not invalidate it, but that depends on what the point is.

FWIW I do think "Biorisk is Often an Unhelpful Analogy for AI Risk," or "Biorisk is Misleading as a General Analogy for AI Risk" are both improvements :)

I agree that in the limit of an extremely structured optimizer, it will work in practice, and it will wind up following strategies that you can guess to some extent a priori.

I also agree that in the limit of an extremely unstructured optimizer, it will not work in practice; but if it did, it would find out-of-the-box strategies that are difficult to guess a priori.

But I disagree that there’s no possible RL system in between those extremes where you can have it both ways.

On the contrary, I think it’s possible to design an optimizer which is structured enough to work well in practice, while simultaneously being unstructured enough that it will find out-of-the-box solutions very different from anything the programmers were imagining.

Examples include:

  • MuZero: you can’t predict a priori what chess strategies a trained MuZero will wind up using by looking at the source code. The best you can do is say “MuZero is likely to use strategies that lead to its winning the game”.
  • “A civilization of humans” is another good example: I don’t think you can look at the human brain’s neural architecture, loss functions, etc., and figure out a priori that a civilization of humans will wind up inventing nuclear weapons. Right?

This book argues (convincingly IMO) that it’s impossible to communicate, or even think, anything whatsoever, without the use of analogies.

  • If you say “AI runs on computer chips”, then the listener will parse those words by conjuring up their previous distilled experience of things-that-run-on-computer-chips, and that previous experience will be helpful in some ways and misleading in other ways.
  • If you say “AI is a system that…” then the listener will parse those words by conjuring up their previous distilled experience of so-called “systems”, and that previous experience will be helpful in some ways and misleading in other ways.

Etc. Right?

If you show me an introduction to AI risk for amateurs that you endorse, then I will point out the “rhetorical shortcuts that imply wrong and misleading things” that it contains—in the sense that it will have analogies between powerful AI and things-that-are-not-powerful-AI, and those analogies will be misleading in some ways (when stripped from their context and taken too far). This is impossible to avoid.

 

Anyway, if someone says:

When it comes to governing technology, there are some areas, like inventing new programming languages, where it’s awesome for millions of hobbyists to be freely messing around; and there are other areas, like inventing new viruses, or inventing new uranium enrichment techniques, where we definitely don’t want millions of hobbyists to be freely messing around, but instead we want to be thinking hard about regulation and secrecy. Let me explain why AI belongs in the latter category…

…then I think that’s a fine thing to say. It’s not a rhetorical shortcut, rather it’s a way to explain what you’re saying, pedagogically, by connecting it to the listener’s existing knowledge and mental models.

This is an 800-word blog post, not 5 words. There’s plenty of room for nuance.

The way it stands right now, the post is supporting conversations like:

Person A: It’s not inconceivable that the world might wildly under-invest in societal resilience against catastrophic risks even after a “warning shot” for AI. Like for example, look at the case of bio-risks—COVID just happened, so the costs of novel pandemics are right now extremely salient to everyone on Earth, and yet, (…etc.).

Person B: You idiot, bio-risks are not at all analogous to AI. Look at this blog post by David Manheim explaining why.

Or:

Person B: All technology is always good, and its consequences are always good, and spreading knowledge is always good. So let’s make open-source ASI asap.

Person A: If I hypothetically found a recipe that allowed anyone to make a novel pandemic using widely-available equipment, and then I posted it on my blog along with clearly-illustrated step-by-step instructions, and took out a billboard in Times Square directing people to the blog post, would you view my actions as praiseworthy? What would you expect to happen in the months after I did that?

Person B: You idiot, bio-risks are not at all analogous to AI. Look at this blog post by David Manheim explaining why.

Is this what you want? I.e., are you on the side of Person B in both these cases?

Right, and that wouldn’t apply to a model-based RL system that could learn an open-ended model of any aspect of the world and itself, right?

I think your “it is nearly impossible for any computationally tractable optimizer to find any implementation for a sparse/distant reward function” should have some caveat that it only clearly applies to currently-known techniques. In the future there could be better automatic-world-model-builders, and/or future generic techniques to do automatic unsupervised reward-shaping for an arbitrary reward, such that AIs could find out-of-the-box ways to solve hard problems without handholding.
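
(For reference, the kind of thing I mean by “reward-shaping”: the classic potential-based shaping trick (Ng et al. 1999) densifies a sparse reward without changing which policies are optimal. A minimal sketch below, where `State` and `distance_to_goal` are hypothetical stand-ins for whatever progress measure you have; the speculation above is about learning that progress measure automatically rather than hand-writing it.)

```python
from dataclasses import dataclass

GAMMA = 0.99

@dataclass
class State:
    distance_to_goal: float  # hypothetical feature; stands in for any progress measure

def potential(state: State) -> float:
    # Hand-written heuristic guess at "progress"; the speculation is about
    # learning something like this automatically, for an arbitrary reward.
    return -state.distance_to_goal

def shaped_reward(sparse_reward: float, state: State, next_state: State) -> float:
    # r' = r + gamma*Phi(s') - Phi(s): potential-based shaping, which preserves
    # which policies are optimal under the original sparse reward.
    return sparse_reward + GAMMA * potential(next_state) - potential(state)

# A step that gets closer to the goal earns a positive shaped bonus
# even while the sparse reward is still zero.
print(shaped_reward(0.0, State(distance_to_goal=10.0), State(distance_to_goal=9.0)))
```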

I have to admit that I'm struggling to find these arguments at the moment

I sometimes say things kinda like that, e.g. here.

All the examples of "RL" doing interesting things that look like they involve sparse/distant reward involve enormous amounts of implicit structure of various kinds, like powerful world models.

I guess when you say “powerful world models”, you’re suggesting that model-based RL (e.g. MuZero) is not RL but rather “RL”-in-scare-quotes. Was that your intention?

I’ve always thought of model-based RL as a central subcategory within RL, as opposed to an edge case.
