Charlie Steiner

If you want to chat, message me!

LW1.0 username Manfred. PhD in condensed matter physics. I am independently thinking and writing about value learning.

Sequences

Alignment Hot Take Advent Calendar
Reducing Goodhart
Philosophy Corner

Wiki Contributions

Comments

I found someone's thesis from 2020 (Hoi Wai Lai) that sums it up not too badly (from the perspective of someone who wants to make Bohmian mechanics work and was willing to write a thesis about it).

For special relativity (section 6), the problem is that the motion of each hidden particle depends instantaneously on the entire multi-particle wavefunction. According to Lai, there's nothing better to do than bite the bullet: define a "real present" across the universe, and let the hidden particles sometimes go faster than light. Which hypersurface counts as the real present is unobservable to us, but the motion of the hidden particles depends on it.
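For concreteness, the standard nonrelativistic guiding equation (textbook de Broglie-Bohm, which I haven't checked against Lai's notation) makes the nonlocality explicit: the velocity of hidden particle k is

$$\frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left[\frac{\nabla_k \psi}{\psi}\right](Q_1,\dots,Q_N,t),$$

so it depends on where all the other hidden particles are "right now," and relativity gives you no preferred way to say what "right now" means - hence the preferred foliation.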

For varying particle number (section 7.4), the problem is that in quantum mechanics you can have a superposition of states with different numbers of particles. If there's some hidden variable tracking which part of the superposition is "real," that hidden variable has to behave totally differently from a particle! Lai says this leads to "Bell-type" theories, where there's a single hidden variable: a hidden trajectory in configuration space. Honestly, this actually seems more satisfactory than how the theory deals with special relativity - you only had to sacrifice the notion of independent hidden variables behaving like particles; you didn't have to allow for superluminal communication in a way that highlights how pointless the hidden variables are.

Warning: I have exerted basically no effort to check if this random grad student was accurate.

My understanding is that pilot wave theory (ie Bohmian mechanics) explains all the quantum physics

This is only true if you don't count relativistic field theory. Bohmian mechanics has mathematical troubles extending to special relativity or particle creation/annihilation operators.

Is there any reason at all to expect some kind of multiverse?

Depending on how big you expect the unobservable universe to be, there can also be a spacelike multiverse.

Wouldn't other people also like to use an AI that can collaborate with them on complex topics? E.g. people planning datacenters, or researching RL, or trying to get AIs to collaborate with other instances of themselves to accurately solve real-world problems?

I don't think people working on alignment research assistants are planning to just turn them on and leave the building; on average (weighted by money), they seem to be imagining doing things like "explain an experiment in natural language and have an AI help implement it rapidly."

So I think both they and this post are describing the strategy of "building very generally useful AI, but the good guys will be using it first." I hear you as saying you want a slightly different profile of generally-useful skills to be targeted.

I have now read the paper, and still think you did a great job.

One gripe I have is with this framing:

We believe our articulation of human values as constitutive attentional policies is much closer to “what we really care about”, and is thus less prone to over-optimization

If you were to heavily optimize for text that humans would rate highly on specific values, you would run into the usual problems (e.g. a model incentivized to manipulate the human). Your success here doesn't come from the formulation of the values per se, but from the architecture that turns them into text/actions: rather than optimizing for the values directly, you prompt an LLM that's anchored on normal human text to mildly optimize them for you.

This difference implies some important points about scaling to more intelligent systems (even without making any big pivots):

  • We don't want the model to optimize for the stated values unboundedly hard, so we'll have to end up asking for something mild and human-anchored more explicitly.
  • If another use of AI is proposing changes to the moral graph, we don't want that process to form an optimization feedback loop (unless we're really sure).

The main difference made by the choice of format of values is where to draw the boundary between legible human deliberation, and illegible LLM common sense.

 

I'm excited for future projects that are sort of in this vein but try to tackle moral conflict, or that try to use continuous rather than discrete prompts that can interpolate values, or explore different sorts of training of the illegible-common-sense part, or any of a dozen other things.

Awesome to see this come to fruition. I think if a dozen different groups independently tried to attack this same problem head-on, we'd learn useful stuff each time.

I'll read the whole paper more thoroughly soon, but my biggest question so far is whether you collected data about what happens to your observables if you change the process along sensible-seeming axes.

A regular AE's job is to throw away the information outside some low-dimensional manifold; a sparse ~linear AE's job is to throw away the information not represented by sparse dictionary codes. (Also a low-dimensional manifold, I guess, just made from a different prior.)

If an AE is reconstructing poorly, that means it was throwing away a lot of information. How important that information is seems like a question about which manifold the underlying network "really" generalizes according to. And also what counts as an anomaly / what kinds of outliers you're even trying to detect.
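As a toy way to quantify "throwing away information" (my own sketch, not anything from the post - the shapes, the ReLU dictionary, and the 3-sigma threshold are all arbitrary choices), per-example reconstruction error gives a first-pass anomaly score:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy ~linear sparse autoencoder: overcomplete dictionary with ReLU codes."""
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model, bias=False)

    def forward(self, x):
        codes = torch.relu(self.encoder(x))  # sparse dictionary codes
        recon = self.decoder(codes)          # projection back onto what the dictionary can express
        return recon, codes

def anomaly_score(sae: SparseAutoencoder, activations: torch.Tensor) -> torch.Tensor:
    """Per-example reconstruction error: how much information the SAE throws away."""
    recon, _ = sae(activations)
    return ((activations - recon) ** 2).mean(dim=-1)

# Flag activations whose error is far above the baseline on normal data, e.g.:
# threshold = baseline_errors.mean() + 3 * baseline_errors.std()
# anomalies = anomaly_score(sae, new_activations) > threshold
```

Whether a high score means "interesting anomaly" or just "off the manifold this particular SAE happened to learn" is exactly the question about which manifold the network "really" generalizes along.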

Ah, yeah, that makes sense.

Even for an SAE that's been trained only on normal data [...] you could look for circuits in the SAE basis and use those for anomaly detection.

Yeah, this seems somewhat plausible. If automated circuit-finding works it would certainly detect some anomalies, though I'm uncertain if it's going to be weak against adversarial anomalies relative to regular ol' random anomalies.

Dictionary/SAE learning on model activations is bad as anomaly detection because you need to train the dictionary on a dataset, which means the anomaly would have to be in the training set for the dictionary to represent it.

How do you do dictionary learning without a dataset? One possibility is to use uncertainty-estimation-like techniques to detect when the model "thinks it's on-distribution" for randomly sampled activations.
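To make one possible reading of "uncertainty-estimation-like techniques" concrete (a purely hypothetical sketch, and note it doesn't actually remove the dependence on training data - you'd still train each dictionary on something): train an ensemble of dictionaries with different seeds and treat their disagreement as "the model doesn't think this is on-distribution."

```python
import torch

def ensemble_disagreement(saes, activations: torch.Tensor) -> torch.Tensor:
    """Variance of reconstructions across independently trained SAEs.

    Assumes `saes` is a list of SparseAutoencoder instances (as in the sketch
    above), trained on the same data with different random seeds. Activations
    the dictionaries all learned to represent get reconstructed consistently;
    unfamiliar ones give the ensemble more room to disagree.
    """
    recons = torch.stack([sae(activations)[0] for sae in saes])  # (n_saes, batch, d_model)
    return recons.var(dim=0).mean(dim=-1)                        # per-example disagreement
```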

Answer by Charlie Steiner, Mar 28, 2024

Tracking your predictions and improving your calibration over time is good. So is practicing making outside-view estimates based on related numerical data. But I think diversity in what you estimate matters too.

If you start going back through historical F1 data as prediction exercises, I expect the main thing that will happen is you'll learn a lot about the history of F1. Secondarily, you'll get better at avoiding your own biases, but in a way that's concentrated on your biases relevant to F1 predictions.

If you already want to learn more about the history of F1, then go for it; it's not hurting anyone :) Estimating more diverse things will probably better prepare you for making future non-F1 estimates, but if you're going to pay attention to F1 anyhow, it might be a fun thing to track.
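If you do go the tracking route, the bookkeeping is tiny. A minimal sketch (mine, nothing standard beyond the Brier score), assuming you log (stated probability, did-it-happen) pairs:

```python
def brier_score(predictions):
    """Mean squared error between stated probabilities and 0/1 outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

def calibration_buckets(predictions, n_buckets=10):
    """How often the things you call 'about X% likely' actually happen, per probability bucket."""
    buckets = [[] for _ in range(n_buckets)]
    for p, o in predictions:
        buckets[min(int(p * n_buckets), n_buckets - 1)].append(o)
    return [(i / n_buckets, sum(b) / len(b), len(b)) for i, b in enumerate(buckets) if b]

# e.g. predictions = [(0.7, 1), (0.2, 0), (0.9, 1), ...]  # (stated probability, outcome)
```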
