Charlie Steiner

If you want to chat, message me!

LW1.0 username Manfred. PhD in condensed matter physics. I am independently thinking and writing about value learning.

Sequences

Alignment Hot Take Advent Calendar
Reducing Goodhart
Philosophy Corner

Wiki Contributions

Comments

Nice. I tried to do something similar, except making everything leaky with polynomial tails, so

y = (y+torch.sqrt(y**2+scale**2)) * (1+(y+threshold)/torch.sqrt((y+threshold)**2+scale**2)) / 4

where the first part (y+torch.sqrt(y**2+scale**2)) is a softplus, and the second part (1+(y+threshold)/torch.sqrt((y+threshold)**2+scale**2)) is a leaky cutoff at the value threshold.
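
Spelled out as a runnable function (just a sketch; the defaults for scale and threshold here are arbitrary, not values I actually used):

    import torch

    def leaky_poly_activation(y, scale=0.1, threshold=1.0):
        # Softplus-like factor: a smooth version of max(0, y) with polynomial tails.
        rise = y + torch.sqrt(y**2 + scale**2)
        # Leaky cutoff factor: goes smoothly from ~0 to ~2 as y crosses -threshold.
        gate = 1 + (y + threshold) / torch.sqrt((y + threshold)**2 + scale**2)
        # Each factor carries a built-in factor of 2, hence the division by 4.
        return rise * gate / 4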

But I don't think I got such clearly better results, so I'm going to have to read more thoroughly to see what else you were doing that I wasn't :)

I'm actually not familiar with the nitty gritty of the LLM forecasting papers. But I'll happily give you some wild guessing :)

My blind guess is that the "obvious" stuff is already done (e.g. calibrating or fine-tuning single-token outputs on predictions about facts after the date of data collection), but not enough people are doing ensembling over different LLMs to improve calibration.

I also expect a lot of people prompting LLMs to give probabilities in natural language, and that clever people are already combining these with fine-tuning or post-hoc calibration. But I'd bet people aren't doing enough work to aggregate answers from lots of prompting methods, and then tune the aggregation function based on the data.
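
To gesture at what I mean by tuning the aggregation (a toy sketch, not something from the papers): collect each prompting method's probability per question, then fit a weighted log-odds average plus a bias against how the questions resolved.

    import torch

    # Made-up toy data: p[i, j] = probability prompting method j gave question i,
    # y[i] = how question i actually resolved.
    p = torch.tensor([[0.7, 0.6, 0.9],
                      [0.2, 0.4, 0.1],
                      [0.8, 0.5, 0.7]])
    y = torch.tensor([1.0, 0.0, 1.0])

    logits = torch.logit(p.clamp(1e-4, 1 - 1e-4))    # aggregate in log-odds space
    w = torch.zeros(p.shape[1], requires_grad=True)  # one weight per method
    b = torch.zeros(1, requires_grad=True)           # shared recalibration bias
    opt = torch.optim.Adam([w, b], lr=0.1)

    for _ in range(200):
        opt.zero_grad()
        pred = logits @ w.softmax(dim=0) + b  # convex combination of the methods' log-odds
        loss = torch.nn.functional.binary_cross_entropy_with_logits(pred, y)
        loss.backward()
        opt.step()

With real data you'd want held-out questions and a proper scoring check; the point is just that the aggregation function has parameters you can tune.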

Humans using SAEs to improve linear probes / activation steering vectors might quickly get replaced by a version of probing / steering that leverages unlabeled data.

Like, probing is finding a vector along which labeled data varies, and SAEs are finding vectors that are a sparse basis for unlabeled data. You can totally do both at once - find a vector along which labeled data varies and is part of a sparse basis for unlabeled data.
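
One naive way to write that down (my own sketch, not a claim about how anyone actually does it): train an SAE-style dictionary on unlabeled activations, and add a probe loss that's only allowed to use one designated dictionary direction.

    import torch

    d_model, n_dict = 512, 2048
    W = torch.randn(n_dict, d_model, requires_grad=True)  # dictionary directions
    b = torch.zeros(1, requires_grad=True)                # probe bias
    opt = torch.optim.Adam([W, b], lr=1e-3)

    def joint_loss(acts_unlabeled, acts_labeled, labels, l1=1e-3, probe_weight=1.0):
        # SAE term: sparse codes over W should reconstruct unlabeled activations.
        codes = torch.relu(acts_unlabeled @ W.T)
        recon = codes @ W
        sae_loss = (recon - acts_unlabeled).pow(2).mean() + l1 * codes.abs().mean()
        # Probe term: dictionary row 0 doubles as the direction labeled data varies along.
        logit = acts_labeled @ W[0] + b
        probe_loss = torch.nn.functional.binary_cross_entropy_with_logits(logit, labels.float())
        return sae_loss + probe_weight * probe_loss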

This is a little bit related to an idea with the handle "concepts live in ontologies." If I say I'm going to the gym, this concept of "going to the gym" lives in an ontology where people and activities are basic components - it's probably also easy to use ideas like "You're eating dinner" in that ontology, but not "1,3-diisocyanatomethylbenzene." When you try to express one idea, you're also picking a "basis" for expressing similar ideas.

I found someone's thesis from 2020 (Hoi Wai Lai) that sums it up not too badly (from the perspective of someone who wants to make Bohmian mechanics work and was willing to write a thesis about it).

For special relativity (section 6), the problem is that the motion of each hidden particle depends instantaneously on the entire multi-particle wavefunction. According to Lai, there's nothing better than to bite the bullet and define a "real present" across the universe, and have the hidden particles sometimes go faster than light. What hypersurface counts as the real present is unobservable to us, but the motion of the hidden particles cares about it.

For varying particle number (section 7.4), the problem is that in quantum mechanics you can have a superposition of states with different numbers of particles. If there's some hidden variable tracking which part of the superposition is "real," this hidden variable has to behave totally differently from a particle! Lai says this leads to "Bell-type" theories, where there's a single hidden variable: a hidden trajectory in configuration space. Honestly this actually seems more satisfactory than how the theory deals with special relativity - you just had to sacrifice the notion of independent hidden variables behaving like particles; you didn't have to allow for superluminal communication in a way that highlights how pointless the hidden variables are.

Warning: I have exerted basically no effort to check if this random grad student was accurate.

My understanding is that pilot wave theory (ie Bohmian mechanics) explains all the quantum physics

This is only true if you don't count relativistic field theory. Bohmian mechanics has mathematical troubles extending to special relativity or particle creation/annihilation operators.

Is there any reason at all to expect some kind of multiverse?

Depending on how big you expect the unobservable universe to be, there can also be a spacelike multiverse.

Wouldn't other people also like to use an AI that can collaborate with them on complex topics? E.g. people planning datacenters, or researching RL, or trying to get AIs to collaborate with other instances of themselves to accurately solve real-world problems?

I don't think people working on alignment research assistants are planning to just turn them on and leave the building; on average (weighted by money), they seem to be imagining doing things like "explain an experiment in natural language and have an AI help implement it rapidly."

So I think both they and this post are describing the strategy of "building very generally useful AI, but the good guys will be using it first." I hear you as saying you want a slightly different profile of generally-useful skills to be targeted.

I have now read the paper, and still think you did a great job.

One gripe I have is with this framing:

We believe our articulation of human values as constitutive attentional policies is much closer to “what we really care about”, and is thus less prone to over-optimization

If you were to heavily optimize for text that humans would rate highly on specific values, you would run into the usual problems (e.g. the model being incentivized to manipulate the human). Your success here doesn't come from the formulation of the values per se, but from the architecture that turns them into text/actions: rather than optimizing for them directly, you prompt an LLM that's anchored on normal human text to mildly optimize them for you.

This difference implies some important points about scaling to more intelligent systems (even without making any big pivots):

  • We don't want the model to optimize for the stated values unboundedly hard, so we'll have to end up asking for something mild and human-anchored more explicitly.
  • If another use of AI is proposing changes to the moral graph, we don't want that process to form an optimization feedback loop (unless we're really sure).

The main difference made by the choice of format of values is where to draw the boundary between legible human deliberation, and illegible LLM common sense.


I'm excited for future projects that are sort of in this vein but try to tackle moral conflict, or that try to use continuous rather than discrete prompts that can interpolate values, or explore different sorts of training of the illegible-common-sense part, or any of a dozen other things.

Awesome to see this come to fruition. I think if a dozen different groups independently tried to attack this same problem head-on, we'd learn useful stuff each time.

I'll read the whole paper more thoroughly soon, but my biggest question so far is whether you collected data about what happens to your observables if you change the process along sensible-seeming axes.

A regular AE's job is to throw away the information outside some low-dimensional manifold; a sparse ~linear AE's job is to throw away the information not represented by sparse dictionary codes. (Also a low-dimensional manifold, I guess, just made from a different prior.)

If an AE is reconstructing poorly, that means it was throwing away a lot of information. How important that information is seems like a question about which manifold the underlying network "really" generalizes according to. And also what counts as an anomaly / what kinds of outliers you're even trying to detect.
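
Concretely, the quantity I'm picturing is just reconstruction error used as an anomaly score (a sketch; encode and decode are placeholders for whichever AE you trained):

    import torch

    def anomaly_score(acts, encode, decode):
        # Whatever information the AE throws away shows up here as reconstruction
        # error, so points far off the learned manifold get high scores.
        recon = decode(encode(acts))
        return (recon - acts).pow(2).mean(dim=-1)

    # e.g. flag activations whose score exceeds a high quantile of scores on
    # known-normal data:
    # threshold = anomaly_score(normal_acts, encode, decode).quantile(0.99)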

Ah, yeah, that makes sense.
