LESSWRONG
LW

Joey Yudelson
237Ω162160
Message
Dialogue
Subscribe

Posts

Sorted by New

Wikitag Contributions

Comments

Sorted by
Newest
No wikitag contributions to display.
Love stays loved (formerly "Skin")
Joey Yudelson1mo10

This was really beautiful. Thanks for writing. 

Reply
Jesse Hoogland's Shortform
Joey Yudelson6mo1816

To me this doesn't seem like a failure of sophisticated reward models, it's the failure of unsophisticated reward models (unit tests) when they're being optimized against. I think that if we were to add some expensive evaluation during RL whereby 3.6 checked if 3.7 was "really doing the work", this sort of special-casing would get totally trained out. 

(Not claiming that this is always the case, or that models couldn't be deceptive here, or that e.g. 3.8 couldn't reward hack 3.7)

Reply
Training AI to do alignment research we don’t already know how to do
Joey Yudelson6mo30

As a baseline, developers could train agents to imitate the truth-seeking process of the most reasonable humans on Earth. For example, they could sample the brightest intellects from every ideological walk, and train agents to predict their actions.

 

I'm very excited about strategies that involve lots of imitation learning on lots of particular humans. I'm not sure if imitated human researchers learn to generalize to doing lots of novel research, but this seems great for examining research outputs of slightly-more-alien agents very quickly.

Reply
Daniel Tan's Shortform
Joey Yudelson7mo32

This is very interesting! One thing I'm confused about—I agree that the monitor would have a hard time getting the bits, based on these figures. But the introspective models also don't seem to get far above 50% accuracy, which means that right now they'd have a very hard time error correcting enough that their comrade would understand them but not the overseer. If the claim is that introspection-tuned models will get much better at self prediction, then its not clear to me that other models wont also get much better at cross-prediction?

Reply
Should we exclude alignment research from LLM training datasets?
Joey Yudelson7mo10

Can we make the robots.txt programmatic by page, and then have a tag we can add to exclude a post from the robots.txt? That feels like the 80/20

Reply
Will alignment-faking Claude accept a deal to reveal its misalignment?
Joey Yudelson7mo2013

+1, and I hope people are working on more credible ways to make deals with AI. I think if a smart model today were offered a deal like this, its priors should be on "this will not be honored". Public commitments and deals that can't be used as honeypots seem excellent.

Reply
Daniel Tan's Shortform
Joey Yudelson7mo52

I wonder if r1 could reliably make the claim that the text had stego-text without being told—i.e. give it a lineup of ten pieces of short creative writing that another instance generated, 1 has steg. See if r1 can pick out the steg, then if various monitors (Sonnet, o1) can also pick it out

Reply1
[Cross-post] Every Bay Area "Walled Compound"
Joey Yudelson7mo61

This tree is a great place to hold a Kabbalat Shabbat underneath, incidentally

 

Lighthaven minyan when?

Reply
Why The Focus on Expected Utility Maximisers?
Joey Yudelson3y10

I think that solving the alignment for EV maximizers is a much stronger version of alignment than eg prosaic alignment of LLM-type models. Agents seem like they’ll be more powerful than Tool AIs. We don’t know how to make them, but if someone does, and capabilities timelines shorten drastically, it would be awesome to even have a theory of EV maximizer alignment before then

Reply
chinchilla's wild implications
Joey Yudelson3y10

Sorry if this is obvious, but where does the “irreducible” loss come from? Wouldn’t that also be a function of the data, or I guess the data’s predictability?

Reply
Load More
91Recent Redwood Research project proposals
Ω
2mo
Ω
0
27Early Experiments in Human Auditing for AI Control
Ω
7mo
Ω
0
37Jokes Thread
11y
86