Was a philosophy PhD student, left to work at AI Impacts, then Center on Long-Term Risk, then OpenAI. Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI. Not sure what I'll do next yet. Views are my own & do not represent those of my current or former employer(s). I subscribe to Crocker's Rules and am especially interested to hear unsolicited constructive criticism. http://sl4.org/crocker.html
Some of my favorite memes:
[meme image, by Rob Wiblin]
[xkcd comic]
My EA Journey, depicted on the whiteboard at CLR:
[whiteboard photo, h/t Scott Alexander]
I don't think I understand this yet, or maybe I don't see how it's a strong enough reason to reject my claims, e.g. my claim "If standard game theory has nothing to say about what to do in situations where you don't have access to an unpredictable randomization mechanism, so much the worse for standard game theory, I say!"
Seems like some measure of evidence -- maybe large, maybe tiny -- that "We don't know how to give AI values, just to make them imitate values" is false?
I'm not sure what view you are criticizing here, so maybe you don't disagree with me, but anyhow: I would say we don't know how to give AIs exactly the values we want them to have. Instead, we whack them with reinforcement from the outside, and that results in values that are maybe somewhat close to what we wanted, but mostly selected for producing behavior that looks good to us rather than for actually being what we wanted.
I'd guess that the amount spent on image and voice is negligible for this BOTEC?
I do think that the amount spent on inference for customers should be a big deal though. My understanding is that OpenAI has a much bigger userbase than Anthropic. Shouldn't that mean that, all else equal, Anthropic has more compute to spare for training & experiments? Such that if Anthropic has about as much compute total, they in effect have a big compute advantage?
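To make the arithmetic concrete, here's a toy version of the BOTEC I have in mind. All the numbers, and the assumption that inference cost scales linearly with users, are made up purely for illustration:

```python
# Toy BOTEC (made-up placeholder numbers, purely illustrative):
# same total compute, very different userbases -> very different training budgets.

def spare_compute(total: float, users: float, inference_cost_per_user: float) -> float:
    """Compute left over for training & experiments after serving inference."""
    return total - users * inference_cost_per_user

# Hypothetical units; only the ratios matter.
openai_spare = spare_compute(total=100, users=10, inference_cost_per_user=5)    # 100 - 50 = 50
anthropic_spare = spare_compute(total=100, users=2, inference_cost_per_user=5)  # 100 - 10 = 90

print(openai_spare, anthropic_spare)  # 50 vs 90: same total, big gap in training compute
```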
Are you saying Anthropic actually has more compute (in the relevant sense) than OpenAI right now? That feels like a surprising claim, big if true.
But I'm really not sure that training the overall system end-to-end is going to play a role. The success and relatively faithful CoT from r1 and QwQ give me hope that end-to-end training won't be very useful.
Huh, isn't this exactly backwards? Presumably r1 and QwQ got that way due to lots of end-to-end training. They aren't LMPs/bureaucracies.
...reading onward, I don't think we disagree much about what the architecture will look like, though. It sounds like you agree that there will probably be some amount of end-to-end training, and that the question is how much?
My curiosity stems from:
1. Generic curiosity about how minds work. It's an important and interesting topic, and motivated reasoning (MR) is a bias that we've observed empirically but don't have a mechanistic story for, i.e. an account of how the structure of the mind produces that bias -- at least, I don't have such a story, but it seems like you do!
2. Hope that we could build significantly more rational AI agents in the near future, prior to the singularity, which could then e.g. participate in massive liquid virtual prediction markets and improve human collective epistemics greatly.
This is helping, thanks. I do buy that something like this would probably help reduce the biases to some significant extent.
Will the overall system be trained? Presumably it will be. So, won't that create a tension/pressure, whereby the explicit structure prompting it to avoid cognitive biases will be hurting performance according to the training signal? (If instead it helped performance, then shouldn't a version of it evolve naturally in the weights?)
No need to apologize; thanks for this answer!
Question: Wouldn't these imperfect bias-corrections for LMAs also work similarly well for humans? E.g. humans could have a 'prompt' written on their desk that says "Now, make sure you spend 10 minutes thinking about evidence against as well..." There are reasons why this doesn't work so well in practice for humans (though it does help); might similar reasons apply to LMAs? What's your argument that the situation will be substantially better for LMAs?
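For concreteness, the kind of external bias-correction I have in mind for an LMA is sketched below. This is purely illustrative: `llm_call` is a placeholder, not any real API, and the prompt wording is just my guess at the LMA analogue of the note on the desk.

```python
# Hypothetical sketch of an explicit "consider evidence against" step in an LM-agent loop.
# `llm_call` is a stand-in for whatever completion API the agent actually uses.

def llm_call(prompt: str) -> str:
    raise NotImplementedError  # placeholder for the real model call

def answer_with_debiasing_step(question: str) -> str:
    draft = llm_call(
        f"Question: {question}\nGive your initial answer and the reasoning behind it."
    )
    # The LMA analogue of the note taped to the desk: a forced disconfirmation pass.
    counterevidence = llm_call(
        f"Question: {question}\nDraft answer: {draft}\n"
        "Spend this step listing the strongest evidence and arguments AGAINST the draft."
    )
    return llm_call(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Evidence against: {counterevidence}\n"
        "Now give a final answer that actually weighs both sides."
    )
```

The open question, to me, is whether the forced pass actually changes how the final answer weighs the evidence, or whether the same forces that make the desk note only weakly effective for humans show up here too.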
I'm particularly interested in elaboration on this bit:
Language model agents won't have as much motivated reasoning as humans do, because they're probably not going to use the same very rough estimated-value-maximization decision-making algorithm. (This is probably good for alignment; they're not maximizing anything, at least directly. They are almost oracle-based agents.)
Unimportant: I don't think it's off-topic, because it's secretly a way of asking you to explain more about your model of why confirmation bias happens, and to prove that your brain-inspired model is meaningful by describing a cognitive architecture that doesn't have that bias (or explaining why such an architecture is not possible). ;)
Thanks for the links! On a brief skim they don't seem to talk much about cognitive biases. Can you spell out here how the bureaucracy/LMP of LMAs you describe could be set up to avoid motivated reasoning?
This is a great comment, IMO you should expand it, refine it, and turn it into a top-level post.
Also, question: How would you design an LLM-based AI agent (think: like the recent computer-using Claude but much better, able to operate autonomously for months) so as to be immune to this bias? Can it be done?
Curious what Nostalgebraist's reply to those points was. Or if anyone who disagrees with Scott wants to speak up and give a reply?