LESSWRONG

Meta-Philosophy · Philosophy · Rationality
Please, Don't Roll Your Own Metaethics

by Wei Dai
12th Nov 2025
AI Alignment Forum
3 min read
8 comments, sorted by top scoring
[-] Raemon · 2h

What are you supposed to do other than roll your own metaethics?

[-] Wei Dai · 1h

"More research needed" but here are some ideas to start with:

  1. Try to design alignment/safety schemes that are agnostic or don't depend on controversial philosophical ideas. For certain areas that seem highly relevant and where there could potentially be hidden dependencies (such as metaethics), explicitly understand and explain why, under each plausible position that people currently hold, the alignment/safety scheme will result in a good or ok outcome. (E.g., why it leads to a good outcome regardless of whether moral realism or anti-realism is true, or any one of the other positions.)
  2. Try to solve metaphilosophy, where potentially someone could make a breakthrough that everyone can agree is correct (after extensive review), which can then be used to speed up progress in all other philosophical fields. (This could also happen in another philosophical field, but seems a lot less likely due to prior efforts/history. I don't think it's very likely in metaphilosophy either, but perhaps worth a try, for those who may have very strong comparative advantage in this.)
  3. If 1 and 2 look hard or impossible, make this clear to non-experts (your boss, company leaders/board, government officials, the public), don't let them accept a "roll your own metaethics" solution, or a solution with implicit/hidden philosophical assumptions.
  4. Support AI pause/stop.
[-] Raemon · 38m

Hmm, I like #1. 

#2 feels like it's injecting some frame that's a bit weird to inject here (don't roll your own metaethics... but rolling your own metaphilosophy is okay?)

But also, I'm suddenly confused about who this post is trying to warn. Is it more like labs, or more like EA-ish people doing a wider variety of meta-work?

[-] Wei Dai · 29m

#2 feels like it's injecting some frame that's a bit weird to inject here (don't roll your own metaethics... but rolling your own metaphilosophy is okay?)

Maybe you missed my footnote?

To preempt a possible misunderstanding, I don't mean "don't try to think up new metaethical ideas", but instead "don't be so confident in your ideas that you'd be willing to deploy them in a highly consequential way, or build highly consequential systems that depend on them in a crucial way". Similarly "don't roll your own crypto" doesn't mean never try to invent new cryptography, but rather don't deploy it unless there has been extensive review, and consensus that it is likely to be secure.

and/or this part of my answer (emphasis added)?

Try to solve metaphilosophy, where potentially someone could make a breakthrough that everyone can agree is correct (after extensive review)

But also, I'm suddenly confused about who this post is trying to warn. Is it more like labs, or more like EA-ish people doing a wider variety of meta-work?

I think I mostly had alignment researchers (in and out of labs) as the target audience in mind, but it does seem relevant to others so perhaps I should expand the target audience?

[-] Garrett Baker · 2h

I think this fails to say how the analogy of cryptography transfers to metaethics. What properties of cryptography as a field make it such that you cannot roll your own? Is it just that many people have the experience of trying to come up with a cryptographic scheme and failing, meanwhile there are perfectly good libraries nobody has found exploits in yet?

That doesn't seem very analogous with metaethics. As you say, it is hard to decisively show a metaethical theory is "wrong", and as far as I know there is no well-studied metaethical theory which has no exploits yet.

So what exactly is the analogy?

[-] Wei Dai · 36m

The analogy is that in both fields people are by default very prone to being overconfident. In cryptography this can be seen by the phenomenon of people (especially newcomers who haven't learned the lesson) confidently proposing new cryptographic algorithms, which end up being way easier to break than they expect. In philosophy this is a bit trickier to demonstrate, but I think can be seen via a combination of:

  1. people confidently holding positions that are incompatible with other people's confident positions
  2. a tendency to "bite bullets", i.e., to accept implications that are highly counterintuitive to others or even to themselves, instead of adopting more uncertainty
  3. the total idea/argument space being exponentially vast and underexplored due to human limitations, which makes high confidence unjustified
[-] lemonhope · 3h

Please just write the standard library!

[-] Lukas_Gloor · 2h

By "metaethics," do you mean something like "a theory of how humans should think about their values"? 

I feel like I've seen that kind of usage on LW a bunch, but it's atypical. In philosophy, "metaethics" has a thinner, less ambitious meaning: it answers questions like, "What even are values? Are they stance-independent, yes or no?"

And yeah, there is often a bit more nuance than that as you dive deeper into what philosophers in the various camps are exactly saying, but my point is that it's not that common, and certainly not necessary, that "having confident metaethical views," on the academic philosophy reading of "metaethics," means something like "having strong and detailed opinions on how AI should go about figuring out human values."

(And maybe you'd count this against academia, which would be somewhat fair, to be honest, because parts of "metaethics" in philosophy are even further removed from practicality, as they concern the analysis of the language behind moral claims. If we compare this to claims about the Biblical God and miracles, it would be like focusing way too much on whether the people who wrote the Bible thought they were describing real things or just metaphors, without directly trying to answer burning questions like "Does God exist?" or "Did Jesus live and perform miracles?")

Anyway, I'm asking about this because I found the following paragraph hard to understand: 

Behind a veil of ignorance, wouldn't you want everyone to be less confident in their own ideas? Or think "This isn't likely to be a subjective question like morality/values might be, and what are the chances that I'm right and they're all wrong? If I'm truly right why can't I convince most others of this? Is there a reason or evidence that I'm much more rational or philosophically competent than they are?"

My best guess of what you might mean (low confidence) is the following: 

You're conceding that morality/values might be (to some degree) subjective, but you're cautioning people against having strong views about "metaethics," which you take to be the question of not just what morality/values even are, but also, a bit more ambitiously, how to best reason about them and how to (e.g.) have AI help us think about what we'd want for ourselves and others.

Is that roughly correct?

Because if one goes with the "thin" interpretation of metaethics, then "having one's own metaethics" could be as simple as believing some flavor of "morality/values are subjective," and it feels like you, in the part I quoted, don't sound like you're too strongly opposed to just that stance in itself, necessarily.


One day, when I was an intern at the cryptography research department of a large software company, my boss handed me an assignment to break a pseudorandom number generator passed to us for review. Someone in another department had invented it and planned to use it in their product, and wanted us to take a look first. This person must have had a lot of political clout or been especially confident in himself, because he rejected the standard advice that anything an amateur comes up with is very likely to be insecure, and that he should instead use one of the established, off-the-shelf cryptographic algorithms that have survived extensive cryptanalysis (code breaking) attempts.

My boss thought he had to demonstrate the insecurity of the PRNG by coming up with a practical attack (i.e., a way to predict its future output based only on its past output, without knowing the secret key/seed). There were three permanent, full-time professional cryptographers working in the research department, but none of them specialized in cryptanalysis of symmetric cryptography (which covers such PRNGs), so it might have taken them some time to figure out an attack. My time was obviously less valuable, and my boss probably thought I could benefit from the experience, so I got the assignment.

Up to that point I had no interest, knowledge, or experience with symmetric cryptanalysis either, but I was still able to quickly demonstrate a clean attack on the proposed PRNG, which succeeded in convincing the proposer to give up and use an established algorithm. Experiences like this are so common that everyone in cryptography quickly learns how easy it is to be overconfident about one's own ideas, and many viscerally know the feeling of their brain betraying them with unjustified confidence. As a result, "don't roll your own crypto" is deeply ingrained in the culture and in people's minds.
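The PRNG from the story is never specified, so as a purely illustrative sketch of the genre of attack involved, here is a toy example: a naive linear congruential generator (LCG) whose future outputs can be predicted from just three past outputs, with no knowledge of the seed. The modulus, parameters, and function names here are all hypothetical choices for the demonstration.

```python
# Illustrative only: a toy attack on a weak PRNG (an LCG), showing how
# future output can be predicted from past output without the seed.

M = 2**31 - 1  # assume the modulus is public (prime, so inverses exist mod M)

def lcg(seed, a, c):
    """A naive PRNG: x_{n+1} = (a * x_n + c) mod M."""
    x = seed
    while True:
        x = (a * x + c) % M
        yield x

def recover_params(x0, x1, x2):
    """Recover (a, c) from three consecutive outputs.

    Since x2 - x1 = a * (x1 - x0) mod M, we can solve for a with a
    modular inverse, then for c directly.
    """
    a = (x2 - x1) * pow(x1 - x0, -1, M) % M  # 3-arg pow inverse needs Python 3.8+
    c = (x1 - a * x0) % M
    return a, c

# "Secret" parameters the attacker never sees.
gen = lcg(seed=123456789, a=16807, c=12345)
x = [next(gen) for _ in range(4)]

a, c = recover_params(x[0], x[1], x[2])
predicted = (a * x[2] + c) % M
assert predicted == x[3]  # next output predicted purely from past outputs
```

Real-world attacks (including whatever broke the PRNG in the story) are typically messier than this, but the lesson is the same: statistical-looking output is not the same as unpredictability, which is why established, heavily reviewed algorithms are the standard advice.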

If only it were as easy to establish something like this in "applied philosophy" fields, e.g., AI alignment! Alas, unlike in cryptography, it's rarely possible to come up with "clean attacks" that clearly show that a philosophical idea is wrong or broken. The most that can usually be hoped for is to demonstrate some kind of implication that is counterintuitive or contradicts other popular ideas. But because "one man's modus ponens is another man's modus tollens", if someone is sufficiently willing to bite bullets, then it's impossible to directly convince them that they're wrong (or should be less confident) this way. This is made even harder because, unlike in cryptography, there are no universally accepted "standard libraries" of philosophy to fall back on. (My actual experiences attempting this, and almost always failing, are another reason why I'm so pessimistic about AI x-safety, even compared to most other x-risk concerned people.)

So I think I have to try something more meta, like drawing the above parallel with how easy it is to be overconfident in other fields, such as cryptography. Another meta line of argument is to consider how many people have strongly held, but mutually incompatible, philosophical positions. Behind a veil of ignorance, wouldn't you want everyone to be less confident in their own ideas? Or think "This isn't likely to be a subjective question like morality/values might be, and what are the chances that I'm right and they're all wrong? If I'm truly right, why can't I convince most others of this? Is there a reason or evidence that I'm much more rational or philosophically competent than they are?"

Unfortunately, I'm pretty unsure whether any of these meta arguments will work either. If they do change anyone's mind, please let me know in the comments or privately. Or if anyone has better ideas for how to spread a meme of "don't roll your own metaethics"[1], please contribute. And of course counterarguments are welcome too, e.g., if people rolling their own metaethics is actually good in a way that I'm overlooking.

  1. ^

    To preempt a possible misunderstanding, I don't mean "don't try to think up new metaethical ideas", but instead "don't be so confident in your ideas that you'd be willing to deploy them in a highly consequential way, or build highly consequential systems that depend on them in a crucial way". Similarly "don't roll your own crypto" doesn't mean never try to invent new cryptography, but rather don't deploy it unless there has been extensive review, and consensus that it is likely to be secure.