If anyone wants to have a voice chat with me about a topic that I'm interested in (see my recent post/comment history to get a sense), please contact me via PM.
My main "claims to fame":
One way you could apply it is by not endorsing so completely/confidently the kind of "rolling your own metaethics" that I argued against (and that I see John as doing here), as you did by saying "the distinction John is making here is correct, plus his advice on how to approach it." (Of course you wrote that before I posted, but I'm hoping this is one takeaway people get from my post.)
Have you also seen https://www.lesswrong.com/posts/KCSmZsQzwvBxYNNaT/please-don-t-roll-your-own-metaethics which was also partly in response to that thread? BTW why is my post still in "personal blog"?
the distinction John is making here is correct, plus his advice on how to approach it
Really? Did you see this comment of mine? Do you endorse John's reply to it (specifically the part about the sadist)?
I hinted at it with "prior efforts/history", but to spell it out more: metaethics seems to have had a lot more effort go into it in the past, so it's less likely that there's some low-hanging fruit in idea space that, once picked, everyone will agree is the right solution.
The problem is that we can't. The closest thing we have is instead a collection of mutually exclusive ideas where at most one (possibly none) is correct, and we have no consensus as to which.
Maybe something like "This post presents a simplified version of my ideas, intended as an introduction. For more details and advanced considerations, please see such and such posts."
#2 feels like it's injecting some frame that's a bit weird to inject here (don't roll your own metaethics... but rolling your own metaphilosophy is okay?)
Maybe you missed my footnote?
To preempt a possible misunderstanding, I don't mean "don't try to think up new metaethical ideas", but instead "don't be so confident in your ideas that you'd be willing to deploy them in a highly consequential way, or build highly consequential systems that depend on them in a crucial way". Similarly "don't roll your own crypto" doesn't mean never try to invent new cryptography, but rather don't deploy it unless there has been extensive review, and consensus that it is likely to be secure.
and/or this part of my answer (emphasis added):
Try to solve metaphilosophy, where potentially someone could make a breakthrough that everyone can agree is correct (after extensive review)
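(To make the "don't roll your own crypto" analogy in the footnote concrete, here is a minimal, purely illustrative Python sketch contrasting a homemade cipher with a vetted construction. The `cryptography` library and the function names below are my own illustrative assumptions, not anything from the original thread.)

```python
# Illustrative only: the norm isn't "never invent ciphers", it's
# "don't deploy unreviewed ones in anything consequential".

from cryptography.fernet import Fernet  # widely reviewed, vetted construction


def homemade_encrypt(message: bytes, key: bytes) -> bytes:
    """A 'novel' repeating-key XOR cipher: easy to invent, and trivially
    broken with a single known plaintext or basic frequency analysis."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))


def reviewed_encrypt(message: bytes) -> tuple[bytes, bytes]:
    """Deploy only constructions that have survived extensive public review."""
    key = Fernet.generate_key()
    return key, Fernet(key).encrypt(message)
```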
But also, I'm suddenly confused about who this post is trying to warn. Is it more like labs, or more like EA-ish people doing a wider variety of meta-work?
I think I mostly had alignment researchers (in and out of labs) as the target audience in mind, but it does seem relevant to others, so perhaps I should expand the target audience?
The analogy is that in both fields people are by default very prone to being overconfident. In cryptography this can be seen in the phenomenon of people (especially newcomers who haven't learned the lesson) confidently proposing new cryptographic algorithms, which end up being far easier to break than they expect. In philosophy this is a bit trickier to demonstrate, but I think it can be seen via a combination of:
"More research needed" but here are some ideas to start with:
Today I was author-banned for the first time, without warning and as a total surprise to me, ~8 years after banning power was given to authors, but less than 3 months since @Said Achmiz was removed from LW. It seems to vindicate my fear that LW would slide towards a more censorious culture if the mods went through with their decision.
Has anyone noticed any positive effects, BTW? Has anyone who stayed away from LW because of Said rejoined?