people being more likely to click on or read shortforms due to the lower perceived effort of reading them (since they're often shorter and less formal)
And because you can read them without loading a new page. I think that's a big factor for me.
maybe they help workshop new analogies that can eventually be refined into If Anyone-style books or podcast interviews.
I think it's helpful to write arguments multiple times. And I think it's sensible to write out the argument in a "you-shape" and then refine it and try to make it more appealing to a broader range of people.
This kind of post might also give fuel for someone else to make basically the same argument, but in a different style, which could end up being helpful.
we can never know our own true reason for doing something
I read Sam Harris's book on free will which I think is what you're referring to, but I don't recall him saying anything like that. If he did, I presume he meant something like "you don't know which set of physical inputs to your neurons caused your neurons to fire in a way that caused your behavior", which doesn't mean you can't have a belief about whether someone's motivation is (say) religious or geopolitical.
I think you could approximately define philosophy as "the set of problems that are left over after you take all the problems that can be formally studied using known methods and put them into their own fields." Once a problem becomes well-understood, it ceases to be considered philosophy. For example, logic, physics, and (more recently) neuroscience used to be philosophy, but now they're not, because we know how to formally study them.
So I believe Wei Dai is right that philosophy is exceptionally difficult—and this is true almost by definition, because if we know how to make progress on a problem, then we don't call it "philosophy".
For example, I don't think it makes sense to say that philosophy of science is a type of science, because it exists outside of science. Philosophy of science is about laying the foundations of science, and you can't do that using science itself.
I think the most important philosophical problems with respect to AI are ethics and metaethics because those are essential for deciding what an ASI should do, but I don't think we have a good enough understanding of ethics/metaethics to know how to get meaningful work on them out of AI assistants.
What does economics-as-moral-foundation mean?
He mainly used analogies from IABED. Off the top of my head I recall him talking about
I'm talking about my perception of the standards for a quick take vs. a post. I don't know if my perception is accurate.
My perception is that it's not exactly about goodness; it's more that a post must conform to certain standards*, in the same way that a scientific paper must meet certain standards to get published in a peer-reviewed journal, even though a non-publishable paper could still present novel and valuable scientific findings.
*and, even though I've been reading LW since 2012, I'm still not clear on what those standards are or how to meet them
On a meta level, I think this post is a paragon of how to reason about the cost-effectiveness of a highly uncertain decision, and I would love to see more posts like this one.
Why does Eliezer dislike the paperclip maximizer thought experiment?
Numerous times I have seen him correct people about it and say it wasn't originally about a totalizing paperclip factory; it was about an AI that wants to make little squiggly lines for inscrutable reasons. Why does the distinction matter? Both scenarios are about an AI that does something very different from what you want and ends up killing you.
My guess, although I'm not sure about this, is that the paperclip factory is an AI that did as instructed, but its instructions were bad and it killed everyone, whereas the squiggly-line thing is about an AI that doesn't do what you want at all. And perhaps the paperclip-factory scenario could mislead people into believing that all you have to do is make sure the AI understands what you want.
FWIW I always figured the paperclip maximizer would know that people don't want it to turn the lightcone into paperclips, but it would do it anyway, so I still thought it was a reasonable example of the same principle as the squiggly-lines AI. But I can see how that conclusion requires two steps of reasoning whereas the squiggly-lines scenario only requires one step. Or perhaps the thing that Eliezer thinks is wrong with the paperclip-maximizer scenario is something else entirely.