What does economics-as-moral-foundation mean?
He mainly used analogies from IABED. Off the top of my head I recall him talking about
I'm talking about my perception of the standards for a quick take vs. a post. I don't know if my perception is accurate.
My perception is that it's not exactly about goodness; it's more that a post must conform to certain standards*. It's similar to how a scientific paper must meet certain standards to get published in a peer-reviewed journal, even though a non-publishable paper could still present novel and valuable scientific findings.
*and, even though I've been reading LW since 2012, I'm still not clear on what those standards are or how to meet them
On a meta level, I think this post is a paragon of how to reason about the cost-effectiveness of a highly uncertain decision, and I would love to see more posts like this one.
I don't know Bores personally. I looked through some of his communications and social media, and most of it seemed reasonable (I noticed his Twitter has an unusually small amount of mud-slinging). I did see one thread with some troubling comments:
This bill [SB 53] recognizes that in order to win the AI race, our AI needs to be both safe and trustworthy.
In this case, pro-safety is the pro-innovation position.
[...]
As a New Yorker, I have to point out that SB53 includes a cloud compute cluster & @GavinNewsom said in his signing memo "The future happens [in CA] first"
...but @KathyHochul established EmpireAI in April 2024. So, thanks to our Gov's vision, the future actually happens in NY 😉
Why I find this troubling:
Politicians are often pressured to say those sorts of things, so perhaps he would still support an AI pause if it became politically feasible. These comments aren't overwhelmingly troubling, but they're still troubling.
If those quotes accurately reflect his stance on AI innovation and arms races, then whether he's still better than the average Democrat depends on whether the increased chance of getting weak-to-moderate AI safety regulations outweighs the decreased chance of getting strong regulations, and that's unclear to me.
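To make that comparison explicit (these symbols are illustrative labels I'm introducing for a simplified model, not anything from Bores's record): let $\Delta p_w$ and $\Delta p_s$ be how much electing him changes the probability of weak-to-moderate and strong AI safety regulations respectively, and let $V_w$ and $V_s$ be the values of those outcomes. Then, on this toy model, he beats the average Democrat roughly when

$$\Delta p_w V_w + \Delta p_s V_s > 0,$$

where the quotes suggest $\Delta p_w > 0$ and $\Delta p_s < 0$, and I don't know which term dominates.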
I will note that this was the only worrying thread I saw from Bores, although I didn't find many comments from him on AI safety.
I think the strongest case for AI stocks being overpriced is to ignore any specific facts about how AI works and instead take the outside view on historical market behavior. I don't see a good version of this argument in the quotes above, so I will try to make one.
I'm going from memory instead of looking up sources, so I'm probably wrong about the exact details of the claims below, but I believe they're approximately true.
(These are five different perspectives on the same general market phenomenon, so they're not really five independent pieces of evidence.)
On the outside view, I think there's pretty good reason to believe that AI stocks are overpriced. However, on the inside view, the market still doesn't seem to appreciate how big a deal AGI could be. So on balance I'm pretty uncertain.
Importantly, AFAICT some Horizon fellows are actively working against x-risk reduction (pulling the rope backwards, not sideways). So the sign of Horizon's impact is unclear to me. For a lot of people, "tech policy going well" means "regulations that don't impede tech companies' growth".
As in, Horizon fellows / people who work at Horizon?
I think you could approximately define philosophy as "the set of problems that are left over after you take all the problems that can be formally studied using known methods and put them into their own fields." Once a problem becomes well-understood, it ceases to be considered philosophy. For example, logic, physics, and (more recently) neuroscience used to be philosophy, but now they're not, because we know how to formally study them.
So I believe Wei Dai is right that philosophy is exceptionally difficult—and this is true almost by definition, because if we know how to make progress on a problem, then we don't call it "philosophy".
For example, I don't think it makes sense to say that philosophy of science is a type of science, because it exists outside of science. Philosophy of science is about laying the foundations of science, and you can't do that using science itself.
I think the most important philosophical problems with respect to AI are ethics and metaethics because those are essential for deciding what an ASI should do, but I don't think we have a good enough understanding of ethics/metaethics to know how to get meaningful work on them out of AI assistants.