Meant as a neutral question. I'm not sure whether this would be good or bad on net: Suppose key elements of the US military took x-risk from misaligned strong AI very seriously. Specifically, I mean: * Key scientists at the Defense Threat Reduction Agency. They have a giant budget (~$3B/year)...
Risk neutrality is the idea that a 50% chance of $2 is exactly as valuable as a 100% chance of $1. This is a highly unusual preference, but it sometimes makes sense: > Exampleville is infested with an always-fatal parasite. All 1,024 residents have the worm. > > For each gold...
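The definition above is just an expected-value comparison. A minimal sketch (the helper name is illustrative, not from the post):

```python
# A risk-neutral agent ranks gambles purely by expected value.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs summing to probability 1."""
    return sum(p * payoff for p, payoff in outcomes)

# 50% chance of $2 (else nothing) vs. a guaranteed $1:
gamble = expected_value([(0.5, 2.0), (0.5, 0.0)])
sure_thing = expected_value([(1.0, 1.0)])
print(gamble, sure_thing)  # both equal 1.0, so a risk-neutral agent is indifferent
```

Most people are risk-averse instead: they prefer the guaranteed $1 even though the expected values match.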
Yglesias is a widely read center-left journalist. Co-founder of Vox, ex-NYT. Note the implicit invitation: “I also try to make it clear to people who are closer to the object-level work that I’m interested in writing columns on AI policy if they have ideas for me, but they mostly don’t.”...
I read a lot of rationalist-adjacents. Outside of LessWrong and ACX, I hardly ever see posts on AI risk. Tyler Cowen of Marginal Revolution writes that "it makes my head hurt" but hasn't engaged with the issue. Even Zvi spends very few posts on AI risk. This is surprising, and...
I'll be in Stuttgart, Germany for a few months this summer for work. It's roughly equidistant from Frankfurt, Munich, and Zurich. Do any of these cities have regular meetups where English is spoken, at least in part? About me -- I run a software startup, which primarily sells to large...
That is, explain creativity from more fundamental building blocks
This piece, which predates ChatGPT, is no longer endorsed by its author. Eliezer's recent discussion on AGI alignment is not optimistic. > I consider the present gameboard to look incredibly grim... We can hope there's a miracle that violates some aspect of my background model, and we can try to...