Yglesias is a widely read center-left journalist, a co-founder of Vox who now writes the Slow Boring Substack. Note the implicit invitation: “I also try to make it clear to people who are closer to the object-level work that I’m interested in writing columns on AI policy if they have ideas for me, but they mostly don’t.”

The full article is ungated on his Substack. Relevant excerpt below:

The typical person’s marginal return on investment for efforts to reduce existential risk from misaligned artificial intelligence is going to diminish at an incredibly rapid pace. I have written several times that I think this problem is worth taking seriously and that the people working on it should not be dismissed as cranks. I’m a somewhat influential journalist, and my saying this has, I think, some value to the relevant people. But I write five columns a week and they are mostly not about this, because being tedious and repetitive on this point wouldn’t help anyone. I also try to make it clear to people who are closer to the object-level work that I’m interested in writing columns on AI policy if they have ideas for me, but they mostly don’t.

So am I “prioritizing” AGI risk as a cause? On one level, I think I am, in the sense that I do literally almost everything in my power to help address it. On another level, I clearly am not prioritizing this because I am barely doing anything.


One commenter replies: I’m sure that if he spent five minutes brainstorming, he could come up with more ideas. Or maybe I’m just miscalibrated about how much agency people have?