Timothy Chu
Timothy Chu has not written any posts yet.

To address the post: a focus on AI risk feels like something worth experimenting with.
My (admittedly lame) model suggests that the main downside is brand risk. If so, experimenting with AI risk in the CFAR context seems like a potentially high-value avenue of exploration, and brand damage can be mitigated.
Yes. As far as I can tell, the current message of effective altruism focuses too strongly on "being effective" at its core. This is an anchoring bias that can prevent people from exploring high-leverage opportunities.
Some of the greatest things in the world come from random exploration of opportunities that may or may not have had anything to do with an end goal. For example, Kurt Gödel came up with his vaunted Incompleteness Theorems by attempting to prove the existence of the Christian God. This was totally batshit, and I would not expect the current EA framework to reliably produce people like Gödel (brilliant people off on a crazy personal...
This doesn't seem like too controversial a claim ("be specific, not vague" is one of the most timeless heuristics out there), but it does seem worth highlighting.
I would like to add that my favorite claim so far ("Effective Altruism's current message discourages creativity") was not particularly well-specified ("creativity" and "EA's current message" are not very specific, imo).