
Timothy Chu

Comments
Arbital Claims Are Significantly More Useful When They Are Fairly Well Specified And Unambiguous
Timothy Chu · 9y · 50

This doesn't seem like too controversial a claim (being specific rather than vague is one of the most timeless heuristics out there), but it does seem worth highlighting.

I would like to add that my favorite claim so far ("Effective Altruism's current message discourages creativity") was not particularly well specified ("creativity" and "EA's current message" are not very specific, imo).

CFAR Should Explicitly Focus On AI Safety
Timothy Chu · 9y · 10

Addressing the post, a focus on AI risk feels like something worth experimenting with.

My lame model suggests that the main downside is that it risks the brand. If so, experimenting with an AI risk focus in the CFAR context seems like a potentially high-value avenue of exploration, and the brand damage can be mitigated.

For example, if it turned out to be toxic for the CFAR brand, the same group of people could spin off a new program called something else, and people may not remember or care that it was the old CFAR folks.

The Current Message Of Effective Altruism Heavily Discourages Creativity
Timothy Chu · 9y · 90

Yes. As far as I can tell, the current message of effective altruism focuses too strongly on "being effective" at its core. This is an anchoring bias that can prevent people from exploring high-leverage opportunities.

Some of the greatest things in the world come from random exploration of opportunities that may or may not have had anything to do with an end goal. For example, Kurt Gödel came up with his vaunted Incompleteness Theorems by attempting to prove the existence of the Christian God. This was totally batshit, and I would not expect the current EA framework to reliably produce people like Gödel (brilliant people off on a crazy personal side quest they S1 feel strongly about); in fact, I would expect the current EA framework to condition people away from Gödel ("this is nuts, we can't evaluate it, and it doesn't even seem conjecturally plausible"), or something like that.

It is possible I am straw-manning the current EA message, but that is my bad mental simulation of the EA message as I understand it (through various chance exposures to fringe elements of the community, who may not be representative of the whole).

Nonetheless, the core point is that until I can see the EA movement reliably generating the "crazy but awesome" things that humans are known to be able to create, or at least not conditioning people away from them (if the core thought process is "EA-focused", this will naturally constrain your thought-space to hew closely to the EA concept), I would suggest that the EA community's message is potentially discouraging natural human creativity (among many other things).

If CFAR suffered negative branding due to AI risk focus, it is possible to spin out a new program with the same people and avoid brand damage. (9y)

The main downside of CFAR focusing on AI risk is that it may cause brand damage. (9y)