CFAR should explicitly focus on AI safety

Discuss the wikitag on this page. Here is the place to ask questions and propose changes.
7 comments
Connor Flexman · 9y

Along with "Growing EA is net-positive", anything with a large search space + value judgment seems like it's going to have this issue.

Timothy Chu · 9y

Addressing the post, a focus on AI risk feels like something worth experimenting with.

My lame model suggests that the main downside is that it risks the brand. If so, experimenting with an AI risk focus in the CFAR context seems like a potentially high-value avenue of exploration, and brand damage can be mitigated.

For example, if it turned out to be toxic for the CFAR brand, the same group of people could spin off a new program called something else, and people may not remember or care that it was the old CFAR folks.

AnnaSalamon · 9y

I want a wrong question button!! :/

Eric Rogstad · 9y

I'd be interested to know if you find yourself having that feeling a lot, while interacting with claims.

If it's a small minority of the time, I think the solution is a "wrong question" button. If it happens a lot, we might need another object type, something like a prompt-for-discussion rather than a claim-to-be-agreed-with.

AnnaSalamon · 9y

Uh, well, it's hard to reply to, or something? Like, it wants to jam the conversation into questions about whether the claim is "true" or "false", instead of questions about what is meant by it or what third alternatives might be available.

Eric Rogstad · 9y

In other words, promoting this claim, as worded, is misleading?

AnnaSalamon · 9y

CFAR should be about "Rationality for its own sake, for the sake of existential risk". Which is totally different. I just, um, haven't figured out how to say the actual thing clearly. Help very welcome.
