
edit: reposted this comment as a 'question' here https://www.lesswrong.com/posts/eQqk4X8HpcYyjYhP6/could-ai-be-used-to-engineer-a-sociopolitical-situation

What are the most cost effective alignment organizations to donate to? I'm aware of MIRI and https://futureoflife.org/ . 

Cost-effective in terms of what?  The term implies a ratio of cost to effect - cost is easy to measure, but I don't know of any organizations whose mission supports measurement of their effect.

I see; well, I'm not sure what to do then. I inherited a lot of money and I want to give most of it to alignment groups.

Larks has done some AI charity comparisons, e.g. here's the one for 2021.

Sounds like the old adage about string pattern matching.  "You have a problem.  You decide to use regular expressions.  Now you have two problems."

If you've already decided to donate to this cause, presumably "cost-effective compared to other causes" wasn't part of the criteria.  Why is it important WITHIN the cause?  Consider examining what draws you to the topic, and how the donation targets differ on those dimensions.

I'm new to alignment (I've been casually reading for a couple of months). I'm drawn to the topic by longtermist arguments, and as a utilitarian it seems highly important to me. However, I have a feeling I misunderstood your post. Is this the kind of motive/draw you meant?

I meant to help you move from far-mode "I'm drawn generally to it" to near-mode "what should I do specifically".  Ideally, you'd examine WHAT draws you to the topic, and what metrics or indicators would show that some funding targets have impact on those dimensions.

In fact, that is what "cost-effective" means, but "effect" is the difficult part to identify.  If it turns out that nobody has an impact that you can see, then you're probably best off picking another topic.  If it turns out that some have impact, in ways that convince you even if it's not particularly quantitative, then that's cost-effective enough.
