The link is broken, I think. Also, didn't Alex Tabarrok do one better by creating the dominant assurance contract?
Ouch! I donated $135 (and asked my employer to match as well) on Nov 2, India time. I had been on a brief vacation and just returned. Re-reading now, I find I was too late for the fundraiser.
Anyway, please take this as positive reinforcement, for what it is worth. You're doing a good job. Count the money as part of the fundraiser or as an off-fundraiser donation, whichever is appropriate.
This basically boils down to the root of the impulse to remove a Chesterton's fence, doesn't it?
Those who believe these impulses come from genuinely good sources (e.g., learned university professors) like to take such fences down. Those who believe these impulses come from bad sources (e.g., status jockeying, holiness signalling) would like to keep them.
The reactionary impulse comes from the basic idea that the practice of repeatedly taking down Chesterton's fences will inevitably auto-cannibalise: the system, or the meta-system, being used to defend all the previous demolitions will itself fall prey to one such wave. The humans left after that catastrophe will be little better than animals, in some cases maybe even worse, lacking the ability and skills to survive.
Donated $100 to SENS. Hopefully, my company matches it. Take that, aging, the killer of all!
I'm not a physicist, but aren't this and the linked Quanta article on Prof. England's work bad news, great-filter-wise?
If this implies self-assembly is much more common in the universe, then that makes things worse regarding the later proposed filters (i.e., makes them higher probability).
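The update can be sketched numerically. This is a toy model with made-up numbers, just to show the direction of the inference: the Fermi silence pins the overall chance of a visible civilization per star system at some tiny value, and that chance factors into an early step (life self-assembling) times everything that comes after. If the early step gets more probable, the later steps must get less probable, i.e., the later filters get stronger.

```python
# Toy great-filter arithmetic (all numbers hypothetical).
# Assume the per-system probability of producing a visible civilization
# is fixed by the observed silence, and factors as:
#   p_self_assembly * p_later_steps == TOTAL

TOTAL = 1e-10  # assumed overall chance a system yields a visible civilization

def later_filter_survival(p_self_assembly):
    """Probability mass left for the post-assembly steps, given the
    early-step probability and the fixed total."""
    return TOTAL / p_self_assembly

# If self-assembly is rare, the later steps need not be very deadly:
rare = later_filter_survival(1e-8)    # later steps near 1e-2
# If self-assembly is common (the England scenario), the later steps
# must be far more lethal to explain the same silence:
common = later_filter_survival(1e-2)  # later steps near 1e-8
```

So evidence that self-assembly is easy shifts the filter's weight onto the steps still ahead of us, which is the worrying part.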
I donated $300, which I expect my employer to match. So, $600 to AI value alignment!
I feel for you. I agree with Salvatier's point in the linked page. Why don't you try talking to FHI directly? They should be able to get some funding your way.
Letting market prices reign everywhere while providing a universal basic income is the usual economic solution.
Guys, everyone on the HPMoR subreddit seems to be talking about a spreadsheet with all the solutions listed. Could someone please post the link as a reply to this comment? Pretty please with sugar on top :)
One booster for getting AI values right is the two-sidedness of the process: existential risk and existential benefit.
To illustrate: solve poverty and you still have to face climate change; solve climate change and you still have to face biopathogens; solve biopathogens and you still have to face nanotech; solve nanotech and you still have to face superintelligence (SI). Solve SI correctly, and the rest are all done. For people who make the cui bono argument, I think this is usually the best answer to give.