Wrote a cheque for $5,000.
(I put the redacted image of my donation online because someone else decided to start an ad-hoc fundraising effort for MIRI on FIMFiction.)
$10.00 isn't very much, but come on, it's still better than not donating anything at all.
:-)
$1,200 donated.
I'd like to remark on something that annoys me: your "donation meter" (at least the one on your site, if not the one in the post above) ought either to be updated daily or, at the very least, note when it was last updated. I find the phrase "raised to date" frustrating when I can't trust that the "to date" is actually current.
I'd like to say that PMs from private_messaging disparaging this drive and my donations will NOT deter me from funding the mission I feel will help lead to the best possible future.
I donated $1000 and then went and bought Facing the Intelligence Explosion for the bare minimum price. (Just wanted to put that out there.)
I've also left myself a reminder to consider another donation a few days before the drive ends. It'll depend on my financial situation, but I should be able to manage it.
Thanks! Who is your employer? We may need to send them some forms. We already have donation matching set up with Google, Microsoft, Boeing, Adobe, Fannie Mae, and several other companies through Network for Good and America's Charities.
You can also contact me privately via email.
The drafts came out unexciting, according to reader reports. I suspect that magical writing energy ['magic' = not understood] was diverted from the rationality book into the first 63 chapters of HPMOR, which I was writing in my 'off time' while working on the book, and which does have Yudkowskian magic according to readers. HPMOR and CFAR between them used up a lot of the marginal utility I thought we would get from the book, which diminishes the marginal utility of completing it.
In part, we wanted to learn something about the degree to which donors are following the blog, following our newsletter, or following Less Wrong. I also wanted to be able to link from this post to a forthcoming interview with Benja Fallenstein that explains in more detail what we actually do at the workshops and why, but that was taking too long to complete, so I decided to just hurry up and post.
For the goal of eventually creating FAI, it seems work can be roughly divided into making the first AGI (1) have humane values and (2) keep those values. Current attention seems to be focused on the 2nd category of problems. The work I've seen in the first category: CEV (9 years old!), Paul Christiano's man-in-a-box indirect normativity, Luke's decision neuroscience, Daniel Dewey's value learning... I really like these approaches but they are only very early starting points compared to what will eventually be required.
Do you have any plans to tackle the hu...
I think small donors should also state their donation amounts of 50-100 dollars. Having counted the medium and large donations in this thread to a rough total of 11,000 dollars, it seems unlikely that the goal will be reached with just those, and I have a feeling there would be some sort of "breaking the ice" effect if small donors chirped up about their chip-ins, so to speak. Right now the number of medium and large donors represented in this thread eclipses the number of small ones.
(MIRI maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried MIRI staff members.)
Update 09-15-2013: The fundraising drive has been completed! My thanks to everyone who contributed.
The original post follows below...
Thanks to the generosity of several major donors,† every donation to the Machine Intelligence Research Institute made from now until (the end of) August 15th, 2013 will be matched dollar-for-dollar, up to a total of $200,000!
Donate Now!
Now is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund our research program.
This post is also a good place to ask your questions about our activities and plans — just post a comment!
If you have questions about what your dollars will do at MIRI, you can also schedule a quick call with MIRI Deputy Director Louie Helm: louie@intelligence.org (email), 510-717-1477 (phone), louiehelm (Skype).
Early this year we made a transition from movement-building to research, and we've hit the ground running with six major new research papers, six new strategic analyses on our blog, and much more. Give now to support our ongoing work on the future's most important problem.
Accomplishments in 2013 so far
Future Plans You Can Help Support
(Other projects are still being surveyed for likely cost and strategic impact.)
We appreciate your support for our high-impact work! Donate now, and seize a better-than-usual chance to move our work forward.
If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.
† $200,000 of total matching funds has been provided by Jaan Tallinn, Loren Merritt, Rick Schwall, and Alexei Andreev.