From the SingInst blog:
Thanks to the generosity of several major donors†, every donation to the Singularity Institute made between now and August 31, 2011 will be matched dollar-for-dollar, up to a total of $125,000.
(Visit the challenge page to see a progress bar.)
Now is your chance to double your impact while supporting the Singularity Institute and helping us raise up to $250,000 to help fund our research program and stage the upcoming Singularity Summit… which you can register for now!
† $125,000 in backing for this challenge is being generously provided by Rob Zahra, Quixey, Clippy, Luke Nosek, Edwin Evans, Rick Schwall, Brian Cartmell, Mike Blume, Jeff Bone, Johan Edström, Zvi Mowshowitz, John Salvatier, Louie Helm, Kevin Fischer, Emil Gilliam, Rob and Oksana Brazell, Guy Srinivasan, John Chisholm, and John Ku.
2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress. Journalists like Torie Bosch of Slate have argued that “We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.” We couldn’t agree more — in fact, the Singularity Institute has been thinking about how to create safe and ethical artificial intelligence since long before the Singularity landed on the front cover of TIME magazine.
The last 1.5 years were our biggest ever. Since the beginning of 2010, we have:
- Held our annual Singularity Summit, in San Francisco. Speakers included Ray Kurzweil, James Randi, Irene Pepperberg, and many others.
- Held the first Singularity Summit Australia and Singularity Summit Salt Lake City.
- Held a wildly successful Rationality Minicamp.
- Published seven research papers, including Yudkowsky’s much-awaited ‘Timeless Decision Theory’.
- Helped philosopher David Chalmers write his seminal paper ‘The Singularity: A Philosophical Analysis’, which has sparked broad discussion in academia, including an entire issue of the Journal of Consciousness Studies and a book from Springer devoted to responses to Chalmers’ paper.
- Launched the Research Associates program.
- Brought MIT cosmologist Max Tegmark onto our advisory board, published our Singularity FAQ, and much more.
In the coming year, we plan to do the following:
- Hold our annual Singularity Summit, in New York City this year.
- Publish three chapters in the upcoming academic volume The Singularity Hypothesis, along with several other papers.
- Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
- Publish a document of open research problems related to Friendly AI, to clarify the research space and encourage other researchers to contribute to our mission.
- Add additional skilled researchers to our Research Associates program.
- Publish well-researched documents making the case for existential risk reduction as optimal philanthropy.
- Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.
We appreciate your support for our high-impact work. As PayPal co-founder and Singularity Institute donor Peter Thiel said:
“I’m interested in facilitating a forum in which there can be… substantive research on how to bring about a world in which AI will be friendly to humans rather than hostile… [The Singularity Institute represents] a combination of very talented people with the right problem space [they’re] going after… [They’ve] done a phenomenal job… on a shoestring budget. From my perspective, the key question is always: What’s the amount of leverage you get as an investor? Where can a small amount make a big difference? This is a very leveraged kind of philanthropy.”
Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed through Causes.com, Google Checkout, or PayPal. If you have questions about donating, please call Amy Willey at (586) 381-1801.
I just put in 5100 USD, the current balance of my bank account, and I'll find some way to put in more by the end of the challenge.
You deserve praise. Would you like some praise?
Thanks! :-)
Praise Rain, for being such a generous benefactor! :)
Thank you SO MUCH for the clarification VNKKET linked to. I was worried. I would usually discourage someone from donating all of their savings to any cause, including this one, but in this case it looks like you have thought it through, and what you are doing (a) makes sense and (b) is the result of a well-thought-out lifestyle optimization process.
I'd be happy to talk with you or exchange email (my email address is public) to discuss the details, both to better optimize my own life and to try to help you with yours. I expect such efforts to be high-return, given the evidence that you are a person who actually does at least some of the things you think would be good lifestyle optimizations.
I'm also desperately interested in better characterizing people who optimize their lifestyles and who try to live without fear etc.
If you have an email exchange and neither of you minds making it public, please do so.
Nice! And for anyone freaked out by the "current balance of my bank account" part, there's an explanation here.
I am not a super hero or an ascetic. I'm a regular random internet person with a particular focus on the future. I only donated 26 percent of my gross income last year. And I have a Netflix subscription.
Your superpower is willpower, and you are a hero to many :)
Good to hear about the successes, but I am still skeptical about this one: "Held a wildly successful Rationality Minicamp."
I have yet to see any actual substantiation for this claim beyond the SIAI blog's say-so and a few qualitative individual self-reports. I have not seen any attempt to extend and replicate this success, nor evidence that such replication would even be possible.
If it actually were a failure, how would we know? Would anyone there even admit it, or would they prefer to avoid making its leaders look bad?
Sorry to be the bad guy here, but this claim has been floating around for a while, and it looks like it will become one of those things "everyone knows".
Wasn't there something similar a while ago? ... Yes, there was, and I can reasonably assume there will be others in the future. You are trying to get people to donate by appealing to an artificial sense of urgency ("Now is your chance to...", "Donate now"). Beware that this triggers dark arts alarm bells.
Nevertheless, I have now donated an amount of money.
Only on this site would you see perfectly ordinary charity fundraising techniques described as "dark arts", while in the next article over, the community laments the poor image of the concept of beheading corpses and then making them rise again.
To be fair, it's just the heads that rise again, not the rest of the corpse... ah, I'm not helping, am I? :-)
I usually offer my name and ask them theirs.
I'm quite often asked about my necklace, and I'll say "It's my contract of immortality with the Cult of the Severed Head", or in some contexts, "It's my soul" or "It's my horcrux".
The key thing is for your voice to make it clear that you're not at all afraid and that you think this is what the high-prestige smart people do. Show the tiniest trace of defensiveness and they'll pounce.
Agreed; I'd personally like it if a planned schedule for major grants were disclosed regularly, maybe annually.
Anyway, I donated 500 USD.
£300; 10% of my summer internship's salary, before tax, etc.
I just donated Round(1000 Pi / 3) USD (i.e., $1,047). I also had Google match it as my employer.
Strangely enough, I went through the 'donate publicly' link, but chose not to use facebook, and in the end it called me 'Anonymous Donor'.
I just donated.
I am happy to see that the success of the previous matching program is being followed up with additional matching funds, and that there is such a broad base of sponsors. I have donated $2000 on top of my typical annual donation.
There's a major conflict of interest in accepting donations from Clippy.
I would accept donations from Lucifer himself if he was silly enough to give them to me. I don't see a problem. :)
Do people here generally think that this is true? I don't see much of an intersection between Watson and AI; it seems like a few machine learning algorithms that approach Jeopardy problems in an extremely artificial way, much like chess engines approach playing chess. (Are chess engines artificial intelligence too?)
I actually do think it's a big deal, as well as being flashy, though not an extremely big deal: something along the lines of the best narrow-AI accomplishment of a given year, and the flashiest of a given three-to-five-year period.
$10k for the most efficient instrument of existential risk reduction, the most efficient way to do good.
I've donated $512 on top of my monthly donation.
The safety implications of advanced AI form one of the most important (and under-appreciated) ideas out there right now. It's an issue that humanity needs to think long and hard about. So I think that by organizing conferences and writing papers, SIAI are doing pretty much the right thing. I don't think they're perfect, but for me the way to help with that is by getting involved.
I am glad that people are standing up and showing their support, and also that people are voicing criticisms and showing that they are really thinking about the issue.
I hope to see some of you Oct 15-16 in New York!
I'm not entirely sure that I believe the premise of this game. Essentially, the claim is that 20 of SingInst's regular donors have extra money lying around that they are willing to donate to SingInst iff someone else donates the same amount. What do the regular donors intend to do with the money otherwise? Have they signed a binding agreement to all get together and blow the money on a giant party? Otherwise, why would they not just decide to donate it to SingInst at the end of the matching period anyway?
This seems relevant:
Here's my totally non-binding plan for my extra $1100, which really was just lying around (budgeted but projected to go unspent): if we meet the full challenge, I donate $1100 to SingInst and have Microsoft match it as well. If we meet only, e.g., 80%, I donate 80% of $1100, have Microsoft match that, and spend the rest on a party I wouldn't have had otherwise (and link y'all to tasteful pictures). That's a 3x multiplier on ~1% of the $125,000.
Before your post, bentarm, my plan was somewhat different but I estimate it gave at least a 2.9x multiplier.
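For concreteness, here is a minimal sketch of the arithmetic behind the plan above, assuming (as the comment does) that both the challenge match and the Microsoft employer match apply dollar-for-dollar; the partial-completion scenario is purely illustrative:

```python
# Minimal sketch of the matching arithmetic in the parent comment.
# The dollar-for-dollar challenge match and Microsoft employer match
# come from the comment; the scenarios below are illustrative.

PLEDGE = 1100.0            # extra dollars "just lying around"
CHALLENGE_TOTAL = 125_000  # total matching available in the challenge

def total_to_singinst(fraction_met: float) -> float:
    """Dollars SingInst receives from this pledge under the stated plan."""
    donation = PLEDGE * fraction_met  # donate in proportion to challenge success
    challenge_match = donation        # dollar-for-dollar challenge match
    employer_match = donation         # Microsoft employer match
    return donation + challenge_match + employer_match

for fraction in (1.0, 0.8):
    donated = PLEDGE * fraction
    total = total_to_singinst(fraction)
    print(f"{fraction:.0%} met: ${donated:,.0f} donated -> ${total:,.0f} "
          f"({total / donated:.1f}x), {donated / CHALLENGE_TOTAL:.1%} of the challenge")
```

At 100% this gives $3,300 from an $1,100 pledge: the 3x multiplier on roughly 1% of the $125,000 mentioned above.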
I understand that SI needs money, and that a lot of discussion about this has ensued elsewhere, but I'm still skeptical that I can have the most impact with my money by donating to SI when I could be funding malaria nets, for instance.
There are two questions here that deserve separate consideration: donating to existential risk reduction vs. other (nearer-term, lower-uncertainty) philanthropy, and donating to SI vs. other x-risk reduction efforts. It seems to me that you should never be weighing SI against malaria nets directly; if you would donate to (SI / malaria nets) conditional on their effectiveness, you've already decided (for / against) x-risk reduction and should only be considering alternatives like (FHI / vaccination programs).
Thanks. You're right, I've been thinking about it wrong; I'll have to reconsider how I approach philanthropy. It's valuable to donate to research anyway, since research is what comes up with things like malaria nets.
Glad I could help. Thanks for letting me know.
Good point; under uncertainty about x-risk vs. near-term philanthropy you might donate to organizations that could help answer that question, like GiveWell or SI/FHI.
I haven't watched the presentation, but 8 lives per dollar corresponds to only a one-in-a-billion chance of averting human extinction per donated dollar, which corresponds (neglecting donation matching and the diminishing marginal value of money) to roughly a 1-in-2000 chance of averting human extinction from a doubling of the organization's budget for a year. That doesn't sound obviously crazy to me, though it's more than I'd attribute to an organization just on the basis that it claimed to be reducing extinction risk.
Note that the large number used in this particular back-of-envelope calculation is the world population of several billion, not the still much larger numbers involved in astronomical waste.
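To make the back-of-envelope explicit, here is a quick reconstruction; the ~7 billion world population and the ~$500k annual budget are assumptions of mine (round numbers plausible for 2011), not figures from the comment:

```python
# Reconstruction of the parent's back-of-envelope calculation.
# WORLD_POP and ANNUAL_BUDGET are assumed round numbers for 2011,
# not figures stated in the original comment.

WORLD_POP = 7e9          # people alive, ~2011 (not astronomical-waste numbers)
LIVES_PER_DOLLAR = 8     # the presentation's claimed expected lives saved per dollar
ANNUAL_BUDGET = 500_000  # assumed annual budget, in dollars

# 8 expected lives out of ~7 billion at stake implies this chance of
# averting human extinction per donated dollar:
p_per_dollar = LIVES_PER_DOLLAR / WORLD_POP
print(f"per dollar: 1 in {1 / p_per_dollar:,.0f}")  # 1 in 875,000,000

# Doubling the budget for a year then buys roughly:
p_per_doubling = p_per_dollar * ANNUAL_BUDGET
print(f"per doubling: 1 in {1 / p_per_doubling:,.0f}")  # 1 in 1,750, i.e. roughly "1 in 2000"
```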
Keep in mind that estimation is the best we have. You can't appeal to Nature for not having been given a warning that meets a sufficient standard of rigor. Avoiding all actions of uncertain character dealing with huge consequences is certainly a bad strategy. Any one of such actions might have a big chance of not working out, but not taking any of them is guaranteed to be unhelpful.
My impression is that the risk of immediate extinction due to nuclear war is very small, but that a nuclear war could cripple civilization to the point of not being able to recover enough to effect a positive singularity; it would also plausibly increase other x-risks. Intuitively, nuclear war would destabilize society, and people developing advanced technologies in an unstable society are less likely to take safety precautions than they otherwise would be. I'd give a subjective estimate of 0.1%-1% for nuclear war preventing a positive singularity.
Don't we make this choice daily by choosing our preferred brand over Ethical Bean at Starbucks?
I hear the ethics at Starbucks are rather low-quality, and in any case, surely Starbucks isn't the cheapest place to purchase ethics.
Bah! Listen, Eliezer, I'm tired of all your meta-hipsterism!
"Hey, let's get some ethics at Starbucks" "Nah, it's low-quality; I only buy a really obscure brand of ethics you've probably never heard of called MIRI". "Hey man, you don't look in good health, maybe you should see a doctor" "Nah, I like a really obscure form of healthcare, I bet you're not signed up for it, it's called 'cryonics'; it's the cool thing to do". "I think I like you, let's date" "Oh, I'm afraid I only date polyamorists; you're just too square". "Oh man, I just realized I committed hindsight bias the other day!" "I disagree, it's really the more obscure backfire effect which just got published a year or two ago." "Yo, check out this thing I did with statistics" "That's cool. Did you use Bayesian techniques?"
Man, forget you!
/angrily sips his obscure mail-order loose tea, a kind of oolong you've never heard of (Formosa vintage tie-guan-yin)
If you keep looking down the utility gradient, it's harder to escape local maxima because you're facing backwards.
This comment has been brought to you by me switching from Dvorak to Colemak.
If you already know your decision, the value of the research is nil.
Having $1000 pre-filled makes me feel uncomfortable. I can understand the reasoning behind anchoring to a higher number, but I can't quite explain why it makes me uncomfortable about contributing at all. Perhaps a running-average pre-fill, like the one the indie game bundle Humble Bundle 3 used, would be better.
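For what it's worth, here is a minimal sketch of that running-average idea; the window size and fallback default are made-up parameters for illustration, not anything Causes.com actually implements:

```python
# Sketch of a Humble Bundle-style running-average pre-fill.
# The window size and fallback default are illustrative assumptions.

from collections import deque

class DonationPrefill:
    """Pre-fill the donation form with the mean of recent donations."""

    def __init__(self, window: int = 100, default: float = 50.0):
        self.recent = deque(maxlen=window)  # sliding window of recent amounts
        self.default = default              # shown before any donations exist

    def record(self, amount: float) -> None:
        self.recent.append(amount)

    def prefill(self) -> float:
        if not self.recent:
            return self.default
        return sum(self.recent) / len(self.recent)

widget = DonationPrefill()
for amount in (25, 100, 1000, 40):
    widget.record(amount)
print(f"${widget.prefill():.2f}")  # $291.25, instead of a fixed $1000 anchor
```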
I'm not sure if this is bad word choice, but if you genuinely don't know the results, then it seems disingenuous to focus on one of the three specific outcomes without offering any further support for that stance. (If you do know the results, then I would love to see them ^.^)
Not many people heard about the Singularity Summit in Salt Lake City. Here is part of Luke Nosek's talk that struck me:
...

Donated another $500.
I just noticed this hasn't been posted to SL4. I could do it, but maybe it would be better coming from someone at SingInst?