** cross-posted from http://singinst.org/2011winterfundraiser/ **
Contains detailed info about accomplishments and plans at SI. Thanks for supporting our work! -Louie
ARTIFICIAL INTELLIGENCE MORE RELEVANT THAN EVER
Recent books like Machine Ethics from Cambridge University Press and Robot Ethics from MIT Press, along with the U.S. military-funded research that resulted in Governing Lethal Behavior in Autonomous Robots, show that the world is waking up to the challenges of building safe and ethical AI. But these projects focus on limited AI applications and fail to address the most important concern: how to ensure that smarter-than-human AI benefits humanity. The Singularity Institute has been working on that problem longer than anybody, a full decade before the Singularity landed on the cover of TIME magazine.
ACCOMPLISHMENTS IN 2011
2011 was our biggest year yet. Since the year began, we have:
- Held our annual Singularity Summit in New York City, with more than 900 in attendance. Speakers included inventor and futurist Ray Kurzweil, economist Tyler Cowen, PayPal co-founder Peter Thiel, Skeptic publisher Michael Shermer, Mathematica and WolframAlpha creator Stephen Wolfram, neuroscientist Christof Koch, MIT physicist Max Tegmark, and famed Jeopardy! contestant Ken Jennings.
- Held a smaller Singularity Summit in Salt Lake City.
- Held a one-week Rationality Minicamp and a ten-week Rationality Boot Camp.
- Created the Research Associates program, which currently has 7 researchers coordinating with the Singularity Institute.
- Published our Singularity FAQ, IntelligenceExplosion.com, and Friendly-AI.com.
- Wrote three chapters for Springer's upcoming volume The Singularity Hypothesis, along with four other research papers.
- Began work on a new, clearer website design with lots of new content, which should go live in Q1 2012.
- Began outlining open problems in Singularity research to help outside collaborators better understand our research priorities.
FUTURE PLANS YOU CAN HELP SUPPORT
In the coming year, we plan to do the following:
- Hold our annual Singularity Summit, in San Francisco this year.
- Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
- Publish a document of open research problems in Singularity research, to clarify the research space and encourage other researchers to contribute to our mission.
- Add additional skilled researchers to our Research Associates program.
- Publish a well-researched document making the case for existential risk reduction as optimal philanthropy.
- Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.
We appreciate your support for our high-impact work. As Skype co-founder Jaan Tallinn said:
We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines… Since we have only one shot at getting this transition right, the importance of Singularity Institute's work cannot be overestimated.
Now is your last chance to make a tax-deductible donation in 2011.
If you'd like to support our work, please donate now!
Just emptied out my altruism chip jar and donated $1850.
Be an informed donor. I advise reading the GiveWell interview with SIAI from last spring.
To me, it's not a good sign that SIAI said they had no immediate plans for what they would do with new funding.
Jasen Murray's answers to Holden's questions were problematic and did not represent the Singularity Institute's positions well. That is an old interview, and since that time we've done many things to explain what we plan to do with new funding. For example, we published a strategic plan and I gave this video interview. Moreover, the donation page linked in the OP has the most up-to-date information on what we plan to do with new funding: see Future Plans You Can Help Support.
FWIW, the "Future Plans" list seems to me somewhat understating the value of a donation. I realize it's fairly accurate in that it represents the activities of SI. Yet it seems like it could be presented better.
For example, the first item is "hold the Summit". But I happen to know that the Summit generally breaks even or makes a little money, so my marginal dollar will not make or break the Summit. Similarly, a website redesign, while probably important, isn't exciting enough to be listed as the second item. The third item, publish the open problems document, is a good one, though you should make it seem more exciting.
I think the donation drive page should thoroughly make the case that SI is the best use of someone's charity dollars -- that it's got a great team, great leadership, and is executing a plan with the highest probability of working at every step. That page should probably exist on its own, assuming the reader hasn't read any of the rest of the site, with arguments for why working explicitly on rationality is worthwhile; why transparency matters; why outreach to other researchers matters; what the researchers are currently spending time on and why those are the correct things for them to be working on; and so on. It can be long: long-form copy is known to work, and this seems like a correct application for it.
In fact, since you probably have other things to do, I'll do a little bit of copywriting myself to try to discover if this is really a good idea. I'll post some stuff here tomorrow after I've worked on it a bit.
I shall not complain. :)
OK, here's my crack: http://techhouse.org/~lincoln/singinst-copy.txt
Totally unedited. Please give feedback. If it's good, I can spend a couple more hours on it. If you're not going to use it, please don't tell me it's good, because I have lots of other work to do.
It's good enough that if we use it, we will do the editing. Thanks!
The connection between AI and rationality could be made stronger.
Indeed, that's been my impression for a little while. I'm unconvinced that AI is the #1 existential risk. The set of problems descending from the fact that known life resides in a single biosphere — ranging from radical climate change, to asteroid collisions, to engineered pathogens — seems to be right up there. I want all AI researchers to be familiar with FAI concerns; but there are more people in the world whose decisions have any effect at all on climate change risks — and maybe even on pathogen research risks! — than on AI risks.
But anyone who wants humanity to solve these problems should want better rationality and better (trans?)humanist ethics.
Thanks for pointing out the newer info. The different expansion plans seem sensible.
I'll chime in to agree with both lukeprog in pointing out that the interview is very outdated and with Holden in correcting Louie's account of the circumstances surrounding it.
He also talked to Jaan Tallinn. His best points, in my opinion:
...
...
...
...
...
...
...
...
(Most of these considerations don't apply to developments in pure mathematics, which is my best guess at a fruitful mode of attacking the FAI goals problem. The implementation-as-AGI aspect is a separate problem, likely of a different character, but I expect we need to obtain a basic theoretical understanding of FAI goals first to know what kinds of AGI progress are useful. Jumping to development of language translation software is way off-track.)
Thanks a lot for posting this link. The first point was especially good.
The "I feel" opening is telling. It does seem like the only way people can maintain this confusion beyond 10 seconds of thought is by keeping in the realm of intuition. In fact among the first improvements that could be made to the human predictive algorithm is to remove our tendency to let feelings and preferences get all muddled up with our abstract thought.
Given his influence, isn't he worth the time it takes to try to explain to him how he is wrong?
The only way to approach general intelligence may be by emulating the human algorithms. The opinion that we are capable of inventing an artificial and simple algorithm exhibiting general intelligence is not mainstream among AI and machine learning researchers. And even if one assumes that all those scientists are not nearly as smart and rational as SI folks, they do seem to have far more real-world experience with the field of AI and its difficulties.
I actually share the perception that we have no reason to suspect that we could reach a level above ours without massive and time-costly experimentation (removing our biases merely sounds easy when formulated in English).
I think that you might be attributing too much to an expression uttered in an informal conversation.
What do you mean by "feelings" and "preferences"? The use of intuition seems to be universal, even within the field of mathematics. I don't see how computationally bounded agents could get around "feelings" when making predictions about subjects that are only vaguely understood and defined. Framing the problem in technical terms like "predictive algorithms" doesn't change the fact that making predictions about poorly understood subjects is error-prone.
Yes. He just doesn't seem to be someone whose opinion on artificial intelligence should be considered particularly important. He's just a layman making the typical layman guesses and mistakes. I'm far more interested in what he has to say on warps in spacetime!
Being an informed donor requires more than an outdated, non-representative interview. This examination has far more high-quality information and, according to its creator, will be updated soon (although he is apparently behind on the schedule he set for himself).
I was checking for the usual swag and membership tiers on the donation page and found nothing. Surely people would go for the t-shirts/hoodies/caps/posters/membership cards, being mentioned on the SI site, etc.
In the meantime, until we get that set up, I'll mail a Singinst t-shirt to anyone who donates $100 or more and emails me.
It's this design on the front, and the Singularity Institute logo on the back. http://www.imaginaryfoundation.com/index.php?pagemode=detail&type=Mens%20Sale&uid=C190B0
I never got my t-shirt.
How would one go about e-mailing you?
Unless you just meant sending a private message.
kfischer @$ gmail *@ com
I suggest making it easier to get bigger images of your designs; they're detailed enough that what you've got on your site, even with "view image" and enlarging, doesn't show them adequately.
It's not our site; the Imaginary Foundation is kind of like a fake bizarro version of the Singularity Institute that's actually mostly a t-shirt company.
Out of curiosity, did SI talk to Imaginary Foundation and set up these shirts or are you modifying them personally, or what's the deal?
Personally I'd like a simple shirt with just the SI logo. As much as I enjoy most of the Imaginary Foundation's designs, this particular shirt has a "Three Wolf Moon" vibe.
Yes, the Director of Imaginary Foundation is, perhaps unsurprisingly, a long-time movement Singularitarian.
Agreed. That's in the works, for the new website.
Should I assume that website will also include links to the information from your other comments on the donation page?
Sorry, what do you mean?
Cross-referencing. If you visit just the donation page, there are no prominent links to 'what this would be used for'-style information, i.e. what you put in your other comment. Obviously a minor issue at most, but you know how those work.
Though with that said, I've been wondering about that particular point. Website development, of all things... there are probably dozens of people around here with the skills to do that, myself included, so it seems like the perfect option for in-kind donations. Do you know who I'd need to talk to about that, and whether or not there's any point? I can think of a few reasons you'd want to keep it in-house, not least confidentiality, but don't know which ones might apply.
Yes, the new donate page has links to explanations of what the money gets used for.
We are already a long way down the road to the new website with a professional designer, but we have lots of other design and web development work that we'd love to give to volunteers when they are willing. If you're interested in donating in-kind that way, please contact luke [at] singularity.org.
Weird - I am a somewhat regular donor and did not hear about the drive until this post. Checked my email, nothing there.
I happened to have donated last week, and did it again for the drive.
"There will be a lot more whales if there is a Future"
Those of us who are poor are less likely to straight-up donate. But there are things such as the Singularity Institute credit card, which donates $50 when opened, and about 1% of purchases thereafter.
Personally, I would also donate to get more chapters of HPMoR up because I would consider it more similar to "buying a book" and not "giving money away." I remember there was an HPMoR drive before, and it seemed to work well.
JKR would have to approve that (unless it's just an accelerated release, like last time). Maybe EY can ask her nicely. Who knows, she might even decide to donate; he can be quite persuasive, I hear.
You are correct.
I would go for accelerated release, rather than trying to untangle a quagmire of copyright issues, though.
It wouldn't stop me from donating, but it's somewhat annoying that donations to US charities are not tax-deductible in Ireland. Before I spend time and money trying to find a workaround - can anyone else think of a solution?
Well, there's always the Future of Humanity Institute; I go back and forth on the relative merits of SIAI and FHI.
I was planning on giving a donation at tax-return time.