** cross-posted from http://singinst.org/2011winterfundraiser/ **

Contains detailed info about accomplishments and plans at SI. Thanks for supporting our work!  -Louie


ARTIFICIAL INTELLIGENCE MORE RELEVANT THAN EVER

Recent books like Machine Ethics from Cambridge University Press and Robot Ethics from MIT Press, along with the U.S. military-funded research that resulted in Governing Lethal Behavior in Autonomous Robots, show that the world is waking up to the challenges of building safe and ethical AI. But these projects focus on limited AI applications and fail to address the most important concern: how to ensure that smarter-than-human AI benefits humanity. The Singularity Institute has been working on that problem longer than anybody, a full decade before the Singularity landed on the cover of TIME magazine.


ACCOMPLISHMENTS IN 2011

2011 was our biggest year yet. Since the year began, we have:

  • Held our annual Singularity Summit in New York City, with more than 900 in attendance. Speakers included inventor and futurist Ray Kurzweil, economist Tyler Cowen, PayPal co-founder Peter Thiel, Skeptic publisher Michael Shermer, Mathematica and WolframAlpha creator Stephen Wolfram, neuroscientist Christof Koch, MIT physicist Max Tegmark, and famed Jeopardy! contestant Ken Jennings.
  • Held a smaller Singularity Summit in Salt Lake City.
  • Held a one-week Rationality Minicamp and a ten-week Rationality Boot Camp.
  • Created the Research Associates program, which currently has 7 researchers coordinating with the Singularity Institute.
  • Published our Singularity FAQ, IntelligenceExplosion.com, and Friendly-AI.com.
  • Wrote three chapters for Springer's upcoming volume The Singularity Hypothesis, along with four other research papers.
  • Began work on a new, clearer website design with lots of new content, which should go live in Q1 2012.
  • Began outlining open problems in Singularity research to help outside collaborators better understand our research priorities.

 

FUTURE PLANS YOU CAN HELP SUPPORT

In the coming year, we plan to do the following:

  • Hold our annual Singularity Summit, in San Francisco this year.
  • Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
  • Publish a document of open research problems in Singularity research, to clarify the research space and encourage other researchers to contribute to our mission.
  • Add additional skilled researchers to our Research Associates program.
  • Publish a well-researched document making the case for existential risk reduction as optimal philanthropy.
  • Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.


We appreciate your support for our high-impact work. As Skype co-founder Jaan Tallinn said:

    We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines… Since we have only one shot at getting this transition right, the importance of Singularity Institute's work cannot be overestimated.

 


Now is your last chance to make a tax-deductible donation in 2011.

If you'd like to support our work: please donate now!

47 comments

Just emptied out my altruism chip jar and donated $1850.

Be an informed donor. I advise reading the GiveWell interview with SIAI from last spring.

To me, it's not a good sign that SIAI said they had no immediate plans for what they would do with new funding.

Jasen Murray's answers to Holden's questions were problematic and did not represent the Singularity Institute's positions well. That is an old interview, and since that time we've done many things to explain what we plan to do with new funding. For example, we published a strategic plan and I gave this video interview. Moreover, the donation page linked in the OP has the most up-to-date information on what we plan to do with new funding: see Future Plans You Can Help Support.

FWIW, the "Future Plans" list seems to me somewhat understating the value of a donation. I realize it's fairly accurate in that it represents the activities of SI. Yet it seems like it could be presented better.

For example, the first item is "hold the Summit". But I happen to know that the Summit generally breaks even or makes a little money, so my marginal dollar will not make or break the Summit. Similarly, a website redesign, while probably important, isn't exciting enough to be listed as the second item. The third item, publishing the open problems document, is a good one, though you should make it seem more exciting.

I think the donation drive page should thoroughly make the case that SI is the best use of someone's charity dollars -- that it's got a great team, great leadership, and is executing a plan with the highest probability of working at every step. That page should probably exist on its own, assuming the reader hasn't read any of the rest of the site, with arguments for why working explicitly on rationality is worthwhile; why transparency matters; why outreach to other researchers matters; what the researchers are currently spending time on and why those are the correct things for them to be working on; and so on. It can be long: long-form copy is known to work, and this seems like a correct application for it.

In fact, since you probably have other things to do, I'll do a little bit of copywriting myself to try to discover if this is really a good idea. I'll post some stuff here tomorrow after I've worked on it a bit.

I shall not complain. :)

OK, here's my crack: http://techhouse.org/~lincoln/singinst-copy.txt

Totally unedited. Please give feedback. If it's good, I can spend a couple more hours on it. If you're not going to use it, please don't tell me it's good, because I have lots of other work to do.

It's good enough that if we use it, we will do the editing. Thanks!

The connection between AI and rationality could be made stronger.

Indeed, that's been my impression for a little while. I'm unconvinced that AI is the #1 existential risk. The set of problems descending from the fact that known life resides in a single biosphere — ranging from radical climate change, to asteroid collisions, to engineered pathogens — seems to be right up there. I want all AI researchers to be familiar with FAI concerns; but there are more people in the world whose decisions have any effect at all on climate change risks — and maybe even on pathogen research risks! — than on AI risks.

But anyone who wants humanity to solve these problems should want better rationality and better (trans?)humanist ethics.

Thanks for pointing out the newer info. The different expansion plans seem sensible.

I'll chime in to agree with both lukeprog in pointing out that the interview is very outdated and with Holden in correcting Louie's account of the circumstances surrounding it.

Be an informed donor. I advise reading the GiveWell interview with SIAI from last spring.

He also talked to Jaan Tallinn. His best points in my opinion:

My reasoning is that it seems to me that if they have unique insights into the problems around AGI, then along the way they ought to be able to develop and publish/market innovations in benign areas, such as speech recognition and language translation programs, which could benefit them greatly both directly (profits) and indirectly (prestige, affiliations) - as well as being a very strong challenge to themselves and goal to hold themselves accountable to, which I think is worth quite a bit in and of itself.

...

I'm largely struggling for a way to evaluate the SIAI team. Certainly they've written some things I like, but I don't see much in the way of technical credentials or accomplishments of the kind I'd expect from people who are aiming to create useful innovations in the field of artificial intelligence.

...

I think that if you're aiming to develop knowledge that won't be useful until very very far in the future, you're probably wasting your time, if for no other reason than this: by the time your knowledge is relevant, someone will probably have developed a tool (such as a narrow AI) so much more efficient in generating this knowledge that it renders your work moot.

...

Instead, in order to build a program that is better at writing source code for AGIs than we are, it seems like you'd likely need to fundamentally understand and formalize what general intelligence consists of. How else can you tell the original program how to evaluate the "goodness" of different possible modifications it might make to its source code?

...

Another note is that even if the real world is more like chess than I think ... the actual story of the development of superhuman chess intelligences as I understand it is much closer to "humans writing the right algorithm themselves, and implementing it in hardware that can do things they can't" than to "a learning algorithm teaching itself chess intelligence starting with nothing but the rules."

...

...designing a dumber-than-humans computer to modify its source code all on its own until it becomes smarter than humans. I don't see how the latter would be possible for a general intelligence (for a specialized intelligence it could be done via trial-and-error in a simulated environment).

...

I feel like once we basically understand how the human predictive algorithm works, it may not be possible to improve on that algorithm (without massive and time-costly experimentation) no matter what the level of intelligence of the entity trying to improve on it. (The reason I gave: The human one has been developed by trial-and-error over millions of years in the real world, a method that won't be available to the GMAGI. So there's no guarantee that a greater intelligence could find a way to improve this algorithm without such extended trial-and-error)...

...

I don't think of the GMAGI I'm describing as necessarily narrow - just as being such that assigning it to improve its own prediction algorithm is less productive than assigning it directly to figuring out the questions the programmer wants (like "how do I develop superweapons"). There are many ways this could be the case.

...

I don't think "programming" is the main challenge in improving one's own source code. As stated above, I think the main challenge is improving on a prediction algorithm that was formed using massive trial-and-error, without having the benefit of the same trial-anderror process.

(Most of these considerations don't apply to developments in pure mathematics, which is my best guess at a fruitful mode of attacking the FAI goals problem. The implementation-as-AGI aspect is a separate problem, likely of a different character, but I expect we need to obtain basic theoretical understanding of FAI goals first to know what kinds of AGI progress are useful. Jumping to development of language translation software is way off-track.)

Thanks a lot for posting this link. The first point was especially good.

I feel like once we basically understand how the human predictive algorithm works, it may not be possible to improve on that algorithm (without massive and time-costly experimentation) no matter what the level of intelligence of the entity trying to improve on it. (The reason I gave: The human one has been developed by trial-and-error over millions of years in the real world, a method that won't be available to the GMAGI. So there's no guarantee that a greater intelligence could find a way to improve this algorithm without such extended trial-and-error)...

The "I feel" opening is telling. It does seem like the only way people can maintain this confusion beyond 10 seconds of thought is by keeping in the realm of intuition. In fact among the first improvements that could be made to the human predictive algorithm is to remove our tendency to let feelings and preferences get all muddled up with our abstract thought.

Given his influence he seems to be worth the time that it takes to try to explain to him how he is wrong?

It does seem like the only way people can maintain this confusion beyond 10 seconds of thought...

The only way to approach general intelligence may be to emulate the human algorithms. The view that we are capable of inventing a simple, artificial algorithm exhibiting general intelligence is not mainstream among AI and machine learning researchers. And even if one assumes that all those scientists are not nearly as smart and rational as SI folks, they seem to have a substantial head start when it comes to real-world experience with the field of AI and its difficulties.

I actually share the perception that we have no reason to suspect that we could reach a level above ours without massive and time-costly experimentation (removing our biases merely sounds easy when formulated in English).

The "I feel" opening is telling.

I think that you might be attributing too much to an expression uttered in an informal conversation.

In fact among the first improvements that could be made to the human predictive algorithm is to remove our tendency to let feelings and preferences get all muddled up with our abstract thought.

What do you mean by "feelings" and "preferences"? The use of intuition seems to be universal, even within the field of mathematics. I don't see how computationally bounded agents could get around "feelings" when making predictions about subjects that are only vaguely understood and defined. Framing the problem in technical terms like "predictive algorithms" doesn't change the fact that making predictions about subjects that are poorly understood is error-prone.

Given his influence he seems to be worth the time that it takes to try to explain to him how he is wrong?

Yes. He just doesn't seem to be someone whose opinion on artificial intelligence should be considered particularly important. He's just a layman making the typical layman guesses and mistakes. I'm far more interested in what he has to say on warps in spacetime!

Being an informed donor requires more than an outdated, non-representative interview. This examination has far more high-quality information and, according to its creator, will be updated soon (although he is apparently behind on the schedule he set for himself).


fundraiser only 20% filled so far

I was checking for the usual swag and membership tiers on the donation page and found nothing. Surely people would go for the t-shirts/hoodies/caps/posters/membership cards, being mentioned on the SI site, etc.

In the meantime, until we get that set up, I'll mail a Singinst t-shirt to anyone who donates $100 or more and emails me.

It's this design on the front, and the Singularity Institute logo on the back. http://www.imaginaryfoundation.com/index.php?pagemode=detail&type=Mens%20Sale&uid=C190B0


I never got my t-shirt.

How would one go about e-mailing you?

Unless you just meant sending a private message.

kfischer @$ gmail *@ com

I suggest making it easier to get bigger images of your designs; they're detailed enough that what you've got on your site, even with view image and enlarge, doesn't show them adequately.

It's not our site; the Imaginary Foundation is kind of like a fake bizarro version of the Singularity Institute that's actually mostly a t-shirt company.

Out of curiosity, did SI talk to Imaginary Foundation and set up these shirts or are you modifying them personally, or what's the deal?

Personally I'd like a simple shirt with just the SI logo. As much as I enjoy most of the Imaginary Foundation's designs, this particular shirt has a "Three Wolf Moon" vibe.

Yes, the Director of the Imaginary Foundation is, perhaps unsurprisingly, a long-time movement Singularitarian.

Agreed. That's in the works, for the new website.

Should I assume that website will also include links to the information from your other comments on the donation page?

Sorry, what do you mean?

Cross-referencing. If you visit just the donation page, there are no prominent links to 'what this would be used for'-style information, i.e. what you put in your other comment. Obviously a minor issue at most, but you know how those work.

Though with that said, I've been wondering about that particular point. Website development, of all things... there are probably dozens around here with the skills to do that, myself included, so it seems like the perfect option for in-kind donations. Do you know who I'd need to talk to about that, and whether or not there's any point? I can think of a few reasons you'd want to keep it in-house, not least confidentiality, but I don't know which ones might apply.

Yes, the new donate page has links to explanations of what the money gets used for.

We are already a long way down the road to the new website with a professional designer, but we have lots of other design and web development work that we'd love to give to volunteers when they are willing. If you're interested in donating in kind in that way, please contact luke [at] singularity.org.

Weird - I am a somewhat regular donor and did not hear about the drive until this post. Checked my email, nothing there.

I happened to have donated last week, and did it again for the drive.

"There will be a lot more whales if there is a Future"

Those of us who are poor are less likely to straight-up donate. But there are things such as the Singularity Institute credit card, which donates $50 when opened and about 1% of purchases.

Personally, I would also donate to get more chapters of HPMoR up, because I would consider it more like "buying a book" than "giving money away." I remember there was an HPMoR drive before, and it seemed to work well.


Personally, I would also donate to get more chapters of HPMoR up, because I would consider it more like "buying a book" than "giving money away."

JKR would have to approve that (unless it's just an accelerated release, like last time). Maybe EY can ask her nicely. Who knows, she might even decide to donate; he can be quite persuasive, I hear.

You are correct.

I would go for accelerated release, rather than trying to untangle a quagmire of copyright issues, though.

It wouldn't stop me from donating, but it's somewhat annoying that donations to US charities are not tax-deductible in Ireland. Before I spend time and money trying to find a workaround: can anyone else think of a solution?

Well, there's always the Future of Humanity Institute; I go back and forth on the relative merits of SIAI and FHI.

I was planning on giving a donation at tax return time.