The $125,000 Summer Singularity Challenge

From the SingInst blog:

Thanks to the generosity of several major donors, every donation to the Singularity Institute made now until August 31, 2011 will be matched dollar-for-dollar, up to a total of $125,000.

Donate now!

(Visit the challenge page to see a progress bar.)

Now is your chance to double your impact while supporting the Singularity Institute and helping us raise up to $250,000 to help fund our research program and stage the upcoming Singularity Summit… which you can register for now!

$125,000 in backing for this challenge is being generously provided by Rob Zahra, Quixey, Clippy, Luke Nosek, Edwin Evans, Rick Schwall, Brian Cartmell, Mike Blume, Jeff Bone, Johan Edström, Zvi Mowshowitz, John Salvatier, Louie Helm, Kevin Fischer, Emil Gilliam, Rob and Oksana Brazell, Guy Srinivasan, John Chisholm, and John Ku.


2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress. Journalists like Torie Bosch of Slate have argued that “We need to move from robot-apocalypse jokes to serious discussions about the emerging technology.” We couldn’t agree more — in fact, the Singularity Institute has been thinking about how to create safe and ethical artificial intelligence since long before the Singularity landed on the front cover of TIME magazine.

The last 1.5 years were our biggest ever. Since the beginning of 2010, we have:

In the coming year, we plan to do the following:

  • Hold our annual Singularity Summit, in New York City this year.
  • Publish three chapters in the upcoming academic volume The Singularity Hypothesis, along with several other papers.
  • Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
  • Publish a document of open research problems related to Friendly AI, to clarify the research space and encourage other researchers to contribute to our mission.
  • Add additional skilled researchers to our Research Associates program.
  • Publish well-researched documents making the case for existential risk reduction as optimal philanthropy.
  • Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.

We appreciate your support for our high-impact work. As PayPal co-founder and Singularity Institute donor Peter Thiel said:

“I’m interested in facilitating a forum in which there can be… substantive research on how to bring about a world in which AI will be friendly to humans rather than hostile… [The Singularity Institute represents] a combination of very talented people with the right problem space [they’re] going after… [They’ve] done a phenomenal job… on a shoestring budget. From my perspective, the key question is always: What’s the amount of leverage you get as an investor? Where can a small amount make a big difference? This is a very leveraged kind of philanthropy.”

Donate now, and seize a better than usual chance to move our work forward. Credit card transactions are securely processed through Causes.com, Google Checkout, or PayPal. If you have questions about donating, please call Amy Willey at (586) 381-1801.

259 comments

I just put in 5100 USD, the current balance of my bank account, and I'll find some way to put in more by the end of the challenge.

You deserve praise. Would you like some praise?

Praise Rain, for being such a generous benefactor! :)

I vote that we change the referent in the phrase "make it rain" to refer to the LW member instead of the meteorological event.

Thank you SO MUCH for the clarification VNKKET linked to. I was worried. I would usually discourage someone from donating all of their savings to any cause, including this one, but in this case it looks like you have thought it through, and what you are doing a) makes sense and b) is the result of a well-thought-out lifestyle optimization process.

I'd be happy to talk with you or exchange email (my email is public) to discuss the details, both to better learn to optimize my own life and to try to help you with yours, since I expect such efforts will be high-return, given the evidence that you are a person who actually does the things you think would be good lifestyle optimizations, at least some of the time.

I'm also desperately interested in better characterizing people who optimize their lifestyles and who try to live without fear, etc.

If you have an email exchange and neither of you minds making it public, please do so.

Nice! And for anyone freaked out by the "current balance of my bank account" part, there's an explanation here.

This is admirable.

However, it's important to note that the path that maximizes your own individual hardship is not necessarily the one that maximizes your contribution to humanity's future. For example, it's possible that by keeping some of that money, you could buy luxuries (like, say, a Netflix subscription) that would allow you to recover more quickly from work-related weariness and spend your evenings starting an online company (or acquiring the skills necessary to start an online company, and then starting an online company) that would result in a larger expected donation to SIAI in the long term.

I used to have your attitude of "live very frugally and give SIAI every spare dollar". My new attitude is to optimize for both high income and low expenses (keeping in mind that spending money on myself increases my expected income up to a certain point), and not to donate to SIAI automatically. I'm thinking of starting a rival charity in the long run, due to a vague intuition, based on very limited evidence, that healthy competition can be good for charities, and the fact that I have some ideas that I think might be better than SIAI's, which Michael Vassar doesn't seem interested in.

By the way, I declare Crocker's Rules--it would be extremely valuable if someone provided persuasive evidence that I'm on the wrong track.

I am not a super hero or an ascetic. I'm a regular random internet person with a particular focus on the future. I only donated 26 percent of my gross income last year. And I have a Netflix subscription.

Your superpower is willpower and you exist as a hero to many :)

You're dumb. I wish I could be more like Rain.

Just had to get that out of my system. On the whole, though, I act in accordance with what you just stated, and I hope you do start that charity if it turns out competition is good for charities. Furthermore, I hope I can get to the point where I can invoke Crocker's Rules on my own points.

Good to hear about the successes, but I am still skeptical about this one:

Since the beginning of 2010, we have:...

Held a wildly successful Rationality Minicamp.

I have yet to see any actual substantiation for this claim beyond the SIAI blog's say-so and a few qualitative individual self-reports. I have not seen any attempt to extend and replicate this success, nor any evidence that such replication would even be possible.

If it actually were a failure, how would we know? Would anyone there even admit it, or would they prefer to avoid making its leaders look bad?

Sorry to be the bad guy here, but this claim has been floating around for a while and looks like it will become one of those things "everyone knows".

Wasn't there something similar a while ago? ... yes, there was. I can reasonably assume there will be others in the future. You are trying to get people to donate by appealing to an artificial sense of urgency ("Now is your chance to", "Donate now"). Beware that this triggers dark-arts alarm bells.

Nevertheless, I have now donated an amount of money.

Only on this site would you see perfectly ordinary charity fundraising techniques described as "dark arts", while in the next article over, the community laments the poor image of the concept of beheading corpses and then making them rise again.

To be fair, it's just the heads that rise again, not the rest of the corpse... ah, I'm not helping, am I? :-)

I thought that we'd pretty much ditched the beheading part precisely for that reason?

CI doesn't behead people; Alcor offers it as an option. If I've just met someone at a party, I'll tend to say "I'm having my head frozen", because people have heard of that, but I'll explain that I'm actually signed up for whole-body if the conversation gets that far.

If I've just met someone at a party, I'll tend to say "I'm having my head frozen"

I usually offer my name and ask them theirs.

I'm quite often asked about my necklace, and I'll say "It's my contract of immortality with the Cult of the Severed Head", or in some contexts, "It's my soul" or "It's my horcrux".

I've asked you this before and you haven't answered: Severed Head? Aren't you signed up with CI, which doesn't do neuro? So how does that make sense?

Is it beneficial to say "immortality"? Would "It's my contract of resurrection with the Cult of the Severed Head" be deficient?

Phrases like "live forever" and "immortal" bring corrupting emotional connotations with them. It's not automatic to ignore the literal meaning of terms, even if we consciously keep track of what we mean - and of course in a discussion, we can only do our best to help the other person not be confused, not think for them.

The key thing is for your voice to make it clear that you're not at all afraid and that you think this is what the high-prestige smart people do. Show the tiniest trace of defensiveness and they'll pounce.

So, your method leaves open the option of educating your interlocutor, if they question further. If all you're worried about is avoiding a status hit, you could confidently proclaim it to be an amulet given to you by the king of all geese in honor of your mutual defense treaty.

I wasn't referring to prestige at all when I said "beneficial". I was exclusively referring to what sketerpot is referring to.

In arguments, it's pretty common for people to argue for the traditional "decay and die before living even a hundred years" system with arguments against literal immortality. I've seen this happen so many times.

I don't see how "resurrection" is less of a show of confidence, confidence by nonchalance in framing an issue in light least favorable to the speaker. The advantage is that people do not get confused and think it a bad idea for reasons that don't actually apply.

In arguments, it's pretty common for people to argue for the traditional "decay and die before living even a hundred years" system with arguments against literal immortality. I've seen this happen so many times.

"What if you're getting your liver continuously ripped out by disagreeable badgers?" the argument goes. "Immortality would be potentially super-painful! And that's why the life expectancies in the society in which I happened to be born are about right."

The easiest way to bypass this semantic confusion is to explicitly say that it's about always having the option of continuing to be alive, rather than what people usually mean by immortality.

(P.S.: Calling the necklace a phylactery would also be fun.)

The easiest way to bypass this semantic confusion is to explicitly say that it's about always having the option of continuing to be alive, rather than what people usually mean by immortality.

1) Death can still happen from any number of causes - tornadoes, for example.

2) That may bypass some of the most conscious semantic confusion, in the same way that declaring that whenever you said any number, you always meant that number minus seven (if you did that) would clear up some confusion. There is a better way.

3) It's probably not true.

I have a wallet card rather than a necklace; by and large I end up talking about it because another of my friends brings it up.

Really? My image of cryonics has always been of people lying in tanks, a pre-LW conceptualization. Cutting off heads has always seemed to me like a wasteful way of going about things, and it has much more of a "creepy sci-fi movie" vibe to it.

Agreed; I'd personally like it if a planned schedule for major grants were disclosed regularly, maybe annually.

Anyway, I donated 500 USD.

Disagree about the dark arts, but upvoted for donating anyways.

I fully and completely embrace the "dark arts"; is there a problem with that?

Your thoughts on the matter are unclear. They could be any of the following, or something else:

"I see no reason to classify broad forms of social interaction as always bad, though they may be effective without persuading people as they would wish to be persuaded, that's just another negative to take into account when considering the total consequences of a speech act."

"I see no reason to classify broad forms of social interaction as always bad, though they may be effective without persuading people as they would wish to be, I don't care directly about people's desires to believe ideas only for certain reasons, such as persuasion and not emotional manipulation."

"I see some forms of changing minds as inherently good and others as inherently bad, and though I value being good rather than bad, it's not a high enough priority to be expressed in my actions very often."

"I see some forms of changing minds as inherently good and others as inherently bad, but I prefer to do what I feel like rather than what's good."

"Me want cookie! Me eat cookie! Om nom nom nom!"

Yeah, I can't fully write out what I mean; I just think the term has come to be too broad, to the point where it nixes obviously pragmatic lines of thought and action. It's more like:

"I see no reason to classify broad forms of social interaction as always bad, though they may be effective without persuading people as they would wish to be persuaded, upon first reflection, but they'll appreciate it later."

£300; 10% of my summer internship's salary, before tax etc.

I just donated Round(1000 Pi / 3) USD. I also had Google doing an employer match.
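(For anyone checking the arithmetic, a quick one-liner confirms the amount; Python shown purely for illustration:)

```python
import math

# 1000 * pi / 3 = 1047.197..., which rounds to 1047 USD
print(round(1000 * math.pi / 3))  # 1047
```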

Strangely enough, I went through the 'donate publicly' link but chose not to use Facebook, and in the end it called me 'Anonymous Donor'.

It did this ("Anonymous Donor") to me, too.

There's a major conflict of interest in accepting donations from Clippy.

There's a major conflict of interest in accepting donations from Clippy.

I would accept donations from Lucifer himself if he was silly enough to give them to me. I don't see a problem. :)

Really? If a paperclipper loses resources and the SIAI gains resources, the benefit is probably greater than if a human donated the same amount.

No, there isn't.

By the way, I think I'll go to the Singularity Summit this year. It's 385 USD if you register before the end of July 31, EST.

Clippy, how can donating to the SIAI possibly meet your goal of maximizing paperclips? Not that I object...

If there were a positive singularity such that we had a whole galaxy's worth of resources, then we (humanity) might turn one planet into paperclips, just for amusement.

Clippy is also socializing (generating positive affect) with people likely to have a hand in the Singularity. It's rather likely, especially considering the relative popularity around here of the idea of acausal trade, that some LWer might decide to devote 10^-7 or so of their post-singularity resources to paperclip production.

Not entirely correct, but I appreciate your attempt at empathizing. You're a good human.

How can responding to trolls like you possibly meet my goal of learning useful information, dummy?

I am happy to see that the success of the previous matching program is being followed up with additional matching funds, and that there is such a broad base of sponsors. I have donated $2000 on top of my typical annual donation.

2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress.

Do people here generally think that this is true? I don't see much of an intersection between Watson and AI; it seems like a few machine learning algorithms that approach Jeopardy problems in an extremely artificial way, much like chess engines approach playing chess. (Are chess engines artificial intelligence too?)

I actually do think it's a big deal, as well as being flashy, though not an extremely big deal. Something along the lines of the best narrow AI accomplishment of any given year and the flashiest of any given 3-5 year period.

Further to my previous comment, I found the second Final Jeopardy! puzzle to be instructive. The category was "US Cities" and the clue was this:

Its largest airport was named for a World War II hero; its second largest, for a World War II battle.

A reasonably smart human will come up with an algorithm on the fly for solving this, which is to start thinking of major US cities (likely to have 2 or more airports); remember the names of their airports, and think about whether any of the names sound like a battle or a war hero. The three obvious cities to try are Los Angeles, New York, and Chicago. And "Midway" definitely sounds like the name of a battle.
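That strategy is concrete enough to sketch in code. Below is a minimal toy version, not a claim about how Watson works: the city and airport data are hard-coded, and the hero/battle sets stand in for the fuzzy "does this name ring a bell?" judgment a human makes from memory.

```python
# Toy version of the human's on-the-fly strategy for the airport clue.
# All data here is a hypothetical stand-in for facts recalled from memory.

AIRPORTS = {
    "Chicago": ["O'Hare", "Midway"],    # largest, second largest
    "New York": ["JFK", "LaGuardia"],
    "Los Angeles": ["LAX", "Burbank"],
}

WW2_HEROES = {"O'Hare"}    # Butch O'Hare, WWII flying ace
WW2_BATTLES = {"Midway"}   # Battle of Midway

def solve_airport_clue():
    for city, (largest, second) in AIRPORTS.items():
        if largest in WW2_HEROES and second in WW2_BATTLES:
            return city
    return None

print(solve_airport_clue())  # Chicago
```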

But Watson was totally clueless. Even though it had the necessary information, it had to rely on pre-programmed algorithms to access that information. It was apparently unable to come up with a new algorithm on the fly.

Probably Watson relies heavily on statistical word associations. If the puzzle has "Charles Schulz" and "This Dog" in it, it will probably guess "Snoopy" without really parsing the puzzle. I'm just speculating here, but my impression is that AI has a long way to go.
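To make that speculation concrete, here is a minimal sketch of the kind of association-based guessing being described. The co-occurrence counts are invented for illustration; a real system would derive them from a large text corpus.

```python
# Toy association-based guesser: score each candidate answer by how strongly
# it co-occurs with the clue's keywords. All counts below are invented.

COOCCURRENCE = {
    ("charles schulz", "snoopy"): 120,
    ("dog", "snoopy"): 95,
    ("charles schulz", "garfield"): 2,
    ("dog", "garfield"): 40,
}

def guess(keywords, candidates):
    def score(candidate):
        return sum(COOCCURRENCE.get((kw, candidate), 0) for kw in keywords)
    return max(candidates, key=score)

print(guess(["charles schulz", "dog"], ["snoopy", "garfield"]))  # snoopy
```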

A reasonably smart human will come up with an algorithm on the fly for solving this, which is to start thinking of major US cities (likely to have 2 or more airports); remember the names of their airports, and think about whether any of the names sound like a battle or a war hero. The three obvious cities to try are Los Angeles, New York, and Chicago. And "Midway" definitely sounds like the name of a battle.

But Watson was totally clueless. Even though it had the necessary information, it had to rely on pre-programmed algorithms to access that information. It was apparently unable to come up with a new algorithm on the fly.

This isn't meaningful. Whatever method we use to "come up with algorithms on the fly" is itself an algorithm, just a more complicated one.
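One way to see the point: a program that constructs a new procedure at runtime is still, viewed from the outside, a single fixed algorithm. A trivial Python illustration (the names and dispatch rule are made up for the example):

```python
# A fixed "meta-algorithm" that builds a new search procedure at runtime.
# The returned function did not exist until the clue arrived, yet the
# whole system is still just one pre-programmed algorithm.

def make_solver(clue_keywords):
    def solver(candidates):
        return [c for c in candidates
                if all(kw in c.lower() for kw in clue_keywords)]
    return solver

find_midway = make_solver(["mid"])  # a "new algorithm", created on the fly
print(find_midway(["Midway", "O'Hare", "LaGuardia"]))  # ['Midway']
```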

Probably Watson relies heavily on statistical word associations. If the puzzle has "Charles Schulz" and "This Dog" in it, it will probably guess "Snoopy" without really parsing the puzzle.

This isn't true. You know, a lot of the things you're talking about here regarding Watson aren't secret...

This isn't meaningful. Whatever method we use to "come up with algorithms on the fly" is itself an algorithm, just a more complicated one.

Then why wasn't Watson simply programmed with one meta-algorithm rather than hundreds of specialized algorithms?

This isn't true. You know, a lot of the things you're talking about here regarding Watson aren't secret.

FWIW, the Wikipedia article indicates that Watson would "parse the clues into different keywords and sentence fragments in order to find statistically related phrases." Would you mind giving me some links which show that Watson doesn't rely heavily on statistical word associations?

Then why wasn't Watson simply programmed with one meta-algorithm rather than hundreds of specialized algorithms?

I don't have a clue what you're talking about. Where are you getting this claim that it was programmed with "hundreds of specialized algorithms"? And how is that really qualitatively different from what we do?

Would you mind giving me some links which show that Watson doesn't rely heavily on statistical word associations?

I never said it didn't. I was contradicting your statement that it relied on that without any parsing.

I don't have a clue what you're talking about. Where are you getting this claim that it was programmed with "hundreds of specialized algorithms"?

For one thing, the Wikipedia article talks about thousands of algorithms. My common sense tells me that many of those algorithms are specialized for particular types of puzzles. Anyway, why didn't Watson's creators program Watson with a meta-algorithm to enable it to solve puzzles like the airport puzzle?

And how is that really qualitatively different from what we do?

For one thing, smart people can come up with new algorithms on the fly -- for example, an organized way of solving the airport puzzle. If that were just a matter of making a more complicated computer program, then why didn't Watson's creators do it?

I was contradicting your statement that it relied on that without any parsing.

My statement was speculation. So if you are confident that it is wrong, then presumably you have solid evidence for believing so. If you don't know one way or the other, then we are both in the same boat.

For one thing, smart people can come up with new algorithms on the fly -- for example, an organized way of solving the airport puzzle. If that were just a matter of making a more complicated computer program, then why didn't Watson's creators do it?

That's like asking why a human contestant failed to come up with a new algorithm on the fly. Or, put simply: no one is perfect. Not the other players, not Watson, and not Watson's creators. While you've certainly identified a flaw, I'm not sure it's really quite as big a deal as you make it out to be. I mean, Watson did beat actual humans, so clearly they managed something fairly robust.

I don't think Watson is anywhere near an AGI, but the field of AI development seems to mostly include "applied-AI" like Deep Blue and Watson, and failures, so I'm going to go ahead and root for the successes in applied-AI :)

That's like asking why a human contestant failed to come up with a new algorithm on the fly.

I disagree. A human contestant who failed to come up with a new algorithm was perhaps not smart enough, but is still able to engage in the same kind of flexible thinking under less challenging circumstances. I suspect Watson cannot do so under any circumstances.

I mean, Watson did beat actual humans, so clearly they managed something fairly robust.

Without its super-human buzzer speed, I doubt Watson would have won.

I believe that, the way things were designed, Ken Jennings was probably at least as good as Watson on buzzer speed. Watson presses the buzzer with a mechanical mechanism, to give it a latency similar to a finger's, and Watson doesn't start going for the buzzer until it sees the 'buzzer unlocked' signal. By contrast, Ken Jennings has said that he starts pressing the buzzer before the signal, relying on his intuitive sense of the typical delay between the completion of a question and the buzzer-unlock signal.

Here's what Ken Jennings had to say:

Watson does have a big advantage in this regard, since it can knock out a microsecond-precise buzz every single time with little or no variation. Human reflexes can't compete with computer circuits in this regard. But I wouldn't call this unfair ... precise timing just happens to be one thing computers are better at than we humans. It's not like I think Watson should try buzzing in more erratically just to give homo sapiens a chance.

Here's what Wikipedia says:

The Jeopardy! staff used different means to notify Watson and the human players when to buzz, which was critical in many rounds. The humans were notified by a light, which took them tenths of a second to perceive. Watson was notified by an electronic signal and could activate the buzzer within about eight milliseconds. The humans tried to compensate for the perception delay by anticipating the light, but the variation in the anticipation time was generally too great to fall within Watson's response time. Watson did not operate to anticipate the notification signal.
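The dynamic that passage describes is easy to simulate. In the toy model below, Watson buzzes a fixed 8 ms after the enable signal, while the human anticipates the signal with some timing jitter; the jitter value is a guess for illustration, not a measured figure, and an early press is crudely modeled as a lockout.

```python
import random

# Toy model of the buzzer race described above. Parameters are illustrative.
WATSON_LATENCY = 0.008  # seconds after the enable signal

def human_buzz():
    # The human aims for the instant of enablement with ~50 ms jitter.
    # A press before the signal is modeled as a lockout (never wins).
    press = random.gauss(0.0, 0.050)
    return press if press >= 0 else float("inf")

trials = 100_000
human_wins = sum(human_buzz() < WATSON_LATENCY for _ in range(trials))
print(f"Human wins the buzz in ~{human_wins / trials:.0%} of trials")
```

Under those assumptions the human beats Watson only a few percent of the time, which matches Wikipedia's claim that the variation in anticipation time was generally too great.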

Interesting, thanks. Upvote for doing some actual research. ;-)

For one thing, the Wiki article talks about thousands of algorithms. My common sense tells me that many of those algorithms are specialized for particular types of puzzles. Anyway, why didn't Watsons creators program Watson with a meta-algorithm to enable it to solve puzzles like the Airport puzzle?

Er... they did? The whole thing ultimately had to produce one answer, after all. It just wasn't good enough.
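For what it's worth, published descriptions of the DeepQA architecture do have roughly this shape: many specialized candidate generators and evidence scorers whose outputs are merged into a single ranked answer. A heavily simplified sketch (the strategies and weights below are invented placeholders, not IBM's):

```python
# Heavily simplified "meta-algorithm over specialized algorithms": several
# independent strategies each propose answers with confidences, and a
# weighted merger picks one. Strategies and weights are invented.

def title_match(clue):
    return {"Chicago": 0.4}

def word_association(clue):
    return {"Chicago": 0.3, "Toronto": 0.5}

STRATEGIES = [(title_match, 1.0), (word_association, 0.5)]

def answer(clue):
    scores = {}
    for strategy, weight in STRATEGIES:
        for candidate, confidence in strategy(clue).items():
            scores[candidate] = scores.get(candidate, 0.0) + weight * confidence
    return max(scores, key=scores.get)

print(answer("Its largest airport was named for a WWII hero..."))  # Chicago
```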

The whole thing ultimately had to produce one answer, after all. It just wasn't good enough.

Ok, then arguably it's not so simple to create an algorithm which is "just more complicated." I mean, one could say that an ICBM is just like a Qassam rocket, but just more complicated.

An ICBM is "just" a bow-and-arrow system with a more precise guidance system, more energy available to spend reaching its destination, and a more destructive payload.

Right, and it's far more difficult to construct. It probably took thousands of years between the first missile weapons and modern ICBMs. I doubt that it will take thousands of years to create general AI, but it's still the same concept.

The first general AI will probably be "just" an algorithm running on a digital computer.

This comment doesn't appear to have any relevance. Where did anyone suggest that the way to make it better is to just make it more complicated? Where did anyone suggest that improving it would be simple? I am completely baffled.

Earlier, we had this exchange:

Me:

But Watson was totally clueless. Even though it had the necessary information, it had to rely on pre-programmed algorithms to access that information. It was apparently unable to come up with a new algorithm on the fly.

You:

Whatever method we use to "come up with algorithms on the fly" is itself an algorithm, just a more complicated one.

So you seemed to be saying that there's no big deal about the human ability to come up with a new algorithm -- it's just another algorithm. Which is technically true, but this sort of meta-algorithm obviously would require a lot more sophistication to create.

Well, yes. Though I should first note that I'm skeptical that what you're talking about -- the process of answering a Final Jeopardy! question -- could actually be described as coming up with new algorithms on the fly in the first place. Regardless, if we do accept that, my point stands: there is no meaningful distinction between relying on pre-programmed algorithms and (algorithmically) coming up with new ones on the fly. There are plenty of ways in which our brains are more sophisticated than Watson, but that one isn't a meaningful distinction. Perhaps you mean something else.

there is no meaningful distinction between relying on pre-programmed algorithms and (algorithmically) coming up with new ones on the fly

Then again my question: Why not program such a meta-algorithm into Watson?

I still don't think you're saying what you mean. The question doesn't make any sense. The answer to the question you probably intended to ask is, "Because the people writing Watson didn't know how to do so in a way that would solve the problem, and presumably nobody currently does". I mean, I think I get your point, but...