Michael Anissimov posted the following on the SIAI blog:

Thanks to the generosity of two major donors, Jaan Tallinn (a founder of Skype and Ambient Sound Investments) and Edwin Evans (CEO of the mobile applications startup Quinly), every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.

Interested in optimal philanthropy — that is, maximizing the future expected benefit to humanity per charitable dollar spent? The technological creation of greater-than-human intelligence has the potential to unleash an “intelligence explosion” as intelligent systems design still more sophisticated successors. This dynamic could transform our world as greatly as the advent of human intelligence has already transformed the Earth, for better or for worse. Thinking rationally about these prospects and working to encourage a favorable outcome offers an extraordinary chance to make a difference. The Singularity Institute exists to do so through its research, the Singularity Summit, and public education.

We support both direct engagement with these issues and the improvements in methodology and rationality needed to make better progress. Through our Visiting Fellows program, researchers from undergraduates to Ph.D.s pursue questions on the foundations of Artificial Intelligence and related topics in two-to-three-month stints. Our Resident Faculty, which grew from three researchers to four last year, pursues long-term projects, including AI research, a literature review, and a book on rationality, the first draft of which was just completed. Singularity Institute researchers and representatives gave over a dozen presentations at half a dozen conferences in 2010. Our Singularity Summit conference in San Francisco was a great success, bringing together over 600 attendees and 22 top scientists and other speakers to explore cutting-edge issues in technology and science.

We are pleased to receive donation matching support this year from Edwin Evans of the United States, a long-time Singularity Institute donor, and Jaan Tallinn of Estonia, a more recent donor and supporter. Jaan recently gave a talk on the Singularity and his life to an entrepreneurial group in Finland. Here’s what Jaan has to say about us:

“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us. Since we have only one shot at getting the transition right, the importance of SIAI’s work cannot be overestimated. Not finding any organisation to take up this challenge as seriously as SIAI on my side of the planet, I conclude that it’s worth following them across 10 time zones.”
– Jaan Tallinn, Singularity Institute donor

Make a lasting impact on the long-term future of humanity today — make a donation to the Singularity Institute and help us reach our $125,000 goal. For more detailed information on our projects and work, contact us at institute@intelligence.org or read our new organizational overview.

-----

Kaj's commentary: if you haven't done so recently, do check out the SIAI publications page. There are several new papers and presentations, out of which I thought that Carl Shulman's Whole Brain Emulations and the Evolution of Superorganisms made for particularly fascinating (and scary) reading. SIAI's finally starting to get its paper-writing machinery into gear, so let's give them money to make that possible. There's also a static page about this challenge; if you're on Facebook, please take the time to "like" it there.

(Full disclosure: I was an SIAI Visiting Fellow in April-July 2010.)

38 points, 378 comments

I just put in 2700 USD, the current balance of my bank account, and I'll find some way to put in more by the end of the challenge.

Not that I don't think your donation is admirable, but I'm curious: how are you able to donate your entire bank account without running the risk that a black-swan event would leave you unable to respond appropriately, compromising your future well-being and your ability to donate to SIAI?

Do you think it's rational in general for people to donate all their savings to the SIAI?

I have a high limit credit card which I pay off every month, no other form of debt, no expenses until my next paycheck, a very secure, well-paying job with good health insurance, significant savings in the form of stocks and bonds, and several family members and friends who would be willing to help me in the event of some catastrophe.

I prepare and structure my life such that I can take action without fear. I attribute most of this to reading the book Your Money Or Your Life while I was in college. My only regret is that I can afford to give more, but fail to have the cash on hand due to lifestyle expenditures and saving for my own personal future.

Thanks for the reply. Bravo on structuring your life the way you have!

wedrifid (+3, 11y): Have a reliable source of income and an overdraft available.

anonym (+1, 11y): I don't think those two alone are sufficient for it to be rational. I work for a mid-sized (in the thousands of employees), very successful, privately held company with a long, stable history, and I feel very secure in my job. I would say I have a reliable source of income, but even so, I wouldn't estimate the probability of finding myself suddenly and unexpectedly out of work in the next year at less than 1%, and if somebody has school loans, a mortgage, etc., then in that situation it seems more rational to have at least enough cash to stay afloat for a few months or so (or have stocks, etc., that could be sold if necessary) while finding a new job.

wedrifid (+5, 11y): They are sufficient to make the "entire bank account" factor irrelevant, and the important consideration the $2,700 as an absolute figure. "Zero" is no longer an absolute cutoff, and instead a point at which costs potentially increase.
anonym (+1, 11y): Okay, let's think this through with a particular case. Assume only your two factors: John has a reliable source of income and overdraft protection on an account. Since you assert that those two factors are sufficient, we can suppose John doesn't have any line of credit, doesn't own anything valuable that could be converted to cash, doesn't know anybody who could give him a loan or a job, etc.

John donates all his savings, and loses his job the next day. He has overdraft protection on his empty bank account, which will save him from some fees when he starts bouncing checks, but the overdraft protection will expire pretty quickly once checks start bouncing. Things will spiral out of control quickly unless John is able to get another source of income sufficient to cover his recurring expenses, or there is some other compensating factor beyond the two you mentioned (which shows they are not sufficient). Or do you think he's doing okay a month later, when the overdraft protection is no longer in effect, he has tons of bills due, needs to pay his rent, still hasn't found a job, and has run out of food? And if he hasn't found work within a few months more (which is quite possible), he'll be evicted from his home and his credit will be ruined from not having paid any of his bills for several months.

ETA: the point isn't that all of that will happen or is even likely to happen, but that a bank account represents some amount of time the person can stay afloat while looking for work. It greatly increases the likelihood that they will find a new source of income before they hit the catastrophe point of being evicted and having their credit ruined.
Alicorn (+4, 11y): It looks to me like you're ignoring the "reliable" bit in "reliable source of income".

wnoise (+2, 11y): There's no such thing as "reliable" at that level.

anonym (−2, 11y): No, I'm not. I'm assuming that even if one has a reliable source of income, one still might lose that source of income. Maybe you're interpreting 'reliable' as 'certain' or something very close to that. To give some numbers, I would consider a source for which there is a 1% to 3% chance of losing it within 1 year a reliable source, and my point remains that in that situation, with no other compensating factors than overdraft protection on a bank account, it is not rational to donate all your savings to charity.
shokwave (+2, 11y): And so John bets his current lifestyle that he won't lose his job. That bet looks like:

97%: SIAI gets 2700 dollars. (27 utils)
3%: SIAI gets 2700 dollars, hardship for John. (neg 300 utils)

The bet you're recommending is:

97%: Savings continue to grow. (1 util)
3%: Savings wiped out to prevent hardship for John. (0 util)

The first bet comes out at E(util): 17.19, the second at 0.97 utils. You need to be very, very risk-averse for the second option to be preferable. So risk-averse that I would not consider you rational (foregoing 16.22 utils to avoid a 3% chance of neg 300 utils?)
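A quick sketch of the arithmetic behind these two bets. The probabilities and util figures (27, −300, 1, 0) are the ones stipulated in the comment above, not derived from anything:

```python
# Expected utility of a list of (probability, utils) outcomes.
def expected_util(outcomes):
    return sum(p * u for p, u in outcomes)

# Bet 1: donate the $2,700 now (utils as stipulated above).
donate_now = expected_util([(0.97, 27), (0.03, -300)])
# Bet 2: keep the savings as a buffer.
keep_savings = expected_util([(0.97, 1), (0.03, 0)])

print(round(donate_now, 2))                 # 17.19
print(round(keep_savings, 2))               # 0.97
print(round(donate_now - keep_savings, 2))  # 16.22 utils forgone
```

The whole argument rides on those stipulated utils; with a much larger hardship penalty, the second bet wins, which is the point the replies below press on.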
randallsquared (+6, 11y): Er, you've implied the level of risk aversion by assigning utils, so of course it would be irrational to act more risk-averse than John actually is, but it's a weird way to phrase it. If John were more risk-averse, the utils for hardship might be considerably lower.

shokwave (0, 11y): Assuming we can speak cardinally of utility, John has to value his hardship as 33 times as bad as SIAI getting money is good for the first bet not to come out positive at all. If John cares not one whit for the SIAI, that makes sense. If John is a codeword for the case that started this discussion [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38d0], that doesn't make sense. Maybe I should have just made my point with the ratios.
anonym (−1, 11y): I think most people would evaluate the hardship of having their credit ruined and being evicted as far greater than 33 times as bad as SIAI getting money (or SIAI getting money in 1 year, when a 6-month cash reserve safety net has been built up). Also, it's actually greater than 33 times, because you also have to include the probability that they won't get another income source before they hit the catastrophe point; but even including that, I think most people would rate their life being ruined as orders of magnitude more negative utils than SIAI getting the donation is positive utils.

John is definitely not the case that started this discussion. My entire point is that Rain has a bunch of other compensating factors, which one could easily argue make it rational in that case. The issue under debate is whether somebody with none of those additional compensating factors that Rain has would be rational to do the same, given just a reliable source of income and overdraft protection.
paulfchristiano (+2, 11y): If you are a consequentialist and decide that donating $10,000 to SIAI is a good idea, then you already believe the benefit of $10,000 of the SIAI's work significantly exceeds the benefit of saving the life of a child in the developing world. So now, would you say it is obvious that being evicted is 33 times worse than letting a child die because they lack basic medical care? Would you have even a handful of children die so that you can keep your credit rating?

I don't know. I am pretty unconvinced that the SIAI will do this much good in the world, but I would not call someone irrational if they risked everything they had to help ameliorate abject poverty, and I would not necessarily call someone irrational if they believed that working on AI was more important than saving or improving lives directly.
anonym (+2, 11y): To answer your questions, I don't think it's obvious that being evicted is 33 times worse than letting a child die (ignoring that the original question was about $2700, not $10,000), but it might actually turn out to be the case, since if somebody is evicted and has their credit ruined (and by hypothesis has none of the other safeguards), it's quite possible that they will never recover, and thus will never be financially secure enough to make future donations of vastly more consequence than the difference between a large donation now and a large donation in 1 year (after they've established an emergency fund).

I think the question is really whether it's rational to donate all your savings now (if you have no reliable way of handling the unexpected case of losing your income source). Doing so greatly increases the probability of a personal catastrophe that one might not properly recover from. A more rational alternative, I would submit, is to donate a smaller amount now, while continuing to save until you have a sufficient emergency fund, and then donate more at that point. It is more rational, I believe, because the end results are quite similar (the same amount donated over the long term), but the personal risk (and the risk of not being able to make future donations) is greatly lessened.
[anonymous] (0, 11y): I would call someone irrational if they risked a significant part of their potential future income (donations) to help ameliorate poverty, or help SIAI, immediately. Altruists need to look out for themselves.

shokwave (0, 11y): My apologies. I didn't check the ancestors. I would note that wedrifid's comment can be repaired with "sufficiently" in front of reliable.
anonym (+1, 11y): I think the rational "fix" is to make sure you can stay afloat for at least a few months if a catastrophe happens. That is also the standard advice of every financial planning book I've ever read. And a Google search finds plenty of sites like: http://www.mainstreet.com/article/moneyinvesting/savings/how-much-should-you-save-rainy-day

I'd like to see somebody find a financial advice site or book that says you can periodically wipe out your savings if you have a reliable source of income and overdraft protection on the empty account (and no other compensating factors, to beat the dead horse). Sometimes it amazes me the things that people on LW will argue for just for the sake of argument.
shokwave (0, 11y): I may be motivated to argue the point because I just recently wiped out my savings to purchase a car and an amp, and I don't have any overdraft, or even a credit card.

wedrifid (0, 11y): To be honest, that is what I saw you doing, hence the disengagement.

anonym (0, 11y): Hmm, I thought I was correcting an obvious error you made, and I expected you to immediately explain what you really meant, or add some extra condition, or retract your claim, and then we would have been done with the discussion.

Rain (0, 11y): When the question boils down to, "should someone with completely different circumstances do the exact same thing?", my guess is the answer will typically be "no." I challenge most hypotheticals by pointing out my method of averting crises: I build the appropriate circumstances such that the conflict will not occur.
wnoise (+2, 11y): The question, of course, comes down to what utils are reasonable to assign. I too could choose numbers that would make either option look better than the other. There's also a wide range of available options between the two extremes you consider, and risk aversion should make one of them preferable.

gwern (0, 11y): Yeah; I would think that generic health problems alone would be in that area of probability.

I just put in another 850 USD.

wedrifid (+7, 11y): Wow. I'm impressed. This kind of gesture brings back memories of a parable that still prompts a surge of positive affect in me, that of the widow donating everything she had (Mark 12:40-44). It also flagrantly violates the related hyperbolic exhortation "do not let the left hand know what the right hand is doing". Since that is a message I now dismiss as socially, psychologically and politically naive, your public declaration seems beneficial. There are half a dozen factors of influence that you just invoked, and some of them I can feel operating on myself even now.
gwern (+3, 11y): /munches popcorn

tammycamp (+3, 11y): Bravo! That's hardcore. Way to pay it forward! Tammy

I just wrote a check for $13,200.

Costanza (+9, 11y): As I write, this comment has earned only 5 karma points (one of them mine). According to Larks' [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/399z] exchange rate of $32 to one karma point, this donation has more than four hundred upvotes to go. Wait ... I assume you're planning to actually mail the check too?

Wait ... I assume you're planning to actually mail the check too?

Yes, I mailed the check, too, just after writing the comment. (And I wrote and mailed it to SIAI. No tricks, it really is a donation.)

I would be surprised if karma scaled linearly with dollars over that range.

And to encourage others to donate, let it be known that I just made a 500 euro (about 655 USD) donation.

[anonymous] (+49, 11y): I sent 500 USD.

$500. I can wait a little longer to get a new laptop.

I sent 640 dollars.

640 dollars ought to be enough for anyone :-)

[anonymous] (0, 11y): 640k, perhaps. :-)

$100 from a poor college student. I can't not afford it.

I donated $100 yesterday. I hope to donate more by the end of the matching period, but for now that's around my limit (I don't have much money).

I put in $500, really pinches in Indian rupees (Rs. 23,000+). Hoping for the best to happen next year with a successful book release and promising research to be done.

On the one hand, I absolutly abhore SIAI. On the other hand, I'd love to turn my money into karma...

/joke

$100

Larks (+7, 11y): At the moment, my comment has 15 karma, while Leonhart's, which was posted before, and for more money, has 14. As £1 = $1.5, $32 = 1 karma, and thus my donation is only worth around 3 karma. So it seems my joke must have been worth 12 karma, or $386. I never realised my comparative advantage was in humour...
[anonymous] (+7, 11y): I imagine karma and donation amounts, if they correlate at all, correlate on a log scale. We'd therefore expect your comment to get 14/log(300 x 1.5) x log(100) karma from the donation amount alone, which comes to about 10.5 karma. Therefore 4.5 of your karma came from your joke. Unfortunately, we can't convert your joke karma into dollars in any consistent way. But if you hadn't donated any money, and made an equally good joke, you would have gotten about as much karma as someone donating $7, assuming our model holds up in that range.

Edit: also a factor is that I'm sure many people on LessWrong don't actually know the conversion factor between $ and £.
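The back-of-the-envelope model above is easy to check numerically. This is only a sketch of that calibration, using the figures from the two comments (Leonhart's 14 karma for £300, Larks's 15 karma for $100), not actual LessWrong data; note the ratio of logs makes the log base irrelevant:

```python
from math import log10

# Calibration: Leonhart's £300 donation (= $450 at £1 = $1.50) earned 14 karma.
karma_per_log_dollar = 14 / log10(300 * 1.5)

# Predicted karma for Larks's $100 donation, from the donation alone.
predicted = karma_per_log_dollar * log10(100)
print(round(predicted, 2))  # ~10.55, i.e. "about 10.5 karma"

# Residual karma attributed to the joke (15 karma total), and its
# dollar equivalent under the same log model.
joke_karma = 15 - predicted
dollars_equiv = 10 ** (joke_karma / karma_per_log_dollar)
print(round(dollars_equiv))  # ~7, matching the "$7" figure above
```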

I just donated $1,370. The reason why it's not a round number is interesting, and I'll write a Discussion post about it in a minute. EDIT: Here it is.

Also, I find it interesting that (before my donation) the status bar for the challenge was at $8,500, and the donations mentioned here totaled (by my estimation) about $6,700 of that...

I seem to remember reading a comment saying that if I make a small donation now, it makes it more likely I'll make a larger donation later, so I just donated £10.

Ben Franklin effect, as well as consistency bias. Good on you for turning a bug into a feature.

timtyler (+2, 11y): Does that still work, once you know about the sunk cost fallacy [http://en.wikipedia.org/wiki/Sunk_costs#Loss_aversion_and_the_sunk_cost_fallacy]?

Perplexed (+2, 11y): Perhaps it works due to warm-and-fuzzy slippery slopes [http://morningerection.wordpress.com/2009/10/13/cant-eat-just-one-pistachio/], rather than sunk costs.

Paul Crowley (+1, 11y): Don't know - I guess we'll find out!

Donated $500 CAD just now.

By the way, SIAI is still more than 31,000 US dollars away from its target.

Donation made. Here's to optimal philanthropy!

Happy Holidays,

Tammy

I just donated $512.

Donated $120

Darn it; I just made my annual donation a few days ago, but hopefully my employer's matching donation will come in during the challenge period. I will make sure to make my 2011 donation during the matching period (i.e. well before January 20th), in an amount no less than $1000.

Benquo (+5, 11y): Followed up today with my 2011 donation.

wedrifid (+3, 11y): Whoops. The market just learned.

Rain (+4, 11y): You can't time the market. The accepted strategy in a state of uncertainty is continuous, automatic investment. That's why I have a monthly donation set up, in addition to giving extra during matching periods.

Benquo (+5, 11y): The matching donor presumably wants the match to be used. So unless the match is often exhausted and I'd be displacing someone else's donation that would only be given if there were a match, it's in no one's interest (who supports the cause) to try to outsmart or prevent a virtuous cycle of donations. And there are generally just two states, a 1-for-1 match and a 0-for-1 match, so in the worst case you can always save up your annual donations and give them on December 31st if no match is forthcoming. That said, if I weren't using credit to give, I'd use your system.

wedrifid (0, 11y): You are referring to a general principle that has slightly negative relevance in this instance.

$1000 - looking forward to a good year for SIAI in 2011.

Wow, SIAI has succeeded in monetizing Less Wrong by selling karma points. This is either a totally awesome blunder into success or sheer Slytherin genius.

Just donated $200.

$50 - it's definitely a different cause to the usual :)

I have donated a small amount of money.

The Singularity is now a little bit closer and safer because of your efforts. Thank you. We will send a receipt for your donations and our newsletter at the end of the year. From everyone at the Singularity Institute – our deepest thanks.

I do hope they mean they will send a receipt and newsletter by e-mail, and not by physical mail.

David_Gerard (−2, 11y): I understood that this was considered pointless hereabouts: that the way to effective charitable donation is to pick the most effective charity and donate your entire charity budget to it [http://www.slate.com/id/2034/]. Thus, the only appropriate donations to SIAI would be nothing or everything. Or have I missed something in the chain of logic? (This is, of course, from the viewpoint of the donor rather than that of the charity.)

Edit: Could the downvoter please explain? I am not at all personally convinced by that Slate story, but it really is quite popular hereabouts.

I feel rather uncomfortable at seeing someone mention that he donated, and getting a response which indirectly suggests that he's being irrational and should have donated more.

shokwave (+5, 11y): It is indirect, but I believe David is trying to highlight the possibility of problems with the Slate article. Once we have something to protect (a donor), we will be more motivated to explore its possible failings instead of taking it as gospel.

David_Gerard (+1, 11y): I don't think that, as I have noted. I'm not at all keen on the essay in question. But it is popular hereabouts.

Kaj_Sotala (0, 11y): Okay, good. But it still kinda comes off that way, at least to me.

The idea is that the optimal method of donation is to donate as much as possible to one charity. Splitting your donations between charities is less effective, but still benefits each. They actually have a whole page about how valuable small donations are, so I doubt they'd hold a grudge against you for making one.

David_Gerard (−1, 11y): Yes, I'm sure the charity has such a page. I am intimately familiar with how splitting donations into fine slivers is very much in the interests of all charities except the very largest; I was speaking of putative benefit to the donors.

Actions which increase utility but do not maximise it aren't "pointless". If you have two charities to choose from, £100 to spend, and you get a constant 2 utilons/£ for charity A and 1 utilon/£ for charity B, you still get a utilon for each pound you donate to B, even if to get 200 utilons you should donate £100 to A. It's just the wrong word to apply to the action, even assuming that someone who says he's donated a small amount is also saying that he's donated a small proportion of his charitable budget (which it turns out wasn't true in this case).
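To make the arithmetic above concrete (the 2 utilons/£ and 1 utilon/£ rates and the £100 budget are the hypothetical figures from the comment, nothing more):

```python
RATE_A = 2.0  # utilons per pound for charity A (hypothetical)
RATE_B = 1.0  # utilons per pound for charity B (hypothetical)
BUDGET = 100  # pounds available to donate

def utilons(pounds_to_a):
    """Total utilons when pounds_to_a goes to A and the rest to B."""
    return RATE_A * pounds_to_a + RATE_B * (BUDGET - pounds_to_a)

print(utilons(100))  # 200.0: the utility-maximising allocation
print(utilons(0))    # 100.0: all to B still buys a utilon per pound
```

With constant rates the objective is linear in the allocation, so the maximum sits at a corner (everything to A), but every interior allocation still produces positive utilons, which is why "pointless" is the wrong word.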

I am intimately familiar with how splitting donations into fine slivers is very much in the interests of all charities except the very largest;

Not the largest, the neediest.

As charities become larger, the marginal value of the next donation goes down; they become less needy. In an efficient market for philanthropy you could donate to random charities and it would work as well as buying random stocks. We do NOT have an efficient market in philanthropy.

David_Gerard (+2, 11y): No, I definitely meant size, not need (or effectiveness or quality of goals or anything else). A larger charity can mount more effective campaigns than a smaller one. This is from the Iron Law of Institutions perspective, in which charities are blobs for sucking in money from a more or less undifferentiated pool of donations. An oversimplification, but not too much of one, I fear; there's a reason charity is a sector in employment terms.

Eliezer Yudkowsky (+6, 11y): It is necessary at all times to distinguish whether we are talking about humans or rational agents, I think. If you expect that larger organizations mount more effective marketing campaigns and do not attend to their own diminishing marginal utility, and that most people don't attend to the diminishing marginal utility either, you should look for maximum philanthropic return among smaller organizations doing very important, almost entirely neglected things that they have trouble marketing, but not necessarily split your donation up among those smaller organizations, except insofar as, being a human, you can donate more total money if you split up your donations to get more glow.

Marketing campaign? What's a marketing campaign?
shokwave (+2, 11y): Voted up because swapping those tags around is funny.

wedrifid (+2, 11y): Rational agents are not necessarily omniscient agents. There are cases where providing information to the market is a practical course of action.

shokwave (0, 11y): Can't rational agents then mostly discount your information due to publication bias? In any case where providing information is not to your benefit, you would not provide it.

wedrifid (+2, 11y): Discount, but not discard. Others have their own agenda, and if it were directly opposed to mine such that all our interactions were zero-sum, then I would ignore their communication. But in most cases there is some overlap in goals, or at least compatibility. In such cases communication can be useful, particularly when the information is verifiable. There will be publication bias, but that is a bias, not a completely invalidated signal.

Eliezer Yudkowsky (+2, 11y): In which case the nonprovision of that info is also information. But it wouldn't at all resemble marketing as we know it, either way.

shokwave (0, 11y): Although I now will treat all marketing as a specific instantiation of the clever arguer.

Nick_Tarleton (0, 11y): To amplify Eliezer's response: What Evidence Filtered Evidence? [http://lesswrong.com/lw/jt/what_evidence_filtered_evidence/] and comments thereon.

TheOtherDave (0, 11y): A mechanism for making evidence that supports certain conclusions more readily available to agents whose increased confidence in those conclusions benefits me.

Nick_Tarleton (0, 11y): How does everyone splitting donations go against the interests of the neediest charities, if we don't have an efficient market in philanthropy and the lumped donations would have gone to the most popular (hypothetically = largest) charities rather than the neediest? Or did you interpret "splitting donations" as referring to something other than everyone doing so?

[anonymous] (0, 11y): If everyone splitting donations were against the interests of the neediest charities, wouldn't that imply that we did have an efficient market in philanthropy: that the lumped donations would have gone to the neediest charities, rather than the most popular (hypothetically = largest)?
Plasmon (+9, 11y): My donations are as effective as possible: I have never before donated anything to any organisation (except indirectly, via tax money). I am too cautious to risk "black-swan events" [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38d5]. I am probably overly cautious. It could well be argued that donating more would be more cautious, depending on the probability of both black-swan events and UFAI, and the effectiveness of SIAI, but I'm sure there are plenty of threads about that already.
[anonymous] (+3, 11y): Unless, of course, you believe that the decisions of other people donating to charity are correlated with your own. In this case, a decision to donate 100% of your money to SIAI would mean that all those people implementing a decision process sufficiently similar to your own would donate 100% of their money to SIAI. A decision to donate 50% of your money to SIAI and 50% to Charity Option B would imply a similar split for all those people as well.

If there are enough people like this, then the total amount of money involved may be large enough that the linear approximation does not hold. In that case, it seems natural to me to assume that, if both charity options are worthwhile, significantly increasing the successfulness of both charities is more important than increasing SIAI's successfulness even more significantly. Thus, you would donate 50%/50%. Overall, the argument you link to seems to me to parallel (though inexactly) the argument that voting is pointless considering how unlikely your vote is to swing the outcome.
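A toy model of the nonlinearity this comment is gesturing at. The log-shaped returns, equal worthwhileness of the charities, and pool size are illustrative assumptions, not anything asserted in the thread: once the correlated pool of donors is large enough that returns are concave, the pool-wide optimum is an interior split rather than all-or-nothing.

```python
from math import log

POOL = 1_000_000  # total money moved by the correlated donors (assumed)

def total_good(split):
    """Good done when fraction `split` of the pool goes to charity A and
    the rest to charity B, each with concave log-shaped returns."""
    a, b = POOL * split, POOL * (1 - split)
    return log(1 + a) + log(1 + b)

# With two equally worthwhile charities, search the percentage grid:
# the optimum is the 50/50 split, not 0 or 100.
best_pct = max(range(101), key=lambda pct: total_good(pct / 100))
print(best_pct)  # 50
```

For a single small donor facing locally linear returns, the same search would pick a corner; the split only becomes optimal at the scale where concavity bites, which is exactly the comment's point.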
Caspian (+2, 11y): Also, your errors in choosing a charity won't necessarily be random. For example, if you trust your reasoning to pick the best three charities, but suspect that if you had to pick just one you'd end up influenced by deceptive marketing, bad arguments, or biases you'd rather not act on, and the same applies to other people, you may be better off not choosing between them, and better off if other people don't try to choose between them.

paulfchristiano (+2, 11y): This only applies if people donate simultaneously, which I doubt is the case in practice.

[anonymous] (0, 11y): I don't understand. Could you please clarify?

paulfchristiano (+7, 11y): This argument assumes that the people using a similar decision process are faced with the same evidence. In particular, if they made their decision significantly later, then they would know about your donation (not directly, but if SIAI now had significantly more funds, they could know about it). If all decision makers were perfectly rational and omniscient, but didn't have to make their decisions at the same time, then you wouldn't expect to see the 50/50 splitting. You would expect everyone to donate to the charity for which the current marginal usefulness is greatest.

In the situation you envision, the marginal usefulness would decrease over time, until eventually donors would notice that it was no longer the best option, and then start diverting their funding. Perhaps once this sort of equilibrium is reached, splitting your money is advisable, but we are extremely unlikely to be anywhere near such an equilibrium (with respect to my personal values) unless there is an explicit mechanism pushing us towards it. This would probably require postulating a lot of brilliant rational donors with identical values.

David_Gerard (+2, 11y): I'm not keen on it myself, but I've seen it linked here (and pushed elsewhere by LessWrong regulars) quite a lot.
MichaelVassar (+1, 11y): The Slate article is correct, but it's desirable to be polite as well as accurate if you actually want to communicate something. Also, if someone wants to donate to feel good, that feeling good is an actively good thing that they are purchasing, and it's undesirable to try to damage it.
SilasBarta (+4, 11y): What's the status on this? The picture on the page [http://intelligence.org/tallinn-evans_challenge] suggests the $125,000 matching maximum was met, but nothing says for sure. What time on Thursday is the deadline?

curiousepic (+2, 11y): Mousing over the image gives the total $121,616.

SilasBarta (+4, 11y): Sweet, I can still be the one to push it over! [1]

[1] so long as you disregard the fungibility of money and therefore my contribution's indistinguishability from that of all the others.

SilasBarta (+2, 11y): Wait, if I do an echeck through Paypal today, would it count toward the challenge? Paypal says it takes a few days to process :-/

EDIT: n/m, I guess I can just do it via credit card, though SIAI gets less that way.

AnnaSalamon (+4, 11y): Donations count toward the challenge if they're dated before the end, even if they aren't received until a few days later.

SilasBarta (0, 11y): Thanks. How long until a donation is reflected in the picture? Is it possible the 125k goal is already met?

MichaelAnissimov (+2, 11y): I update it daily.

SilasBarta (0, 11y): Victory! The $125k challenge has been met, according to the current site's picture [http://intelligence.org/tallinn-evans_challenge]! (mouse over the image) Though of course it still encourages you to donate to help meet ... that same $125k goal.

MichaelAnissimov (+8, 11y): Thank you everyone, I really appreciate all your contributions. We've had a wonderful past year, and the fulfillment of this matching challenge really capped it off. http://singinst.org/blog/2011/01/20/tallinn-evans-challenge-grant-success/

shokwave (0, 11y): The guy from GiveWell [http://lesswrong.com/lw/3gj/efficient_charity_do_unto_others/38u2?context=3] linked to this [http://blog.givewell.org/2010/03/08/nothing-wrong-with-selfish-giving-just-dont-call-it-philanthropy/], which seems relevant to your point.

shokwave (0, 11y): Your conclusion doesn't follow from the premise. A small amount of money could reasonably be Plasmon's entire charity budget; when you say "nothing or everything" you do not qualify it with "of your charity budget". Edit: Oy, if I'd scrolled down! [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38ir]
-8timtyler11y

New Year's resolution is not to donate to things until I check if there's a matching donation drive starting the next week :( Anyway, donated a little extra because of all the great social pressure from everyone's amazing donations here. Will donate more when I have an income.

6Benquo11yAt first I felt a little better that someone else made the same mistake, but on reflection I should feel worse.
2Dorikka11yI would avoid the phrase "I should feel worse" in most scenarios due to pain and gain motivation [http://xuenay.livejournal.com/328583.html].
0John_Maxwell11yOn reflection I shouldn't feel bad about much of anything.
5Kevin11yI don't think it actually matters, unless the matching drive isn't fulfilled. Even then, I would be really surprised if Jaan and Edwin take their money back. So in some sense it is better to have donated before the drive, as it allows someone else to have their donation matched who might not have donated without the promise of matching.
2NancyLebovitz11yI wonder if there's empirical research on how much in advance to announce matching donation drives so as to maximize revenue. Any observations of how established charities handle this?

Just donated $500.

(At one time I had an excuse for waiting. But plainly I won't get confirmation on a price for cryonics-themed life insurance by the deadline, and should likely have donated sooner).

I donated 1000 USD. (This puts them at ~$122,700 ... so close!)

In a possibly bad decision, I put a $1000 check in the mailbox with the intent of going out and transferring the money to my checking account later today. That puts them at $123,700 using Silas' count.

6anon89511y...yep, didn't make it. I'll have to get to the bank early tomorrow and hope the mail is slow.
3anon89511yEnded up making the transfer over the phone.

I have not donated a significant amount before, but will donate $500 IF someone else will (double) match it.

Why did the SIAI remove the Grant Proposals page? http://singinst.org/grants/challenge#grantproposals

EDIT: Donated $500, in response to wmorgan's $1000

Your comment spurred me into donating an additional $1,000.

Excellent! Donated $500. Whether yours is a counter-bluff or not ;)

This is by far the most I've donated to a charity. I spent yesterday assessing my financial situation, something I've only done in passing because of my fairly comfortable position. It has always felt smart to me to ignore the existence of my excess cash, but I have a fair amount of it and the recent increase of discussion about charity has made me reassess where best to locate it. I will be donating to SENS in the near future, probably more than I have to SIAI. I'm aware of the argument for giving everything to a single charity, but it seems even Eli is conflicted about giving advice about SIAI vs. SENS, given this discussion.

I recently read that investing in the stock market (casually, not as a trader or anything) in the hopes that your wealth will grow such that you can donate even more at a later time is erroneous because the charity could be doing the same thing, with more of it. Is this true, and does anyone know if the SIAI, or SENS does this? It seems to me that both of these organizations have immediate use for pretty much all money they receive and do not invest at all. How much would my money have to make in an investment account to be able to contribute more (adjusting for inflation) in the future?
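The break-even question at the end reduces to simple arithmetic: investing and donating later beats donating the same nominal amount only if the real (inflation-adjusted) return is positive, and it beats donating now only if that return also exceeds whatever the charity gains from having money earlier. A sketch with invented rates:

```python
# Rough sketch with made-up numbers: compare donating $1000 now against
# investing it yourself for 10 years and donating the proceeds.
nominal_return = 0.07   # hypothetical annual investment return
inflation      = 0.03   # hypothetical annual inflation rate
years          = 10
donation       = 1000.0

# Real (inflation-adjusted) value of the delayed donation:
real_value = donation * ((1 + nominal_return) / (1 + inflation)) ** years

# The delayed gift beats $1000 today in real terms only if the nominal
# return exceeds inflation -- and it beats donating now only if it also
# exceeds the charity's own "return" on having money earlier.
beats_inflation = real_value > donation
```

With these invented rates the delayed donation is worth about $1,464 in today's dollars, but whether that beats giving $1,000 now still depends on how much the charity values early money, which is the point the reply below makes.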

6endoself11yThe logic of donating now is that if a charity would use your money now, it is because less money now is more useful than more money later. Not all charities may be smart enough to realize whether they should invest, but I feel confident that if investing money rather than spending it right away were the best approach for their goals, the people at the SIAI would be smart enough to do so.
1Dorikka11yI think that a rational agent would donate the $500 eventually either way, because the utility value of a $500 contribution would be greater than that of a $0 contribution if the matching $500 was not forthcoming. Thus, the precommitment to withhold the donation if it is not matched seems to be a bluff (for even if the agent reported that he had not donated the money, he could do so privately without fear of exposure). Therefore, it seems to me that the matching arrangement is a device designed to convince irrational agents, because the matcher's contribution does not affect the amount of the original donor's contribution. Am I missing something?
3endoself11yHe may actually refrain from donating, by the reasoning that such offers would work iff someone deems them reasonable and that person is more likely to deem it reasonable if he does, by TDT/UDT. I could see myself doing such a thing.
0Dorikka11yBut whether he does or doesn't donate does not affect how such offers are responded to in the future, since he is free to lie without fear of exposure. Given such, it seems that he should always maximize utility by donating.
0endoself11yFuture offers do not matter. His precommitment not to donate if others do not acausally affects how this offer is responded to.
0Dorikka11yI'm not sure I understand what you mean. Would you mind explaining?
0endoself11yAre you familiar with UDT? There's a lot about it written on this site. It's complex and non-intuitive, but fascinating and a real conceptual advance. You can start by reading about http://wiki.lesswrong.com/wiki/Counterfactual_mugging [http://wiki.lesswrong.com/wiki/Counterfactual_mugging] . In general, decision theory is weird, much weirder than you'd expect.
0Dorikka11yI've read some of the posts on Newcomblike problems, but am not very familiar with UDT. I'll take a look -- thanks for the link.

a book on rationality, the first draft of which was just completed

If Eliezer's reading this: Congratulations!

I just sent 15 USD to each the SIAI, VillageReach and The Khan Academy.

I am aware of and understand this, but felt more comfortable diversifying right now. I also know it is not much; I'll have to somehow force myself to buy fewer shiny gadgets and donate more instead. Generally I have to be less inclined to hoard money and more inclined to give.

I donated $250 on the last day of the challenge.

up until January 20, 2010

2011?

6Kaj_Sotala11yGood catch. I e-mailed the SIAI folks about that typo, which seems to be both on the holiday challenge page and the blog posts. It'll probably get fixed in a jiffy. EDIT: It's now been fixed on the challenge page and the blog post.
0[anonymous]11yYeah that's apparently a typo from the original source, I emailed Mike A about it.
0[anonymous]11yJust testing something

every contribution to the Singularity Institute up until January 20, 2011 will be matched dollar-for-dollar, up to a total of $125,000.

Anyone willing to comment on that as a rationalist incentive? Presumably I'm supposed to think "I want more utility to SIAI so I should donate at a time when my donation is matched so SIAI gets twice the cash" and not "they have money which they can spare and are willing to donate to SIAI but will not donate it if their demands are not met within their timeframe, that sounds a lot like coercion/blackmail"... (read more)

It's a symmetrical situation. Suppose that A prefers having $1 in his personal luxury budget to having $1 in SIAI, but prefers having $2 in SIAI to having a mere $1 in his personal luxury budget. Suppose that B has the same preferences (regarding his own personal luxury budget, vs SIAI).

Then A and B would each prefer not-donating to donating, but they would each prefer donating-if-their-donation-gets-a-match to not-donating. And so a matching campaign lets them both achieve their preferences.

This is a pretty common situation -- for example, lots of people are unwilling to give large amounts now to save lives in the third world, but would totally be willing to give $1k if this would cause all other first worlders to do so, and would thereby prevent all the cheaply preventable deaths. Matching grants are a smaller version of the same.
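The preference structure in this comment can be written out as a tiny toy utility function (the specific numbers are invented; only their ordering matters):

```python
# Toy utilities for donor A (donor B is symmetric), illustrating the
# preferences described above: $1 kept > $1 to SIAI, but $2 to SIAI > $1 kept.
# The weights 1.0 and 0.75 are arbitrary choices that produce that ordering.
def utility_A(a_donates, b_donates):
    donated = a_donates + b_donates        # dollars reaching the charity
    kept = 0 if a_donates else 1           # A's personal luxury budget
    return kept * 1.0 + donated * 0.75     # $1 kept (1.0) beats $1 donated (0.75)

# A alone prefers keeping the dollar...
solo_keep   = utility_A(False, False)
solo_donate = utility_A(True,  False)
# ...but prefers a matched donation to keeping it:
matched     = utility_A(True,  True)
```

Donating unilaterally loses utility for A, while a matched donation gains it, so the matching campaign turns a defect-defect outcome into cooperate-cooperate, which is the coordination point wedrifid makes below in other words.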

4steven046111yIt seems like it would be valuable to set up ways for people to make these deals more systematically than through matching grants.
3wedrifid11yIndeed. It seems to be essentially 'solving a cooperation problem'.
1timtyler11yThe sponsor gets publicity for their charitable donation - while the charity stimulates donations - by making donors feel as though they are getting better value for money. If the sponsor proposes the deal, they can sometimes make the charity work harder at their fund-raising effort for the duration - which probably helps their cause. If the charity proposes the deal, the sponsor can always pay the rest of their gift later.

According to the page, they (we) made it to the full $125,000/250,000! Does anyone know what percentage this is of all money the SIAI has raised?

1Rain11yTheir annual budget is typically in the range of $500,000, so this would be around half.
6XiXiDu11yI wonder at what point donations to the SIAI would hit diminishing returns and contributing to another underfunded cause would be more valuable? Suppose for example Bill Gates was going to donate 1.5 billion US$, would my $100 donation still be best placed with the SIAI?
2Rain11yMarginal contributions are certainly important to consider, and it's one of the reasons I mentioned in my original post about why I support them. Even asteroid discovery, long considered underfunded, is receiving hundreds of millions.
[anonymous]11y 0

I just gave $512.

[anonymous]11y -3

Try to be objective and consider whether a donation to the Singularity Institute is really the most efficient charitable "investment." Here's a simple argument that it's most unlikely. What's the probability that posters would stumble on the very most efficient investment? Finding it requires research. Rationalists shouldn't accede this way to the representativeness heuristic, which leads the donor to choose the recipient most readily accessible to consciousness.

Relying on heuristics where their deployment is irrational, however, isn't the main reason the Singularity In... (read more)

Your argument applies to any donation of any sort, in fact to any action of any sort. What is the probability that the thing I am currently doing is the best possible thing to do? Why, it's basically zero. Should I therefore not do it?

Referring to the SIAI as a cause "some posters stumbled on" is fairly inaccurate. It is a cause that a number of posters are dedicating their lives to, because in their analysis it is among the most efficient uses of their energy. In order to find a more efficient cause, I not only have to do some research, I have to do more research than the rational people who created SIAI (this isn't entirely true, but it is much closer to the truth than your argument). The accessibility of SIAI in this setting may be strong evidence in its favor (this isn't a coincidence; one reason to come to a place where rational people talk is that it tends to make good ideas more accessible than bad ones).

I am not donating myself. But for me there is some significant epistemic probability that the SIAI is in fact fighting for the most efficient possible cause, and that they are the best-equipped people currently fighting for it. If you have some information or an arg... (read more)

6Aharon11yI'm curious: If you have the resources to donate (which you seem to imply by the statement that you have resources for which you can make a decision), and think it would be good to donate to the SIAI, then why don't you donate? (I don't donate because I am not convinced unfriendly AI is such a big deal. I am aware that this may be lack of calibration on my part, but from the material I have read on other sites, UFAI just doesn't seem to be that big a risk. There were some discussions on the topic on stardestroyer.net. While the board isn't as dedicated to rationality as this board is, the counterarguments seemed well-founded, although I don't remember the specifics right now. If anybody is interested, I will try to dig them up.)

I don't know if it is a good idea to donate to SIAI. From my perspective, there is a significant chance that it is a good idea, but also a significant chance that it isn't. I think everyone here recognizes the possibility that money going to the SIAI will accomplish nothing good. I either have a higher estimate for that possibility, or a different response to uncertainty. I strongly suspect that I will be better informed in the future, so my response is to continue earning interest on my money and only start donating to anything when I have a better idea of what is going on (or if I die, in which case the issue is forced).

The main source of uncertainty is whether the SIAI's approach is useful for developing FAI. Based on its output so far, my initial estimate is "probably not" (except insofar as they successfully raise awareness of the issues). This is balanced by my respect for the rationality and intelligence of the people involved in the SIAI, which is why I plan to wait until I get enough (logical) evidence to either correct "probably not" or to correct my current estimates about the fallibility of the people working with the SIAI.

-3[anonymous]11yThis posting above, which begins with an argument that is absolutely silly, managed to receive 11 votes. Don't tell me there isn't irrational prejudice here! The argument that any donation is subject to similar objections is silly because it's obvious that a human-welfare maximizer would plug for the donation the donor believes best, despite the unlikelihood of finding the absolute best. It should also be obvious that my argument is that it's unlikely that the Singularity Institute comes anywhere near the best donation, and one reason it's unlikely is related to the unlikelihood of picking the best, even if you have to forgo the literal very best! Numerous posters wouldn't pick this particular charity, even if it happened to be among the best, unless they were motivated by signaling aspirations rather than the rational choice of the best recipient. As Yvain said in the previous entry: "Deciding which charity is the best is hard." Rationalists should detect the irrationality of making an exception when one option is the Singularity Institute. (As to whether signaling is rational: completely irrelevant to the discussion, as we're talking about the best donation from a human-welfare standpoint. To argue that the contribution makes sense because signaling might be as rational as donating, even if plausible, is merely to change the subject rather than respond to the argument.) Another argument for the Singularity Institute donation I can't dismiss so easily. I read the counter-argument as saying that the Singularity Institute is clearly the best donation conceivable. To that I don't have an answer, any more than I have a counter-argument for many outright delusions. I would ask this question: what comparison did donors make to decide the Singularity Institute is a better recipient than the one mentioned in Yvain's preceding entry, where each $500 saves a human life?
Before downvoting this, ask yourself whether you're saying my point is unintelligent o
8Vaniver11yEnvy is unbecoming; I recommend against displaying it. You'd be better off starting with your 3rd sentence and cutting the word "silly." They have worked out this math, and it's available in most of their promotional stuff that I've seen. Their argument is essentially "instead of operating on the level of individuals, we will either save all of humanity, present and future, or not." And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it's a better bet than giving $500 to get one guaranteed life (and that only looks at present lives). The question as to whether SIAI is the best way to nudge the entire future of humanity is a separate question from whether or not SIAI is a better bet than preventing malaria deaths. I don't know if SIAI folks have made quantitative comparisons to other x-risk reduction plans, but I strongly suspect that if they have, a key feature of the comparison is that if we stop the Earth from getting hit by an asteroid, we just prevent bad stuff. If we get Friendly AI, we get unimaginably good stuff (and if we prevent Unfriendly AI without getting Friendly AI, we also prevent bad stuff).
-3[anonymous]11yTheir logic is unsound, due to the arbitrary premise; their argument has a striking resemblance to Pascal's Wager. Pascal argued that if belief in God provided the most minuscule increase in the likelihood of being heaven-bound, worship was prudent in light of heaven's infinite rewards. One of the argument's fatal flaws is that there is no reason to think worshipping this god will avoid reprisals by the real god—or any number of equally improbable alternative outcomes. The Singularity Institute imputes only finite utiles, but the flaw is the same. It could as easily come to pass that the Institute's activities make matters worse. They aren't entitled to assume their efforts to control matters won't have effects the reverse of the ones intended, any more than Pascal had the right to assume worshipping this god isn't precisely what will send one to hell. We just don't know (can't know) about god's nature by merely postulating his possible existence: we can't know that the minuscule effects don't run the other way. Similarly if not exactly the same, there's no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects. When the only reason an expectation seems to have any probability lies in its extreme tininess, the reverse outcome must be allowed the same benefit, canceling them out.

there's no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects.

When I get in my car to drive to the grocery store, do you think there is any reason to favor the hypothesis that I will arrive at the grocery store over all the a priori equally unlikely hypotheses that I arrive at some other destination?

1[anonymous]11yDepends. Do you know where the grocery store actually is? Do you have an accurate map of how to get there? Have you ever gone to the grocery store before? Or is the grocery store an unknown, unsignposted location which no human being has ever visited or even knows how to visit? Because if it were the latter, I'd bet pretty strongly against you getting there...
4Nick_Tarleton11yThe point of the analogy is that probability mass is concentrated towards the desired outcome, not that the desired outcome becomes more likely than not.
0[anonymous]11yIn a case where no examples of grocery stores have ever been seen, when intelligent, educated people even doubt the possibility of the existence of a grocery store, and when some people who are looking for grocery stores are telling you you're looking in the wrong direction, I'd seriously doubt that the intention to drive there was affecting the probability mass in any measurable amount.
0Desrtopa11yIf you were merely wandering aimlessly with the hope of encountering a grocery store, it would only affect your chance of ending up there insofar as you'd intentionally stop looking if you arrived at one, and not if you didn't. But our grocery seeker is not operating in a complete absence of evidence with regard to how to locate groceries, should they turn out to exist, so the search is, if not well focused, at least not actually aimless.

I usually think about this, not as expected utility calculations based on negligible probabilities of vast outcomes being just as likely as their negations, but as them being altogether unreliable, because our numerical intuitions outside the ranges we're calibrated for are unreliable.

For example, when trying to evaluate the plausibility of an extra $500 giving SIAI an extra 1 out of 7 billion chance of succeeding, there is something in my mind that wants to say "well, geez, 1e-10 is such a tiny number, why not?"

Which demonstrates that my brain isn't calibrated to work with numbers in that range, which is no surprise.

So I do best to set aside my unreliable numerical intuitions and look for other tools with which to evaluate that claim.

7Vaniver11yThey're aware of this and have written about it [http://lesswrong.com/lw/z0/the_pascals_wager_fallacy_fallacy/]. The argument is "just because something looks like a known fallacy doesn't mean it's fallacious." If you wanted to reason about existential risks (that is, small probabilities that all humans will die), could you come up with a way to discuss them that didn't sound like Pascal's Wager? If so, I would honestly greatly enjoy hearing it, so I have something to contrast with their method. It's not clear to me that it "could as easily" come to pass, and I think that's where your counterargument breaks down. If they have a 2e-6 chance of making things better and a 1e-6 chance of making things worse, then they're still ahead by 1e-6. With Pascal's Wager, you don't have any external information about which god is actually going to be doing the judging; with SIAI, you do have some information about whether or not Friendliness is better than Unfriendliness. It's like praying to the set of all benevolent gods instead of picking Jesus over Buddha; there's still a chance a malevolent god is the one you end up with, but it's a better bet than picking solo (and you're screwed anyway if you get a malevolent god). I agree with you that it's not clear that SIAI actually increases the chance of FAI occurring, but I think it more likely that a non-zero effect is positive rather than negative.
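Vaniver's asymmetry point can be made concrete with a two-outcome expected-value calculation; the 2e-6 / 1e-6 figures are the comment's own illustrative numbers, and the payoff magnitude V is an arbitrary placeholder:

```python
# The comment's illustrative numbers: a 2e-6 chance of making things better
# and a 1e-6 chance of making things worse, with symmetric stakes V.
p_better = 2e-6
p_worse  = 1e-6
V = 1.0                      # arbitrary placeholder for the (huge) stakes

expected_value = p_better * V + p_worse * (-V)   # positive: 1e-6 * V

# Contrast with the Pascal's-Wager-style case the parent comment describes,
# where nothing distinguishes the two directions and the terms cancel:
symmetric_ev = 1e-6 * V + 1e-6 * (-V)            # zero
```

The whole argument therefore hinges on whether one actually has evidence that p_better exceeds p_worse; with symmetric probabilities the expectation collapses to zero, exactly as the parent comment claims.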
-5[anonymous]11y
7timtyler11yYou know this is a blog started by and run by Eliezer Yudkowsky - right? Many of the posters are fans. Looking at the rest of this thread, signaling seems to be involved in large quantities - but consider also the fact that there is a sampling bias.
4Desrtopa11yDo you have any argument for why the SIAI is unlikely to be the best other than the sheer size of the option space? This is a community where a lot of the members have put substantial thought into locating the optimum in that option space, and have well developed reasons for their conclusion. Further, there are not a lot of real charities clustered around that optimum. Simply claiming a low prior probability of picking the right charity is not a strong argument here. If you have additional arguments, I suggest you explain them further. (I'll also add that I personally arrived at the conclusion that an SIAI-like charity would be the optimal recipient for charitable donations before learning that it existed, or encountering Overcoming Bias, Less Wrong, or any of Eliezer's writings, and in fact can completely discount the possibility that my rationality in reaching my conclusion was corrupted by an aura effect around anyone I considered to be smarter or more moral than myself.)
2paulfchristiano11yIt is obvious that a number of smart people have decided that SIAI is currently the most important cause to devote their time and money to. This in itself constitutes an extremely strong form of evidence. This is, or at least was, basically Eliezer's blog; if the thing that unites its readers is respect for his intelligence and judgment, then you should be completely unsurprised to see that many support SIAI. It is not clear how this is a form of irrationality, unless you are claiming that the facts are so clearly against the SIAI that we should be interpreting them as evidence against the intelligence of supporters of the SIAI. Someone who is trying to have an effect on the course of an intelligence explosion is more likely to than someone who isn't. I think many readers (myself included) believe very strongly that an intelligence explosion is almost certainly going to happen eventually and that how it occurs will have a dominant influence on the future of humanity. I don't know if the SIAI will have a positive, negative, or negligible influence, but based on my current knowledge all of these possibilities are still reasonably likely (where even 1% is way more than likely enough to warrant attention).

Upvoting but nitpicking one aspect:

It is obvious that a number of smart people have decided that SIAI is currently the most important cause to devote their time and money to. This in itself constitutes an extremely strong form of evidence.

No. It isn't very strong evidence by itself. Jonathan Sarfati is a chess master, published chemist, and a prominent young-earth creationist. A list of all the major anti-evolutionists would easily include not just Sarfati but also William Dembski, Michael Behe, and Jonathan Wells, all of whom are pretty intelligent. There are some people less prominently involved who are also very smart, such as Forrest Mims.

This is not the only example of this sort. In general, we live in a world where there are many, many smart people. That multiple smart people care about something can't do much beyond locate the hypothesis. One distinction is that most smart people who have looked at the SIAI have come away not thinking they are crazy, which is a very different situation from the sort of example given above, but by itself smart people having an interest is not strong evidence.

(Also, on a related note, see this subthread here, which made it clear that what smart people think, even when there is a general consensus among smart people, is not terribly reliable.)

6paulfchristiano11yThere are several problems with what I said. My use of "extremely" was unequivocally wrong. I don't really mean "smart" in the sense that a chess player proves their intelligence by being good at chess, or a mathematician proves their intelligence by being good at math. I mean smart in the sense of good at forming true beliefs and acting on them. If Nick Bostrom were to profess his belief that the world was created 6000 years ago, then I would say this constitutes reasonably strong evidence that the world was created 6000 years ago (when combined with existing evidence that Nick Bostrom is good at forming correct beliefs and reporting them honestly). Of course, there is much stronger evidence against this hypothesis (and it is extremely unlikely that I would have only Bostrom's testimony---if he came to such a belief legitimately I would strongly expect there to be additional evidence he could present), so if he were to come out and say such a thing it would mostly just decrease my estimate of his intelligence rather than decreasing my estimate for the age of the Earth. The situation with SIAI is very different: I know of little convincing evidence bearing one way or the other on the question, and there are good reasons that intelligent people might not be able to produce easily understood evidence justifying their positions (since that evidence basically consists of a long thought process which they claim to have worked through over years). Finally, though you didn't object, I shouldn't really have said "obvious." There are definitely other plausible explanations for the observed behavior of SIAI supporters than their honest belief that it is the most important cause to support.
5Vladimir_Nesov11yThere is a strong selection effect. Most people won't even look too closely, or comment on their observations. I'm not sure in what sense we can expect what you wrote to be correct.
1David_Gerard11yThis comment, on this post, in this blog, comes across as a textbook example of the Texas Sharpshooter Fallacy. You don't form your hypothesis after you've looked at the data, just as you don't prove what a great shot you are by drawing a target around the bullet hole.
4paulfchristiano11yI normally form hypotheses after I've looked at the data, although before placing high credence in them I would prefer to have confirmation using different data. I agree that I made at least one error in that post (as in most things I write). But what exactly are you calling out? I believe an intelligence explosion is likely (and have believed this for a good decade). I know the SIAI purports to try to positively influence an explosion. I have observed that some smart people are behind this effort and believe it is worth spending their time on. This is enough motivation for me to seriously consider how effective I think that the SIAI will be. It is also enough for me to question the claim that many people supporting SIAI is clear evidence of irrationality.
1David_Gerard11yYes, but here you're using your data to support the hypothesis you've formed.
1jsteinhardt11yIf I believe X and you ask me why I believe X, surely I will respond by providing you with the evidence that caused me to believe X?
-1wedrifid11yExternal reality is not changed by the temporal location of hypothesis formation.
3JoshuaZ11yNo, but when hypotheses are formed is relevant to evaluating their likelihood given standard human cognitive biases.
2Aharon11yI'm sorry, I didn't find the thread yet. I lurked there for a long time and just now registered to use their search function and find it again. The main objection I clearly remember finding convincing was that nanotech can't be used in the way many proponents of the Singularity propose, due to physical constraints, and thus an AI would be forced to rely on existing industry etc.. I'll continue the search, though. The point was far more elaborated than one sentence. I face a similar problem as with climate science here: I thoroughly informed myself on the subject, came to the conclusion that climate change deniers are wrong, and then, little by little, forgot the details of the evidence that led to this conclusion. My memory could be better :-/
5Kaj_Sotala11yOf course, the Singularity argument in no way relies on nanotech.
2XiXiDu11yWithout advanced real-world nanotechnology it will be considerably more difficult for an AI to FOOM and therefore pose an existential risk. It will have to make use of existing infrastructure, e.g. buy stock in chip manufacturers and get them to create more or better CPUs. It will have to rely on puny humans for a lot of tasks. It won't be able to create new computational substrate without the whole economy of the world supporting it. It won't be able to create an army of robot drones overnight without it either. Doing so, it would have to make use of considerable amounts of social engineering without its creators noticing. But more importantly, it will have to make use of its existing intelligence to do all of that. The AGI would have to acquire new resources slowly, as it couldn't just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources, so the AGI could not profit from its ability to self-improve when acquiring the very resources needed to self-improve in the first place. So the absence of advanced nanotechnology constitutes an immense blow to any risk estimate that assumes already-available nanotech. Further, if one assumes that nanotech is a prerequisite for AI going FOOM, another question arises: it should be easier to create advanced replicators that destroy the world than to create an AGI that then creates advanced replicators that then destroy the world. One might therefore ask which is the bigger risk here.
4Kaj_Sotala11yTo be honest, I think this [http://blanu.net/curious_yellow.html] is far scarier AI-go-FOOM scenario than nanotech is.
0XiXiDu11yGiving the worm scenario a second thought, I do not see how an AGI would benefit from doing that. An AGI incapable of acquiring resources by means of advanced nanotech assemblers would likely just pretend to be friendly to get humans to build more advanced computational substrates. Launching any large-scale attack on the existing infrastructure would cause havoc but also damage the AI itself, because governments (China etc.) would shut down the whole Internet rather than live with such an infection. Or even nuke the AI's mainframe. And even if it could increase its intelligence further by making use of unsuitable and ineffective substrates, it would still be incapacitated, stuck in the machine. Without advanced nanotechnology you simply cannot grow exponentially or make use of recursive self-improvement beyond the software level. This in turn considerably reduces the existential risk posed by an AI. That is not to say that it wouldn't be a huge catastrophe as well, but there are other catastrophes on the same scale that you would have to compare it against. Only by implicitly making FOOMing the premise can one make it the most dangerous high-impact risk (never mind aliens, the LHC etc.).
8Kaj_Sotala11yYou don't see how an AGI would benefit from spreading itself in a distributed form to every computer on the planet, controlling and manipulating all online communications, encrypting the contents of hard drives and holding them hostage, etc.? You could have the AGI's code running on every Internet-connected computer on the planet, which would make it virtually impossible to get rid of. And even though we might be capable of shutting down the Internet today, at the cost of severe economic damage, I'm pretty sure that will become less and less of a possibility as time goes on, especially if the AGI is also holding hostage the contents of any hard drives without off-line backups. Add to that the fact that we've already had one case of a computer virus infecting the control computers in an off-line facility and, according to one report [http://www.langner.com/en/2010/12/26/the-short-path-from-cyber-missiles-to-dirty-digital-bombs/], delaying the nuclear program of a country by two years. Add to that the fact that even people's normal phones are increasingly becoming smartphones, which can be hacked, and that simpler phones have already shown themselves to be vulnerable to being crashed by a well-crafted SMS [http://www.conceivablytech.com/4786/business/killer-sms-demonstrated/]. Let 20 more years pass and let us become more and more dependent on IT, and an AGI could probably hold all of humanity hostage - shutting down the entire Internet simply wouldn't be an option. This is nonsense. The AGI would control the vast majority of our communications networks. Once you can decide which messages get through and which ones don't, having humans build whatever you want is relatively trivial. Besides, we already have early-stage self-replicating machinery [http://reprap.org/wiki/Main_Page] today: you don't need nanotech for that.
3XiXiDu11yI understand. But how do you differentiate this from the same incident involving an army of human hackers? The AI will likely be very vulnerable if it runs on some supercomputer, and even more so if it runs in the cloud (just use an EMP). In contrast, an army of human hackers can't be disrupted that easily and is an enemy you can't pinpoint. You are portraying a certain scenario here, and I do not see it as a convincing argument for elevating risks from AI above other risks. It isn't trivial. There is a strong interdependence of resources and manufacturers. The AI won't be able to simply make some humans build a high-end factory to create computational substrate. People will ask questions and shortly after get suspicious. Remember, it won't be able to coordinate a world conspiracy because it hasn't been able to self-improve to that point yet - it is still trying to acquire enough resources, which it has to do the hard way without nanotech. You'd probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.
6Kaj_Sotala11yThe point was that you can't use an EMP if that means bringing down the whole human computer network. Why would people need to get suspicious? If you have tabs on all the communications in the world, you can make a killing on the market, even without delaying the orders from your competitors. One could fully legitimately raise enough money by trading to hire people to do everything you wanted. Nobody ever needs to notice that there's something amiss, especially not if you do it via enough shell corporations. Of course, the AGI could also use more forceful means, though that's by no means necessary. If the AGI revealed itself and the fact that it was holding all of humanity's networked computers hostage, it could probably just flat-out tell the humans "do this or else". Sure, not everyone would obey, but some would. Also, disrupt enough communications and manufacture enough chaos, and people will be too distracted and stressed to properly question forged orders. Social engineering is rather easy with humans, and desperate people are quite prone to wishful thinking. This claim strikes me as bizarre. Why would you need nanotech to acquire more resources for self-improvement? Some botnets have been reported to have around 350,000 members. Currently, the distributed computing project Folding@Home, with 290,000 active clients composed mostly of volunteer home PCs and PlayStations, can reach speeds in the 10^15 FLOPS range [http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats]. Now say an AGI gets developed 20 years from now. A relatively conservative estimate - presuming an AGI couldn't hack into more computers than the best malware practitioners of today, that a personal computer would have a hundred times the computing power of today's, and that an AGI required a minimum of 10^13 FLOPS to run - would suggest that an AGI could either increase its own computational capacity 12,000-fold, or spawn 12,000 copies of itself. 
Alternatively, if it wanted to avo
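The 12,000-fold figure above is easy to sanity-check. A minimal sketch, using only the comment's own assumed numbers (Folding@Home's throughput spread over its clients, a 100x hardware speedup in 20 years, a 350,000-machine botnet, and 10^13 FLOPS per AGI):

```python
# Back-of-the-envelope check of the "12,000-fold" estimate; every input
# is an assumption taken from the comment above, not a measurement.
per_pc_today = 1e15 / 290_000        # Folding@Home: ~10^15 FLOPS over 290k clients
per_pc_future = per_pc_today * 100   # assume PCs are 100x faster in 20 years
botnet_size = 350_000                # size of a large reported botnet
total_flops = per_pc_future * botnet_size
agi_flops = 1e13                     # assumed minimum to run one AGI
copies = total_flops / agi_flops     # capacity multiple, or number of copies
print(round(copies))                 # 12069, i.e. roughly the quoted 12,000
```

Under these assumptions the botnet totals about 1.2 x 10^17 FLOPS, which is where both the "12,000-fold" capacity increase and the "12,000 copies" readings come from.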
0[anonymous]11yHere's my reply [http://lesswrong.com/r/discussion/lw/3kg/the_revelation/].
2Dr_Manhattan11yAnd what kind of computer controls the EMP? Or is it hand-cranked?
2XiXiDu11yThe point is that you people are presenting an idea that is an existential risk by definition. I claim that it might superficially appear to be the most dangerous of all risks but that this is mostly a result of its vagueness. If you say that there is the possibility of a superhuman intelligence taking over the world and all its devices to destroy humanity then that is an existential risk by definition. I counter that I dispute some of the premises and the likelihood of subsequent scenarios. So to make me update on the original idea you would have to support your underlying premises rather than arguing within already established frameworks that impose several presuppositions onto me.
-1gwern11yAre you aware of what the most common EMPs are? Nukes [http://en.wikipedia.org/wiki/Electromagnetic_pulse]. The computer that triggers the high explosive lenses is already molten vapor by the time that the chain reaction has begun expanding into a fireball. What kind of computer indeed!
0Dr_Manhattan11yI used this very example to argue with Robin Hanson during the after-lecture Q&A [http://www.vimeo.com/groups/designexrisk] (it should be in Parsons Part 2), but it did not seem to help :)
2JoshuaZ11yI'm not convinced of this. As time progresses there are more and more vulnerable systems on the Internet, many of which shouldn't be. That includes nuclear power plants, particle accelerators, conventional power plants and others. Other systems likely have some method of access, such as communication satellites. Soon this will also include almost completely automated manufacturing plants. An AI that quickly grows to control much of the Internet would have direct access to nasty systems and would simply have a lot more processing power. The extra processing power means that the AI could potentially crack cryptosystems used by secure parts of the Internet, or by non-Internet systems that communicate by radio. That said, I agree that without strong nanotech this seems like an unlikely scenario.
1XiXiDu11yYes, but then how does this risk differ from asteroid impacts, solar flares, bioweapons or nanotechnology? The point is that the only reason for a donation to the SIAI to have a higher expected payoff is the premise that AI can FOOM, kill all humans and take over the universe. In all other cases, dumb risks are as likely or more likely, and could wipe us out just as well. So why the SIAI? I'm trying to get a more definite answer to that question. I at least have to consider all the possible arguments I can come up with in the time it takes to write a few comments, and see what feedback I get. That way I can update my estimates and refine my thinking.
3Kaj_Sotala11yAsteroid impacts and solar flares are relatively 'dumb' risks, in that they can be defended against once you know how. They don't constantly try to outsmart you. This question is a bit like asking "yes, I know bioweapons can be dangerous, but how does the risk of genetically engineered e.coli differ from the risk of bioweapons". Bioweapons and nanotechnology are particular special cases of "dangerous technologies that humans might come up with". An AGI is potentially employing all of the dangerous technologies humans - or AGIs - might come up with.
-2XiXiDu11yYour comment assumes that I agree with some premises that I actually dispute. That an AGI could employ all other existential risks does not make it the most dangerous of them, because if such an AGI is only as likely as the other risks, then it doesn't matter whether we are wiped out by one of the other risks directly or by an AGI making use of one of them.
0JoshuaZ11yWell, one doesn't need to think that that it intrinsically different. One would just need to think that the marginal return here is high because we aren't putting in much resources now to look at the problem. Someone could potentially make that sort of argument for any existential risk.
2XiXiDu11yYes. I am getting much better responses from you than from some of the donors who replied, or from the SIAI itself. Which isn't very reassuring. Anyway, you are of course right there. The SIAI is currently looking into the one existential risk that is most underfunded. As I said before, I believe that the SIAI should exist and therefore should be supported. Yet I still can't follow some of the more frenetic supporters. That is, I don't see the case being as strong as some portray it. And there is not enough skepticism here, although people constantly reassure me that they were skeptical but eventually became convinced. They just don't seem very convincing to me.
3Rain11yI guess I should stop trying then? Have I not provided anything useful? And do I come across as "frenetic"? That's certainly not how I feel. And I figured 90 percent chance we all die to be pretty skeptical. Maybe you weren't referring to me...
1XiXiDu11yI'm sorry, I shouldn't have phrased my comment like that. No, I was referring to this [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38v9] and this comment [http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/38vc] that I just got. I feel too tired to reply to those right now, because I feel they do not answer anything and I have already tackled their content in previous comments. I sometimes get a bit weary when the amount of useless information gets too high. They probably feel the same about me, and I should be thankful that they take the time at all. I can assure you that my intention is not to attack anyone, or the SIAI, personally just to discredit them. I'm honestly interested, simply curious.
1Rain11yOK, cool. Yeah, this whole thing does seem to go in circles at times... it's the sort of topic where I wish I could just meet face to face and hash it out over an hour or so.
0XiXiDu11yA large solar outburst could cause similar havoc. Or some rogue group buys all of Google's stock, tweaks its search algorithm, and starts to influence election outcomes by slightly tweaking the results in favor of certain candidates while using its massive data repository to spy on people. There are a lot of scenarios. But the reason to consider the availability of advanced nanotechnology when assessing AI-associated existential risks is to reassess their impact and probability. An AI that can make use of advanced nanotech is certainly much more dangerous than one taking over the planet's infrastructure by means of cyber-warfare. The question is whether such a risk is still bad enough to outweigh other existential risks. That is the whole point here: comparing existential risks to assess the value of contributing to the SIAI. If you scale back from an AGI capable of quick self-improvement via nanotech to one limited to infrastructure take-over, then working to prevent such a catastrophe is no longer so far removed from working on an infrastructure more resistant to electromagnetic pulse weapons or solar flares.
7Kaj_Sotala11yThe correct way to approach a potential risk is not to come up with a couple of specific scenarios relating to the risk, evaluate those, and then pretend that you've done a proper analysis of the risk involved. That's analogous to trying to make a system secure by patching security vulnerabilities as they show up while not even trying to employ safety measures such as firewalls, or trying to make a software system bug-free simply by fixing bugs as they get reported while ignoring techniques such as unit tests, defensive programming, etc. It's been tried, and conclusively found to be a bad idea, by both the security and software engineering communities. If you want to be safe, you need to take into account as many possibilities as you can, not just concentrate on the particular special cases that happened to come to your attention. The proper unit of analysis here is not the particular techniques that an AI might use to take over. That's pointless: for any particular technique we discuss here, there might be countless others that the AI could employ, many of them ones nobody has even thought of yet. If we were in an alternate universe where Eric Drexler had been run over by a car before ever coming up with his vision of molecular nanotechnology, the whole concept of strong nanotech might be unknown to us. If we then only looked at the prospects for cyberwar, and concluded that an AI isn't a big threat because humans can do cyberwarfare too, we could be committing a horrible mistake by completely ignoring nanotech. Of course, since in that scenario we couldn't know about nanotech, our mistake wouldn't be ignoring it, but rather choosing a methodology which is incapable of dealing with unknown unknowns even in principle. So what is the right unit of analysis? It's the power of intelligence [http://intelligence.org/blog/2007/07/10/the-power-of-intelligence/]. 
It's the historical case of a new form of intelligence showing up on the planet and completely reshaping its
2XiXiDu11yYou have to limit the scope of unknown unknowns. Otherwise why not employ the same line of reasoning to risks associated with aliens? If someone says that there is no sign of aliens you just respond that they might hide or use different methods of communication. That is the same as saying that if the AI can't make use of nanotechnology it might make use of something we haven't even thought about. What, magic?
4Kaj_Sotala11yYes, you could very well make an argument for the risks posed by superintelligent aliens. But then you would also have to produce an argument for a) why it's plausible to assume that superintelligent aliens will show up anytime soon and b) what we could do to prevent an invasion of superintelligent aliens if they did show up. For AGI, we have an answer for point a (progress in computing power, neuroscience and brain reverse-engineering, etc.) and a preliminary answer for point b (figure out how to build benevolent AGIs). There are no corresponding answers to points a and b for aliens. No it's not: think about this again. "Aliens of superior intelligence might wipe us out by some means we don't know" is symmetric to "an AGI of superior intelligence might wipe us out by some means we don't know". But "aliens of superior intelligence might appear out of nowhere" is not symmetric to "an AGI of superior intelligence might wipe us out by some means we don't know".
0XiXiDu11yI didn't mean to suggest that aliens are a more likely risk than AI. I was trying to show that unknown unknowns cannot be invoked to the extent you suggest. You can't just say that ruling out many of the ways an AI could be dangerous doesn't make it less dangerous because it might come up with something we haven't thought of. That line of reasoning would allow you to undermine any evidence to the contrary. I'll be back tomorrow.
1Kaj_Sotala11yNot quite. Suppose that someone brought up a number of ways by which an AI could be dangerous, and somebody else refuted them all by pointing out that there's no particular way in which having superior intelligence would help with them. (In other words, humans could do those things too, and an AI doing them wouldn't be any more dangerous.) If I then couldn't come up with any examples where superior intelligence would help, that would be evidence against the claim that a superior intelligence helps overall. But all of the examples we have been discussing (nanotech warfare, biological warfare, cyberwarfare) are technological arms races, and in a technological arms race, superior intelligence brings quite a decisive edge. In the discussion about cyberwarfare, you asked what makes the threat from an AI hacker different from the threat of human hackers. The answer is that hacking is a task that primarily requires qualities such as intelligence and patience, both of which an AI could have far more of than humans do. Certainly human hackers could do a lot of harm as well, but a single AI could be as dangerous as all of the 90th-percentile human hackers put together.
0XiXiDu11yWhat I am arguing is that the power of intelligence, and therefore any associated risk, is vastly overestimated. There are many dumb risks that can easily accomplish the same thing and wipe us out; it doesn't take superhuman intelligence to do that. I also do not see enough evidence for the premise that other, superior forms of intelligence are very likely to exist. Further, I argue that there is no hint of any intelligence out there reshaping its environment. The stars show no sign of intelligent tinkering. I provided many other arguments [http://lesswrong.com/lw/304/what_i_would_like_the_siai_to_publish/2wpb] for why other risks might be more worthy of our contribution. I came up with all those ideas in the time it took to write those comments. I simply expect a lot more arguments, and other kinds of evidence supporting their premises, from an organisation that has been around for over 10 years.
4timtyler11yLarge brains can be dangerous to those who don't have them. Look at the current human-caused mass extinction.
2Kaj_Sotala11yYes, there are dumb risks that could wipe us out just as well: but only a superhuman intelligence with desires different from humanity's is guaranteed to wipe us out. You don't need qualitative differences: just take a human-level intelligence and add enough hardware that it can run many times faster than the best human thinkers, and hold far more things in its mind at once. If it came to a fight, the humanity of 2000 could easily muster the armies to crush the best troops of 1800 without trouble. That's just the result of 200 years of technological development and knowledge acquisition, and doesn't even require us to be more intelligent than the humans of 1800. We may not have observed aliens reshaping their environment, but we can certainly observe humans reshaping theirs. This planet is full of artificial structures. We've blanketed the Earth with lights that can be seen anywhere we've bothered to establish habitation [http://visibleearth.nasa.gov/view_rec.php?id=1438]. We've changed the Earth so much that we're disturbing global climate patterns, and now we're talking about large-scale engineering work to counteract those disturbances [https://secure.wikimedia.org/wikipedia/en/wiki/Geoengineering]. If I choose to, there are ready transportation networks that will get me pretty much anywhere on Earth, and ready networks for supplying me with food, healthcare and entertainment on every continent (though admittedly Antarctica is probably a bit tricky from a tourist's point of view).
1timtyler11yIt seems as though it is rather easy to imagine humans being given the "Deep Blue" treatment in a wide range of fields. I don't see why this would be a sticking point. Human intelligence is plainly just awful, in practically any domain you care to mention.
1shokwave11yUh, that's us. *wave* In case you didn't realise, humanity is the proof of concept that superior intelligence is dangerous. Ask a chimpanzee. Have you taken an IQ test? Anyone who scores significantly higher than you constitutes a superior form of intelligence. Few such dumb risks are being pursued by humanity. Superhuman intelligence solves all dumb risks unless you postulate a dumb risk that is in principle unsolvable. Something like collapse of vacuum energy might do it.
0[anonymous]11yContributing to the creation of FAI doesn't just decrease the likelihood of UFAI, it also decreases the likelihood of all the other scenarios that end up with humanity ceasing to exist.
0timtyler11y"The Singularity argument"? What's that, then?
-3shokwave11y
1. FOOM is possible
2. FOOM is annihilation
3. Expected value should guide your decisions

From 1 and 2:
4. Expected value of FOOM is "huge bad"

From 3 and 4:
5. Make decisions to reduce expected value of FOOM

The SIAI corollary is:
6. There exists a way to turn FOOM = annihilation into FOOM = paradise
7. There exists a group "SIAI" that is making the strongest known effort towards that way

From 5, 6 and 7:
8. Make decisions to empower SIAI

edit: reformulating the SIAI corollary to bring out hidden assumptions.
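Steps 1 through 4 above are just an expected-value computation. A minimal sketch with made-up illustrative numbers (none of them from the thread) shows the structure:

```python
# Hypothetical numbers, chosen only to illustrate premise 4: even a small
# P(FOOM) multiplied by an astronomically large loss yields a large
# expected loss, which is what "expected value of FOOM is huge bad" means.
p_foom = 0.001            # assumed probability that FOOM occurs
loss = 10**10             # assumed magnitude of annihilation (e.g. lives)
expected_loss = p_foom * loss
print(expected_loss)      # on the order of 10^7
```

The argument's force therefore depends entirely on how one estimates `p_foom` and `loss`, which is exactly what the surrounding debate is about.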
-3timtyler11y...and what is "FOOM"? Or are 1 and 2 supposed to serve as a definition? Either way, this is looking pretty ridiculous :-(
6shokwave11yI was going to give a formal definition¹ but then I noticed you said "either way". Assume that 1 and 2 are the definition of FOOM: that it is a possible event, and that it is the end of everything. I challenge you to substantiate your claim of "ridiculous", as formally as you can. Do note that I will be unimpressed with "anything defined by 1 and 2 is ridiculous". Asteroid strikes and rapid climate change are two non-ridiculous concepts that satisfy the definition given by 1 and 2. ¹. And here it is: FOOM is the concept that self-improvement is cumulative and additive and possibly fast. Let X be an agent's intelligence, and let X + f(X) = X' be the function describing that agent's ability to improve its intelligence (where f(X) is the improvement generated by an intelligence of X, and X' is the intelligence of the agent post-improvement). If X' > X, and X' + f(X') evaluates to X'', and X'' > X', the agent is said to be a recursively self-improving agent. If X + f(X) evaluates in a short period of time, the agent is said to be a FOOMing agent.
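The footnoted definition can be made concrete with a toy iteration. The two improvement functions below are illustrative assumptions only, not anything proposed in the thread; they show that whether recursion "FOOMs" depends entirely on the shape of f:

```python
# Iterate X <- X + f(X), the recursive self-improvement map from the
# footnote above, for a fixed number of steps.
def self_improve(x, f, steps):
    for _ in range(steps):
        x = x + f(x)
    return x

# If improvement scales with intelligence (assume f(X) = 0.5 * X),
# ten steps compound geometrically: each step multiplies X by 1.5.
fast = self_improve(1.0, lambda x: 0.5 * x, 10)   # 1.5**10, about 57.7

# If returns diminish (assume f(X) = 1 / X), ten steps barely reach ~4.8:
# still recursively self-improving by the definition, but no FOOM.
slow = self_improve(1.0, lambda x: 1.0 / x, 10)

print(fast, slow)
```

This is why critics press on what f actually looks like: the definition's "If X + f(X) evaluates in a short period of time" clause does all the work.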
2timtyler11yRidiculousness is in the eye of the beholder. Probably the biggest red flag was that there was no mention of what was supposedly going to be annihilated - and yes, it does make a difference. The supposedly formal definition tells me very little - because "short" is not defined - and because f(X) is not a specified function. Saying that it evaluates to something positive is not sufficient to be useful or meaningful.
2Will_Sawin11yFast enough that none of the other intelligences in Earth can copy its strengths or produce countermeasures sufficient to stand a chance in opposing it.
-3timtyler11yYes - though it is worth noting that if Google wins, we may have passed that point without knowing it back in 1998 sometime.
1JoshuaZ11yFooming has been pretty clearly described. Fooming amounts to an entity drastically increasing both its intelligence and ability to manipulate reality around it in a very short time, possibly a few hours or weeks, by successively improving its hardware and/or software.
0timtyler11yUh huh. Where, please? "Possibly a few hours or weeks"?!? [emphasis added] Is it a few hours? Or a few weeks? Or something else entirely? ...and how much is "drastically"? Vague definitions are not worth critics bothering to attack. In an attempt to answer my own question, this one [http://lesswrong.com/lw/we/recursive_selfimprovement/] is probably the closest I have seen from Yudkowsky. It apparently specifies less than a year - though it seems rather vague about the proposed starting and finishing capabilities.
2JoshuaZ11yExample locations where this has been defined include Mass Driver's post here [http://lesswrong.com/lw/2df/what_if_ai_doesnt_quite_go_foom/], where he defined it slightly differently as "to quickly, recursively self-improve so as to influence our world with arbitrarily large strength and subtlety". I think he meant indefinitely large there, but the essential idea is the same. I note that you posted comments in that thread, so presumably you've seen it before, and you explicitly discussed fooming. Did you only recently decide that it wasn't sufficiently well-defined? If so, what caused that decision? Well, I've seen different timelines used by people in different contexts. Note that this isn't just a function of definitions, but also of when exactly an AI starts doing this. An AI that shows up later, when we have faster machines and more nanotech, could possibly go foom faster than an AI that shows up earlier, when we have fewer technologies to work with. But for what it's worth, I doubt anyone would call it going foom if the process took more than a few months. If you absolutely insist on an outside estimate for purposes of discussion, 6 weeks should probably be a decent one. It isn't clear to me what you are finding too vague about the definition. Is it just the timeline, or is it some other aspect?
3NancyLebovitz11yThis might be a movie threat notion-- if so, I'm sure I'll be told. I assume the operational definition of FOOM is that the AI is moving faster than human ability to stop it. As theoretically human-controlled systems become more automated, it becomes easier for an AI to affect them. This would mean that any humans who could threaten an AI would find themselves distracted or worse by legal, financial, social network reputational, and possibly medical problems. Nanotech isn't required.
0JoshuaZ11yYes, that seems like a movie threat notion to me. If an AI has the power to do those things to arbitrary people, it can likely scale up from there to full control so quickly that it shouldn't need to bother with such steps, although it is minimally plausible that a slow-growing AI might need to do that.
0timtyler11yNo, I've been aware for the issue for a loooong time.
1JoshuaZ11yOk. So what caused you to use the term as if it had a specific definition when you didn't think it did? Your behavior is very confusing. You've discussed foom related issues on multiple threads. You've been here much longer than I have; I don't understand why we are only getting to this issue now.
-1timtyler11yI did raise this closely-related issue [http://lesswrong.com/lw/wf/hard_takeoff/pcu] over two years ago. To quote the most relevant bit: There may well be other instances in between - but scraping together references on the topic seems as though it would be rather tedious. I did what, exactly?
1JoshuaZ11yThe quote you give focuses just on the issue of time-span. It also has already been addressed in this thread. Machine intelligence in the sense it is often used is not at all the same as artificial general intelligence. This has in fact been addressed by others in this subthread. (Although it does touch on a point you've made elsewhere that we've been using machines to engage in what amounts to successive improvement which is likely relevant.) I would have thought that your comments in the previously linked thread started by Mass Driver would be sufficient, like when you said: And again in that thread where you said: Although rereading your post, I am now wondering if you were careful to put "anti-foom" in quotation marks because it didn't have a clear definition. But in that case, I'm slightly confused to how you knew enough to decide that that was an anti-foom argument.
0timtyler11yRight - so, by "anti-foom factor", I meant: factor resulting in relatively slower growth in machine intelligence. No implication that the "FOOM" term had been satisfactorily quantitatively nailed down was intended. I do get that the term is talking about rapid growth in machine intelligence. The issue under discussion is: how fast is considered to be "rapid".
-2timtyler11ySix weeks - from when? Machine intelligence has been on the rise since the 1950s. Already it exceeds human capabilities in many domains. When is the clock supposed to start ticking? When is it supposed to stop ticking? What is supposed to have happened in the middle?
6shokwave11yThere is a common and well-known distinction between what you mean by 'machine intelligence' and what is meant by 'AGI'. Deep Blue is a chess AI. It plays chess. It can't plan a stock portfolio because it is narrow. Humans can play chess and plan stock portfolios, because they have general intelligence. Artificial general intelligence, not 'machine intelligence', is under discussion here.
-3timtyler11yNothing is "arbitrarily large" in the real world. So, I figure that definition confines FOOM to the realms of fantasy. Since people are still discussing it, I figure they are probably talking about something else.
JoshuaZ: Tim, I have to wonder if you are reading what I wrote, given that the sentence right after the quote is "I think he meant indefinitely large there, but the essential idea is the same." And again, if you thought earlier that foom wasn't well-defined, what made you post using the term explicitly in the linked thread? If you have just now decided that it isn't well-defined, then a) what definition do you have that is more carefully specified, and b) what made you conclude that it wasn't narrowly defined enough?
timtyler: What distinction are you trying to draw between "arbitrarily large" and "indefinitely large" that turns the concept into one which is applicable to the real world? Maybe you can make up a definition - but what you said was that "fooming has been pretty clearly described". That may be true, but it surely needs to be referenced. What exactly am I supposed to have said in the other thread under discussion? Lots of factors indicate that "FOOM" is poorly defined - including the disagreement surrounding it, and the vagueness of the commonly referenced sources about it. Usually, step 1 in those kinds of discussions is to make sure that people are using the terms in the same way - and have a real disagreement, not just a semantic one. Recently, I participated in this exchange [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38pd] - where a poster here gave p(FOOM) = 0.001 - and when pressed they agreed that they did not have a clear idea of what class of events they were referring to.
JoshuaZ: Arbitrarily large means just that, in the mathematical sense. Indefinitely large is a term that would be used in other contexts. In the contexts where I've seen "indefinitely" used, and the way I would mean it, it means so large that the exact value does not matter for the purpose under discussion (as in "our troops can hold the fort indefinitely"). Disagreement about something is not always a definitional issue. Indeed, when dealing with people on LW, where people try to be as rational as possible and have whole sequences about tabooing words and the like, one shouldn't assign a very high probability to disagreements being due to definitions. Moreover, as one of the people who assign a low probability to foom and have talked to people here about those issues, I'm pretty sure that we aren't disagreeing on definitions. Our estimates for what the world will probably look like in 50 years disagree. That's not simply a definitional issue. Ok. So why are you now doing step 1 years later? And moreover, how long should this step take as you've phrased it, given that we know that there's substantial disagreement in terms of predicted observations about reality in the next few years? That can't come from definitions. This is not a tree in a forest. Yes! Empirical evidence. Unfortunately, it isn't very strong evidence. I don't know if he meant in that context that he didn't have a precise definition or just that he didn't feel that he understood things well enough to assign a probability estimate. Note that those aren't the same thing.
timtyler: I don't see how the proposed word substitution is supposed to help. If FOOM means: "to quickly, recursively self-improve so as to influence our world with indefinitely large strength and subtlety", we still face the same issues - of how fast is "quickly" and how big is "indefinitely large". Those terms are uncalibrated. For the idea to be meaningful or useful, some kind of quantification is needed. Otherwise, we are into "how long is a piece of string?" territory. I did also raise the issue two years ago [http://lesswrong.com/lw/wf/hard_takeoff/pcu]. No response, IIRC. I am not too worried if FOOM is a vague term. It isn't a term I use very much. However, for the folks here - who like to throw their FOOMs around - the issue may merit some attention.
JoshuaZ: If indefinitely large is still too vague, you can replace it with "to quickly, recursively self-improve so as to influence our world with sufficient strength and subtlety that a) it can easily wipe out humans, b) humans are not a major threat to it achieving almost any goal set, and c) humans are sufficiently weak that it gains nothing by bothering to bargain with us." Is that narrow enough?
timtyler: The original issues were:

* When to start the clock?
* When to stop the clock?
* What is supposed to have happened in the meantime?

You partly address the third question - and suggest that the clock is stopped "quickly" after it is started. I don't think that is any good. If we have "quickly" being the proposed-elsewhere "inside six weeks", it is better - but there is still a problem, which is that there are no constraints placed on the capabilities of the humans back when the clock was started. Maybe they were just as weak back then. Since I am the one pointing out this mess, maybe I should also be proposing solutions: I think the problem is that people want to turn the "FOOM" term into a binary categorisation - to FOOM or not to FOOM. Yudkowsky's original way of framing the issue doesn't really allow for that. The idea is explicitly and deliberately not quantified in his post on the topic [http://lesswrong.com/lw/wf/hard_takeoff/]. I think the concept is challenging to quantify - and so there is some wisdom in not doing so. All that means is that you can't really talk about "to FOOM or not to FOOM". Rather, there are degrees of FOOM. If you want to quantify or classify them, it's your responsibility to say how you are measuring things. It does look as though Yudkowsky has tried this elsewhere [http://lesswrong.com/lw/we/recursive_selfimprovement/] - and made an effort to say something a little bit more quantitative.
JoshuaZ: I'm puzzled a bit by your repeated questions about when to "start the clock", and this seems like it is possibly connected to the issue that people discussing fooming are discussing a general intelligence going foom. They aren't talking about little machine intelligences, whether neural networks or support vector machines or matchbox learning systems. They are talking about artificial general intelligence. The "clock" starts from when a general intelligence roughly as intelligent as a bright human goes online. Huh? I don't follow.
Rain: Foom is finding the slider bar in the config menu labeled 'intelligence' and moving it all the way to the right.
timtyler: I don't know what it is - but I am pretty sure that is not it. If you check with http://lesswrong.com/lw/wf/hard_takeoff/ you will see there's a whole bunch of vague, hand-waving material about how fast that happens.
Rain: If you're willing to reject every definition presented to you, you can keep asking the question as long as you want. I believe this is typically called 'trolling'. What is your definition of foom?
Kevin: I'm interested!
DSimon: Thirded interest.
shokwave: I am also interested.

The rational reasons to signal are outlined in the post Why Our Kind Can't Cooperate, and there are more good articles with the Charity tag.

My personal reasons for supporting SIAI are outlined entirely in this comment.

Please inform me if anyone knows of a better charity.

XiXiDu: As long as you presume that the SIAI saves a potential galactic civilization from extinction (that is, from never being created), and assign a high enough probability to that outcome, nobody is going to be able to inform you of a charity with a higher payoff. At least as long as no other organization makes similar claims (implicitly or explicitly). If you don't mind, I would like you to state some numerical probability estimates [http://lesswrong.com/lw/304/what_i_would_like_the_siai_to_publish/2voa]:

1. The risk of human extinction by AI (irrespective of countermeasures).
2. The probability of the SIAI succeeding in implementing an AI (see 3.) that takes care of any risks thereafter.
3. The estimated trustworthiness of the SIAI (signaling common good (friendly AI/CEV) while following selfish objectives (unfriendly AI)).

I'd also like you to tackle some problems I see regarding the SIAI in its current form:

Transparency: How do you know that they are trying to deliver what they are selling? If you believe the premise of AI going FOOM, and that the SIAI is trying to implement a binding policy based on which the first AGI is going to FOOM, then you believe that the SIAI is an organisation involved in shaping the future of the universe. If the stakes are this high, there exists a lot of incentive for deception. Can you conclude from the fact that someone writes a lot of ethically correct articles and papers that that output is reflective of their true goals?

Agenda and Progress: The current agenda [http://intelligence.org/research/researchareas] seems to be very broad and vague. Can the SIAI make effective progress given such an agenda, compared to specialized charities and workshops focusing on narrower sub-goals?

* How do you estimate their progress?
* What are they working on right now?
* Are there other organisations working on some of the sub-goals that make better progress?

As multifoliaterose implied here [http://lesswrong.com/r/discussion/lw/3aa/friendly_a

I consider the above form of futurism to be the "narrow view". It considers too few possibilities over too short a timespan.

  • AI is not the only extinction risk we face.
  • AI is useful for a LOT more than just preventing extinction.
  • FOOM isn't necessary for AI to cause extinction.
  • AI seems inevitable, assuming humans survive other risks.
  • Human extinction by AI doesn't require the AI to swallow its light cone (Katja).
  • My interpretation of Ben's article is that he's saying SIAI is correct in everything except the probability that they can change the outcome.
  • You didn't mention third parties who support SIAI, like Nick Bostrom, whom I consider to be the preeminent analyst on these topics.

I'm not academic enough to provide the defense you're looking for. Instead, I'll do what I did at the end of the above linked thread, and say you should read more source material. And no, I don't know what the best material is. And yes, this is SIAI's problem. They really do suck at marketing. I think it'd be pretty funny if they failed because they didn't have a catchy slogan...

I will give one probability estimate, since I already linked to it: SIAI fails in their mission AND all homo sapiens are extinct by the year 2100: 90 percent. I'm donating in the hopes of reducing that estimate as much as possible.

XiXiDu: One of my main problems regarding risks from AI is that I do not see anything right now that would hint at the possibility of FOOM. I am aware that you can extrapolate from the chimpanzee-human bridge. But does the possibility of superchimpanzee intelligence really imply superhuman intelligence? Even if that were the case - and I consider the evidence too sparse to justify neglecting other risks - I do not see that it implies FOOM (e.g. vast amounts of recursive self-improvement). You might further argue that even human-level intelligence (EMs or AI) might pose a significant risk when sped up or applied by brute force. In any case, I believe that the problems associated with creating any such intelligence are vastly greater than the problem of limiting an intelligence - its scope of action. I believe it is reasonable to assume that there will be a gradual development, with many small-impact mistakes, that will lead to a thorough comprehension of intelligence and its risks before any superhuman intelligence could pose an existential risk.
Rain: I see foom as a completely separate argument from FAI or AGI or extinction risks. Certainly it would make things more chaotic and difficult to handle, increasing risk and uncertainty, but it's completely unnecessary for chaos, risk, and destruction to occur - humans are quite capable of that on their own. Once an AGI is "out there" and starts getting copied (assuming no foom), I want to make sure they're all pointed in the right direction, regardless of capabilities, just as I want that for nuclear and other weapons. I think there's a possibility we'll be arguing over the politics of enemy states getting an AGI. That doesn't seem to be a promising future. FAI is arms control, and a whole lot more.
XiXiDu: I do not see that. The first AGI will likely be orders of magnitude slower (not less intelligent) than a standard human and be running on some specialized computational substrate (a supercomputer). If you remove FOOM from the equation, then I see many other existential risks being as dangerous as AI-associated risks.
Rain: Again, a point-in-time view. Maybe you're just not playing it out in your head like I am? Because when you say, "the first AGI will likely be orders of magnitude slower", I think to myself, uh, who cares? What about the one built three years later that's 3x faster and runs on a microcomputer? Does the first one being slow somehow make that other one less dangerous? Or that no one else will build one? Or that AGI theory will stagnate after the first artificial mind goes online? (?!?!) Why does it have to happen 'in one day' for it to be dangerous? It could take a hundred years, and still be orders of magnitude more dangerous than any other known existential risk.
XiXiDu: Yes, because I believe that the development will be gradual enough to tackle any risks on the way to a superhuman AGI, if superhuman capability is possible at all. There are certain limitations. Shortly after the invention of rocket science, people landed on the moon. But the development eventually halted or slowed down. We haven't reached other star systems yet. With that metaphor I want to highlight that I am not aware of good arguments or other kinds of evidence indicating that an AGI would likely result in a runaway risk at any point of its development. It is possible, but I am not sure that because of its low probability we can reasonably neglect other existential risks. I believe [http://lesswrong.com/lw/304/what_i_would_like_the_siai_to_publish/2wpb] that once we know how to create artificial intelligence capable of learning on a human level, our comprehension of its associated risks and our ability to limit its scope will have increased dramatically as well.
Rain: You're using a different definition of AI than me. I'm thinking of 'a mind running on a computer' and you're apparently thinking of 'a human-like mind running on a computer', where 'human-like' includes a lot of baggage about 'what it means to be a mind' or 'what it takes to have a mind'. I think any AI built from scratch will be a complete alien, and we won't know just how alien until it starts doing stuff for reasons we're incapable of understanding. And history has proven that the more sophisticated and complex the program, the more bugs, and the more it goes wrong in weird, subtle ways. Most such programs don't have will, intent, or the ability to converse with you, making them substantially less likely to run away. And again, you're positing that people will understand, accept, and put limits in place, where there's substantial incentive to let it run as free and as fast as possible.
XiXiDu: Sorry, I meant human-level learning capability when I said human-like.
timtyler: We have had quite a bit of self-improvement so far - according to my own essay: http://www.alife.co.uk/essays/the_intelligence_explosion_is_happening_now/ Progress is accelerating in a sense - due to synergy between developments making progress easier. ...when progress is at its fastest, things might well get pretty interesting. Much in the way of "existential" risk seems pretty unlikely to me - but with speculative far-future events, it is hard to be certain. What does look as though it will go up against the wall is unmodified human beings - and other multicellular DNA-based lifeforms. There is no way these can compete against engineering and intelligent design - and they look like an unsuitable foundation to build directly on top of. Some will paint that as an apocalypse - though to me it looks like a sensible, obvious and practically inevitable move. The most reasonable hope for the continued existence of biological humans is in future equivalents of museums and historical simulations - IMO.
Rain: To restate my original question, is there anyone out there doing better than your estimated 0.0000003%? Even though the number is small, it could still be the highest.
XiXiDu: None whose goal is to save humanity from an existential risk. Although asteroid surveillance might come close, I'm not sure. It is not my intention to claim that donating to the SIAI is worthless; I believe that the world does indeed need an organisation that tackles the big picture. In other words, I am not saying that you shouldn't be donating to the SIAI - I am happy someone does (if only because of LW). But the fervor in this thread seemed to me completely unjustified. One should seriously consider if there are other groups worthy of promotion, or if there should be other groups doing the same as the SIAI or dealing with one of its sub-goals. My main problem is how far I should go in neglecting other problems in favor of some high-impact low-probability event. If your number of possible beings of human descent is high enough, and you assign each being enough utility, you can outweigh any low probability. You could probably calculate not to help someone who is drowning because 1) you'd risk your own life and all the money you could make to donate to the SIAI, and 2) in that time you could tell 5 people about existential risks from AI. I am exaggerating to highlight my problem. I'm just not educated enough yet; I have to learn more math, especially probability. Right now I feel that it is unreasonable to donate all my money (or a lot of it) to the SIAI. It really saddens me to see how often LW perceives any critique of the SIAI as ill-intentioned. As if people want to destroy the world. There are some morons out there, but most really would like to save the world if possible. They just don't see that the SIAI is a reasonable way to do so.
Rain: I agree with SIAI's goals. I don't see it as "fervor". I see it as: I can do something to make this world a better place (according to my own understanding, in a better way than any other possible), therefore I will do so. I compartmentalize. Humans are self-contradictory in many ways. I can send my entire bank account to some charity in the hopes of increasing the odds of friendly AI, and I can buy a hundred dollar bottle of bourbon for my own personal enjoyment. Sometimes on the same day. I'm not ultra-rational or pure utilitarian. I'm a regular person with various drives and desires. I save frogs [http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/2a27] from my stairwell rather than driving straight to work and earning more money. I do what I can.
Rain: I have seriously considered it. I have looked for such groups, here and elsewhere, and no one has ever presented a contender. That's why I made my question as simple and straightforward as possible: name something more important. No one's named anything so far, and I still read for many hours each week on this and other such topics, so hopefully if one arises, I'll know and be able to evaluate it. I donate based on relative merit. As I said at the end of my original supporting post: so far, no one else seems to come close to SIAI. I'm comfortable with giving away a large portion of my income because I don't have much use for it myself. I post it here because it encourages others to give of themselves. I think it's the right thing to do. I know it's hard to see why. I wish they had better marketing materials. I was really hoping the last challenge, with projects like a landing page, a FAQ, etc., would make a difference. So far, I don't see much in the way of results, which is upsetting. I still think it's the right place to put my money.
TheOtherDave: If you're going to do this sort of explicit decomposition at all, it's probably also worth thinking explicitly about the expected value of a donation. That is: how much does your .0001 estimate of SIAI's chance of preventing a humanity-destroying AI go up or down based on an N$ change in its annual revenue?
XiXiDu: Thanks, you are right. I'd actually do a lot more, but I feel I am not yet ready to tackle this topic mathematically; I only started getting into math in 2009. I asked several times for an analysis with input variables I could use to come up with my own estimate of the expected value of a donation to the SIAI. I asked people who are convinced of the SIAI to provide the decision procedure by which they were convinced. I asked them to lay it open to public inspection so people could reassess the procedure and calculations to compute their own conclusion. In response they asked me to do so [http://lesswrong.com/lw/304/what_i_would_like_the_siai_to_publish/2wah] myself. I do not take it amiss; they do not have to convince me. I am not able to do so yet. But while learning math I try to encourage other people to think about it.
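The "input variables" being asked for can at least be sketched. The following is a minimal illustration, not anyone's endorsed model: every number in it is a placeholder, and the only point is the shape of the calculation TheOtherDave describes - the expected value of a donation depends on how much the donation shifts the probability of success, not on the baseline probability alone.

```python
def expected_value_per_dollar(p_without, p_with, value_of_success, donation):
    """Expected benefit per donated dollar, given how the donation shifts
    the probability of the charity achieving its goal.

    All inputs are hypothetical placeholders for the reader to replace."""
    delta_p = p_with - p_without
    return delta_p * value_of_success / donation

# Illustrative placeholder numbers only:
ev = expected_value_per_dollar(
    p_without=1e-4,      # chance of success without the donation
    p_with=1.001e-4,     # chance of success with an extra $1,000
    value_of_success=1e12,  # stipulated value of the good outcome
    donation=1_000,
)
print(ev)  # value generated per marginal dollar under these assumptions
```

Anyone can substitute their own estimates for the four inputs; the disagreement in this thread is over those inputs, not over the arithmetic.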
XiXiDu: I feel that this deserves a direct answer. I think it is not just about money. The question would be: what would they do with it - would they actually hire experts? I will assume the best-case scenario here. If the SIAI were able to obtain a billion dollars, I'd estimate its chance of preventing a FOOMing uFAI at 10%.
Emile: This part is the one that seems the most different from my own probabilities: So, do you think the default case is a friendly AI? Or at least innocuous AI? Or that friendly AI is easy enough so that whoever first makes a fooming AI will get the friendliness part right with no influence from the SIAI?
XiXiDu: No, I do not believe that the default case is friendly AI. But I believe that AI going FOOM is, if possible at all, very hard to accomplish. Surely everyone agrees here. But at the moment I do not share the opinion that friendliness, that is, implementing scope boundaries, is a very likely failure mode. I see it this way: if one can figure out how to create an AGI that FOOMs (no, I do not think AGI implies FOOM [http://lesswrong.com/lw/304/what_i_would_like_the_siai_to_publish/2w6h]), then one has a thorough comprehension of intelligence and its associated risks. I just don't see that a group of researchers (I don't believe a mere group is enough anyway) will be smart enough to create an AGI that does FOOM but somehow fails to limit its scope. Please consider reading this comment [http://lesswrong.com/lw/304/what_i_would_like_the_siai_to_publish/2wpb] where I cover this topic in more detail. That is why I believe that only 5% of all AIs going FOOM will be an existential risk to all of humanity. That is my current estimate; I'll of course update on new evidence (e.g. arguments).
timtyler: I looked at http://lesswrong.com/lw/wf/hard_takeoff/ I was left pretty puzzled about what "AI go FOOM" was actually intended to mean. The page shies away from making any kind of quantifiable statement. You seem to be assigning probabilities to this - as though it is a well defined idea - but what is it supposed to mean?
XiXiDu: I know (I don't), but since I asked Rain to assign probabilities to it, I felt that I had to state my own as well. I asked him to do so because I read that some people are arguing in favor of making probability estimates, to say a number. But since I haven't come across much analysis that actually does state numbers, I thought I'd ask a donor who contributed [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38d0] the current balance of his bank account.
Vaniver: My understanding is it means "the AI gets to a point where software improvements allow it to outpace us and trick us into doing anything it wants us to, and it understands nanotechnology so well that it soon has unlimited material power." Instead of 1e-4 I'd probably put that at 1e-6 to 1e-9, but I have little experience accurately estimating very low probabilities. (The sticking point of my interpretation is something that seems glossed over in the stuff I've read about it - that the AI only has complete access to software improvements. If it's working on chips made of silicon, all it can do is tell us better chip designs (unless it's hacked a factory, and is able to assemble itself somehow). Even if it's as intelligent as EY imagines it can be, I don't see how it could derive GR from a webcam-quality picture; massive intelligence is no replacement for scant evidence. Those problems can be worked around - if it has access to the internet, it's got a lot of evidence and a lot of power - but they suggest that in some limited cases FOOM is very improbable.)
timtyler: I am pretty sure that the "FOOM" term is an attempt to say something about the timescale of the growth of machine intelligence. So, I am sceptical about definitions which involve the concept of trickery. Surely rapid growth need not necessarily involve trickery. My FOOM sources [http://lesswrong.com/lw/wf/hard_takeoff/] don't seem to mention trickery. Do you have any references relating to the point?
ata: The bit about "trickery" was probably just referencing the weaknesses of AI boxing [http://yudkowsky.net/singularity/aibox]. You are correct that it's not essential to the idea of hard takeoff.
Eliezer Yudkowsky: Why Our Kind Can't Cooperate [http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/], exhibit #37B.
AlexU: Why has this comment been downvoted so much? It's well-written and makes some good points. I find it really disheartening every time I come on here to find that a community of "rationalists" is so quick to muffle anyone who disagrees with LW collective opinion.

It's been downvoted - I guess - because it sits on the wrong side of a very interesting dynamic: what I call the "outside view dismissal" or "outside view attack". It goes like this:

A: From the outside, far too many groups discover that their supported cause is the best donation avenue. Therefore, be skeptical of any group advocating their preferred cause as the best donation avenue.

B: Ah, but this group tries to the best of their objective abilities to determine the best donation avenue, and their cause has independently come out as the best donation avenue. You might say we prefer it because it's the best, not the other way around.

A: From the outside, far too many groups claim to prefer it because it's the best and not the other way around. Therefore, be skeptical of any group claiming they prefer a cause because it is the best.

B: Ah, but this group has spent a huge amount of time and effort training themselves to be good at determining what is best, and an equal amount of time training themselves to notice common failure modes like reversing causal flows because it looks better.

A: From the outside, far too many groups claim such training for it to be true. Th...

XiXiDu: Where is the evidence?
shokwave: All of the evidence that an AI is possible¹, then the best method of setting your prior for the behavior of an AI². ¹. Our brains are proof of concept. That it is possible for a lump of flesh to be intelligent means AI is possible - even under pessimistic circumstances, even if it means simulating a brain with atomic precision and enough power to run the simulation faster than 1 second per second. Your pessimism would have to reach "the human brain is irreducible" in order to disagree with this proof, by which point you'd have neurobiologists pointing out you're wrong. ². Which would be equal distribution over all possible points in relevant-thing-space, in this case mindspace [http://lesswrong.com/lw/rm/the_design_space_of_mindsingeneral/].
TheOtherDave: Just to clarify: are you asserting that this comment, and the associated post about the size of mindspace, represent the "convince even a skeptic" collection of evidence you were alluding to in the grandparent (which XiXiDu quotes)? Or was there a conversational disconnect somewhere along the line?
shokwave: I didn't provide all of the evidence that an AI is possible, just one strong piece. All the evidence, plus a good prior for how likely the AI is to turn us into more useful matter, should be enough to convince even a skeptic. However, the brain-as-proof-of-concept idea is really strong: try to formulate an argument against that position. No one can, unless they're a skeptic like A above, or an "UFAI-denier" (in the style of climate change deniers) posing as a skeptic, or they privilege what they want to believe over what they ought to believe. There are probably half a dozen more failure modes I haven't spotted.
TheOtherDave: Sounds like a conversational disconnect to me, then: at least, going back through the sequence of comments, it seems the sequence began with an expression of skepticism of the claim that "a donation to the Singularity Institute is the most efficient charitable investment," and ended with a presentation of an argument that UFAI is both possible and more likely than FAI. Thanks for clarifying. Just to pre-emptively avoid being misunderstood myself, since I have stepped into what may well be a minefield of overinterpretation, let me state some of my own related beliefs: I consider human-level, human-produced AGI possible (confidence level ~1) within the next century (C ~.85-.99, depending on just what "human-level" means and assuming we continue to work on the problem), likely not within the next 30 years (C<.15-.5, depending as above). I consider self-improving AGI and associated FOOM, given human-level AGI, a great big question mark: I'd say >99% of HLAGIs we develop will be architected in such a way that significant self-improvement is unlikely (much as our own architectures make it unlikely for us), but the important question is whether the actual number of exceptions is 0 or 1, and I have no confidence in my intuitions about that (see my comments elsewhere [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38r1] about expected results based on small probabilities of large magnitudes). I consider UFAI given self-improving AGI practically a certainty: >99% of SIAGIs will be UFAIs, and again the important question is whether the number of exceptions is 0 or 1, and whether the exception comes first. (The same thing is true about non-SI AGIs, but I care about that less.) Whether SIAI can influence that last question at all, and if so by how much and in what direction, I haven't a clue about; if I wanted to develop an opinion about that I'd have to look into what SIAI actually does day-to-day.
If any of that is symptomatic of fallacy, I'd appreci
shokwave: There's an argument chain I didn't make clear: "If UFAI is both more possible and more likely than FAI, then influencing this in favour of FAI is a critical goal" and "SIAI is the most effective charity working towards this goal". The only part I would inquire about is this: humans don't have the ability to self-modify (at least, our neuroscience is too underdeveloped to count for that yet), but AGIs will probably be made from explicit programming code, and will probably have some level of command over programming code (it seems like one of the ways in which an AGI would be expected to interact with the world, creating code that achieves its goals). So its architecture is more conducive to self-modification (and hence self-improvement) than ours is. Of course, a more developed point is that humans are very likely to build a fixed AGI if they can. If you're making that point, and not the point that AGIs simply won't self-improve, then I see no issues.
TheOtherDave: Re: argument chain... I agree that those claims are salient. Observations that differentially support those claims are also salient, of course, which is what I understood XiXiDu to be asking for [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38p3], which is why I asked you initially [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38s7] to clarify what you thought you were providing. Re: self-improvement... I agree that AGIs will be better-suited to modify code than humans are to modify neurons, both in terms of physical access and in terms of a functional understanding of what that code does. I also think that if humans did have the equivalent ability to mess with their own neurons, >99% of us would either wirehead or accidentally self-lobotomize rather than successfully self-optimize. I don't think the reason for that is primarily in how difficult human brains are to optimize, because humans are also pretty dreadful at optimizing systems other than human brains. I think the problem is primarily in how bad human brains are at optimizing. (While still being way better at it than their competition.) That is, the reasons have to do with our patterns of cognition and behavior, which are as much a part of our architecture as is the fact that our fingers can't rewire our neural circuits. Of course, maybe human-level AGIs would be way way better at this than humans would. But if so, it wouldn't be just because they can write their own cognitive substrate, it would also be because their patterns of cognition and behavior were better suited for self-optimization. I'm curious as to your estimate of what % of HLAGIs will successfully self-improve?
shokwave (11y, +1): I guess all AGIs that aren't explicitly forbidden to will self-modify (75%); self-modification will mostly start with a backup, since code has this option (95%); and maybe half the backup/compare methods will approve improvements and throw out undesirable changes (50%). So about 35% will self-improve successfully. I also estimate that humans will keep making AGIs until they get one that self-improves.
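The chained estimate above is just the product of shokwave's three stated guesses (the percentages are his subjective estimates, not established figures); a quick calculation confirms the arithmetic:

```python
# shokwave's three rough estimates, multiplied as independent factors
p_attempts_self_mod = 0.75    # AGI not explicitly forbidden to self-modify
p_backs_up_first = 0.95       # self-modification starts from a backup
p_change_approved = 0.50      # backup/compare step keeps only good changes

p_success = p_attempts_self_mod * p_backs_up_first * p_change_approved
print(round(p_success, 2))  # → 0.36, i.e. roughly the quoted 35%
```

Treating the three factors as independent is itself an assumption baked into the estimate.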
JoshuaZ (11y, +1): Really? This seems to ignore that certain structures will have a lot of trouble self-modifying. For example, consider an AI hard-coded onto a silicon chip with a fixed amount of RAM. Unless it is already very clever, there's no way it can self-improve.
TheOtherDave (11y, +5): This actually illustrates nicely some issues with the whole notion of "self-improving." Suppose Sally is an AI on a hard-coded silicon chip with fixed RAM. One day Sally is given the job of establishing a policy to control resource allocation at the Irrelevant Outputs factory, and concludes that the most efficient mechanism for doing so is to implement in software on the IO network the same algorithms that its own silicon chip implements in hardware, so it does so. The program Sally just wrote can be thought of as a version of Sally that is not constrained to a particular silicon chip. (It probably also runs much slower, though that's not entirely clear.)

In this scenario, is Sally self-modifying? Is it self-improving? I'm not even sure those are the right questions.
shokwave (11y, +1): Hard-coding onto chips, or even making specific structures electromechanical in nature, is one way humans might achieve "explicitly forbidden to self-modify" in AIs. I estimated that one in every four AGI projects will want to forbid their project from self-modifying. I thought this was optimistic; I haven't seen any discussion of fixed AGI, although that might be something military research and development is interested in.
JoshuaZ (11y, +1): My point was that even in some cases where people aren't thinking about self-modification, self-modification won't happen by default.
JoshuaZ (11y, +3): This doesn't address the most controversial aspect, which is the claim that AI would go foom. If extreme fooming doesn't occur, this isn't nearly as big an issue. It is an issue many people have discussed, and not all have come away convinced: Robin Hanson had a long debate with Eliezer over this, and Robin was not convinced. Personally, I consider fooming to be unlikely but plausible. But how likely one thinks it is matters a lot.
Rain (11y, +4): Even without foom, AI is a major existential risk, in my opinion.
shokwave (11y, 0): Foom is included in that proof of concept. Human intelligence has made computation faster and faster; a human intelligence sped up could reasonably expect to increase the speed and amount of computation available to it, resulting in faster speeds, and so on.
JoshuaZ (11y, +6): You are repeating what amounts to a single cached thought. The claim in question is that there's enough evidence to convince a skeptic; giving a short line of logic for that isn't at all the same. Moreover, the claim that such evidence exists is empirically very hard to justify given the Yudkowsky-Hanson debate. Hanson is very smart. Eliezer did his best to present a case for AI going foom. He didn't convince Hanson.
shokwave (11y, +1): I'm not allowed to cache thoughts that are right? You seem to be taking "Hanson disagreed with Eliezer" as proof [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38oj] that all the evidence Eliezer presented doesn't amount to FOOM. I'd note here that I started out learning from this site very skeptical, treating "I now believe in the Singularity" as a failure mode of my rationality, but something tells me you'd be suspicious of that too.
JoshuaZ (11y, +4): You are. But when people ask for evidence, it is generally more helpful to actually point to the evidence rather than simply repeating a secondary cached thought that is part of the interpretation of the evidence.

No. I must have been unclear. I'm pointing to the fact that there are people who are clearly quite smart and haven't become convinced by the claim after looking at it in detail. Which means that when someone like XiXiDu asks where the evidence is [http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/38p3], a one-paragraph summary with zero links is probably not going to be sufficient.

I'm not suspicious of it. My own estimate for fooming has gone up since I've spent time here (mainly due to certain arguments made by cousin_it), but I don't see why you think I'd be suspicious. Your personal opinion or my personal opinion just isn't that relevant when someone has asked "where's the evidence?" Our personal opinions with all the logic and evidence drawn out in detail might matter, but that's a very different sort of thing.

I can't speak for anyone else, but I downvoted it because of the deadly combination of:

  • A. Unfriendly snarkiness, i.e. scare-quoting "rationalists" and making very general statements about the flaws of LW without any suggestions for improvements, and without a tone of constructive criticism.

  • B. Incorrect content, i.e. not referencing this article which is almost certainly the primary reason there are so many comments saying "I donated", and the misuse of probability in the first paragraph.

If it were just A, then I could appreciate the comment for making a good point and do my best to ignore the antagonism. If it were just B, then the comment is cool because it creates an opportunity to correct a mistake in a way that benefits both the original commenter and others, and adds to the friendly atmosphere of the site.

The combination, though, results in comments that don't add anything at all, which is why I downvoted srdiamond's comment.

Downvoted parent and grandparent. The grandparent because:

  • It doesn't deserve the above defence.
  • It states obvious and trivial things as though they were deep, insightful criticisms, while applying them superficially.
  • It sneaks extra elements of an agenda through via presumption.

I had left it alone until I saw it given unwarranted praise and a meta karma challenge.

I find it really disheartening every time I come on here to find that a community of "rationalists" is so quick to muffle anyone who disagrees with LW collective opinion.

See the replies to all similar complaints.

XiXiDu (11y, +9): Initially I wanted to downvote you, but decided to upvote you for providing reasons why you downvoted the above comments. The reason I believe the comments shouldn't have been downvoted is that in this case something other than signaling disapproval of poor style and argumentation is more important: this post and thread are especially off-putting to skeptical outsiders, and downvoting critical comments will just reinforce that perception. Therefore, if you are fond of LW and the SIAI, you should account for public relations and kindly answer any critical or generally skeptical comments rather than simply downvoting them.
ata (11y, +9): What is there to say in response to a comment like the one that started this thread? It was purely an outside-view argument that doesn't make any specific claims against the efficacy of SIAI or against any of the reasons people believe it is an important cause. It wasn't an argument, it was a dismissal.
Vaniver (11y, +5): Your post right here seems like a good example. You could say something along the lines of: "This is a dismissal, not an argument; merely naming a bias isn't enough to convince me. If you provide some specific examples, I'd be happy to listen and respond as best I can." You can even tack on: "But until then, I'm downvoting this because it seems to be coming from hostility rather than a desire to find the truth together." Heck, you could even copy that and save it somewhere as a form response to comments like that.
XiXiDu (11y, +4): I've noticed a tendency on LW to portray comments as attacks. They may seem that way to trained rationalists and otherwise highly educated folks, but not every negative comment is actually intended as a rhetorical device or a simple dismissal. It won't help if you just downvote people or call them logically rude. Some people are honestly interested but fail to express themselves adequately. Newcomers usually won't know about the unusually high standards on LW; you have to tell them. You also have to take into account those who are linked to this post, or come across it by other means, and who don't know anything about LW. How does this thread appear to them? What are they likely to conclude, especially if no critical comment is answered kindly but simply downvoted or snidely rejected?
DSimon (11y, +4): Agreed that responding to criticism is important, but I think it's especially beneficial to respond only to non-nasty criticism. Responding nicely to people who are behaving like jerks can create an atmosphere where jerkiness is encouraged.
Vaniver (11y, +1): This is the internet, though; skins are assumed to be tough. There is some benefit to saying "It looks like you wanted to say 'X'. Please try to be less nasty next time. Here's why I don't agree with X" instead of just "wow, you're nasty."
wedrifid (11y, 0): I have noted that trying that sort of response seems to lead to negative consequences more often than not.
Vaniver (11y, +3): Our experiences disagree, then; I can think of many plausible explanations that leave both of us justified, so I will leave it at this.
Kaj_Sotala (11y, −1): I agree that it's been downvoted too much. (At −6 as of this comment, up from −7 due to my own upvote.)
[anonymous] (11y, +1): One data point in the other direction: I donated yesterday, before this was posted, and hadn't particularly been expecting to tell anyone. Once I saw this post, I figured I may as well get rewarded in karma for my existing donation, but it would have already been there whether or not I ended up having a chance to use it to signal cause solidarity.