LESSWRONG

MIRI's 2013 Summer Matching Challenge

by lukeprog
23rd Jul 2013
2 min read

Machine Intelligence Research Institute (MIRI)
Personal Blog
122 comments, sorted by top scoring
[-]Rain12y670

I continue to donate $1000 per month despite a 20% pay cut.

Reply
[-]Rain12y140

To those discussing monthly vs. lump sum: Luke recognized the issue a few days ago and counted 6 months' worth of my donations toward this matching drive, much as the original commitment was counted at 12 months (now all paid).

Reply
[-]lukeprog12y120

Right.

If you prefer to donate monthly but also want to take advantage of matching, just tell us you're pledging to keep up the monthly donation for at least 6 months, and we'll count 6 months' worth of your monthly donation toward the drive.

Reply
9Leonhart12y
Luke, I've been donating monthly for, um, I think a couple of years now? I so pledge, in addition to anything else I donate. PM me if you need ID to verify this.
2lukeprog12y
PMed.
5lukeprog12y
Thanks very much!
-11Gurkenglas12y
[-]iceman12y560

Wrote a cheque for $5,000.

(I put the redacted image of my donation online because someone else decided to start an ad-hoc fundraising effort for MIRI on FIMFiction.)

Reply
9lukeprog12y
Thanks very much! I would also like to affirm that thread's claim that "if what you really want is ponies, the Truly Friendly AI will in fact give you ponies." ("Really want", of course, requires lots of unpacking.)
6Eliezer Yudkowsky12y
Not quite affirmable; a CEV-based FAI only gives you ponies if, on average, what everyone would-want is to give a pony to someone who would-want one. (Because an individually based mechanism might e.g. look at babies and determine that what they would-want as individuals is eternal feeding and burping.)
0Gurkenglas12y
Can you find a more obviously bad example of the implications of individually-based CEV? I find that if a baby would-want to feed and burp, that's what it should get; and if two parents want to spawn another human with CEV rights, they might not want to spawn it would-wanting initially only to feed and burp.
1iceman12y
Yes, though I find it improbable that they'd Really Want ponies. (Devil's advocate: there are people who participate in the fandom daily, and have big chunks of their identity tied up in being a brony. If there were actually a population where people would Really Want ponies, this would be the one.)
[-]Furcas12y500

Donated 500 USD (~530 CAD).

Reply
9lukeprog12y
Thanks very much!
[-]scotherns12y480

Donated $50 (on top of my automated monthly donation).

Reply
4lukeprog12y
Thanks!
[-]David Althaus12y470

Donated $500.

Reply
4lukeprog12y
Thanks very much!
[-]Eneasz12y450

$200, and finally signed up for monthly donations as well.

Reply
4lukeprog12y
Awesome, thanks!
[-]BT_Uytya12y430

$10.00

I'm a student and this is my second PayPal transaction ever, so I was a bit scared to donate more.

Hopefully my example will inspire somebody else. $10.00 isn't very much, but come on, it's not like it's worse than not donating anything at all.

Reply
[-]somervta12y140

It did. Also a student, just made my first donation. Whaddya know, public announcement of donations really does motivate!

Reply
[-]OnTheOtherHandle12y120

This does make me feel better - thanks. I'm just entering college and don't even have a bank account yet, but your post inspired me to get one fast so I can donate whatever I can afford within the matching window :)

Reply
[-]A1987dM12y100

$10.00 isn't very much, but come on, it's not like it is worse than not donating anything at all.

http://xkcd.com/871/

:-)

Reply
4lukeprog12y
Thanks!
0Adele_L12y
Could be, if the transaction/processing/etc... is more. I think $10 is enough to be a net positive, but I'm not sure where the threshold is.
[-]Malo12y160

Transaction fees for non-profits such as MIRI on PayPal are 2.2% + $0.30, and the processing is automated with our donor database solution so it's definitely net positive :)

Reply
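To make the break-even point Adele_L raised concrete, here's a quick sketch of the fee math using the 2.2% + $0.30 schedule Malo cites above (the helper name is just for illustration):

```python
# Net amount a nonprofit receives from a PayPal donation, using the
# nonprofit fee schedule cited above: 2.2% + $0.30 per transaction.
def paypal_net(amount, rate=0.022, fixed=0.30):
    """Donation amount minus processing fees, rounded to cents."""
    return round(amount - (amount * rate + fixed), 2)

print(paypal_net(10.00))   # a $10 donation nets $9.48
print(paypal_net(100.00))  # a $100 donation nets $97.50
```

At this schedule the fixed $0.30 dominates only for donations under about a dollar, so a $10 donation is comfortably net positive.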
[-]ArisKatsaris12y410

$1,200 donated.

I'd like to remark on something that annoys me: your "donation meter" (at least the one on your site, if not the one in the post above) ought either to be updated daily or, at the very least, note when it was last updated. I find the phrase "raised to date" frustrating when I can't trust that the "to date" is actually current.

Reply
[-]lukeprog12y120

Donation meters updated, courtesy of Malo Bourgon.

Reply
[-]lukeprog12y100

Thanks very much!

I'll ask our web guys if we can add a 'last updated' thing somewhere.

Reply
2Kawoomba12y
Well, one way to be sure would be to donate the remainder.
[-][anonymous]12y390

yadda yadda, continuing to donate 1/3 of my income, yadda yadda

feed me karma

Reply
2lukeprog12y
:)
[-]wmorgan12y370

$3,000.00

Reply
2lukeprog12y
Thanks very much!
[-]Rain12y310

I'd like to say that PMs from private_messaging disparaging this drive and my donations will NOT deter me from funding the mission I feel will help lead to the best possible future.

Reply
[-]Benquo12y300

$1,000 and my employer will match it.

Reply
3lukeprog12y
Thanks very much!
[-]player_0312y300

I donated $1000 and then went and bought Facing the Intelligence Explosion for the bare minimum price. (Just wanted to put that out there.)

I've also left myself a reminder to consider another donation a few days before this runs out. It'll depend on my financial situation, but I should be able to manage it.

Reply
[-]player_0312y210

I've donated a second $1000.

Reply
4lukeprog12y
Thanks again!!
2lukeprog12y
Thanks very much!
[-]So8res12y300

I donated. My employers match charitable donations, though not always in a timely fashion. I'm hoping that their contribution can be further matched.

Reply
[-]lukeprog12y100

Thanks! Who is your employer? We may need to send them some forms. We already have donation matching set up with Google, Microsoft, Boeing, Adobe, Fannie Mae, and several other companies through Network for Good and America's Charities.

You can also contact me privately via email.

Reply
7So8res12y
Google.
[-]So8res12y290

Wow, MIRI is more underfunded than I thought. I donated again, after freeing up some cash.

Reply
3lukeprog12y
Awesome, thanks!
[-]Ozymandias_King12y280

Donated $1,400

Reply
5lukeprog12y
Thanks very much!
[-]Larks12y280

Currently between jobs; donated $100 anyway, as the world is not a story and will not wait for my montage to finish.

Reply
4lukeprog12y
Thanks!
[-]JGWeissman12y270

I donated $5000.

Reply
3lukeprog12y
Thanks very much!
[-]kgalias12y240

Donated $50.

Reply
3lukeprog12y
Thanks!
[-]Kawoomba12y230

$50.

Reply
2lukeprog12y
Thanks!
[-]KnaveOfAllTrades12y200

Donated $50.

Reply
3lukeprog12y
Thanks!
[-]khafra12y200

Donated 0.9766578425 bitcoins, a number I chose since that's Chaitin's Omega for the shortest FAI.

Reply
8Eliezer Yudkowsky12y
(Consults Inverse Chaitin function in Wolfram Alpha.) Actually, is there a definition of Chaitin's Omega for particular programs? I thought it was just for universal Turing machines, or program classes with a measure on them anyway.
4khafra12y
Whoops, that's right. I, ah, may have just unleashed a trolly AI.
1endoself12y
Yes, you can take the probability that they will halt given a random input. This is analogous to the case of a universal Turing machine, since the way we ask it to simulate a random Turing machine is by giving it a random input string.
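For reference, here is a sketch of the standard definitions endoself is generalizing (nothing specific to the thread's FAI joke): Chaitin's Ω is defined for a prefix-free universal machine U, and the analogue for a fixed machine F is its halting probability over random inputs x drawn from a prefix-free set:

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\downarrow} 2^{-|p|},
\qquad
P_F \;=\; \Pr_x\bigl[F(x)\ \text{halts}\bigr] \;=\; \sum_{x \,:\, F(x)\downarrow} 2^{-|x|}.
```

For a universal U, feeding it a random input string is exactly how one asks it to simulate a random Turing machine, which is why the two definitions coincide in that case.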
1khafra12y
Dangit, I should've said "the FAI is Turing-complete, you can carry out arbitrary computations simply by running it in carefully selected universes." With a five orders of magnitude improvement in timing, I could be witty.
4nshepperd12y
Is that the probability that the shortest FAI halts given random input?
2lukeprog12y
Thanks!
[-]LucasSloan12y200

Donated $50.

Reply
2lukeprog12y
Thanks!
[-]Halfwitz12y200

Just curious, whatever happened to EY's rationality books? You invested months of effort into them. Did you pull a sunk cost reversal? Or is publishing them not on the schedule till next year.

Reply
[-]Eliezer Yudkowsky12y150

The drafts came out unexciting according to reader reports. I suspect that magical writing energy ['magic' = not understood] was diverted from the rationality book into the first 63 chapters of HPMOR, which I was doing in my 'off time' while writing the book, and which does have Yudkowskian magic according to readers. HPMOR and CFAR between them used up a lot of the marginal utility I thought we would get from the book, which diminishes the marginal utility of completing it.

Reply
[-]Shmi12y100

Shoulda hired Yvain to retell them :)

Reply
[-]Eliezer Yudkowsky12y360

We tried that experiment, but Yvain was heading off to a new job and his first stab didn't seem to be a quick fix.

Reply
0[anonymous]12y
That makes a lot of sense actually. I can't think of anyone who could do a better job.
5Halfwitz12y
You should consider posting the drafts somewhere. At the very least, we'll get new material to add to the wiki. Wikis don't need 'magic.'
4Kawoomba12y
It's better not to publish than to publish something unpolished. "This is just a draft" wouldn't sufficiently counteract the impression of "I read something of perceived-lower quality, and it's from EY." Publish unpolished and perish, if you will.
[-]Eliezer Yudkowsky12y120

I wasn't especially happy with the reception / effects of publishing the unpolished TDT draft.

Reply
7lukeprog12y
They aren't a priority for this year. We briefly contracted with two different writers who might have been able to finish the books without pulling Eliezer away from his other priorities, but that didn't work. We're still thinking about what is best to do with the drafts.
[-]Simon Fischer12y190

$66, with some help from a friend.

Reply
2lukeprog12y
Thanks!
[-]Rukifellth12y110

I've only just now gotten a job, and may owe my dad too much money to contribute to this donation drive, but I'll see what I can do. If things go as planned, I might be able to give 700 by the deadline.

Also, isn't three weeks something of a short window?

Reply
4lukeprog12y
We announced it to our blog on July 8th, and to our newsletter a bit after that. This is just the first time we mentioned it on LW.
[-]Rukifellth12y210

Paycheck came in, donated the 700!

Reply
0lukeprog12y
Thanks!
0Kawoomba12y
You mean "Thanks very much!", to stay consistent. You're welcome!
[-]ArisKatsaris12y140

I'm still surprised that it took you two weeks before mentioning it on Less Wrong. Was the delay by neglect or design?

Reply
[-]lukeprog12y210

In part, we wanted to learn something about the degree to which donors are following the blog, following our newsletter, or following Less Wrong. I also wanted to be able to link from this post to a forthcoming interview with Benja Fallenstein that explains in more detail what we actually do at the workshops and why, but that was taking too long to complete, so I decided to just hurry up and post.

Reply
[-]somervta12y100

Data point: I was following the blog, which is where I first heard about the drive, but it was the comments on this thread which got me to finally donate.

Reply
6Benya12y
For people stumbling upon this in the future: That interview has now been published. (Sorry to have been the cause of that delay :-/)
0Rukifellth12y
I see, my apologies.
[-]JonahS12y100

The links to Eliezer's Open Problems in FAI papers are broken.

Reply
9Louie12y
Fixed. Thanks.
[-]amcknight12y60

For the goal of eventually creating FAI, it seems work can be roughly divided into making the first AGI (1) have humane values and (2) keep those values. Current attention seems to be focused on the 2nd category of problems. The work I've seen in the first category: CEV (9 years old!), Paul Christiano's man-in-a-box indirect normativity, Luke's decision neuroscience, Daniel Dewey's value learning... I really like these approaches but they are only very early starting points compared to what will eventually be required.

Do you have any plans to tackle the hu... (read more)

Reply
[-]lukeprog12y140

Do you have any plans to tackle the humane values problem?

Yes. The next open problem description in Eliezer's writing queue is in this category.

Reply
[-]Rukifellth12y50

I think small donors should also state their donations amounts of 50-100 dollars. Having counted the medium and large donations in this thread to a rough total of 11,000 dollars, it seems unlikely that the goal is being reached with just those, and I have a feeling there will be some sort of "breaking the ice" effect if small donors chirped up about their chip ins, so to speak. Right now the number of medium and large donors represented in this thread eclipses the smalls.

Reply
[-]ShardPhoenix12y40

I was going to donate (a not-huge amount) but I can't because Paypal won't accept my credit card. Don't know why. I did have a Paypal account that got blocked years ago for some unknown reason that I've never bothered to fix since it requires faxing documentation or some such nonsense.

Reply
4lukeprog12y
Thanks for letting me know. Will you please email malo@intelligence.org and see if he can help?
1ShardPhoenix12y
Ok.
[-]Rukifellth12y30

I wonder, if this community has the allegiance of at least 100 rationalists in the 80th percentile for rationality, how much money could be raised if all of them tried to form separate start-ups as feeder companies for MIRI?

Reply
7Randaly12y
Two attempts to do this are Quixey and Metamed. Quixey is notable for being the only for-profit institution to support MIRI; individual employees of both companies have also donated substantial sums.
6somervta12y
Your Quixey link goes to metamed.
2Rukifellth12y
I grinned at how the two at the bottom seem to have donated just enough to be mentioned. Quixey hasn't been able to pump in as much as I expected though.
6Benya12y
There are two donors which have donated $5,000 (just enough to be mentioned), three which have donated $10,000, one which has donated $15,000, and one which has donated $25,000, which suggests "people like donating such that their total is a multiple of $5,000" as a strong competing hypothesis.
2Rukifellth12y
Touché

(MIRI maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried MIRI staff members.)

Update 09-15-2013: The fundraising drive has been completed! My thanks to everyone who contributed.

The original post follows below...
Thanks to the generosity of several major donors,† every donation to the Machine Intelligence Research Institute made from now until (the end of) August 15th, 2013 will be matched dollar-for-dollar, up to a total of $200,000!  

Donate Now!

Now is your chance to double your impact while helping us raise up to $400,000 (with matching) to fund our research program.

This post is also a good place to ask your questions about our activities and plans — just post a comment!

If you have questions about what your dollars will do at MIRI, you can also schedule a quick call with MIRI Deputy Director Louie Helm: louie@intelligence.org (email), 510-717-1477 (phone), louiehelm (Skype).

[Progress bar: total raised to date]


Early this year we made a transition from movement-building to research, and we've hit the ground running with six major new research papers, six new strategic analyses on our blog, and much more. Give now to support our ongoing work on the future's most important problem.

Accomplishments in 2013 so far

  • Released six new research papers: (1) Definability of Truth in Probabilistic Logic, (2) Intelligence Explosion Microeconomics, (3) Tiling Agents for Self-Modifying AI, (4) Robust Cooperation in the Prisoner's Dilemma, (5) A Comparison of Decision Algorithms on Newcomblike Problems, and (6) Responses to Catastrophic AGI Risk: A Survey.
  • Held our 2nd and 3rd research workshops.
  • Published six new analyses to our blog: The Lean Nonprofit, AGI Impact Experts and Friendly AI Experts, Five Theses..., When Will AI Be Created?, Friendly AI Research as Effective Altruism, and What is Intelligence?
  • Published the Facing the Intelligence Explosion ebook.
  • Published several other substantial articles: Recommended Courses for MIRI Researchers, Decision Theory FAQ, A brief history of ethically concerned scientists, Bayesian Adjustment Does Not Defeat Existential Risk Charity, and others.
  • Published our first three expert interviews, with James Miller, Roman Yampolskiy, and Nick Beckstead.
  • Launched our new website at intelligence.org as part of changing our name to MIRI.
  • Relocated to new offices... 2 blocks from UC Berkeley, which is ranked 5th in the world in mathematics, and 1st in the world in mathematical logic.
  • And of course much more.

Future Plans You Can Help Support

  • We will host many more research workshops, including one in September in Berkeley, one in December (with John Baez attending) in Berkeley, and one in Oxford, UK (dates TBD).
  • Eliezer will continue to publish about open problems in Friendly AI. (Here is #1 and #2.)
  • We will continue to publish strategic analyses and expert interviews, mostly via our blog.
  • We will publish nicely-edited ebooks (Kindle, iBooks, and PDF) for more of our materials, to make them more accessible: The Sequences, 2006-2009 and The Hanson-Yudkowsky AI Foom Debate.
  • We will continue to set up the infrastructure (e.g. new offices, researcher endowments) required to host a productive Friendly AI research team, and (over several years) recruit enough top-level math talent to launch it.
  • We hope to hire an experienced development director (job ad not yet posted), so that the contributions of our current supporters can be multiplied even further by a professional fundraiser.

(Other projects are still being surveyed for likely cost and strategic impact.)

We appreciate your support for our high-impact work! Donate now, and seize a better than usual chance to move our work forward.

If you have questions about donating, please contact Louie Helm at (510) 717-1477 or louie@intelligence.org.

† $200,000 of total matching funds has been provided by Jaan Tallinn, Loren Merritt, Rick Schwall, and Alexei Andreev.