I think it's unlikely that pengvado is lying -- but if anyone from CFAR is reading this and can confirm this donation, I think that would be a Good Thing.
Pengvado previously commented on a MIRI fundraising post that he "donated 20,000$ now, in addition to 110,000$ earlier this year," which was true.
To positively reinforce CFAR for finally posting this, I'm going to give $750 before the end of 2013. This is separate from my matching funds pledge - treat it like any other donation.
In addition, my employer should match that, for a total of $1,500, or $3,000 once you count the fundraiser's match of both.
UPDATE: Donation made. I'll request the employer match in the next few days.
UPDATE 2: Employer match requested.
I made a $150 donation. I particularly like that effort has gone into making the workshops more accessible. I'm suggesting to my father that he should apply for the February workshop (I am very surprised to have ended up believing it will be worthwhile for him).
Eliezer posted a Facebook status about the fundraiser needing more support, so I was going to donate $1000... but then I saw I would get a PrettyRational print if I donated $1500, so here we are :)
Yes; thank you; we really appreciate it. Monthly contributions are a very good way to help, if anyone's thinking about it; and if you pledge a year's worth of monthly contributions, that whole year counts toward this match.
Great post. I've made it a personal goal to attempt to find 5 high value participants for the Melbourne workshop, and I'll also provide support in the form of accommodation for CFAR instructors and volunteers before/after the February workshop.
Great post - lots of useful information about the program, where it's headed and how it's been going the last few years. Thanks. $150
I donated $100. I'd have donated more, but I had already put somewhere over $3,000 towards attending, and helping someone else attend, the effective altruism conference earlier this year.
Also, I'm about to quit my job and am not sure about my future cash flow situation.
Thanks so much! And thanks for helping with the effective altruism conference last year; I really enjoyed the opportunity to teach and attend there; it made a real difference for me.
Donated 40€. I was going to donate to MIRI or CFAR, and chose CFAR due to this Facebook discussion.
Quick feedback: Thanking people for their contributions is awesome, but with this many people contributing, your thank-yous are completely stomping the "recent comments" section, which makes it harder to keep up with site flow. If you want to publicly thank everyone, a top-level reply to the article twice per day, thanking each of that day's contributors by name, would keep your article front-of-mind on LessWrong and give everyone their deserved recognition without lowering the signal-to-noise ratio.
This is not to disparage your excellent organization or your dedication to it; I will be donating myself ASAP.
The main problem with this is that it makes it cumbersome to send notifications to the people you're thanking. I also feel like your method would come off as more impersonal, distant, and artificial.
I haven't gotten the sense that thanking donors is a huge problem, since funding drives only occur once a year. Perhaps if we had hundreds of donors rather than a few dozen leaving comments. I may be undervaluing the cleanness of the Recent Comments section because I don't use it regularly enough, but my current feeling is that a few minutes of annoyance for Recent Comments browsers is worth it for making an important comment section feel slightly more warm and personable to a much larger and less LW-savvy audience. And for giving Anna and Luke a bit less work.
I've donated £420 since the start of the fundraiser, and intend to donate 10% of my next paycheque too if the goal hasn't been reached by then.
I just donated $100, in large part because of the detailed writeup and because of the many people writing here how much they donated. So thanks everyone!
Donated $105, making my contribution the true baseball bat in the infamous $110 question.
May we get these things right more often.
several mainstream media articles about CFAR on their way, including one forthcoming shortly in the Wall Street Journal
That article's up now -- it was on the cover of the Personal Journal section of the WSJ, on December 31st. Here's the online version: More Rational Resolutions
I think this is a very well written and useful picture of what CFAR is up to. I applaud CFAR for writing this, and it definitely puts me many steps closer to being willing to fund CFAR.
However, one concern of mine is that the altruistic value of CFAR does not seem to me to compare favorably with the value of other organizations expressly focused on do-gooding, like GiveWell or the Centre for Effective Altruism. It seems like CFAR would be a nice thing to fund once those organizations are already more secure in their own funding, but that's not true yet. Any thoughts on this? (As a disclaimer, I have more detailed reservations about funding CFAR that I may discuss if this becomes a conversation, so don't take my raising them later as moving the goalposts.)
I can give you a proof of concept, actual numbers and examples omitted.
Consider a simplified model in which there are only two efficient charities, a direct one and CFAR, and no other helping is possible. If you give your charity budget to the direct charity, you help n people. If instead you give that money to CFAR, it transforms two inefficient givers into efficient givers (or doubles the money an efficient giver like you can afford to give), helping 2n people. The second option gives you more value for money.
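To make the comparison concrete anyway, here is a minimal sketch with entirely made-up numbers (the comment above deliberately omits real ones); the budget and cost-per-person figures below are pure assumptions for illustration:

```python
# Toy model of the comment above: direct giving vs. funding a "multiplier"
# organization. All numbers are illustrative assumptions, not CFAR data.

budget = 1_000                 # your charity budget, in dollars (assumed)
cost_per_person_helped = 100   # assumed cost for the direct charity to help one person

# Option 1: give your budget to the direct charity.
helped_directly = budget / cost_per_person_helped              # n people

# Option 2: give the same budget to the multiplier org, which (by assumption)
# turns two inefficient givers into efficient givers with budgets like yours.
helped_via_multiplier = (2 * budget) / cost_per_person_helped  # 2n people

print(helped_directly, helped_via_multiplier)  # 10.0 vs. 20.0
```

The whole argument rests on the assumed conversion rate: if CFAR reliably creates two new efficient givers per donor-equivalent, the multiplier route wins; if it creates fewer than one, direct giving wins.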
In addition, CFAR is explicitly trying to build a network of competent rational do-gooders, with the expectation that the gains will be more than linear because of division of labor.
Finally, neither CEA nor GiveWell is working (AFAIK) on the problem of creating a group of people who can identify new, nonobvious problems and solutions in domains where we should expect untrained human minds to fail.
CEA and GiveWell are both building communities; GiveWell has more than doubled its community (by measures such as number of donors and money moved, with web traffic growing somewhat more slowly) every year, year after year. Giving What We Can's growth has been more linear, but 80,000 Hours has also had good growth (albeit somewhat less, and over a shorter time).
That makes the bar for something like CFAR much, much higher than your model suggests, although there is merit in experimenting with a number of different models (and the Effective Altruism movement needs to cultivate the "E" element as well as the "A", which something along the lines of CFAR may be especially helpful for).
ETA: I went through more GiveWell growth numbers in this post. Absolute growth excluding Good Ventures (a big foundation that has firmly backed GiveWell) was fairly steady for the 2010-2011 and 2011-2012 comparisons, although growth has looked more exponential in other years.
On reflection, this is an opportunity for me to be curious. The relevant community-builders I'm aware of are GiveWell, 80,000 Hours, CFAR, and Leverage.
Whom am I leaving out?
My model for what they're doing is this:
GiveWell isn't trying to change much about people at all directly, except by helping them find efficient charities to give to. It's selecting people by whether they're already interested in this exact thing.
80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary "rationality infusion," but isn't trying to alter anyone's underlying character in a lasting way beyond that.
CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality, but has so far mainly succeeded in some improvements in personal effectiveness for solving one's own life problems.
Leverage has tried to directly approach the problem of creating a hero-level community, but doesn't seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness.
Do any of these descriptions seem off? If so, how?
PS: Before the recent CFAR workshop I attended, I don't think I would have stuck my neck out and made these guesses in order to find out whether I was right.
Do any of these descriptions seem off? If so, how?
Some comments below.
GiveWell isn't trying to change much about people at all directly, except by helping them find efficient charities to give to. It's selecting people by whether they're already interested in this exact thing.
And publishing detailed analyses and reasoning that get it massive media attention and that draw in and convince people who may have been persuadable but had not in fact been persuaded. It also shares a lot of epistemic and methodological points on its blog and site. Many GiveWell readers and users are in touch with each other and with GiveWell, and GiveWell has played an important role in the growth of EA as a whole, including in people making other decisions (such as founding organizations and changing their career or research plans, in addition to their donations).
80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary "rationality infusion," but isn't trying to alter anyone's underlying character in a lasting way beyond that.
I would add that counseled folk and extensive web traff...
It's a complicated subject, of course, but my own impression is that CFAR is indeed a good place to donate on the present margin, from the perspective of long-term world-improvement, even bearing in mind that there are other organizations one could donate to that are focused on community building around effective altruism.
My reason for this is two-fold:
The SPARC program (for highly math-talented high school...
I've just sent a check for $3,000, scheduled for delivery on Jan 13. CFAR is pending approval for my employer's donation matching program. Once that goes through, my donation will be matched by my employer.
Donated $1,500.
(In part because I realized that while I'm currently as income-deficient as I was last year, I expect that to change soon, and anything I donate now counts toward this year's taxes, so I may as well get an early start.)
In CFAR, MIRI has the ultimate hedge. If the whole MIRI mission is misdirected or wrongheaded, CFAR is designed to create the people who will notice that and do whatever most needs to be done.
I would phrase this more along the lines of "If nothing MIRI does works, or for that matter if everything works but it's still not enough, CFAR tries to get a fully generic bonus on paths unseen in advance."
I mean, the main way CFAR might be able to overcome this isn't by being super extremely unbiased, but by bringing a wide diversity of good thinkers into the network (with diverse starting views, diverse group affiliations, and diverse basic thinking styles). This is totally a priority for us.
A small note/improvement request: Just as I asked last time for MIRI's donation bar (and that one was fixed), it's a minor annoyance for me when the donation bar doesn't indicate when it was last updated -- if I e.g. look at it on January 4 and again on January 7, and it hasn't moved, I'd like to know whether it hasn't moved because it simply hasn't been updated the last few days, or because people haven't been donating the last few days.
Please try to have this minor fix implemented, at least in time for the next donation drive. Many thanks in advance. (As I've already mentioned in another thread, I have donated $1000 to CFAR's current donation drive.)
In 2014, we’ll be devoting more resources to epistemic curriculum development; to research measuring the effects of our curriculum on both competence and epistemic rationality; and to more widely accessible curricula.
I'd love to hear more detailed plans or ideas for achieving these.
we’ll be devoting more resources to epistemic curriculum development
This is really exciting! I think people tend to have a lot more epistemic rationality than instrumental rationality, but that they still don't have enough epistemic rationality to care about x-risk or other EA goals.
Another important comment occurred to me -- sorry it's late.
...During the very first minicamps (the current workshops are agreed to be better) we randomized admission of 15 applicants, with 17 controls. Our study was low-powered and effects on e.g. income would have needed to be very large for us to expect to detect them. Still, we ended up with non-negligible evidence of absence: income, happiness, and exercise did not visibly trend upward one year later. [...] The details will be available soon on our blog (including a much larger number of negative re
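For a sense of what "low-powered" means here, a rough back-of-the-envelope check (an editorial sketch, not CFAR's own analysis) assuming a standard two-sample t-test with conventional thresholds:

```python
# Rough power check for the randomized minicamp study described above
# (15 treated vs. 17 controls). Assumes a two-sample t-test,
# alpha = 0.05 (two-sided), and a target power of 0.8; these are
# conventional defaults, not figures from CFAR's write-up.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
detectable_d = analysis.solve_power(
    effect_size=None,   # solve for the minimum detectable effect size
    nobs1=15,           # treated group
    ratio=17 / 15,      # controls relative to treated
    alpha=0.05,
    power=0.8,
)
print(f"Minimum detectable effect size: d ≈ {detectable_d:.2f}")
# Comes out to roughly d ≈ 1, i.e. only effects on the order of a full
# standard deviation (in income, happiness, etc.) would reliably show up.
```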
It was a bit troublesome to figure out whether the donation would be tax deductible, because the word "deductible" isn't used anywhere on the page you linked to (http://rationality.org/fundraiser2013/). In fact, I almost gave up.
Fortunately, if you go to http://rationality.org/donate/, CFAR says they're a 501(c)(3) organization, although I'm not sure how I'd verify that... And since the IRS has very big teeth, maybe I should figure that out first.
In addition, for this sort of minor question, doing a full-blown Skype conversation probably isn't appropria...
CFAR is a 501(c)(3) tax-exempt organization. The current team has indeed been running things from the beginning; it is simply that, prior to the beginning (prior to any paid staff; prior to me meeting Julia or Val or anyone; prior to deciding that there would be a CFAR), some folk filed for a non-profit "just in case" a CFAR ended up being launched, since the processing time required for getting 501(c)(3) status is large. We have not lost key staff.
Does CFAR feel developed enough that it would prefer money to feedback?
I.e., I presume there are many people out there who could help CFAR either by dedicating a few hours of their time to thinking about how to improve CFAR, or by earning money to donate to CFAR.
I think CFAR feels poor enough to prefer money to feedback.
Also they've tried a lot of the obvious things - I had a conversation with Anna where I suggested about 10 things for CFAR to try, they'd already tried about 9, and the 10th wasn't obviously better than the stuff already on their list. Maybe you're smarter than me, though :)
That preference seems mostly right to me... but I did just get quite a good suggestion by email that I hadn't thought of. If you feel like you know important things, do share.
Having spent a fair amount of time around CFAR staff, in the office and out, I can testify to their almost unbelievable level of self-reflection and creativity. (I recall, several months ago, Julia joking about how much time in meetings was spent discussing the meetings themselves at various levels of meta.) For what it's worth, I can't think of an organization I'd trust to have a greater grasp on its own needs and resources. If they're pushing fundraising, I'd estimate with high confidence that it's because that's where the bottleneck is.
I think donating x hours-worth of income is, with few exceptions, a better route than trying to donate x hours of personal time, especially when you consider that managing external volunteers/having discussions (a perhaps-unpredictable percentage of which will be unproductive) is itself more costly than accepting money.
I'd be willing to guess that the next best thing to donating money would be to pitch CFAR to/offer to set up introductions with high-leverage individuals who might be receptive, but only if that's the sort of thing (you have evidence for believing) you're good at.
Also, sharing information about the fundraising drive via email/Facebook/Twitter/etc. is probably worth the minimal time and effort.
We did one more experiment and have another in the works. The second experiment will be written up, I think, but hasn't been yet. I suspect we'd also love to share the data with you (and possibly more widely, if there aren't anonymization issues; I wasn't closely involved in the experiments and don't know whether there are). I see your unanswered comment back in the thread; I suspect it's just a matter of a small team of somewhat overbooked people dropping a thing.
I helped create CFAR, and work every day in the same office as they do, and I still need to talk with the co-founders for several hours before I understand enough detail about CFAR's challenges and opportunities to have advice that I'm decently confident will be useful rather than something they've already tried, or something they have a good reason for not doing, etc.
Question: what exactly is CFAR doing to encourage do-gooding? Of the three listed goals, my impressions of what CFAR does seem mostly focused on the first two.
(Just one thing that came to mind; I'm sure there are others that Anna et al. can talk about.) People who are looking to do good can get - I guess they're called scholarships? - towards the workshop price. Not only does this hopefully make those looking to do good better and more effective, it also brings those people who aren't thinking about do-gooding as a (life choice? career?) into an environment surrounded by people who are passionate about doing good. The conversations that go on around them are extremely skewed towards that kind of thing, and I think that's likely to be very valuable (and not just to those unfamiliar with EA - I know several people who were inspired by some of those conversations, and some came away with ideas that they're now collaborating on).
...From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there’s a significant chance that at least one key figure in the eventual development of AI will have had amazing math tests in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build
Summary: We outline the case for CFAR, including:
CFAR is in the middle of our annual matching fundraiser right now. If you've been thinking of donating to CFAR, now is probably the best time you'll have for at least half a year: donations up to $150,000 will be matched until January 31st, and Matt Wage, who is matching the last $50,000 of donations, has vowed not to donate unless matched.[1]
Our workshops are cash-flow positive and subsidize our basic operations (you are not subsidizing workshop attendees). But we can't yet run workshops often enough to fully cover our core operations. We also need to do more formal experiments, and we want to create free and low-cost curricula with far broader reach than the current workshops. Donations are needed to keep the lights on at CFAR, fund free programs like the Summer Program on Applied Rationality and Cognition, and let us do new and interesting things in 2014 (see below, at length).[2]
Our long-term goal
CFAR's long-term goal is to create people who can and will solve important problems -- whatever the important problems turn out to be.[3]
We therefore aim to create a community with three key properties: competence, epistemic rationality, and do-gooding.
Our plan, and our progress to date
How can we create a community with high levels of competence, epistemic rationality, and do-gooding? By creating curricula that teach (or enhance) these properties; by seeding the community with diverse competencies and diverse perspectives on how to do good; and by linking people together into the right kind of community.
Curriculum design
Progress to date
Next steps
Forging community
Progress to date
Next steps
Financials
Expenses
Revenue
Donations
Savings and debt
Summary
How you can help
Our main goals in 2014:
Footnotes