Summary: This is an FAQ on the AI Safety GiveWiki at ai.givewiki.org. It’s open to anyone, and we’re particularly trying to attract s-risk and x-risk projects at the moment! Some of the questions, though, apply to impact markets more generally. This document will give you an overview of what we’re building and where we’re hoping to go with it.
But before we jump into the FAQ, a quick announcement:
We are looking for new projects and expressions of interest from donors!
Please let me know if you have any further questions, below or in a call.
Donors and grant applicants face the following three problems at the moment:
This is our solution:
There are a host of other benefits in various specific scenarios. You can read about them on our blog.
Eventually we want to grow this into an ecosystem akin to the voluntary carbon credit market (phase 3). But for now only phase 1 is relevant.
What we call a project is some set of actions that creators or charity entrepreneurs plan to carry out within some time frame. Good examples are blog articles, scientific papers, campaigns, courses, etc.
Whole charities (like the whole of the Center on Long-Term Risk rather than any one piece of research) are a bit of an awkward edge case because they don’t have any obvious “completion date,” but they qualify too. We’re considering methods for evaluating them as well.
We call someone a creator or charity entrepreneur if they publish a project on our website to fundraise for it. They are usually researchers, founders, entrepreneurs, etc. Founder would be another obvious choice, but the term creator is more general and harder to confuse with funder than founder is.
Both of them give money to projects, but their ambitions are different.
Generalist donors either don’t have the time or the specialized knowledge to evaluate projects. They want to use the impact market like a black box, a charity evaluator that makes recommendations to them.
Specialist donors have the time or the special knowledge to form a first-hand opinion on projects – be it because they are experts in a relevant field, because they are experts in startup picking, or simply because they are friends with the people who run a particular project. They use the impact market to make recommendations and thereby leverage the donations of the donors who rely on them. They may also be after the prizes that funders might provide!
Funders are basically large donors. They can behave exactly like other donors, but they can also provide prizes to incentivize other donors.
The score is computed in three steps:
Each project has an end date. The project creator can edit this date in case things take longer than planned, but at some point the date will be in the past and the project will really be complete. At this point it can apply to be evaluated.
We want to get a number of evaluators on board to consider the project artifacts and pass judgment on them.
The focus here will not be to make great contributions to priorities research but rather, if the project is (say) a book, to establish whether the book got written at all and whether it looks like someone has put effort into it. Ethical value judgments will be embedded in these assessments but we can hopefully find multiple evaluators so that, in controversial cases, the scores can average out.
The guideline for calibrating the scores will be something along the lines of: Suppose this book/paper/etc. didn’t exist, and I wanted to make it happen. How much would I have to pay? Or conversely, for harmful projects: If there were a fairy who let me undo this book/paper/etc., how much would I pay to have it undone?
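To make the averaging concrete, here is a minimal sketch, assuming evaluators submit replacement-cost estimates in dollars; the function name and figures are hypothetical and not our production scoring code:

```python
# Hypothetical illustration: combine evaluators' replacement-cost estimates
# (in USD) into one project value. Positive numbers answer "how much would I
# have to pay to make this happen?"; negative numbers answer "how much would
# I pay to have it undone?" All names and figures are made up.

from statistics import mean

def project_value(estimates_usd: list[float]) -> float:
    """Average several evaluators' replacement-cost estimates."""
    return mean(estimates_usd)

# Three evaluators assess a finished book; one considers it mildly harmful.
estimates = [60_000.0, 45_000.0, -5_000.0]
print(round(project_value(estimates)))  # 33333
```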
There have been changes to the funding landscape in 2022. Such vicissitudes keep our long-term plans in flux. But at the moment we’re aiming to create a market that is similar to the voluntary market for carbon credits. (These are also called “carbon offsets,” but the term “offset” would be confusing in our context.)
What we’ve been calling a “project” is something that can issue one or more impact certificates. Our platform still lists the existing certificates, but that’s merely an archive at this point. There is a chance we might return to this format, especially if we choose to found a nonprofit branch of our organization, but for the moment we have no such plans.
We’ve encountered three problems with impact certificates:
Finally – and this hasn’t become a problem but would have – a lot of interesting financial instruments, such as perpetual futures, will remain inapplicable to impact certificates because each one of them is doomed to have very little liquidity. Most projects on the GiveWiki will require some $10–100k in seed funding to get off the ground. The fully diluted market cap of even the most successful projects will probably almost never exceed $1–10m. The circulating supply will be much less still. Such assets are a good fit for bonding curve or English auctions but it would be useless to try to set up order books, indices, and futures markets for them. We previously hoped to bucket them to alleviate this problem. Impact credits will hopefully one day serve this purpose.
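For intuition, a bonding curve prices each share algorithmically from the current supply, so it needs no order book and tolerates very thin trading. Here is a minimal hypothetical sketch with made-up parameters, not a spec of any auction we ran:

```python
# Hypothetical linear bonding curve: the price of each additional share rises
# with the number of shares already issued, so no order book is needed.
# Parameters are made up for illustration.

def share_price(shares_issued: int, base_price: float = 1.0, slope: float = 0.01) -> float:
    """Price of the next share once `shares_issued` shares exist."""
    return base_price + slope * shares_issued

def cost_to_buy(n: int, shares_issued: int = 0) -> float:
    """Total cost of buying n shares starting from the current supply."""
    return sum(share_price(shares_issued + i) for i in range(n))

print(share_price(0))     # 1.0   -- the very first share
print(cost_to_buy(100))   # 149.5 -- the first hundred shares
```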
Hence, we’ve removed impact certificates from our plans and introduced projects instead, which are perfectly laissez faire about their definition. We’ve also opted to allow no trade of anything that can be turned into dollars. We might reboot markets for impact certs when the overall conditions change.
You can think of the donor score as analogous to the total value that you would hold in retired certificate shares if we still had those. (“Retired,” a.k.a. “consumed” or “burned,” shares are ones that cannot be sold anymore.) But it’s probably just confusing to think of it that way.
The only monetary rewards that donors may receive are prizes if they make it to the top of our donor ranking.
That hinges on how much promise your project has.
Let’s say it has a lot.
To wit: As soon as you have any fundraising success, you can leverage it to build greater success. The platform even does it for you!
Basically anything goes… so long as it’s legal and not super risky!
We review every project and eliminate any that seem to us like they might be harmful. But please also make sure yourself that you don’t include any classified information or info hazards in the description because all projects are public. (Would you like to make your project only accessible to logged-in users? Send us a message through the Intercom button in the bottom right to indicate your interest in this feature!)
The ideal project is something finite that produces artifacts. Our evaluators will have an easy time with projects that fundraise for books, articles, or papers because they can read them to assess them. They’ll have a hard time with projects that are about whole organizations because an organization typically does a lot and they also can’t look into the future to know what great things the organization might still accomplish. Expect organizations to be undervalued, not because they suck but because so much of what they do is shrouded by the future and closed office doors.
There’s no required format. So if you already have a funding application lying around because you already applied for funding from some foundation, then just copy-paste or link it.
Other than that, you just need to enter a title and someplace where people can send you their donations, such as a PayPal or Stripe page. You can add some tags to make it easier for your project to be found. All in all this should take no longer than 5 minutes.
If you have no application written up yet, it’ll take longer. It’s up to you how comprehensive you want to make it. One thing I like to do is to write down just the essentials (if it’s short, it’s more likely to get read too), and then to include a link to a site where people can book a call with me to learn more.
Alternatively, they also have the option to ask questions in the Q & A section. No need to preemptively answer every possible question in your description when you can just respond to the ones that actually come up.
Amber Dawn might also help you with the writing.
Have you supported any charities early on that later made it big?
I, for one, would love to know what fledgling organizations you support today so I can get in early too. And for you that means that suddenly your donations count for more!
Some of my friends donate up to $100,000 per year. They don’t have much time to research their donations, so they, too, would love to know about that fledgling organization that you support. Even if you just donate $100 to the organization, your $100 might leverage $100,000 from the donors who trust your judgment!
That’s one thing that the platform can do for you.
Another is that we’re hoping for larger funders to come in and to reward our top donors. They might opt for cash prizes or for regranting prizes. Either way you’ll have a lot more money to give away if you unlock any of those prizes!
My hope is that eventually a substantial number of people can turn donating into their full-time job. They make small but really smart donations, earn high scores as a result, and then make it all back several times over from the prizes that they win.
As of early 2023 we’re not there yet, but you might as well start building up your score already.
A Kickstarter type of system would solve this, right? We can’t easily implement such “assurance contracts” ourselves, but we can help you coordinate: We could offer a way for donors to pledge that they want to donate $x if all donors together pledge to donate $X. Then once the sum of all pledges reaches $X, you’ll all get notified and can dispatch your donations.
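A minimal sketch of how such pledge tracking could work, assuming a simple threshold rule; the data structures and figures are hypothetical, since the feature doesn’t exist yet:

```python
# Hypothetical assurance-contract helper: collect pledges toward a project's
# threshold and report when it is reached so all pledgers can be notified.

from dataclasses import dataclass, field

@dataclass
class PledgeDrive:
    threshold_usd: float                                      # the $X to reach
    pledges: dict[str, float] = field(default_factory=dict)   # donor -> $x

    def pledge(self, donor: str, amount_usd: float) -> None:
        self.pledges[donor] = self.pledges.get(donor, 0.0) + amount_usd

    def total(self) -> float:
        return sum(self.pledges.values())

    def is_triggered(self) -> bool:
        return self.total() >= self.threshold_usd

drive = PledgeDrive(threshold_usd=50_000)
drive.pledge("alice", 10_000)
drive.pledge("bob", 45_000)
print(drive.is_triggered())  # True -- notify everyone to dispatch their donations
```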
Does that sound interesting? Please let us know, e.g., through the Intercom button in the lower right. We’ll prioritize the feature more highly.
You want to donate but maybe you don’t have time to do a lot of research or you want to donate in a field where you don’t have the requisite background knowledge. Hence you’re dependent on friends, funds, or charity evaluators to suggest good giving opportunities.
But all of these have limitations: Your friends probably know of many of the same giving opportunities as you do, so you might be overlooking even better ones. The same is true of funds, though they receive applications, which alleviates the problem. Conversely, you may know and trust them less than some of your friends, and the track record of retrospective self-evaluation at funds is thin. Finally, charity evaluators have a wholly different set of limitations: They put a lot of effort into their evaluations, so they can’t cover projects whose funding gaps are too small to warrant that effort. Plus, charity evaluators don’t exist for many cause areas.
We want to solve that for you. All you need to do when you want to donate is to turn to our platform. You can:
Today we’re just getting started, but over the coming months we want to establish a new, bottom-up, grassroots type of funding allocation mechanism that scales down to the smallest projects, is fully meritocratic, and doesn’t know geographic limits.
Our plan is to hand off power to top donors gradually. At first, all their forecasting will bottom out at the judgments of impact evaluators whom we will hire. That will ensure that the resulting judgments are sophisticated and altruistic, but it will not immediately steel us against our own biases. Later we want to recruit impact evaluators from among our top donors, increasing the organic, bottom-up meritocracy of the platform.
Then we want to transition to phase 2 of our rollout. Phase 2 will gradually put top donors on the same footing as evaluators until most evaluation is done by top donors. But even then our evaluators will still be around to steer the platform as needed and make sure it is not usurped by any amoral top donors.
We ask projects to publish their fundraising goals and stretch goals. If they have not done so, please ask them for that information in the Q & A section.
You can use the platform like any other donor to find great, new funding opportunities.
But we also have a special function for you: You can basically rent our top donors by offering regranting budgets to them. Those serve the dual purpose that (1) you’ll get top grantmaker talent for free, maybe even top grantmaker talent whose networks are relatively uncorrelated with yours, and (2) by announcing such a prize, you create an incentive for prospective top donors to show up and try to prove their mettle.
If that sounds interesting to you, please get in touch, e.g., through the Intercom button to the lower right or via hi@givewiki.org.
Are you? If so, we can easily build a custom score for you. You score the projects, and we aggregate all of your project scores into your own custom donor ranking. Please get in touch if that sounds interesting to you, e.g., through the Intercom button to the lower right or via hi@givewiki.org.
We’ve termed this problem the “Retrofunder’s Dilemma.” It’s easy to imagine a world in which there are several funders – just too many for them all to be really chummy with each other – who all insist on extremely niche scoring rules to make sure that they don’t reward any donations to good deeds that anyone else might reward too. But that would leave exactly the most uncontroversially good deeds unrewarded.
We’re far from this being a problem for our rewarding (a.k.a. retrofunding) at all, and even farther from it becoming a greater problem for retrofunding than it already is for prospective funding. But if it does become a problem, the article linked above lists three remedies that funders can implement and four that charity entrepreneurs can implement, or that we can implement for them to establish coordination.
Not really, sort of in the way that airplanes didn’t replace bikes. We think that impact markets will be best suited for funding the long tail of small, young, speculative startup charity projects. But they will be rather uninteresting for projects with strong track records or otherwise safe, reliable prospects of success. They will also be uninteresting for projects that require a lot of funding from the get-go.
You can read more about the math behind these considerations on our blog.
The basic idea is that projects that are > 90% likely to succeed (according to some metric of success that the funder uses) don’t leave much room for an investor to make a profit while reducing the risk further for the funder.
Additionally, a risk-neutral funder is only interested in a risk reduction if it moves an investment from the space of negative expected value to the space of positive expected value. If a project is already 90% likely to succeed, it would have to be very expensive before it could become negative EV for a funder. Such an expensive project is then easily worth the time of the funder to evaluate prospectively rather than retrospectively.
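A toy expected-value calculation of that reasoning; all probabilities and dollar amounts below are invented for illustration:

```python
# Toy expected-value comparison for a risk-neutral funder. All numbers invented.

def expected_value(p_success: float, value_if_success: float, cost: float) -> float:
    """EV of funding a project prospectively."""
    return p_success * value_if_success - cost

# Speculative project: 20% chance of $1M worth of impact for a $150k grant.
print(expected_value(0.20, 1_000_000, 150_000))  # 50000.0 -- barely positive,
# so investors who absorb the risk can flip the funder's decision.

# Near-certain project: 90% chance of the same impact for the same grant.
print(expected_value(0.90, 1_000_000, 150_000))  # 750000.0 -- clearly positive
# already, so shifting the residual 10% risk onto investors changes little.
```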
So impact markets (with risk-neutral funders) are most interesting for:
If highly risk-averse funders are involved, though, they may be happy to pay a disproportionate fee for a risk reduction from 10% to 0%! There are also funders who are limited by their by-laws to only invest in certain types of low-risk projects. In some cases impact markets may present a loophole for them to do good more effectively without incurring any illicit risks.
No. The financial markets have developed over the course of more than a century and are accompanied by legislation that is usually phrased in such generic terms that it is nigh impossible to create a separate financial apparatus outside of it. Many cryptocurrency projects have tried to create market mechanisms beyond the reach of the law, but the law typically disagreed. More recently, there has instead been a stronger push to welcome regulation and to reform the law to facilitate it.
We therefore consider it infeasible to try to replace the existing financial systems. Rather our goal is to create systems that reward the creation and maintenance of public, common, and network goods while interfacing with the existing financial systems in standard, regulated ways. (The closest parallel is the voluntary market for carbon credits.)
[This section has not been rewritten for the new “impact credits” approach. The differences are probably minor.]
We’ve been trying to get an idea of how good impact markets are by putting some rough estimates into a Guesstimate model, but a lot of the factors are multiplicative and all of them are hard to guess, so the variance of the result is very wide.
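To see why a product of several hard-to-guess factors yields such a wide interval, here is a rough Monte Carlo sketch; the lognormal factors are invented for illustration and are not the actual inputs of our Guesstimate model:

```python
# Rough illustration of why a product of uncertain factors has a very wide
# 90% interval. The distributions below are invented, not our Guesstimate inputs.

import math
import random

def lognormal_factor(median: float, spread: float) -> float:
    """Sample a positive factor with the given median and multiplicative spread."""
    return median * math.exp(random.gauss(0.0, spread))

samples = sorted(
    lognormal_factor(5, 1.0)      # e.g. more projects reached
    * lognormal_factor(10, 1.0)   # e.g. better project selection
    * lognormal_factor(10, 1.0)   # e.g. leverage on funder money
    for _ in range(100_000)
)
# The 5th and 95th percentiles span several orders of magnitude around the median of 500.
print(samples[5_000], samples[95_000])
```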
Some key benefits are:
The result of the model is that impact markets are unlikely to improve the current efficiency by less than 60x or by more than 11,000x.
We think that this range is likely biased upward:
You can find further discussion of the model in the comments on this post.
The biggest concern that we’ve had from the beginning in 2021 is that prize contests (such as impact markets) are general purpose: Anyone can use them – to incentivize awesome papers on AI safety or to incentivize terrorist attacks. In fact, promises of rewards in heaven could count as prizes. If we create tooling to make prize contests easier, there is the risk that said tooling will be used by unscrupulous actors too. The very concept of the prize contest could also count as an attention hazard.
Here is a summary of all of the risks that we’ve identified and our mitigation strategies.
A rich terrorism funder could, for example, copy our approach and build an analogous platform where they promise millions of dollars to donors who fund speculative approaches to terrorism, such as terror attacks that only work out in 1 in 10 attempts. We would not allow such projects on our platform or a scoring procedure with such goals, but that doesn’t keep terrorists from building their own clone of our platform.
This doesn’t need to be obviously ill-intentioned (though terrorists probably also consider themselves to be heroes). You could imagine someone cloning our platform to fund grassroots nuclear fusion research, which might lead to accidental nuclear chain reactions in the basements of hobby physicists in densely populated cities around the world.
The Impact Attribution Norm alleviates this problem to (roughly) the extent to which it is adopted (see the question about measurement above). Yet it is not obvious that it will reliably be applied the way we would like to see it applied. This article is a good summary of its limits. See also our comment. Consider for example:
These risks mostly seem like “black swan” risks to us – deleterious but highly unlikely risks. We’re also quite confident that we can prevent them from happening on our platform by carefully moderating all activity.
Finally, there is always the question of how easy it already is for unscrupulous actors to achieve their ends and why they are not doing so already. It is quite easy for an unscrupulous millionaire to promise a big reward for something like nuclear fusion simply by tweeting it. But this is not currently a major problem. So the legal safeguards (or some other mechanisms) that also apply to our solution must be working fairly well. That said, we’re not relying solely on them.
Seemingly, the best outcome for funders is to incentivize excellent work with the promise of a prize, then never pay it out, and instead put the money into prospective funding of additional impactful work.
That is a shortsighted strategy as no one will trust a funder again if they’ve pulled this trick once. I would go further and suggest that donors should not rely on new funders to pay up unless they have a history of being trustworthy. For funders this means that it’s probably in their interest to gradually ramp up their prizes, so that they can build up trust more cheaply. Another option is escrow.
Eventually we hope to have tradable impact credits so that donors can assume that any funder who suddenly vanishes will leave the price at an unexpectedly low level, which other funders will immediately use to “buy the dip.”
This article touches on this question. I don’t think it’s important whether there is always more evidence of impact at a later point. Impact markets will just be most interesting for projects for which that is true.
The second part of the answer is that we think that there is a substantial number of projects for which this is true.
An example: You can usually divide your uncertainty about how a project – say, a book – turns out into two multiplicative parts: the probability that the book gets written at all and the distribution over the impact of the finished book if it gets written. Once you know whether the book got written, that product collapses into just the second factor (minus the “if …”).
This only goes through if you take uncertainty to mean something like the difference between the best and the worst or the 99th and the 1st percentile outcomes, which may be a bit unintuitive. If you think of uncertainty as variance, and your fully written book has either an extremely positive or an extremely negative impact, then added uncertainty over whether the book has really been written adds another cluster of neutral outcomes in the middle between the extremes. It does not reduce (or increase) the difference between the extremes, but it does reduce the variance.
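A quick numerical check of that distinction, with purely illustrative outcome values:

```python
# Illustrative check: adding a "book never gets written" outcome (impact 0)
# between two extreme outcomes reduces the variance but not the range.

from statistics import pvariance

# Finished book: equally likely to be extremely good (+100) or extremely bad (-100).
finished = [100, -100] * 50

# Now add a 50% chance that the book never gets written (impact 0).
with_writing_risk = finished + [0] * 100

for name, outcomes in [("finished", finished), ("with writing risk", with_writing_risk)]:
    print(name, "range:", max(outcomes) - min(outcomes), "variance:", pvariance(outcomes))
# The range stays 200 in both cases; the variance drops from 10000 to 5000.
```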
The whole point of impact markets is to decentralize funding – so might they perversely centralize it instead? The argument goes that the current scoring rule allows for truly exceptionally good donations – the first donation to a project as amazing as Evidential Cooperation in Large Worlds might’ve been a substantial one. Whoever that donor might be, they’d get an enormous score boost even though they were only right once. This boost might push them to the top of our ranking for many years until enough other donors have gradually accrued comparable scores. That seems unlikely but also undesirable.
One variation that we’re trialing is a score that does not take the size of a donation into account but only how early it was made. Every project has a first donation, so under this rule even the first donation to a great project can no longer stand out the way a substantial first donation could have under the size-weighted scoring rule.
Another remedy is to have scores decay over time. One solution we’re trialing is to have a score that only takes into account donations from the past year.
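A minimal sketch of the two variations, with hypothetical donation records and weighting choices (the real scoring code differs):

```python
# Hypothetical illustrations of the two scoring-rule variations described above.
# Donation records and weighting choices are made up.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Donation:
    project: str
    donor: str
    amount_usd: float
    when: date

def earliness_only_score(donations: list[Donation], donor: str) -> float:
    """Ignore donation size; reward only how early the donor gave to each project."""
    score = 0.0
    for project in {d.project for d in donations}:
        ordered = sorted((d for d in donations if d.project == project), key=lambda d: d.when)
        for rank, d in enumerate(ordered, start=1):
            if d.donor == donor:
                score += 1.0 / rank  # earlier donations count more
    return score

def recent_only_score(donations: list[Donation], donor: str, today: date) -> float:
    """Only count the donor's donations from the past year, weighted by size."""
    cutoff = today - timedelta(days=365)
    return sum(d.amount_usd for d in donations if d.donor == donor and d.when >= cutoff)

history = [
    Donation("ECL explainer", "alice", 100, date(2022, 3, 1)),
    Donation("ECL explainer", "bob", 5_000, date(2022, 9, 1)),
]
print(earliness_only_score(history, "alice"))               # 1.0 -- first donor
print(recent_only_score(history, "bob", date(2023, 2, 1)))  # 5000
```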
We’ll keep monitoring this potential issue and react in case it does manifest.