Dawn Drescher

I’m working on Impact Markets – markets to trade nonexcludable goods.

If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.

Pronouns: Ideally they. But he/she and gender-neutral neopronouns are fine too.

Comments

Does this include all donors in the calculation or are there hidden donors?

Donors have a switch in their profiles where they can determine whether they want to be listed or not. The top three in the private, complete listing are Jaan Tallinn, Open Phil, and the late Future Fund, whose public grants I've imported. The total ranking lists 92 users. 

But I don't think that's core to understanding the step down. I went through the projects around the threshold before I posted my last comment, and I think it's really the 90% cutoff that causes it, not a big donor who has donated to the first 22 projects but not to the rest.

There are plenty of projects in the tail that have also received donations from a single donor with a high score – but more or less only that, so that said donor has > 90% influence over the project and will be ignored until more donors register donations to it.

Okay, so the support score is influenced non-linearly by the donor score.

By the inverse rank in the ranking that is sorted by the donor score. So the difference between the first-ranked donor and the second-ranked donor is 1 in terms of the influence they have.

TL;DR: Great question! I think it mostly means that we don't have enough data to say much about these projects. So donors who've made early donations to them can register them and boost their project score.

  1. The donor score relies on the size of the donations and their earliness in the history of the project (plus the retroactive evaluation). So the top donors in particular have made many early, big, and sometimes public grants to projects that panned out well – which is why they are top donors.
  2. What influences the support score is not the donor score itself but the inverse rank of the donor in the ranking that is ordered by the donor score. (This corrects the outsized influence that rich donors would otherwise have, since I assume that wealth is Pareto distributed but expertise is maybe not, and is probably not correlated at quite that extreme level.)
  3. But if a single donor has more than 90% influence on the score of a project, they are ignored, because that typically means that we don't have enough data to score the project. We don't want a single donor to wield so much power.

Taken together, our top donors have (by design) the greatest influence over project scores, but they are also at a greater risk of ending up with > 90% influence over the project score, especially if the project has so far not found many other donors who've been ready to register their donations. So the contributions of top donors are also at greater risk of being ignored until more donors confirm the top donors' donation decisions.
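
For concreteness, here's a minimal Python sketch of that mechanism as I've just described it – purely illustrative, with made-up names, not the actual GiveWiki implementation:

```python
# Illustrative sketch only – not the actual GiveWiki code.

def support_score(project_donor_ids, donor_scores, cutoff=0.9):
    """project_donor_ids: donors who registered donations to the project.
    donor_scores: donor id -> donor score (donation size, earliness,
    retroactive evaluations)."""
    # Influence is the inverse rank in the donor-score ranking, so the gap
    # between the #1 and the #2 donor is exactly 1, however rich #1 is.
    ranking = sorted(donor_scores, key=donor_scores.get, reverse=True)
    influence = {donor: len(ranking) - i for i, donor in enumerate(ranking)}

    weights = {d: influence[d] for d in project_donor_ids}
    total = sum(weights.values())
    if total == 0:
        return 0.0

    # A donor with > 90% of the influence on a project is ignored – a sign
    # that we don't have enough data on the project yet.
    return float(sum(w for w in weights.values() if w / total <= cutoff))
```

So a project whose only registered donor is a top donor scores 0 until enough other donors register donations to push that donor's share of the influence below the cutoff.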

"GiveWiki" as the authority for the picker, to me, implied that this was from a broader universe of giving, and this was the AI Safety subset.

Could be… That's not so wrong either. We rather artificially limited it to AI safety for the moment to have a smaller, more sharply defined target audience. It also had the advantage that we could recruit our evaluators from our own networks. But ideally I'd like to find owners for other cause areas too and then widen the focus of GiveWiki accordingly. The other cause area where I have a relevant network is animal rights, but we already have ACE there, so GiveWiki wouldn't add so much on the margin. One person is interested in either finding someone for, or themselves taking responsibility for, a global coordination/peace-building branch, but they probably won't have the time. That would be excellent though!

No biggie, but I'm sad there isn't more discussion about donations to AI safety research vs more prosaic suffering-reduction in the short term.

Indeed! Rethink Priorities has made some progress on that. I need to dig into the specifics more to see whether I need to update on it. The particular parameters they discuss in the article haven't been central to my own reasoning, but it's quite possible that animal rights wins out even more clearly on the basis of the parameters I've been using.

It says “AI Safety” later in the title. Do you think I should mention it earlier, like “The AI Safety GiveWiki's Top Picks for the Giving Season of 2023”?

Thanks so much for the summary! I'm wondering how this system could be bootstrapped in the industry using less powerful but current-levels-of-general AIs. Building a proof of concept using a Super Mario world is one thing, but what I would find more interesting is a version of the system that can make probabilistic safety guarantees for something like AutoGPT so that it is immediately useful and thus more likely to catch on. 

What I'm thinking of here seems to me a lot like ARC Evals with probably somewhat different processes – humans doing tasks that should, in the end, be automated. But that's just how I currently imagine it after a few minutes of thinking about it. Would something like that be so far from OAA as to be uninformative toward the goal of testing, refining, and bootstrapping the system?

Unrelated: Developing a new language for the purpose of the world modeling would introduce a lot of potential for bugs and there'd be no ecosystem of libraries. If the language is a big improvement over other functional languages, has good marketing, and is widely used in the industry, then that could change over the course of ~5 years – the bugs would largely get found and an ecosystem might develop – but that seems very hard, slow, risky, and expensive to pull off. Maybe Haskell could do the trick too? I've done some correctness proofs of simple Haskell programs at the university, and it was quite enjoyable.

Hiii! You can toggle the “Show all” switch on the projects list to see all publicly listed projects. We try to only rank, and thereby effectively recommend, projects that are currently fundraising, so projects that have any sort of donation page or widget that they direct potential donors to. In some cases this is just a page that says “If you would like to support us with a donation, please get in touch.” When the project owner adds a link to such a page in the “payment URL” field, the project switches from “Not currently accepting donations” to “Accepting donations” and is visible by default. In the case of Lightcone, we couldn't find any such page.

The Lightcone project is currently still owned by us, which is a stopgap. I see that you already have an account on the platform. Can I assign the project to you so you can add or remove the donation link as you see fit? Thanks!

Oh, haha! I'll try to be more concise!

Possible crux: I think I put a stronger emphasis on attribution of impact in my previous comment than you do, because to me that seems like both a bit of a problem and solvable in most cases. When it comes to impact measurement, I'm actually (I think) much more pessimistic than you seem to be. There's a risk that EV is just completely undefined even in principle, and even if that turns out to be false, or we can use something like stochastic dominance instead to make decisions, that still leaves us with a near-impossible probabilistic modeling task.

If the second is the case, then we can probably improve the situation a bit with projects like the Squiggle ecosystem and prediction markets, but it'll take time (which we may not have) and will be a small improvement. (An approximate comparison might be that I think we can still do somewhat better than GiveWell, especially by not bottoming out at bad proxies like DALYs and by handling uncertainty more rigorously with Squiggle, and that we can do as well as that in more areas. But not much more, probably.)

Conversely, even if we have roughly the same idea of how much the passing of time helps in forecasting things, I'm more optimistic about it, relatively speaking.

Might that be a possible crux? Otherwise I feel like we agree on most things, like desiderata, current bottlenecks, and such.

It seems very important to consider how such a system might update and self-correct.

Argh, yeah. We're following the example of carbon credits in many respects, and in that space there are some completely unnecessary issues whose impact market equivalents we need to prevent. It's too early to think about this now, but when the time comes, we should definitely talk to insiders of the space who have ideas about how it should be changed (but probably can't change it anymore) to prevent the bad incentives that have probably caused those issues.

Another theme in our conversation, I think, is figuring out exactly what or how much the final system should do. Of course there are tons of important problems that need to be solved urgently, but if one system tries to solve all of them, they sometimes trade off against each other. Especially for small startups, it can be better to focus on one problem and solve it well rather than solve a whole host of problems a little bit each.

I think at Impact Markets we have this intuition that experienced AI safety researchers are smarter than most other people when it comes to prioritizing AI safety work, so we shouldn't try to steer incentives in some direction or other and should instead double down on getting them funded. That gets harder once we have problems with fraud and whatnot, but when it comes to our core values, I think we are closer to “We think you're probably doing a good job and we want to help you” than to “You're a bunch of raw talent that wants to be herded and molded.” Such things as banning scammers are then an unfortunate deviation from our core mission that we have to accept. That could change – but that's my current feeling on our positioning.

In such a context, we need systems that make it more likely such work happens even without any ability to identify it upfront, or quickly notice its importance once it's completed.

Nothing revolutionary, but this could become a bit easier. When Michael Aird started posting on the EA Forum, I and others probably figured, “Huh, why didn't I think of doing that?” And then, “Wow, this fellow is great at identifying important, neglected work they can just do!” With a liquid impact market, Michael's work would receive its first investments at this stage, which would create additional credible visibility on the marketplaces, which could cascade into more and more investments. We're replicating that system with our score at the moment. Michael could build a legible track record more quickly through the reputational injections from others, and then he could use that to fundraise for stuff that no one understands yet.

I expect that a significant improvement to the funding side of things could be very important.

Yeah, also how to even test what the talent constraint is when the funding constraint screens it off. When the funding was flowing better (because part of it was stolen from FTX customers…), did AI safety progress speed up? Do you or others have intuitions on that?

Awww, thanks for the input!

I actually have two responses to this: one from the perspective of the current situation – our system in phase 1, very few donors, very little money going around, most donors not knowing where to donate – and one from the perspective of the final ecosystem that we want to see if phase 3 comes to fruition one day – lots of pretty reliable governmental and CSR funding, highly involved for-profit investors, etc.


The second is more interesting but also more speculative. The diagram here shows both the verifier/auditor/evaluator and the standardization firms. I see the main responsibility as lying with the standardization firms, and that's also where I would like my company to position itself if we reach that stage (possibly including the verification part).

One precedent for that is the Impact Genome. It currently recognizes (by my latest count) 176 kinds of outcomes. They are pretty focused on things that I would class as deploying solutions in global development, but they're already branching out into other fields as well. Extend that database with outcomes like different magnitudes of career plan changes (cf. 80,000 Hours), years of dietary change, new and valuable connections between collaborators, etc., and you'll probably end up with a database of several hundred outcome measures, most of which are not just about publishing in journals. (In the same section I mention some other desiderata that diverge a bit from how the Impact Genome is currently used. That article is generally the more comprehensive and interesting one, but for some reason it got fewer upvotes.)

In this world there's also enough financial incentive for project developers to decide what they want to do based on what is getting funded, so it's important to set sensible incentives.

It's possible that even in this world there'll be highly impactful and important things to do that'll somehow slip through the cracks. Absent cultural norms around how to attribute the effects of some more obscure kind of action, it might lead to too many court battles to even attempt to monetize it. I'm thinking of tricky cases that are all about leveraging the actions of others, e.g., when doing vegan outreach work. Currently there are no standards for how to attribute such work (how much reward should the leaflet designer get, how much the activist, how much the new vegan or reducetarian). But over time more and more of those will probably get solved as people agree on arbitrary assignments. (Court battles cost a lot of money, and new vegans will not want to financially harm the people who first convinced them to go vegan, so the activist and the leaflet designer are probably in good positions to monetize their contributions and just have to talk to each other about how to split the spoils.)


But we're so, so far away from that world.

In the current world I see three reasons for our current approach:

  1. It's basically on par with how evaluations are done already while making them more scalable.
  2. The counterfactual to getting funded through a system like ours is usually dropping out of AI safety work, not doing something better within AI safety.
  3. If we're successful with our system, project developers will much sooner do small, cheap tweaks to make their projects more legible, not change them fundamentally.

First, my rough impression from the projects on our platform that I know better is that, by default, they're either not getting any funding at all or just some barely sufficient baseline funding from their loyal donors. With Impact Markets, they might get a bit of money on top. The loyal donors are probably usually individuals with personal ties to the founders. The funding that they can get on top is thanks to their published YouTube videos, blog articles, conference talks, etc. So one funding source is thanks to friendships; the other is thanks to legible performance. But there's no funding from some large donor who is systematically smarter and more well-connected than our evaluators + project scout network.

And even really smart funders like Open Phil will look at legible things like the track record of a project developer when making their grant recommendations. If the project developer has an excellent track record of mentioning just the right people and topics to others at conferences, then no one – not Open Phil, not even the person themselves – will be able to take that into account because of how illegible it is.

Second, we're probably embedded in different circles (I'm guessing you're thinking more of academic researchers at university departments where they can do AI safety research?), but in my AI safety circles there are people who are burning through savings from their previous jobs, maybe some with small LTFF grants, and some who support each other financially or with housing. So by and large, either they get a bit of extra money through Impact Markets and can continue their work for another quarter, or they drop out of AI safety work and go back to their industry jobs. So even if we had enough funding for them, it would just prevent them from going back to unrelated work for a bit longer, not change what they're doing within AI safety.

A bit surprisingly, maybe, one of our biggest donors on the platform is explicitly using it to look for projects that push for a pause or moratorium on AGI development, largely through public outreach. That can be checked by evaluators through newspaper reports on the protests and other photos and videos, but it'll be unusually opaque how many people they reached, whether any of them were relevant, and what they took away from it. So far our track record seems to be one of fostering fairly illegible activism rather than distracting from it, though admittedly that has happened a bit randomly – Greg is just really interested in innovative funding methods.

Third, currently the incentives are barely enough to convince project developers to spend 5 minutes to post their existing proposals to our platform, and only in some cases. (In others I've posted the projects for them and then reassigned them to their accounts.) They are not enough to cause project developers to make sure that they have the participants' permission to publish (or share with external evaluators) the recordings of their talks. They're not enough for them to design feedback surveys that shed light on how useful an event was to the participants. (Unless they already have them for other reasons.)

And it makes some sense too: We've tracked $391,000 in potential donations from people who want to use our platform; maybe 10% of those will follow through; divide that by the number of projects (50ish), and the average project can hope for < $1,000. (Our top projects can perhaps hope for $10k+ while the tail projects can probably not expect to fundraise anything, but the probability distribution math is too complicated for me right now. Some project developers might expect a Pareto distribution where they'd have to get among the top 3 or so for it to matter at all; others might expect more of a log-normal distribution.) Maybe they're even more pessimistic than I am in their assumptions, so I can see that any change that would require a few hours of work does not seem worth it to them at the moment.
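
To make that back-of-envelope math a bit more tangible, here's a quick sketch – the 10% follow-through rate, the 50 projects, and the Pareto shape are just the guesses from above, not data:

```python
import numpy as np

total_tracked = 391_000  # potential donations tracked on the platform
follow_through = 0.10    # guessed follow-through rate
n_projects = 50          # roughly the number of ranked projects

pool = total_tracked * follow_through
print(f"Average per project: ${pool / n_projects:,.0f}")  # ~ $780

# Under a Pareto-like split, the top few projects capture most of the pool
# while the tail can expect close to nothing.
rng = np.random.default_rng(0)
weights = rng.pareto(a=1.2, size=n_projects) + 1
shares = np.sort(weights / weights.sum())[::-1] * pool
print(f"Top project: ${shares[0]:,.0f}; median project: ${np.median(shares):,.0f}")
```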

If we become a bit more successful in building momentum behind our platform, maybe we can attract 100+ donors with > $1 million in total funding, so that we can present a stronger incentive for project developers. But even then I think what would happen is that they'll do such things as design feedback surveys to share with evaluators or record unconference talks to share, etc., but not fundamentally change what they're doing to make it more provable.

So I think if we scale up by 3–4 orders of magnitude, we'll probably still do a bit better with our system than existing funders (in terms of scaling down, while having similarly good evaluations), but then we'll need to be careful to get various edge cases right. Though even then I don't think mistakes will be path dependent. If there is too little funding for some kind of valuable work, and the standardization firms find out about it, they can design new standards for those niches.

I hope that makes sense to you (and also lunatic_at_large), but please let me know if you disagree with any of the assumptions or conclusions. I see, for example, that even now, post-FTX, people are still talking about a talent constraint (rather than a funding constraint) in AI safety, which I don't see at all. But maybe the situation is different in the US, and we should rebrand to impactmarkets.eu or something! xD

It would be the producer of the public good (e.g., for my project I put up the collateral).

Oh, got it! Thanks!

Possibly? I'm not sure why you'd do that?

I thought you’d be fundraising to offer refund compensation to others to make their fundraisers more likely to succeed. But if the project developer themself puts up the compensation, it’s probably also an important signal or selection effect in the game-theoretic setup.

I disagree that a Refund Bonus is a security.

Yeah, courts decide that in the end. Howey Test: money: yes; common enterprise: yes; expectation of profit: sometimes; effort of others: I don’t know, not really? The size of the payout is not security-like, but I don’t know if that matters. All very unclear.

Profit: I imagine people will collect statistics on how close to being fully funded a campaign can still be a day before it closes while still failing in (say) 90% of cases. Then they blindly invest $x into all campaigns that are still $x away from that threshold on their last day.
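
As a toy illustration of why the expectation of profit is “sometimes” rather than “always” – the bonus rate, failure probability, and pledge size below are placeholder assumptions:

```python
def expected_profit(pledge, refund_bonus_rate, p_fail, value_if_funded=0.0):
    """If the campaign fails, the pledge is refunded plus a bonus; if it
    succeeds, the pledge is spent and the pledger only gets whatever
    personal value they assign to the funded good."""
    return p_fail * refund_bonus_rate * pledge + (1 - p_fail) * (value_if_funded - pledge)

# Blindly pledging $100 to campaigns that historically fail ~90% of the time:
print(expected_profit(100, refund_bonus_rate=0.20, p_fail=0.9))  # +8.0
print(expected_profit(100, refund_bonus_rate=0.05, p_fail=0.9))  # -5.5
```

Whether the strategy pays off thus depends on the size of the refund bonus relative to the failure rate – hence the “sometimes” above.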

I imagine the courts may find that if someone goes to such efforts to exploit the system, they were probably not tricked into doing so. Plus there is the question of what effort of others we could possibly be referring to.

But even if the courts in the end decide that you’re right and it’s not a security, the legal battle with the SEC alone will be very expensive… They keep expanding their own heuristic for what they think is a security (even though that’s not up to them to decide). They’ve even started to ignore the “expectation of profit” criterion entirely (with stablecoins).

But perhaps you can find a way to keep the people who run the fundraisers in the clear and keep your company in South Africa (where I know the laws even less, though). If the fundraisers are on a mainstream blockchain, the transactions are public, so you (outside of the US) could manage the refund compensation on behalf of the project developers and then pay refunds according to the public records on the blockchain. That way, no one could prove that a particular project developer is a member of your system… except maybe if they make “honeypot” contributions, I suppose. Perhaps you can have a separate fund from which you reward contributors to projects you like regardless of whether those projects are members. If a honeypot contributor gets a refund, they won’t know whether it’s because the project developer is a member of your org or because you selected their project without them knowing about it.

This is actually a cool idea. I don't know how I'd manage to get people's details for giving refunds without co-operating with the fundraising platform, and my impression is that most platforms are hesitant to do things like this. If you know of a platform that would be keen on trying this, please tell me!

Yes… You could talk with Giveth about it. They’re using blockchains, so perhaps you can build on top of them without them having to do anything. But what I’ve done in the past is that people sign up with my platform, get a code, and put the code into the public comment that they can attach to a contribution on the third-party platform. Then if they want to claim any rights attached to the contribution from me, I check that the code is the right one, and if it is, I believe them that they’re either the person who made the contribution or that that person wanted to gift it to them.
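
In case it helps, here’s a minimal sketch of that claim-code flow – illustrative only, with made-up names:

```python
import secrets

issued_codes = {}  # user id -> code; in practice a database table

def issue_code(user_id):
    """Step 1: the user signs up and receives a code to paste into the
    public comment attached to their contribution on the third-party platform."""
    code = secrets.token_urlsafe(8)
    issued_codes[user_id] = code
    return code

def verify_claim(user_id, code_seen_in_public_comment):
    """Step 2: when they claim rights attached to that contribution, check
    the code; if it matches, trust that they either made the contribution
    or were gifted it by the person who did."""
    return issued_codes.get(user_id) == code_seen_in_public_comment
```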

I don't quite understand this point. You could work on AI Safety and donate to animal charities if you don't want to free-ride. 

Well, let’s say I barely have the money to pay for my own cost of living, or that I consider a number of AI safety orgs to be the even more cost-effective use of my money.
