Review

I've decided to donate $240 each to GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month).
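For concreteness, the arithmetic (assuming the subscription stays at $20/month for the full two years):

$$ \$20/\text{month} \times 24 \text{ months} = \$480 = \$240 + \$240 \text{ (one donation to each org, i.e. a 1:1 offset)} $$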

I don't have a super strong view on ethical offsets, like donating to anti-factory farming groups to try to offset harm from eating meat. That being said, I currently think offsets are somewhat good for a few reasons:

- They seem much better than simply contributing to some harm or commons problem and doing nothing, which is often what people would do otherwise.

- It seems useful to notice when you're contributing to some harm or commons problem. I think a lot of harm comes from people failing to notice or keep track of the ways their actions negatively impact others, and the ways that common incentives push them to do worse things.

A common Effective Altruism argument against offsets is that they don't make sense from a consequentialist perspective. If you have a budget for doing good, then you should spend your whole budget doing as much good as possible. If you want to mitigate harms you are contributing to, you can offset by increasing your "doing good" budget, but it doesn't make sense to target your mitigations at the particular area where you are contributing to harm rather than the area you think will be the most cost-effective in general.

I think this is a decently good point, but it doesn't move me enough to abandon the idea of offsets entirely. A possible counter-argument is that offsets can be a powerful form of coordination to help solve commons problems. By publicly committing to offset a particular harm, you're establishing a basis for coordination: other people can see that you really care about the issue because you've sent a costly signal. This is similar to the case for being vegan or vegetarian: it's probably not the most effective choice from a naive consequentialist perspective, but it can be effective as a point of coordination via costly signaling.

After having used ChatGPT (3.5) and Claude for a few months, I've come to believe that these tools are super useful for research and many other tasks, as well as for understanding AI systems themselves. I've also started to use Bing Chat and ChatGPT (4), and found them to be even more impressive as research and learning tools. I think it would be quite bad for the world if conscientious people concerned about AI harms refrained from using these tools, because it would disadvantage them in significant ways, including in crucial areas like AI alignment and policy.

Unfortunately both can be true:

1) Language models are really useful and can help people learn, write, and research more effectively
2) The rapid development of huge models is extremely dangerous and a huge contributor to AI existential risk

I think OpenAI, and to varying extents other scaling labs, are engaged in reckless behavior by scaling up and deploying these systems before we understand how they work well enough to be confident in our safety and alignment approaches. At the same time, I do not recommend that people in the "concerned about AI x-risk" reference class refrain from paying for these tools, even if they decide not to offset these harms. The $20/month to OpenAI for GPT-4 access is not a lot of money for a company spending hundreds of millions of dollars training new models. But it is something, and I want to recognize that I'm contributing to this rapid scaling and deployment in some way.

Weighing all this together, I've decided offsets are the right call for me, and I suspect they might be right for many others, which is why I wanted to share my reasoning here. To be clear, I think concrete actions, like quality alignment research or AI policy work aimed at buying more time, are much more important than offsets. I won't dock anyone points for not donating to offset the harm from paying for AI services at a small scale. But I will notice if other people make similar commitments, and I'll take it as a signal that they care about risks from commercial incentives.

I didn't spend a lot of time deciding which orgs to donate to, but my reasoning is as follows: MIRI has a solid track record of highlighting existential risks from AI and encouraging AI labs to act less recklessly and to raise the bar for their alignment work. GovAI (the Centre for the Governance of AI) is working on regulatory approaches that might give us more time to solve key alignment problems. According to staff I've talked to, MIRI is not heavily funding-constrained, but they believe they could use more money. I suspect GovAI is in a similar place, but I have not inquired.

Note: I wrote this on the EA forum and wasn't sure whether to crosspost it. However, I realized this audience was the one I most wanted to see it, even though I have it categorized as kind of an "EA" topic, so I decided to post it here too.

3 comments

I've found that using Bing/ChatGPT has been enormously helpful in my own workflows. No need to carefully read documentation and tutorials just to get a starter template up and running. Sure, it breaks here and there, but it seems way more efficient to look things up when something goes wrong vs. starting from scratch. Then, while my program is running, I can go back and try to understand what all the options do.

It's also been very helpful for finding research on a given topic and answering basic questions about some of the main ideas.  

Let's assume that OpenAI is reckless. Does giving them money make them more reckless?

It seems to me that your thought process is that OpenAI is evil and thus giving them money symbolically rewards evil. There can be some value in symbolic actions. This reminds me of the sporting and cultural boycotts of Apartheid-era South Africa. To whatever extent these worked, it wasn't about the money, but about other forms of leverage.

Maybe tiny positive feedbacks reinforce behavior, although this seems pretty anthropomorphic. But maybe giving them money for services widens their options beyond inherently short-term venture funding. A sustainable stream of product revenue might make them less reckless. It probably can't compete with venture funding, but if anything, I think the sign is positive.

@Daniel_Eth asked me why I chose 1:1 offsets. The answer is that I did not have a principled reason for doing so, and I don't think there's anything special about 1:1 offsets except that they're a decent Schelling point. I think any offsets are better than no offsets here. I don't feel like BOTECs (back-of-the-envelope calculations) of harm caused are likely to be a particularly useful way to calculate offsets here, but I'd be interested in arguments to that effect if people have them.