I don't want to accelerate an arms race, and paying for access to GPT seems like a perfect way to be a raindrop in a dangerous flood. My current idea is to pay an equal amount monthly to MIRI. I'd view the effective price as $40 per month, with half going to AI safety research.

Is this indefensible? Let me know. GPT-4 is very useful to me personally and professionally, and familiarity with language models will also be useful if I have enough time to transition into an AI safety career, which I am strongly considering.

If it is a good idea, should we promote the offsetting strategy among people who are similarly conflicted?


This seems completely negligible to me, given how popular ChatGPT is. I wouldn't worry about it.

Look up the evidence on the effectiveness of boycotts. My understanding is that they don't work. In particular, it seems unlikely to me that the alignment community (which is small) will have a meaningful impact on OpenAI's actual or perceived success.

I have a general principle of not contributing to harm. For instance, I do not eat meat, and I tend to disregard arguments about marginal impact. For animal rights issues, it is important to have people who refuse to participate, regardless of whether my decades of abstinence have impacted the supply chain.

For this issue, however, I am less worried about the principle of it, because after all, a moral stance means nothing in a world where we lose. Reducing the probability of X-risk is a cold calculation, while vegetarianism is an Aristotelian one.

With that in mind, a boycott is one reason not to pay. The other is a simple calculation: does my extra $60 a quarter make any minuscule increase in X-risk? Could my $60 push the quarterly numbers just high enough that they round up to the next tens place, and then some member of the team works slightly harder on capabilities because they are motivated by that number? If that risk is 0.00000001%, well, when you multiply by all the people who might ever exist... ya know?
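To make that Pascalian arithmetic explicit, here's a minimal sketch; every number in it is illustrative, chosen only to show how a vanishing probability times an astronomical population produces a large expected value:

```python
# Illustrative Pascal's-mugging arithmetic; every number here is made up,
# not an estimate of anything.
p_marginal_harm = 1e-10   # 0.00000001% as a fraction: the hypothetical chance
                          # that my $60 nudges capabilities work forward
future_people = 1e16      # a speculative placeholder for potential future lives

expected_lives_at_stake = p_marginal_harm * future_people
print(f"{expected_lives_at_stake:,.0f} expected lives")  # 1,000,000 expected lives
```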

Are you doing anything alignment-related? The benefits to you (either in productivity or in keeping you informed) might massively outweigh the marginal benefits to OpenAI's bottom line.

Yes, but then you throw away those benefits. Using tools like this effectively might increase the chance you keep your job by 50 percent or more.

I don't think OpenAI is funding-constrained in any real way at the moment, and using new AI systems for mundane utility seems pretty harmless (more from Zvi).

This is somewhat galaxy-brained thinking, but if GPT-4 generates enough revenue, perhaps it actually steers OpenAI execs towards slowing down? "If GPT-4 is already generating $X billion on its own, why risk hundreds of millions or billions more, and a potential safety disaster or PR crisis, to train GPT-5 ASAP?"

Or, even more galaxy-brained, if enough people pay for ChatGPT+ to get mundane utility out of the chatbot, OpenAI will be capacity-constrained, possibly forcing them to raise prices (or at least delay lowering them) and price out some capabilities research that requires API use at scale.

Realistically though, I think the impact of paying for ChatGPT+ is minimal in either direction, even if everyone in your reference class also pays for it.

I'd set a deadline for how long you'll use it. I'm only doing one month.

I'm gonna pass on the question of whether it's defensible (like you, I'm uneasy about giving money to OpenAI), but I do like the idea of an "alignment tax". On general principles, one should expect that there is some ideal proportion of money flowing into alignment/regulation efforts vs. AI development that makes the future maximally safe. So steering towards that proportion seems like the right thing to do.