I barely use AI tools, and the main reason is that I developed some sort of revulsion to them, because I associate them with the harms they cause and the risks they bring.

On the other hand, it seems that, more and more, whoever doesn't adopt AI tools will become far less productive and will be left behind.

Also, if the people who worry about the risks limit themselves while the people who don't worry about them don't, it creates a personal responsibility vortex that tilts the balance of power away from safety.

But perhaps it's possible to get the best of both worlds: use AI tools, but in a responsible manner that doesn't add to the harms and risks they pose.

How does one do that?

RogerDearnaley

The general consensus among red-teamers and safety evaluators seems to be that the currently publicly available frontier AI tools such as GPT-4 and Claude 2 display hints of the capabilities that would cause x-risk-type danger, but don't actually have enough of them to pose any significant dangers. (They do raise more minor concerns, such as potential for misuse and the sorts of biases one can already find on the Internet, both of which are pretty manageable if you're conscientious.) It's possible, though by now unlikely, that these hints of danger could be fanned into something more dangerous by wrapping the models in some sort of ingenious agentic scaffolding, but so far that doesn't seem to have happened. So I wouldn't worry much about using this year's generation of frontier AI (at least in the ways many people already use it, which don't involve a lot of new scaffolding), nor about anything less capable (such as any currently-available open-source models). If you wanted to be extra cautious, you could stick to last year's frontier, such as GPT-3.5.

Turning your history off and telling them not to use your data should keep your usage from contributing anything to the zettabytes of training data already on the web (never posting anything online would similarly help). Paying for the service will help the bottom line of whichever hyperscaler you use, but it's currently widely assumed that they're selling access to their models below cost, so (at least if you use the service heavily) you're costing them more money than you're paying them.

My problem isn't the danger from the tools themselves, but from aiding the teams/companies that develop them, and from adding to the pressure to use AI tools, which aids them even more. Edit: I see your other answer addressed this concern.

RHollerith

You don't say what kind of harms you consider worst. I consider extinction risk the worst harm, and here is my current personal strategy. I don't give OpenAI or similar companies any money: if I had to use AI, I'd use an open-sourced model. (It would be nice if there were some company offering a paid service that I could trust not to advance the state of the art, but I don't know of any; and yes, I understand that some companies contribute more to extinction risk than others.)

I expect that the paid services will eventually (and probably soon) get so good that it will be futile to keep bothering with open-sourced models, and that to compete I will eventually need to give money to a company offering a paid service. I plan to try to put that day off as long as possible, which I consider useful, not futile. Suppose it takes OpenAI 4 generations of services (where GPT-4-based services are generation 1) to become a very successful, very profitable company, and that each of those generations is equally important in expectation to OpenAI's eventual wild success. (An iMac was a vastly better computer than the first Macintosh, but every major product launch between the first Mac and the first iMac was roughly equally important to the eventual wild success of Apple.) Thus if I can hold off giving OpenAI any money until they're offering 4th-generation services, I will have withheld from OpenAI 75% of the "mojo" (revenue, and evidence that I'm a user, which they can show to investors in aggregate) I might have given them before they become so successful that nothing I do could possibly have any effect, yet I will have enhanced my productivity almost as much as if I had used all 4 generations of OpenAI's service (because of how good the 4th generation will be).
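Spelling out the arithmetic behind that 75% figure (it follows directly from the equal-importance assumption above, nothing extra):

\[
\text{fraction of mojo withheld} \;=\; \frac{\text{generations during which I paid nothing}}{\text{total generations}} \;=\; \frac{3}{4} \;=\; 75\%.
\]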

If I use any AI services, open-source or not, I don't tell anyone about it (to prevent my contributing to the AI field's reputation for usefulness) except for people who take AI extinction risk seriously enough that they'd quickly vote to shut down the whole field if they could.

Like Eliezer says, this is not what a civilization that might survive AI research looks like, but what I just wrote is my current personal strategy for squeezing as much dignity as possible from the situation.

Thanks. That sounds like a good strategy, and yes, I agree that extinction risk is the worst harm. Can you say more about how to use open source models? Or link to some guide?

RHollerith
I have yet to interact with a state-of-the-art model (that I know of), but I do know from browsing Hacker News that many are running LLaMA and other open-source models on their own hardware (typically Apple Silicon or desktops with powerful GPUs).
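
As a concrete starting point (a minimal sketch, not a recommendation of any particular model): assuming you have installed the llama-cpp-python package and downloaded a quantized GGUF model file from somewhere like Hugging Face, local inference can look roughly like this. The model path and prompt below are placeholders.

    # Minimal sketch: running an open-source model locally with llama-cpp-python
    # (pip install llama-cpp-python). Assumes a quantized GGUF model file has
    # already been downloaded; the path below is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # local file on your own disk
        n_ctx=2048,                                          # context window size
    )

    result = llm(
        "Q: How do I recursively list files in bash? A:",
        max_tokens=128,   # cap the length of the completion
        stop=["Q:"],      # stop before the model invents another question
    )

    print(result["choices"][0]["text"])

Everything here runs on your own machine, so no prompt or output is sent to a hosted service, and none of it ends up in anyone's training data, which fits the goals discussed above.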

Sune

You can use ChatGPT 3.5 for free with chat history turned off. This way your chats should not be used as training data.