You don't say what kind of harms you consider worst. I consider extinction risk the worst harm, and here is my current personal strategy. I don't give OpenAI or similar companies any money: if I had to use AI, I'd use an open-source model. (It would be nice if there were some company offering a paid service that I could trust not to advance the state of the art, but I don't know of any; though yes, I understand that some companies contribute more to extinction risk than others.)

I expect that the paid services will eventually (and probably soon) get so good that it will be futile to keep using open-source models, and that to stay competitive I will eventually need to give money to a company offering a paid service. I plan to put that day off as long as possible, which I consider useful, not futile: suppose it takes OpenAI 4 generations of services (where GPT-4-based services are generation 1) to become a very successful, very profitable company, and that each of those generations is equally important in expectation to OpenAI's eventual wild success. (An iMac was a vastly better computer than the first Macintosh, but every major product launch between the first Mac and the first iMac was roughly equally important to the eventual wild success of Apple.) Thus if I can hold off giving OpenAI any money until they're offering 4th-generation services, I will have withheld from OpenAI 75% of the "mojo" (revenue, plus evidence that I'm a user, which they can in aggregate show to investors) I might have given them (before they become so successful that nothing I do could possibly have any effect), while enhancing my productivity almost as much as if I had used all 4 generations of OpenAI's services (because of how good the 4th generation will be).
If I use any AI service, open-source or not, I don't tell anyone about it (so as not to contribute to the AI field's reputation for usefulness), except for people who take AI extinction risk seriously enough that they'd quickly vote to shut down the whole field if they could.
As Eliezer says, this is not what a civilization that might survive AI research looks like, but what I just wrote is my current personal strategy for squeezing as much dignity as possible out of the situation.