I think this is a great strategy! In particular, older LLMs are much less efficient at inference, so this also wastes the compute of the scaling labs.
Thanks for the comment! I'm hoping to get some more feedback on this over time, as there are some further technical questions in my mind as to how to actually pull this off, as well as theoretical questions about whether this would be a good strategy or whether it would be counter-productive! :)
So I have been using ChatGPT quite a lot recently to help with research (it being, in effect, a kind of 'Google on steroids'), but I am also quite cautious that using AI chatbots like ChatGPT in this way may contribute to the development/advancement of new models, since doing so would seem to provide demand for their development. Agreeing with the ethos of the 'PauseAI' movement, I do not want to contribute to this, at least until more research has been done into their capabilities, and/or more safety measures have been put in place. As a result, I have started to consider using only older models of (e.g.) ChatGPT, and not newer ones, in order to avoid providing demand for the development of new models.

What I want to know is (i) what people think about trying (individually and collectively) to 'boycott' in this way (in principle), (ii) whether it would be possible (in practice), and (iii) if so, what computing power would be required to use AI chatbots in this way - would it be plausible for the average person to gain access to it?
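To be concrete about what I have in mind for (iii): older ChatGPT models themselves are not downloadable, so a purely local setup would have to rely on older open-weight alternatives instead. A minimal sketch of what that could look like, assuming Python with the Hugging Face `transformers` library and using GPT-2 purely as an illustrative stand-in for an 'older model':

```python
# A minimal sketch, not a recommendation of any particular model.
# GPT-2 here stands in for "an older, open-weight model"; older ChatGPT
# models are not available for local download.
from transformers import pipeline

# Download and load a small, older open-weight model (a few hundred MB);
# a model this size runs on an ordinary laptop CPU.
generator = pipeline("text-generation", model="gpt2")

prompt = "A short summary of the history of the printing press:"
outputs = generator(prompt, max_new_tokens=100, do_sample=True)

print(outputs[0]["generated_text"])
```

Something this small runs on an ordinary laptop; larger open-weight models would need a consumer GPU or considerably more patience, which is part of what I am asking about in (iii).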