This post is one part of the sequence Understanding the diffusion of large language models. As context for this post, I strongly recommend reading at least the 5-minute summary of the sequence.
EDIT 25-Feb-2023: I have made a big update from the claims in this post about deployment of large language models costing less than development in total. I now think that for the largest, most commercially successful LLMs, the total compute spent on deployment is much larger than in development. However, this doesn't significantly update my conclusion about the importance of focusing on development rather than deployment as a target of intervention (point 2c in the Key Takeaways). More information in this comment.
Here I overview two GPT-3-like models, OPT-175B and BLOOM. OPT-175B is a GPT-3 replica that can be downloaded by ML researchers in academia, government and industry labs (after their request for access is approved). BLOOM is similar to GPT-3, but not a replica, and is available for anyone to download.
Table 1: attributes of OPT-175B and BLOOM
Note that many other GPT-3-like models have been developed before OPT-175B and BLOOM became available. However, I am 80% confident that before July 2022, no other GPT-3-like models had their trained weights widely available for download. In terms of understanding the most important impacts of diffusion, I think which GPT-3-like models are currently accessible is less important than the timing, development and release strategy of GPT-3-like models. I cover those characteristics in the other post in the sequence.
To date, the trained model weights for GPT-3 have not been made available. Having direct access to trained model weights would allow someone to (a) run the model independently on a computing cluster, (b) make copies of the model, (c) fine-tune the model for new tasks, and anything else that requires access to the values of weights (e.g., interpretability research). Although OpenAI provides an API that allows access to GPT-3 outputs and fine-tuning procedures, this API places considerable limits on diffusion. As well as preventing direct access to model weights, the API limits the applications of the model through OpenAI’s monitoring and review process, and limits the speed at which model outputs can be accessed. So while the OpenAI API may be satisfactory for many users to access GPT-3 capabilities, it does not allow as much breadth and freedom in the use of the model as direct access to trained model weights would have.
The lack of direct access to GPT-3 model weights appears to have created a demand for that access. The demand is strong enough that multiple actors have spent millions of dollars to make GPT-3-like models more widely and freely available. GPT-3 itself is almost certainly profitable for OpenAI given that its commercial API has not been discontinued. So I think the most obvious incentive for AI companies to create a GPT-3-like model is to develop their own products using their own model.
On the academic side, there seems to be growing interest in studying foundation models—GPT-3 is one such model. The research collaboration that culminated in BLOOM involved more than one thousand researchers (BigScience, 2022). Meanwhile, Zhang et al. (2022, p. 1) state: “Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and to bring more voices to the table in studying the impact of these LLMs.” Finally, Black et al. (2022, p. 1) state: “We make the models weights freely and openly available to the public through a permissive license, motivated by the belief that open access to LLMs is critical to advancing research in a wide range of areas—particularly in AI safety, mechanistic interpretability, and the study of how LLM capabilities scale.”
OPT-175B is a 175-billion-parameter language model from Meta AI Research, announced in May 2022 (Zhang et al., 2022). The primary goal of OPT-175B was to replicate GPT-3. The replication seems to have largely succeeded; however, the model performed an average of two percentage points worse than GPT-3 on 14 benchmarks across zero-shot and few-shot evaluation settings.
The trained OPT-175B model seems to be accessible to anyone who can demonstrate that they are a researcher affiliated with academia, government, civil society, or an industry lab, and who has relevant publications. Based on what it says in the paper and the application form, I estimate that 1000 (90% CI: 200–3000) people could be eligible, and all of these people could be granted access in the first year following release. This number depends on how quickly applications are processed and how relevant the applicant’s publications need to be.
Direct access to OPT-175B’s trained model weights is provided upon request. It therefore seems that anyone who currently has access could pass on the weights to someone else, or even make the weights publicly available for download. The potential for diffusion would thereby increase even further. It is not clear to me whether these actions would violate the terms of copyright stated in the license agreement for OPT-175B. I am not aware of any unilateral open-sourcing event like this occurring yet for OPT-175B. However, I am only 60% confident that this has not happened. My confidence is based on how little time has passed since OPT-175B was released, that I haven’t heard about it happening, and that searching DuckDuckGo for “download opt-175b model” does not have any confirming results on the first page.
At some point after OPT-175B’s release, this API for OPT-175B was released by a team at Sky Lab, UC Berkeley.
BLOOM is a 176-billion-parameter language model from the open research collaboration known as BigScience. BLOOM was released in July 2022. The prior model that most influenced the design of BLOOM was apparently Megatron-LM (Shoeybi et al., 2019), which, along with GPT-3, is heavily based on GPT-2 (Radford et al., 2019).
Despite its similar size and pretraining approach, I don’t consider BLOOM to be an exact replication attempt of GPT-3, unlike OPT-175B. This is partly because the authors do not state it as an explicit replication attempt. It is also because the training data for BLOOM is much more multilingual, which was emphasized in BigScience (2022). My best guess is that BLOOM is worse than GPT-3 on most tasks to a similar degree as OPT-175B. On the other hand, I expect that BLOOM’s more multilingual training data leads to a wider spread of capabilities across languages.
Unlike OPT-175B, BLOOM’s trained model weights are publicly available to download from the HuggingFace website. HuggingFace is also hosting a public API for BLOOM. So access to BLOOM is even more open than for OPT-175B. Anyone with decent internet and 329 GB of storage can start downloading the weights immediately, without any request for access.
| Scenario | Direct talent required | Labor cost |
| --- | --- | --- |
| Automated disinformation (1% of daily Twitter volume) | 5 professional software developers who have worked on ML projects, and 5 ML engineers who know how to run language models over multiple GPUs | $250K |
| Commercially successful coding assistant | 15 professional software developers who have worked on ML projects, and 5 ML engineers who know how to run language models over multiple GPUs | $500K |

Table 2: Summary of deployment scenarios and the estimated requirements for them, explored in this section
So far I have considered the ability to access (i.e., download and interact with) trained model weights of GPT-3-like models. But if an actor has only downloaded some model weights, the cost and talent requirements to actually run inference with the model could be far from trivial. Even further resources would be needed to deploy a model at a scale large enough to build a profitable business. An important question is: how large is the barrier to deploying a model impactfully, compared to training a model? I measure how large the barrier is in terms of compute cost and talent requirements.
I will first consider this question in the case of a GPT-3-like model, BLOOM. My choice of BLOOM is not particularly special; it is just the only GPT-3-like model I was aware of (as of July 2022) that is open-source. Most of the analysis that follows depends merely on BLOOM’s basic Transformer architecture and number of parameters, so any similarly-sized language model can be substituted. My assumptions about what BLOOM is capable of are less defensible, but again, I think it serves as a basis for useful hypotheticals.
I think the most important version of the question is how large the barrier is to deploying transformative AI (TAI) systems. Nonetheless, asking this question of present-day AI systems still seems useful. In particular, the answer to this question affects which actors will be incentivized to develop or gain access to AI systems, and to deploy those systems. In turn, some actors may gain insight and revenue from these systems, and thereby become more able to develop, access, and diffuse TAI in the future.
I expect that most actors that use state-of-the-art AI systems for inference will do so via APIs provided by some other actor. The AI system will be hosted by some organization, and users will make queries to the model via an API. The most prominent example of such an API today is the OpenAI API. If an actor merely wants to run inference with a model, there are strong incentives for using an API—the provider of the API handles all of the setup, running costs and maintenance, while the user merely needs to make queries to the API to get outputs from the model.
However, as I argued above, there are still incentives to not use a particular model API. An API inherently limits a user’s interaction with a model. A user can’t necessarily probe or fine-tune the model in any way they want, nor use the model for any application, due to content policies and content filtering. Furthermore, there are many models for which no API is released in the first place, but which some actors may want to replicate, such as the Chinchilla model from DeepMind (Hoffmann et al., 2022).
Suppose that the weights for some model become publicly available. This could happen via deliberate open publication by the original developer of the model, or by some other diffusion mechanism like replication or theft. Due to the incentives to use open-source models, it seems very likely that someone will also set up and open-source a convenient way to use that model. There will very likely be enough actors in the world that at least one of them is motivated to do this. At that point, many more people will be capable of running the model, either via an API or other tools that are open-sourced.
BLOOM has an API with fully open access on this page. I was able to create a HuggingFace account (only requiring an email address) and run inference with BLOOM immediately using the text box on the right side of the page. This interface is only a preview—HuggingFace has a more comprehensive and scalable hosted inference API to make it easy to deploy the models that they host, including BLOOM. HuggingFace offers a $9/month plan that allows up to one million input characters for model inference per month. Again, given the convenience of APIs like this, I expect that most actors that use models will do so via APIs provided by other actors.
I have just argued that most actors that use models will do so via other actors’ APIs. However, I still expect there to be cases of diffusion where some actors want to run a model independently. By running “independently,” I mean that an actor downloads the model weights and runs the model on a server that they either directly rent or own. The incentive for independent deployment could arise because (1) there is no existing API for the model, or (2) the actor does not want to use an existing API.
For (1), there might be no existing API because the model developer wants to limit diffusion of capabilities, or simply has no interest in providing wider access to the model. For (2), the actors in question could be startups that want to scale up to train and deploy their own models, or malicious actors that don’t want their malicious usage to be tracked (even if permitted) by an API.
For these reasons, I think it’s useful to analyze who can run a model like BLOOM independently, even though there is in fact an openly accessible API for BLOOM. One can then apply a similar analysis to future models where there is a stronger incentive to run the model independently.
By my calculations, the most compute-efficient way to run inference with BLOOM by renting compute from the cloud is to use a single node of eight 80 GB A100 GPUs. Based on that, running one instance of BLOOM independently would cost $10/hr from the cloud provider Lambda, with a minimum three-month rental commitment. A user could generate 150 million English words for $240 per day this way if running the model 24/7. For comparison, based on current OpenAI API pricing, using GPT-3 via the OpenAI API would generate this many words for $4000. I estimate that 5000 people in the world (90% CI: 100 to 45,000) have the talent required to run BLOOM independently this way. See this appendix for the reasoning behind these estimates.
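To make the comparison concrete, here is a minimal sketch of the arithmetic behind these figures. The throughput estimate (200 million tokens per day) and the words-per-token ratio come from the Guesstimate model cited in the appendix; the variable names are my own.

```python
# Back-of-the-envelope comparison: running BLOOM on a rented 8x A100
# (80 GB) node vs. generating the same volume of text via the OpenAI API.
# All inputs are the post's own point estimates.

TOKENS_PER_DAY = 200e6              # estimated throughput of one node, running 24/7
WORDS_PER_TOKEN = 0.75              # rough English-language average
CLOUD_RATE_PER_HOUR = 10            # Lambda reserved pricing, $/hr
API_PRICE_PER_1K_TOKENS = 0.02      # OpenAI Davinci pricing at the time

words_per_day = TOKENS_PER_DAY * WORDS_PER_TOKEN                    # 150M words
cloud_cost_per_day = CLOUD_RATE_PER_HOUR * 24                       # $240
api_cost_per_day = TOKENS_PER_DAY / 1000 * API_PRICE_PER_1K_TOKENS  # $4000
```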
I have just considered the constraints involved in running a single instance of BLOOM. But useful deployments of BLOOM could be much larger. To get a sense of how much the largest viable deployments of GPT-3-like models would cost, I consider two representative scenarios. The largest viable deployment is the deployment with the highest volume of model outputs that (a) would be possible for at least one actor to do by now if they tried, and (b) is worth the cost.
The first scenario is an automated text disinformation campaign. A crude way to model this is producing content equal in size to 1% of the average number of Tweets per day, which I estimated as 670 million. Using BLOOM running on cloud compute to accomplish this, I estimate the cost would be $450 per day (90% CI: $240 to $720) or $160K per year (90% CI: $88K to $260K). For comparison, accomplishing this with GPT-3 via the OpenAI API (assuming it was allowed) would cost $5100 per day (90% CI: $3800 to $6600) or $1.9M per year (90% CI: $1.4M to $2.4M). My best-guess estimate of the total compute cost to develop a GPT-3 replica from scratch was $13 million with a 90% CI of $4 million to $36 million (see this section). So even for a very large-scale sustained operation, such as generating text equivalent to 1% of global Twitter activity for one year, model training would be about a 100x larger constraint financially. See this appendix for supporting reasoning.
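The annualization behind these numbers is a single multiplication; the sketch below (using the post's point estimates, with my own variable names) also recovers the roughly-100x comparison with the training cost.

```python
# Annualizing the disinformation-scenario cost estimates and comparing
# against the best-guess $13M training cost.

bloom_cost_per_day = 450    # BLOOM on rented cloud compute, $/day
api_cost_per_day = 5100     # same output via the OpenAI API, $/day
training_cost = 13e6        # best-guess cost to develop a GPT-3 replica

bloom_cost_per_year = bloom_cost_per_day * 365   # ~$164K, rounded to $160K in the text
api_cost_per_year = api_cost_per_day * 365       # ~$1.86M, rounded to $1.9M

# Training is roughly two orders of magnitude more expensive than a year
# of sustained cloud-compute deployment:
training_to_deployment = training_cost / bloom_cost_per_year  # ~79x
```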
The second scenario I consider is a GPT-3-sized coding language model that is very commercially successful. This means the model is used to improve one million software developers’ productivity by between 1% and 10%. I estimate the cost of this would be $2M per year (90% CI: $260K to $8.4M). So on my best guess, this cost is still an order of magnitude lower than the total development cost of the model at $13M. But this cost is plausibly close to the total development cost, given the upper bound is $8.4M. See this appendix for supporting reasoning.
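One way to sanity-check this scenario is to spread the estimated annual cost across the user base. The per-developer figure below is my own derived arithmetic on the post's point estimates, not a number from the text.

```python
# Per-user cost in the coding-assistant scenario, using the post's
# best-guess estimates.

annual_inference_cost = 2e6    # $/year, best guess
n_developers = 1_000_000
development_cost = 13e6        # best-guess development cost

cost_per_developer_per_year = annual_inference_cost / n_developers            # $2/developer/year
inference_as_share_of_development = annual_inference_cost / development_cost  # ~15%
```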
In terms of talent requirements, I’m confident that the level of talent required to train any machine learning model is basically sufficient to run inference with that model, because forward passes of the model are performed as part of the training process. However, deploying a large language model at a commercially viable scale generally requires other areas of talent. These areas include more traditional software development skills to build APIs and host models on servers. However, I expect that ML engineering talent is the bottleneck for deployment, because it is more scarce than other software engineering talent. Based on that, my best guess is that large-scale deployment like in the above scenarios would approximately require the teams summarized in Table 2: five professional software developers who have worked on ML projects plus five ML engineers who know how to run language models over multiple GPUs for the Twitter scenario, and 15 such developers plus five such engineers for the coding assistant scenario.
I do not have a rigorous justification for these exact requirements; they just seem the most intuitively plausible to me. Let’s suppose this team works on the project at a salary of $100K per person for the entire year. Then the one-year Twitter scenario above would cost the equivalent of 10 × $100K = $1M in talent, while the coding assistant scenario would cost 20 × $100K = $2M. However, I expect that the actual time spent on such a project would be closer to three months full-time equivalent. So my final labor cost estimate is $250K for the Twitter scenario and $500K for the coding assistant scenario.
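The labor arithmetic above can be written out as follows; the team sizes and the $100K salary are the assumptions stated in the text.

```python
# Labor cost: team size x $100K annual salary, scaled down to three
# months full-time equivalent.

SALARY = 100_000        # $/person/year, assumed
FTE_FRACTION = 0.25     # three months full-time equivalent

def labor_cost(n_software_devs, n_ml_engineers):
    return (n_software_devs + n_ml_engineers) * SALARY * FTE_FRACTION

twitter_scenario_cost = labor_cost(5, 5)     # $250K
coding_scenario_cost = labor_cost(15, 5)     # $500K
```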
Combining the Twitter and coding assistant compute cost estimates in this Guesstimate model, I get an overall estimate of $1.3M (90% CI: $230K to $5.2M). As a percentage of the GPT-3 replication cost estimate, this is 12% (90% CI: 1.3% to 56%).
As another line of evidence, I tried to find information on what percentage of the cost of machine learning applications as a whole is accounted for by inference rather than training. I found two sources estimating that 80–90% of the cost is for inference. However, those sources don’t provide clear evidence or reasoning for those estimates, and they appear to be incentivized to give high estimates. Updating slightly on this evidence, my overall estimate of the cost of the largest viable deployments of GPT-3-like models is 20% of the development cost (90% CI: 10% to 68%). Converting this back to dollars, I get $2.6M (90% CI: $1M to $6.6M).
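The median conversion here is a single multiplication, sketched below. Note that the 90% CI endpoints in the text come from the full Guesstimate distributions, not from multiplying interval endpoints, so only the median reproduces exactly.

```python
# Deployment cost as a fraction of the $13M best-guess development cost.

development_cost = 13e6
deployment_fraction = 0.20   # overall estimate after updating on the inference-cost sources

deployment_cost = development_cost * deployment_fraction              # $2.6M
development_to_deployment_ratio = development_cost / deployment_cost  # 5x
```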
Putting this all together, in my median scenario the development of a GPT-3-like model costs about 5 times more than the largest viable deployment of the model. But my confidence interval means there is a 5% chance that there are deployment scenarios which (a) cost more than 68% as much as developing the model, and (b) have a significant impact, such as improving one million software developers’ productivity by a few percent. So plausibly, for the highest-impact applications, the cost of deployment is almost as prohibitive as the cost of development.
One consideration I haven’t taken into account in the above analysis is the ability for actors to scale up via commercial revenue. Actors could deploy a model at a small but profitable scale, then use the resulting revenue to scale up, then deploy at a larger and more profitable scale, and so on in an amplifying feedback loop. This feedback loop can also have discontinuous jumps—if an actor has a moderately successful and promising application of AI, they might suddenly receive much more funding from investors. AI21 Labs is an example, reportedly raising $64M in funding in July 2022 and thereby almost doubling their total capital (Wiggers, 2022).
Having said that, the current leading AI developers can also set up this amplifying feedback loop, and have the biggest head start. So I think that leading developers are likely to maintain a steady (and perhaps an accelerating) lead this way. Because of this maintained lead, I think the number of actors that can afford to independently deploy future state-of-the-art models will most likely not increase significantly over time, even as smaller actors scale up.
Above, I have argued that the development of GPT-3-like models is a much larger constraint than the deployment of models. I think there is more opportunity for the AI governance community to take advantage of the larger constraint on development than to make deployment more difficult. For example, diffusion can be limited by taking advantage of the large compute and talent requirements to train GPT-3-like models. Meanwhile, deployment seems much easier to do and more difficult to control. This is because the cost of even the largest viable deployments seems to be much smaller (about five times smaller, at my best guess).
Furthermore, the developers of models seem to be in the most convenient position to deploy those same models. This is because they already possess the trained model weights, the compute infrastructure, and the engineering talent that development required.
For these reasons, I think the AI governance community should prioritize limiting which actors can develop models over limiting which actors can deploy models.
For the number of words, see the “Actual throughput (tokens per day per GPU node)” cell in this Guesstimate model, which estimates 200 million tokens per day. Average words per token is 0.75 (see https://perma.cc/T6M8-Q9BJ), so 200M tokens corresponds to roughly 150M words. The cost per hour comes from the "Reserved pricing" for 8x NVIDIA A100 80GB GPUs from Lambda, listed here: https://perma.cc/TTB9-B8TF.
Most CS graduates could in principle afford the financial cost of $240 to run BLOOM for one day, but running BLOOM for a year (say) would then cost ~$90K which would only be affordable for perhaps tens to hundreds of individuals.
Let’s consider the minimum talent required to download BLOOM and run the model on a separate cloud compute server. I think this requirement is equivalent to a single top-one-percentile Computer Science graduate who has passed at least one course on natural language processing with deep learning, and who can spend three months full-time figuring out how to run the model. This is because a lot of the know-how to run the model is available on the internet, such that a strong machine learning background is not required to start with. For example, EleutherAI’s Discord server would have a lot of relevant information and people willing to help. Tools such as HuggingFace Accelerate make it easier to run machine learning models over multiple GPUs (which seems to be required for models as big as BLOOM).
Besides that, I don’t have special reasons to specify the requirement as a single top-one-percentile CS graduate with introductory machine learning experience spending three months trying. It is just a concrete-enough requirement that is intuitively plausible to me. I think that the people in this set are a useful indication of the actual set, because it seems to overlap significantly with the actual set. For instance, I’m confident that high-percentile CS graduates make up more than 20% of the actual set.
Reasoning for the calculation:
As a lower bound, it seems implausible that the number could be any lower than the total number of “infrastructure engineers” I counted in my case studies, which was 73 (see this cell in the diffusion database). So I set an approximated lower bound at 100.
As an upper bound, it seems implausible that the number of people capable of running BLOOM exceeds the number of times the BLOOM repository (which includes the model weight files) has been downloaded. I could not find a total number of downloads, but the downloads in the past month (as of October 10, 2022) are reported at about 15,000 (see repository page). Assuming the same number of downloads happened in the other two months since the repository was released in early July, that would make 45,000 downloads in total. The actual number may be higher because of a spike in interest in BLOOM in the first month after it was announced, but I find any significantly higher number too implausible given the technical difficulty of running a model as large as BLOOM. The number would also be close to this, at 50,000, if I instead chose 10% for one of the two 1% numbers in the “CS graduates” calculation above, which seems barely plausible.
So my overall estimate is 5000 with a 90% CI of 100 to 45,000.
See this Guesstimate model for calculations and reasoning.
Buchanan et al. (2021, p. 58) provide a point of comparison: "...creating enough content to equal in size to one percent of global Twitter activity would require hundreds of GPT-3s running 24/7 and would cost tens of millions of dollars per year." So my cloud-compute cost estimate ($160K) is about two orders of magnitude lower than theirs (~$10M). Their reasoning is not entirely clear, especially the calculation behind “hundreds of GPT-3s.” However, they seem to make the following different assumptions:
Based on the rough factors of difference in (2) and (3), I get 2 × 3 × 2 = 12x overall. So the two orders of magnitude difference seems mostly, but perhaps not entirely, explained by the differences in assumptions that I came up with.
This research is a project of Rethink Priorities. It was written by Ben Cottier. Thanks to Alexis Carlier, Amanda El-Dakhakhni, Ashwin Acharya, Ben Snodin, Bill Anderson-Samways, Erich Grunewald, Jack Clark, Jaime Sevilla, Jenny Xiao, Lennart Heim, Lewis Ho, Lucy Lim, Luke Muehlhauser, Markus Anderljung, Max Räuker, Micah Musser, Michael Aird, Miles Brundage, Oliver Guest, Onni Arne, Patrick Levermore, Peter Wildeford, Remco Zwetsloot, Renan Araújo, Shaun Ee, Tamay Besiroglu, and Toby Shevlane for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can explore our completed public work here.
The 150 million words is somewhat arbitrary. The number came about in my estimate of how many tokens the BLOOM model could generate when running continuously on an 8x 80GB A100 GPU instance for 24 hours, at a typical hardware utilization rate.
That said, my intuition is that the number of people who will actually learn how to run and then use BLOOM independently for some research or application, at any point in time since BLOOM was released, is much lower. My 90% CI for that number is 10 to 1000. I expect that most people who use BLOOM will use an API rather than run it themselves.
Note that there are other (perhaps stronger) reasons to focus on the model development stage.
Firstly, the forms of diffusion that help actors develop models pushes AI progress forward more than the forms of diffusion that help actors deploy models. Pushing AI progress forward is what shortens AI timelines and thereby increases AI existential risk.
Secondly, a lot of AI existential risk comes from misaligned power-seeking AI rather than misuse by humans. I expect that reducing diffusion of deployment would have a smaller effect on this source of risk.
The BLOOM announcement blog post states “We're finalizing an inference API for large-scale use even without dedicated hardware or engineering. In the meantime, for quick tests, prototyping, and lower-scale use, you can already play with an early version on the HF hub” (BigScience, 2022).
My confidence is based on (a) skimming the papers and/or blog posts for all GPT-3-like models in the diffusion database for mention of model access; (b) the 20-billion parameter GPT-NeoX-20B model being “to the best of our knowledge, the largest dense autoregressive model that has publicly available weights at the time of submission” as of February 2022 (Black et al., 2022); (c) none of the experts that I consulted with, nor papers that I looked at, mentioned other models that are both GPT-3-like and widely available for download. (I did not ask any experts about this directly, but several experts mentioned BLOOM and OPT, so it’s likely that they would have also mentioned other widely-accessible models if they existed.) YaLM from the Russian tech company Yandex is a possible exception (which was in fact known to me), but given that it has only 100 billion parameters, my guess is that it does not have comparable performance to GPT-3.
Throughout this sequence, “GPT-3” refers to the original 175-billion-parameter model that was first described in Brown et al. (2020) unless it is mentioned in the context of using the OpenAI API, which provides an updated version of the model.
See Shevlane (2022, p. 105): a member of the OpenAI policy team told the author that “[researchers] can't make any changes to the underlying weights [of GPT-3]. They can't fine-tune it arbitrarily. They can't remove layers, they can't inspect the activations; they can't do all sorts of things.”
See Usage Guidelines which describe the procedure for application review, and the content policy.
See Shevlane (2022, p. 84): “the [OpenAI] API is designed to prevent users from stealing GPT-3 [...] the API comes with usage quotas, which users must apply to increase.”
The cost of millions of dollars is based on my training compute cost estimates for OPT-175B and BLOOM. See this column in the diffusion database.
An example of this is AI21 Labs with Jurassic-1-Jumbo, provided via AI21 Studio (AI21 Labs, 2022).
GPT-NeoX-20B does not meet my definition of a GPT-3-like model, but it still serves as an informative case study.
See Zhang et al. (2022, p. 8): "Given our primary goal as a replication of GPT-3..."
See the note on this cell in the diffusion database. I have not investigated whether the lower performance is significant in terms of how useful the model is, and I lack the intuition to judge this at face value. Zhang et al. (2022, p. 8) claim a “parity in performance for standard evaluation datasets used in the GPT-3 models,” but I didn’t find a clear statistical basis for this claim in the paper.
See Zhang et al. (2022), Introduction, p. 1: "We are releasing all of our models between 125M and 66B parameters, and will provide full research access to OPT-175B upon request. Access will be granted to academic researchers; those affiliated with organizations in government, civil society, and academia; and those in industry research laboratories." The form to request model access includes fields for “Organization / Affiliation,” “Intended Use,” and “Previous related publications.”
Based on my estimated number of natural language processing researchers at top universities. I also estimate this number is less than the estimated number of applications that can be processed in one year. See this Guesstimate model for further details.
See Zhang et al. (2022), Introduction, p. 1: “will provide full research access to OPT-175B upon request.” I interpret this as making the OPT-175B trained model weight file(s) available for download to the requester.
See access request form: “Subject to your compliance with the Documentation and Sections 2, 3, and 5, Meta grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Meta’s copyright interests to reproduce, distribute, and create derivative works of the Software solely for your non-commercial research purposes.” Section 2 places restrictions on copying for certain purposes or copying without including the copyright, but not total restriction.
The search can be roughly replicated at this link, but I failed to obtain a working archived copy of the search.
I have not figured out when the API was released, but I only became aware of it in October 2022.
For the influence of Megatron-LM on BLOOM, see https://huggingface.co/bigscience/bloom#model-architecture-and-objective: the BLOOM model architecture is "Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code)". The BLOOM Megatron code (https://github.com/bigscience-workshop/Megatron-DeepSpeed) is "a detached fork of https://github.com/microsoft/Megatron-DeepSpeed, which itself is a fork of https://github.com/NVIDIA/Megatron-LM." The original Megatron-LM code was open-sourced to accompany Shoeybi et al. (2019).
For the influence of GPT-2 on Megatron-LM, see Shoeybi et al. (2019), Abstract, p.1: “...we train an 8.3 billion parameter transformer language model similar to GPT-2.”
For GPT-3 see the paper, p.14: “Although GPT-3’s training data is still primarily English (93% by word count), it also includes 7% of text in other languages.” For BLOOM, see model card: English is only 30.04% of the training data (presumably also measured by word count).
This is based on the following evidence. When I averaged the normalized accuracy on tasks that BigScience has evaluated for both BLOOM and OPT-175B, the two models both achieved approximately 47% accuracy. OPT-175B, in turn, had 2 percentage points lower accuracy on average compared to GPT-3, on the tasks that OPT-175B was evaluated on in Zhang et al. (2022, p. 17). So this suggests that BLOOM is similarly worse than GPT-3 on those tasks. A big caveat is that the set of tasks that BigScience has evaluated for both BLOOM and OPT-175B seems far from comprehensive. See this Colab notebook for the calculations and further explanation.
The 329 GB size was listed under “Checkpoint size” at https://huggingface.co/bigscience/bloom#speeds-sizes-times
Compute cost estimates are just based on cloud compute prices, and exclude the cost of other hardware such as a laptop to set up the cloud computing instance.
By “direct” I mean the people and skills that are required to set up the model and keep the model running in the deployment setting, excluding people that maintain software dependencies (e.g., PyTorch), or people that give advice on how to do deployment.
This means the deployment with the highest volume of model outputs that (a) would be possible for at least one actor to do by now if they tried; (b) is worth the cost—not necessarily in terms of financial revenue, but in achieving the actor's goal. See this Guesstimate model for calculations (the method is also explained in the main text).
Inference means passing data into the model and obtaining an output. This is also known as a “forward pass” of the model.
By “hosted” I mean that the organization stores the model on a server, and runs the model on hardware that is owned or rented by the organization.
See for example the OpenAI API Usage Guidelines which describe the procedure for application review, and the content policy.
The listed price for Davinci (which is presumably some version of the 175-billion parameter GPT-3 model) is $0.02 per 1000 tokens. 1000 tokens is roughly 750 English words based on this page. Therefore generating 150,000,000 words costs roughly 150e6 × $0.02 / 750 ≈ $4000.
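The arithmetic in this footnote can be written out as a small helper. The price and words-per-token conversion are the figures quoted above, which may be out of date.

```python
# Cost arithmetic from the footnote: $0.02 per 1000 tokens,
# ~750 English words per 1000 tokens (both figures quoted above).
PRICE_PER_1K_TOKENS = 0.02   # USD, listed Davinci price
WORDS_PER_1K_TOKENS = 750    # rough words-to-tokens conversion

def api_cost_usd(num_words: float) -> float:
    """Approximate API cost of generating num_words of English text."""
    thousands_of_tokens = num_words / WORDS_PER_1K_TOKENS
    return thousands_of_tokens * PRICE_PER_1K_TOKENS

print(api_cost_usd(150_000_000))  # ≈ $4000
```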
Credit to Buchanan et al. (2021) section 4 (starting p. 55) for the inspiration for this scenario.
Note that I am glossing over the actual capability of BLOOM to automate disinformation effectively. On this point (but substituting GPT-3 for BLOOM), Buchanan et al. (2021) concluded that “although GPT-3 will not replace all humans in disinformation operations, it is a tool that can help them to create moderate- to high-quality messages at a scale much greater than what has come before.” As I explained earlier, BLOOM seems less capable overall than GPT-3, so the quality of messages would generally be lower, or a human operator would need to spend more time ensuring the messages are high enough quality.
Again, I am not accounting for the likelihood of a GPT-3-size coding language model being able to improve ~1 million software developers’ productivity by 1-10%. However, I think this is plausible given that OpenAI Codex is an existing 20-billion-parameter model that is already being marketed as a tool to improve developer productivity (OpenAI, 2022). Intuitively, I think that users wouldn’t be willing to adopt Codex (or tools building on Codex) in the long-term if they didn’t expect to get an overall productivity improvement of 1% or more.
After I made these estimates, I obtained a reference class estimate. The reference class was the team working on GitHub Copilot, GitHub’s code suggestion tool powered by OpenAI Codex, which is a 20-billion parameter language model trained on code. I searched the term "GitHub copilot" on LinkedIn, filtered by "People", and then reviewed the first 4 pages of results for people that appeared to be currently working as engineers or developers for GitHub Copilot (after the 4th page, the results did not seem relevant enough to be worth continuing). I found 4 ML or Research Engineers, and 8 Software or Data Engineers, making 12 people in total. I think it's most likely that this LinkedIn search underestimates the true number of contributors, due to false negatives. This estimate is close to my intuitive estimate, but it should be taken as weak evidence due to being one case with a limited methodology. See this document for more details on the method. Due to time constraints, I did not use this evidence to update my final estimate.
The three months is just an intuitive estimate based on project durations in my 1.5 years of experience in software engineering at a company that deployed ML models.
See Leopold (2019) (reports 80-90%) and Barr (2019) (reports “up to 90%”).
See Guesstimate model for calculations.
Note: there is already an API to run inference with BLOOM here, but I think it’s useful to consider the general case where an actor deploys independently on a separate server, with less limit on usage.
See the BLOOM model card—“Speeds, Sizes, Times” section.
Should we expect a future where most people use GPT-like tools to generate text, but 90% of people use the models trained by 2 or 3 large companies?
This could allow amazing thought control of the population. If you want to suppress some ideas, just train your model to be less likely to generate them. As a consequence, the ideas will disappear from many people's articles, blogs, and school essays.
Many people will publicly deny or downplay their use of GPT, so they will unknowingly provide a cover for this manipulation. People will underestimate the degree of control if they e.g. keep believing that news articles are still written by human journalists, when in fact the job of the journalist will consist of providing a prompt, and then choosing the best of a few generated articles.
Similarly, bloggers who generate their texts will be much more productive than bloggers who actually write them. Yes, many readers will reward quality over quantity, but the quality can be achieved in ways other than writing the articles, for example figuring out interesting prompts (such as "explain Fourier Transform using analogies from Game of Thrones"), or using other tricks to give the blog a unique flavor.
What the companies need (and I do not know how difficult this would be technically) is to reverse-engineer why GPT produced certain outputs. For example, you train a model using some inputs. You ask it some questions, and select the inconvenient answers. Then you ask which input texts have contributed most strongly to generating the inconvenient answers. You remove those texts from the training set, and train a new model. This could even be fully automated if you can write an algorithm that asks the questions and predicts which answers would be inconvenient.
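The loop described here can be sketched in a few lines. All of the functions passed in (`train`, `generate`, `attribute_influence`, `is_inconvenient`) are hypothetical stand-ins; real training-data attribution (e.g., influence-function methods) is far more involved and expensive than this pseudocode-style sketch suggests.

```python
# Highly simplified sketch of the train -> probe -> attribute -> filter ->
# retrain loop from the comment. All callables are hypothetical stand-ins.
def filter_training_set(corpus, probe_questions, is_inconvenient,
                        train, generate, attribute_influence):
    model = train(corpus)
    # Probe the model and collect the "inconvenient" answers
    flagged = [answer
               for question in probe_questions
               for answer in [generate(model, question)]
               if is_inconvenient(answer)]
    # Estimate which training texts contributed most to the flagged answers
    to_remove = attribute_influence(model, corpus, flagged)
    # Drop those texts and train a new model on the filtered corpus
    cleaned = [doc for doc in corpus if doc not in to_remove]
    return train(cleaned)
```

As the commenter notes, the whole loop is automatable once `is_inconvenient` is itself a classifier, which is part of what makes the scenario worrying.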
Welcome to the glorious future where 99% of people support the party line on their social networks, because that is what their GPT-based comment-generating plugins have produced, and they were too lazy to change it.
I'm a bit surprised that you talk about someone needing a lot of expertise and training to be able to run BLOOM. Why is it so hard to use and not as easy to use as other open source software?
To be clear (sorry if you already understood this from the post): Running BLOOM via an API that someone else created is easy. My claim is that someone needs significant expertise to be able to run their own instance of BLOOM. I think the hardest part is setting up multiple GPUs to run the 176B parameter model. But looking back, I might have underestimated how straightforward it is to get the open-source code to run BLOOM working. Maybe it's basically plug-and-play as long as you get an appropriate A100 GPU instance on the cloud. I did not attempt to run BLOOM from scratch myself.
I recall that in an earlier draft, my estimate for how many people know how to independently run BLOOM was higher (indicating that it's easier). I got push-back on that from someone who works at an AI lab (though this person wasn't an ML practitioner themselves). I thought they made a valid point but I didn't think carefully about whether they were actually right in this case. So I decreased my estimate in response to their feedback.