I think using the term "training run" in that first bullet point is misleading, and "renting the compute" is confusing: you can't actually rent the compute just by having $60M; you likely need a multi-year contract.
I can't tell if you're attributing the hot takes to me? I do not endorse them.
This is because I'm specifically talking about 2022, and ChatGPT was only released at the very end of 2022, and GPT-4 wasn't released until 2023.
Good catch, I think the 30x came from including the advantage given by tensor cores at all and not just lower precision data types.
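As a rough sanity check on where a ~30x figure could come from, here's the arithmetic using NVIDIA's published A100 peak-throughput numbers (datasheet values; these are theoretical peaks, and realized training throughput is far below them, so treat the ratios as illustrative):

```python
# NVIDIA A100 datasheet peak throughput, in TFLOPS.
fp32 = 19.5                 # standard FP32 on CUDA cores (no tensor cores)
tf32_tensor = 156.0         # TF32 on tensor cores
fp16_tensor = 312.0         # FP16/BF16 on tensor cores
fp16_tensor_sparse = 624.0  # FP16 tensor cores with 2:4 structured sparsity

# Speedup over plain FP32, depending on what you count:
print(f"tensor cores (TF32):      {tf32_tensor / fp32:.0f}x")
print(f"tensor cores + FP16:      {fp16_tensor / fp32:.0f}x")
print(f"+ structured sparsity:    {fp16_tensor_sparse / fp32:.0f}x")
```

So a ~30x figure plausibly requires crediting the tensor cores (and sparsity) themselves, not just the move to lower-precision data types, which is consistent with the point above.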
This is probably the decision I'm least confident in. Figuring out how to do the accounting on this issue is challenging, and depends a lot on what one is going to use the "cost" of a training run to reason about. Some questions I had in mind when thinking about cost:
The simple initial way I compute cost, then, is to investigate empirical evidence of companies' expenditures and investment.
Now, these numbers aren't the same ones a company might care about - they represent expenses without accounting for likely revenue. The argument I find most tempting is that one should look at depreciation cost instead of capital expenditure, effectively subtracting the expected resale value of the hardware from the initial expenditure to purchase the hardware. I have two main reasons for not using this:
Having said all of this, I'm still not confident I made the right call here.
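To make the difference between the two accounting choices concrete, here's a minimal sketch (the dollar figures are purely hypothetical, chosen only for illustration):

```python
def capex_cost(hardware_purchase: float) -> float:
    """Cost as capital expenditure: what you actually pay up front."""
    return hardware_purchase

def depreciation_cost(hardware_purchase: float, expected_resale: float) -> float:
    """Cost as depreciation: up-front spend minus expected resale value."""
    return hardware_purchase - expected_resale

# Hypothetical numbers for illustration only.
purchase = 60e6   # up-front hardware expenditure
resale = 20e6     # expected resale value after the training run

print(capex_cost(purchase))                  # -> 60000000.0
print(depreciation_cost(purchase, resale))   # -> 40000000.0
```

The gap between the two numbers is exactly the expected resale value, which is why the choice of accounting method matters so much for headline "cost of a training run" figures.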
Also, I am relatively confident GPT-4 was trained only with A100s, and did not use any V100s as the Colab notebook you linked speculates. I expect that GPT-3, GPT-4, and GPT-5 will all have been trained with different generations of GPUs.
So, it's true that NVIDIA probably has very high markup on their ML GPUs. I discuss this a bit in the NVIDIA's Monopoly section, but I'll add a bit more detail here.
All this aside, my basic take is that I think "what people are actually paying" is the most straightforward and least speculative means we have of defining near-term "cost".
I think communicating clearly with the word "woman" is entirely possible for many given audiences. In many communities, there exists an internal consensus as to what region of the conceptual map the word "woman" refers to. The variance of language between communities isn't confined to the word "woman" - in much of the world the word "football" means what Americans mean by "soccer". Where I grew up, I understood the tristate area to be NY, PA, and NJ - however, the term "the tristate area" is understood by other groups to mean one of ... a large number of options.
(Related point: I'm not at all convinced that differing definitions of words is a problem that needs a permanent solution. It seems entirely plausible to me that this allows for beneficial evolution of language as many options spawn and compete with each other.)
Manifold.markets is play-money only - no real money required. And users can settle the markets they make themselves, so if you make the market you don't have to worry about loopholes (though you should communicate as clearly as possible so people aren't confused by your resolution decisions).
I'm specifically interested in finding something you'd be willing to bet on. I can't find an existing Manifold market - would you be willing to create one that you resolve yourself? I'd be fine trusting your judgment.
I'm a bit confused about where you're getting your impression of the average person / American, but I'd be happy to bet on LLMs at least as capable as GPT-3.5 being used (directly or indirectly) on at least a monthly basis by the majority of Americans within the next year.
I talk about this in the Granular Analysis subsection, but I'll elaborate a bit here.