With KV caching, it costs almost exactly as many FLOPs to take 100 input tokens and generate 900 output tokens as it does to take 900 input tokens and generate 100 output tokens. However, each output token needs far more memory bandwidth than an input token: input tokens are processed in parallel in a single prefill pass, while every output token requires its own decode step, which has to read the model weights and the whole KV cache from memory just to produce that one token.
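A rough back-of-envelope sketch of that in Python (the 2 × params FLOPs-per-token figure is the usual rule of thumb; the model size and precision are made-up placeholders):

```python
# Sketch: FLOPs are ~symmetric in input/output split under KV caching,
# but each decode step must stream the weights from memory for one token.
# All numbers are illustrative assumptions, not measurements.

PARAMS = 7e9                    # assumed 7B-parameter model
BYTES_PER_PARAM = 2             # assumed fp16 weights
FLOPS_PER_TOKEN = 2 * PARAMS    # rule of thumb: ~2 FLOPs per parameter per token

def total_flops(n_in, n_out):
    # With KV caching, every token (input or output) passes through the
    # weights once, so to first order FLOPs depend only on the total count.
    return (n_in + n_out) * FLOPS_PER_TOKEN

print(f"{total_flops(100, 900):.2e} FLOPs for 100 in / 900 out")   # ~1.4e13
print(f"{total_flops(900, 100):.2e} FLOPs for 900 in / 100 out")   # ~1.4e13, same

# Per decode step (batch size 1), the weights alone mean this many bytes read
# from memory to emit a single output token:
weight_read_bytes = PARAMS * BYTES_PER_PARAM
print(f"{weight_read_bytes / 1e9:.0f} GB of weight reads per decode step")  # ~14 GB
```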
Got it, thanks!
But to process the 1001st input token, you also need the keys/values of all 1000 previous tokens in memory, forming the cache (it does happen in one step, though). And for each new output token, you surely don't dump the existing KV cache after each generation, only to load it again and append the extra KV vectors for the last generated token. So isn't the extra work for output tokens just that the KV cache is accessed, generated, and expanded one token at a time, and that's where the "more work" comes from?
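To make sure I picture it right, schematically something like this (a toy sketch with a stand-in "model", not any real framework's API):

```python
# Toy decode loop showing how a persistent KV cache is used: built once during
# prefill, kept resident, and extended by one entry per generated token.
import random

def fake_kv(token):
    # Stand-in for the per-layer key/value vectors of one token.
    return {"k": [token], "v": [token]}

def prefill(input_tokens):
    # All input tokens are processed in one parallel pass; the cache is built once.
    return [fake_kv(t) for t in input_tokens]

def decode_step(last_token, kv_cache):
    # Attention *reads* the whole existing cache (this read grows with context),
    # then appends only the new token's K/V entry -- nothing is rebuilt or reloaded.
    _ = len(kv_cache)                      # stand-in for attending over all cached tokens
    kv_cache.append(fake_kv(last_token))   # one new entry per output token
    return random.randint(0, 999)          # stand-in for sampling the next token

def generate(input_tokens, max_new_tokens):
    kv_cache = prefill(input_tokens)
    token = random.randint(0, 999)         # first output token comes from the prefill pass
    out = [token]
    for _ in range(max_new_tokens - 1):
        token = decode_step(token, kv_cache)
        out.append(token)
    return out, len(kv_cache)

out, cache_len = generate(list(range(1000)), 100)
print(len(out), "output tokens; cache now holds", cache_len, "token entries")  # 100, 1099
```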
Is there any reason why this would imply the commonly seen output:input token pricing ratio of something like 3:1?
I doubt the actual cost is linear. Linear pricing is probably used to make things easier and more predictable for customers.
Intuitively, it seems that output tokens should be more expensive. The autoregressive model has to run once for each output token, and as these runs progress, output tokens gradually become a part of the input (so the last token is generated with context being all input and almost all output).
But the exact formulas would depend on the implementation. I think they amortize their costs across all uses. The number of runs (the number of output tokens) multiplied by the (varying) cost of each run is unlikely to be close to linear.
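As a toy illustration of the non-linearity (counting only the attention reads over the growing KV cache; the constants are arbitrary, only the shape matters):

```python
# Toy count of how much KV cache gets read across a whole generation.
def decode_cache_reads(n_in, n_out):
    # Decode step t attends over n_in + t previously cached tokens.
    return sum(n_in + t for t in range(n_out))

print(decode_cache_reads(1000, 100))    #   104_950 token-reads
print(decode_cache_reads(1000, 200))    #   219_900  -> ~2.1x for 2x the output
print(decode_cache_reads(1000, 1000))   # 1_499_500  -> ~14x  for 10x the output
```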
Thanks for the answer, I appreciate it!
Intuitively, it seems that output tokens should be more expensive. The autoregressive model has to run once for each output token, and as these runs progress, output tokens gradually become a part of the input (so the last token is generated with context being all input and almost all output).
I agree with the intuition, but I think that's where I am confused. Thanks to the KV cache, we do not run the whole new sequence (previous sequence + last generated token) through the transformer layers (as we do for the input sequence...
Hi,
I am trying to understand the difference in cost between processing a single input token and producing a single output token.
Based on some articles, I came to the following conclusion:
I want to be able to estimate the cost of processing a single token, and I cannot wrap my head around this. I made theoretical estimates based on GPU rental price, and separately based on power consumption (assuming some utilization, such as 10%), and I believe I somehow need to differentiate between input and output tokens here.
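The kind of estimate I'm attempting looks roughly like this (every number below is a placeholder assumption, not a measurement):

```python
# Back-of-envelope cost-per-token estimate from GPU rental price and throughput.
GPU_PRICE_PER_HOUR = 2.0      # assumed $/hour for renting one GPU
UTILIZATION = 0.10            # assumed fraction of time spent doing useful work
PREFILL_TOK_PER_S = 5000.0    # assumed input-token throughput (parallel prefill)
DECODE_TOK_PER_S = 500.0      # assumed output-token throughput (sequential decode)

def cost_per_token(tokens_per_second):
    effective_rate = tokens_per_second * UTILIZATION    # tokens actually served per second
    return GPU_PRICE_PER_HOUR / 3600 / effective_rate   # dollars per token

print(f"input  token: ${cost_per_token(PREFILL_TOK_PER_S):.2e}")  # ~1.1e-06
print(f"output token: ${cost_per_token(DECODE_TOK_PER_S):.2e}")   # ~1.1e-05
# With these made-up throughputs an output token comes out ~10x more expensive,
# so the prefill/decode throughput gap seems to be the number I need to pin down.
```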
In one tweet from LeptonAI, who host these LLMs, I also saw that there are usually 3-10 times more input tokens than output tokens. Again, if input tokens dominate the sequence and FLOPs were the issue, I would expect that to be reflected in the pricing. I'm not sure what role it plays in these calculations so far.
Any help is appreciated, thanks!