[ Question ]

Supposing the 1bit LLM paper pans out

by O O
29th Feb 2024

https://arxiv.org/abs/2402.17764 claims that 1-bit LLMs are possible.

If this scales, I'd imagine there is a ton of speedup to unlock, since our hardware has been optimized for 1-bit operations for decades. What does this imply for companies like Nvidia and the future of LLM inference/training?

Do we get another leap in LLM capabilities? Do CPUs become more useful? And can this somehow be applied to make training more efficient?

Or is this paper not even worth considering for some obvious reason I can't tell?

Edit: this method is applied to training already.

5 Answers, sorted by top scoring

mtaran

Mar 02, 2024


I think this could be a big boon for mechanistic interpretability, since it can be a lot more straightforward to interpret a bunch of {-1, 0, 1}s than reals. Not a silver bullet by any means, but it would at least peel back one layer of complexity.

[-] Thomas Kwa

It could also be harder. Say that 10 of the bits in current 16-bit parameters are useful; then to match that capacity you would need about 6 ternary parameters, which could be hard to find or could interact in unpredictable ways.

[-] mtaran
Perhaps if you needed a larger number of ternary weights, but the paper claims to achieve the same performance with ternary weights as one gets with 16-bit weights using the same parameter count.
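
For reference, a quick back-of-the-envelope check on the "6 ternary parameters" figure above: a ternary weight carries at most log2(3) ≈ 1.58 bits, so the premise of ~10 useful bits per 16-bit parameter (an assumption in the comment, not a measured number) works out to roughly 6-7 ternary weights.

```python
import math

bits_per_ternary_weight = math.log2(3)   # ≈ 1.585 bits of raw capacity per {-1, 0, 1} weight
useful_bits_per_fp16_weight = 10         # assumption from the comment above, not a measurement

print(f"{bits_per_ternary_weight:.3f} bits per ternary weight")
print(f"{useful_bits_per_fp16_weight / bits_per_ternary_weight:.2f} ternary weights to match 10 useful bits")
# -> 1.585 bits per ternary weight
# -> 6.31 ternary weights to match 10 useful bits
```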

Vladimir_Nesov

Feb 29, 2024*


The paper is not about post-training quantization; instead it's quantization-aware training (this is discussed more clearly in the original BitNet paper). The representation is ternary {-1, 0, 1} from the start: the network learns to cope with that constraint throughout pre-training instead of being subjected to the brain damage of quantization after training.

Compare this with

  • BD Rouhani et al. (Oct 2023) Microscaling Data Formats for Deep Learning

where the Microscaling block number format is used to train a transformer at essentially 4 bits per weight, achieving the same perplexity as with 32-bit floating-point weights (see Figure 4 on page 7). If perplexity doesn't change for quantization-aware training when going down to 4 bits, it's not too shocking that it doesn't significantly change at 1.6 bits either.
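
To make the contrast with post-training quantization concrete, here is a minimal sketch of quantization-aware training with ternary weights: the forward pass only ever sees weights rounded to {-1, 0, 1} (times a per-tensor scale), while a full-precision master copy receives the gradient updates through a straight-through estimator. This is only an illustration of the mechanism, not the exact BitNet b1.58 recipe (which also quantizes activations to 8 bits, among other details), and the layer and training-step code below is made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TernaryLinear(nn.Module):
    """Linear layer trained quantization-aware: the forward pass uses weights
    rounded to {-1, 0, 1} (times an absmean scale), while the full-precision
    master weights receive gradients via a straight-through estimator."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(0.02 * torch.randn(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        scale = w.abs().mean().clamp(min=1e-5)           # per-tensor absmean scale
        w_q = (w / scale).round().clamp(-1, 1) * scale   # ternary {-1, 0, 1}, rescaled
        w_q = w + (w_q - w).detach()                     # straight-through estimator
        return F.linear(x, w_q)

# Toy training step on random data, just to show the mechanics.
torch.manual_seed(0)
layer = TernaryLinear(16, 4)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x, y = torch.randn(8, 16), torch.randn(8, 4)
loss = F.mse_loss(layer(x), y)
loss.backward()   # gradients flow to the full-precision master weights
opt.step()
```

Note that nothing here makes training itself cheaper; the savings show up at inference time, once only the ternary weights need to be stored and used.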


Tomás B.

Feb 29, 2024


This is applied during training. It's not a post-training quantization method.


Tomás B.

Mar 02, 2024


@Veedrac suppose this pans out and custom hardware is made for such networks. How much faster/larger/cheaper will this be?

[-] Veedrac

Communication overhead won't drop faster than linearly.


lemonhope

Feb 29, 2024*


I don't think it can be patched to make training itself 1.58-bit (95% confident). I think training (not inference) is where most of the money goes to and comes from, so the hardware market will not be affected (90%).

Even in the small inference market, chip companies already have 4-8 bit inference accelerators in the oven (99%); they will not judge the estimated benefits of 1.58-bit to be worth the risk of such specialized hardware, so nobody will build more than 100 1-bit or 1.58-bit inference chips (80%).

Old-fashioned CPUs have at most 32 threads, so they will still be slow as heck at running NNs (90%).

I think your question is quite important.

[-] Fergus Argyll

If I understand correctly (I very well might not), a "one-bit LLM" has to be trained as a "one-bit LLM" in order to then run inference on it as a "one-bit LLM". I.e., this isn't a new quantization scheme.

So I think training and inference are tied together here, meaning: if this replicates, works, etc., we will probably have new hardware for both stages.

[-] lemonhope
I don't see them mention anything about training efficiency anywhere, so I don't think it is really legit 1.58-bit training in a meaningful sense.
[-] Vladimir_Nesov
Training doesn't become more efficient: gradients and activations are still full precision, and I'm guessing there is a full-precision copy of the weights maintained during training (in addition to the quantized weights used for forward passes). The advantage is that this method of training produces a quantized model that has the same quality as a non-quantized model (unlike post-training quantization, which makes models worse).

Additionally, the {-1, 0, 1} quantization means you need much less multiplication circuitry for inference, so the potential of inference chips is not just that there is less memory, but also that there is less energy and fewer transistors, significantly raising the practical ceiling for local (on-device) inference.

It's apparently not a novel idea; quantization-aware training was explored before there were transformers:

  • P Merolla, R Appuswamy, J Arthur, SK Esser, D Modha (2016) Deep neural networks are robust to weight binarization and other non-linear distortions
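
As a small illustration of the "much less multiplication circuitry" point: a matrix-vector product against a {-1, 0, 1} weight matrix reduces to adding and subtracting activations, with no weight multiplications at all. The toy sketch below only demonstrates that identity; real inference kernels are of course organized very differently.

```python
import numpy as np

def ternary_matvec(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """y = W @ x for a {-1, 0, 1} weight matrix, using only additions and
    subtractions: each output is (sum of x where the weight is +1) minus
    (sum of x where the weight is -1)."""
    return np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8)).astype(np.int8)   # ternary weight matrix
x = rng.standard_normal(8).astype(np.float32)          # activations

# Matches the ordinary floating-point matmul up to rounding error.
assert np.allclose(ternary_matvec(W, x), W.astype(np.float32) @ x, atol=1e-5)
```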