It's a good metaphor, but I think one important aspect this misses is overfitting: when you have a lot of parameters, the NN can literally memorise a small training set until it gets near-100% on the training set while doing no better than chance on the test set. Whereas a smaller model is forced to learn the underlying structure, so it generalises better.
Hence larger models need a much larger training set even to match smaller models, which is another disadvantage of larger models (besides the higher per-token training and inference costs).
OK, in that case you're basically just referring to SSA vs SIA. That's an old chestnut, and either way leads to seemingly paradoxical results.
There is no other deck of cards here. There's no copy of me to compare myself to, and say how curious that looks exactly like me.
That's like saying that every game of cards must be rigged, because otherwise the chance of getting this particular card order is minuscule...
Do we have the numbers?
Making up random numbers (I've done zero research)
Then if this all happens, it roughly doubles funding for EA-related causes. Is a reasonable chance of that happening worth front-loading donations for?
Fair enough. I found it unreadable in a way I associate with AI (lots of dense words, but tricky to extract the content from them), and the em dashes are something of a giveaway.
Given how much slop there is, I do appreciate it if people clarify what they used AI for, because I don't want to wade through a ton of slop that wasn't even human-written.
Thanks for replying.
Hi, was this post written by, or with assistance from, AI?
Thanks
What about persuading politicians that AI safety is a cause that will win them votes? That requires very broad-spectrum outreach to get as many ordinary people on board as possible.
That assumes you're in a better position to make such investments than the charity is?