Measuring the composition of fryer oil at different times certainly seems like a good way to test both the original hypothesis and the effect of altitude.
You're right, my original wording was too strong. I edited it to say that it "agrees with" so many diets instead of "explains why they work."
One thing I like about the PUFA breakdown theory is that it agrees with aspects of so many different diets.
Edit: I originally wrote "neatly explains why so many different diets are reported to work."
If this were true, how could we tell? In other words, is this a testable hypothesis?
What reason do we have to believe this might be true? Because we're in a world where it looks like we're going to develop superintelligence, so it would be a useful world to simulate?
From the latest Conversations with Tyler interview of Peter Thiel
I feel like Thiel misrepresents Bostrom here. Bostrom doesn’t really want a centralized world government or think that’s "a set of things that make sense and that are good". He’s forced into advocating world surveillance not because he thinks it’s good but because it’s the only alternative he sees to dangerous ASI being deployed.
I wouldn’t say he’s optimistic about human nature. In fact, it’s almost the opposite: he thinks we’re doomed by our nature to create that which will destroy us.
Three questions:
This is fantastic. Thank you.
Thanks! I added a note about LeCun's 100,000 claim and just dropped the Chollet reference since it was misleading.
Thanks for the correction! I've updated the post.
To me the strongest evidence that fine-tuning is based on LoRA or something similar is that pricing is based only on training and input/output tokens and doesn't factor in the cost of storing your fine-tuned models. Llama-3-8B-Instruct is ~16 GB (I think this ought to be roughly comparable, at least in the same ballpark). You'd almost surely care about that cost if you were storing that much data for each fine-tune.
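For a rough sense of why the storage would matter, here's a back-of-the-envelope sketch in Python. All the shape and rank numbers are my own illustrative assumptions (Llama-3-8B-ish dimensions, a commonly used LoRA rank), not anything the provider has published:

```python
# Back-of-the-envelope: storage for a full fine-tune vs. a LoRA adapter.
# All shapes and the rank below are illustrative assumptions, not published figures.

def full_finetune_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Storage for a complete copy of the weights (fp16/bf16 = 2 bytes/param)."""
    return n_params * bytes_per_param / 1e9

def lora_adapter_gb(n_layers: int, d_model: int, rank: int,
                    matrices_per_layer: int = 4, bytes_per_param: int = 2) -> float:
    """Storage for just the LoRA factors.

    Each adapted weight matrix gets two low-rank factors, A (d_model x rank)
    and B (rank x d_model); square projections are assumed for simplicity.
    """
    lora_params = n_layers * matrices_per_layer * 2 * d_model * rank
    return lora_params * bytes_per_param / 1e9

# Llama-3-8B-ish assumptions: 8B params, 32 layers, d_model = 4096, rank 16.
print(f"full fine-tune: ~{full_finetune_gb(8e9):.0f} GB per customer")      # ~16 GB
print(f"LoRA adapter:   ~{lora_adapter_gb(32, 4096, 16) * 1e3:.0f} MB")     # ~34 MB
```

Under those assumptions an adapter is tens of megabytes, roughly three orders of magnitude smaller than a full copy of the weights, so per-customer storage becomes cheap enough that leaving it out of the pricing would be unsurprising.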