The hypothesis that a breakthrough had happened crossed my mind, but the one thing that didn't fit this narrative was Ilya regretting his decision. If such a new technology has emerged and he thinks Sam is not the man to guide the company in this new scenario, why would he reverse his position? It doesn't seem wise given what is assumed to be at stake.
The Adventist Study is interesting, but surely not definitive. It's not even an RCT. To my knowledge, the only respectable RCT ever conducted on a dietary pattern is PREDIMED. It doesn't test veganism specifically (the treatment group adopts a Mediterranean diet), but it does strengthen the association between plant-based diets and lower all-cause mortality risk. Overall, I think the evidence against meat is only suggestive, but since these trials are so time-consuming and expensive to run, I don't expect much further light from studies using endpoints like mortality, heart attacks, strokes, etc. I think epigenetic clocks will in the future be a much faster way to analyze the effects of dietary interventions, and I suspect plant-based diets will show an advantage over other dietary patterns.
How is health a trade-off when the longest-living populations are the ones eating mostly plant-based diets?
Now, with epigenetic clocks, we can see how dietary modifications impact health more broadly, and they do show that the nutritionist consensus (something like a Mediterranean or plant-based diet) points in the right direction. Your skepticism isn't well founded, in my opinion.
Well, Eliezer is the one making extraordinary claims, so I think I am justified in applying a high dose of skepticism before evidence of AI severely acting against humanity's best interest pops up.
That's still a theoretical problem; something we should consider but not overly update on, in my opinion. Besides, can you think of any technology whose development people foresaw and for which specialists successfully planned a framework before implementation? That wasn't the case even with nuclear bombs.
That's exactly my point. We don't even know what these future technologies will look like. Gain-of-function research has potential major negative effects right now, so I think it's reasonable to be cautious. AI is not currently at that point. It may be in the future, but by then we will be better equipped to deal with it and to assess the risk-benefit profile we are willing to put up with.
AI risk is still at another level of concern. If you ask me to list what can go wrong with gain-of-function research, I can probably cite a lot of things. But if you ask me what dangers LLMs pose to humanity, my list will be much more innocuous.
I am with him on this. The level of AI alarmism being put forward, especially in this community, is uncalled for. I was just reading Yudkowsky and Scott's chat exchange, and all the doom arguments I picked up were of the form "what if?". How about we just return to the way we do engineering: keep building and innovating and deal with negative side effects along the way?