WilliamKiely

IMO the largest trade-offs of being vegan for most people aren't health trade-offs, but other things, like the increased time/attention cost of identifying which foods are non-vegan. Living in a place where there's a ton of non-vegan food available at grocery stores and restaurants makes getting food more of a pain than it is when you're not paying close attention to what's in your food. (I'm someone without any food allergies, and I imagine being vegan is about as annoying as having certain food allergies.)

That being said, it also seems to me that the vast majority of people's diets are not well optimized for health. Most people also care about convenience, cost, taste, and other factors. My intuition is that if we took a random person and said "hey, you have to go vegan, let's try to find a vegan diet that's healthier than your current diet," we'd succeed the vast majority of the time, simply because most people don't eat very healthily. That said, the random person would probably prefer a vegan diet optimized for factors beyond health alone over one optimized for health only.

I only read the title, not the post, but I wanted to leave a quick comment to say I agree that veganism entails trade-offs, and that health is one of the axes. Also note that I've been vegan since May 2019 and lacto-vegetarian since October 2017, for ethical reasons rather than environmental, health, or other preference reasons.

It has long been obvious to me (since before I changed my diet) that your title statement is true, since a priori it seems very unlikely that the optimal diet for health is one that contains exactly zero animal products, given that humans are omnivores. One doesn't need to be informed about nutrition to make that inference.

Probability that most humans die because of an AI takeover: 11%

This 11% is for "within 10 years" as well, right?

Probability that the AI we build doesn’t take over, but that it builds even smarter AI and there is a takeover some day further down the line: 7%

Does "further down the line" here mean "further down the line, but still within 10 years of building powerful AI"? Or do you mean it unqualified?

But to my mind, such a scenario is implausible (much less than one percent probability overall) because it stacks up too many unlikely assumptions in terms of our prior experiences with related systems.
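(For intuition on where a "much less than one percent" number can come from: the stacking argument is presumably multiplicative. With made-up numbers, if each of six independent assumptions were assigned 50% probability, the conjunction gets 0.5^6 ≈ 1.6%, and lower per-assumption odds or more assumptions quickly push it below 1%. So the crux is whether the individual assumptions really are unlikely, and really are independent.)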

You mentioned 5-6 assumptions. I think at least one isn't needed (that the goal changes as the AI self-improves), and I disagree that the others are (all) unlikely. E.g., agentic, non-tool AIs are already here, and more will be coming (foolishly). Taking a point I just heard from Tegmark on his latest Lex Fridman podcast interview: once companies add APIs to systems like GPT-4 (and I'm worried about open-sourced systems that are as powerful or more powerful in the next few years), it will be easy for people to create AI agents that use the LLM's capabilities by calling it repeatedly.
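To make that concrete, here's a minimal sketch of such an agent loop, assuming the OpenAI Python client (openai>=1.0) and an OpenAI-style chat-completions API; the system prompt, goal string, and act() stub are hypothetical placeholders, not anyone's actual agent design:

```python
# Minimal sketch of an "agent" built by calling an LLM API in a loop.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in OPENAI_API_KEY; the goal and act() stub are hypothetical.
from openai import OpenAI

client = OpenAI()

def act(action: str) -> str:
    """Hypothetical stub: carry out the model's proposed action and
    return an observation (a real agent might run code, browse, etc.)."""
    return f"(observation after doing: {action})"

history = [
    {"role": "system",
     "content": "You are an agent. Propose one next action toward the goal."},
    {"role": "user", "content": "GOAL: <some open-ended goal>"},
]

for _ in range(5):  # a fixed number of steps, just for the sketch
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    action = reply.choices[0].message.content
    history.append({"role": "assistant", "content": action})
    # Feed the result back in as the next user turn, closing the loop.
    history.append({"role": "user", "content": act(action)})
```

The point is how little scaffolding this takes: a short loop that feeds the model's own proposals back in as observations is already an agent, which is why API access (or open-sourced weights) makes agentic wrappers nearly free to build.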

This is the fear of “foom,”

I think the popular answer to this survey also includes many slow-takeoff, no-foom scenarios.

And then, when humans are worth more to the advance of this AI’s radically changed goals as mere atoms than for all the things we can do, it simply kills us all.

I agree with this, though again I think the "changed" can be omitted.

Secondly, I also think it's possible that, rather than the unaligned superintelligence killing us all in the same second, as EY often says, it may kill us off in the manner that humans kill off other species (i.e., we know we are doing it, but it doesn't look like a war).

Re my last point, see Ben Weinstein-Raun's vision here: https://twitter.com/benwr/status/1646685868940460032
