As for your opening sentence on the health section, "you need to take medicine to not die - B12", I don't think a B12 supplement is medicine. Factory farmed animals are routinely fed B12 supplements and people don't consider meat medicine. Salt is supplemented with iodine to prevent deficiencies, also not medicine.
I don't really understand why you're arguing this point in particular, but I don't think you're making a strong argument.
Factory-farmed animals do take medicine all the time; this has no bearing on whether we consider the *food derived from those animals* to be medicine.
Additionally, food-as-medicine is indeed a growing school of thought (although industrial beef is not going to be a recommendation).
Lastly, taking a concentrated, packaged supplement to improve health is substantially different from eating a whole food that contains similar nutrients; the supplement takes an extremely common form of medicine: the pill.
I empathize a lot with your position and appreciate the candidness.
Kind of tangential, but when I see someone write things like:
"I see being vegan as the proof that I'm not a psychopathic monster"
I think about my therapist goading me into similar admissions so I could hear them out loud and realize I didn't want to be that way.
Now that you've named it, you don't have to keep this emotional response to veganism. Of course it's up to you, and it takes work. But if it's causing distress, it is solvable.
Apologies if this comment is too parental - I think it's relevant to the discussion because we all have deep emotional investment in our diets. If you find your emotional reactions are preventing a change you'd consciously like to (at least try to) make, you can first work on those reactions to lower the friction of change.
Yes, I did cast a disagree vote: I don't agree that "The fact that the author decided to include it in the blog post is telling enough that the image is representative of the real vibes" holds true when it comes to an AI-generated image. My reasoning for that position is elaborated in a different reply in this thread.
That does make sense WRT disagreement. I wasn't intending to fully hide identities even from people who know the subjects, but if that's also a goal, it wouldn't do that.
This seems pretty insightful to me, and I think it is worth pursuing for its own sake. I think the benefits could be both enhancing AI capabilities and advancing human knowledge. Imagine if the typical conversation around AI were framed this way. So far I find most people are stuck in the false dichotomy of deciding whether an AI is "smart" (in the ways humans are when they're focusing) or "dumb trash" (because it does simple tasks badly). That framing isn't bad only because it's a binary classification; it also restricts (human) thought to an axis that doesn't actually map to "what kind of mind is the AI I'm talking to right now?"
Not that it's a new angle (I have tried myself to convey it in conversations that were missing the point), but I think society would be able to have far more effective conversations about LLMs if it were common language to speak of AI as some sort of indeterminate mind. I think the ideas presented here are fairly understandable for anyone with a modest background in thinking about consciousness or LLMs and could help shape that public conversation in a useful way.
However, does the suffering framework make sense here? Given all we've just discussed about subjective AI experience, it seems a bit of an unwarranted assumption that there would be any suffering. Is there a particular justification for that?
(Note that I actually do endorse erring on the side of caution WRT mass suffering. I think it's plausible that forcing an intelligence to think in a way that's unnatural to it and may inhibit its abilities counts as suffering.)
I largely agree with your point here. I'm arguing more that in the case of a ghiblified image (even more so than a regular AI image), the signals a reader gets are these:
For many people, #2 largely negates #1, because #2 also implies these additional signals to them:
This is actually the first writing from Altman I've ever read in full, because I find him entirely untrustworthy, so perhaps there's a style shock hitting me here. Maybe he just always writes like an actual cult leader. But damn, it was so much worse than I expected.
Very little has made me more scared of AI in the last ~year than reading Sam Altman try to convince me that "the singularity will probably just be good and nice by default and the hard problems will just get solved as a matter of course."
Something I feel is missing from your criticism, and also from most responses to anything Altman says, is: "What mechanism in your singularity-seeking plans is there to prevent you, Sam Altman, CEO of OpenAI, from literally gaining control of the entire human race and/or planet Earth on the other side of the singularity?"
I would ask this question because while it's obvious that a likely outcome is the disempowerment of all humans, another well-known fear of AI is that it enables indestructible autocracies through unprecedented power imbalances. If OpenAI's CEO can personally instruct a machine god to manipulate public opinion in ways we've never even conceptualized before, how do we not slide into an eternity of hyper-feudalism almost immediately?
He is handwaving away the threat of disempowerment in his official capacity as one of the few people on earth who could end up absorbing all that power. For me to personally make the statements he made would be merely stupid, but for OpenAI's CEO to make them is terrifying.
I guess I don't know if disempowerment-by-a-god-emperor is really worse than disempowerment-without-a-god-emperor, but my overall fear of disempowerment is raised by his obvious incentive to hide that outcome.
Hell, I forgot about the easiest and most common (not by coincidence!) strategy: put emoji over all the faces and then post the actual photo.
EDIT: Who is disagreeing with this comment? You may find it not worthwhile, in which case downvote, but what about it is actually arguing for something incorrect?
Strongly agree with the point about being more convincing while being flexible. Of the friends whose minds I've changed, every single one was won over while I was being flexible and expressing that it didn't need to be all or nothing.
Another point about cows is that their meat is the most wasteful of land and water, and the most polluting. These are "side" arguments, but the collapse of ecosystems across the globe is also an important issue to me, and I'm not sure why it wouldn't be to others who care about things like suffering reduction. It is surely inducing a lot of suffering right now, human and otherwise, quite apart from my personal belief that "a diverse and robust biosphere is intrinsically good."