So I’ve noticed that some people here have thoughts on multi-agent models of the mind. I operate consistently, day-to-day, with a multi-agent model of my mind that I’ve developed out of my own internal experience. After experimenting with both perspectives, the multi-agent model feels intuitive and sense-making to me, whereas trying to see myself as a single agent feels wrong or confusing. Is it a metaphor? Is my model different, or is my brain different? Is this a symptom of something? Who knows! Sometimes I try to “tone this down” when explaining thoughts and feelings to others (mostly sticking to describing myself as though I were a single agent having conflicting thoughts/feelings), and it can genuinely feel weird, like I’m editing or translating in a way that loses important information.
I am primarily a consequentialist, but when it comes to animal rights my reasoning is not consequentialist first and foremost, which is why I am a vegan rather than simply avoiding eggs and chicken (which might be better from a consequentialist perspective, given the trade-offs explained here; my effort might be better spent on changing my lifestyle to donate more, etc.). I think "do not pay people to abuse and kill others on your behalf for the sake of convenience and pleasure" should be a strict rule. The ambiguity comes in once the justification changes from "gaining convenience and pleasure" to "avoiding misery" (from either physical health problems or having to manage your...