I think even if we believe that plant-based and clean meat, as well as a change in attitudes, can get us to a world free of at least factory farming, it may be worth looking into these strategies as plans for what we might call worst-case scenarios: if it turns out that clean meat remains too expensive, plant-based alternatives fail to catch on, and a significant part of the population fails to be convinced by the ethical arguments.

I also think that those ideas may be more important in countries that are only just starting to build factory farms than in Western countries.

I think you raised a very important question, and I very much agree that one should be honest with oneself about what one truly cares about.

When it comes to the interventions you proposed, I am not really sure about their practicality. (2) sounds doable, but I'd guess that the side effects of losing the ability to feel strong pain are severe and would lead to self-harming behaviour and maybe increased fighting among the animals. But if it were possible to find a drug that could be administered to animals to reduce their suffering (maybe just in certain situations) without major side effects, that could in fact be an effective intervention and may be worth looking into, mainly because it wouldn't come with big costs for the corporations doing the farming. It may, however, help to sustain factory farming past the point at which it could otherwise have been abolished, which would probably cause more net suffering.

I don't know how much time it takes to breed animals that are radically different from current ones, and I'm generally a bit more sceptical about whether that is worth pursuing.

In general, the main problem with this way of fighting animal suffering is that most people concerned about animals wouldn't support it, and they probably would also have no problem admitting that they care about more than just reducing suffering. I think it's probably better to pursue strategies for reducing animal suffering that most people in the movement could get behind.

So I think there could be some value in researching this approach, but I am sceptical overall.

One thing you could try is a giving game. You could divide your listeners into small groups and give them a few charities to choose from, with a few bullet points of information on each. The charity that gets the most votes receives a previously agreed-upon amount of money from whatever source.

Another thing you could do is have them answer the questions of this quiz by 80,000 Hours about which social programmes actually work and which don't.

Both of those activities show that you can't really trust your intuition on these things and that deeper investigation is important, nicely demonstrating one of the core ideas of Effective Altruism.

You could also explain and discuss the drowning child thought experiment, but how well that works out likely depends strongly on the group you are talking to and how much they like discussing these kinds of questions.

Btw, if you haven't done so already, I'd recommend asking this question in the Effective Altruism Facebook group or the Effective Altruism Group Organizers Facebook group.

Those seem like really important distinctions. I have the feeling that people who don't think AI Alignment is super important, either implicitly or explicitly, only think of parochial alignment and not of holistic alignment, and just don't consider the former to be that difficult.

A Hansonian explanation for this may be that, say when it comes to dieting, people claim to want science-based, honest guidelines to help them lose weight, but actually just want to find some simple diet or trick that they can follow.

Constructing an elaborate guideline might be something a publicly funded organisation could do, but someone who wants to make money probably won't, because it would likely not sell too well.

I agree with others who commented here that the aesthetics of it aren't really that satisfying right now. But I think the system has the potential to be good overall, so I don't really want to turn it off. Maybe the differences should be less extreme?

Interesting overall, but more examples would have been helpful.

Very good explanation.

I actually prefer Eliezer Yudkowsky's formulation of the PD in The True Prisoner's Dilemma. It makes it feel less like an interesting game-theoretic problem and more like one of the core flaws in the human condition that might one day end us all. But for this post, I think the standard formulation was fine.