All of Yannick_Muehlhaeuser's Comments + Replies

Wirehead your Chickens

I think even if we believe that plant-based and clean meat, as well as changes in attitudes, can get us to a world free of at least factory farming, it may be worth looking into these strategies as plans for what we might call worst-case scenarios: if it turns out that clean meat remains too expensive, plant-based alternatives fail to catch on, and a significant part of the population fails to be convinced by the ethical arguments.

I also think that those ideas may be more important in countries that are only just building factory farms than in Western countries.

Wirehead your Chickens

I think you raised a very important question, and I very much agree that one should be honest with oneself about what one truly cares about.

When it comes to the interventions you proposed, I am not really sure about the practicality. (2) sounds doable, but I'd guess that the side effects of losing the ability to feel strong pain are severe and would lead to self-harming behaviour and maybe increased fighting among the animals. But if it were possible to find a drug that could be administered to animals to reduce their suffering (maybe just in certain situations) wi... (read more)

Yeah, most of my suggestions were semi-intentionally outside the Overton window, and the reaction to them is appropriately emotional. A more logical approach from an animal welfare proponent would be something along the lines of "People have researched various non-mainstream ideas before and found them all suboptimal, see this link ..." or "This is an interesting approach that has not been investigated much, I see a number of obvious problems with it, but it's worth investigating further." etc.

On the one hand, "it's proba... (read more)

How to intro Effective Altruism

One thing you could try would be a giving game. You could divide your listeners into small groups and give them a few charities to choose from, with a few bullet points of information on each. The charity that gets the most votes receives a previously agreed amount of money from whatever source.

Another thing would be to have them answer the questions of this quiz by 80,000 Hours about which social programs actually work and which don't.

Both of those activities show how you can't really trust your intuition on these things, and deeper inve... (read more)

Said Achmiz (8 karma, 4y)
Indeed; as a data point, the drowning child argument is one of the things that clarified my thinking about these things, and convinced me not to support EA.

Disambiguating "alignment" and related notions

Those seem like really important distinctions. I have the feeling that people who don't think AI alignment is super important either implicitly or explicitly only think of parochial alignment and not of holistic alignment, and just don't consider the former to be so difficult.

People who don't know much about AI have two templates, neither of which is an AI: the first is a conventional computer, the second is a human being. Following the first template, it seems obvious that AGIs will be obedient and slavish.

Recommendations vs. Guidelines

A Hansonian explanation for this may be that, say when it comes to dieting, people claim to want science-based, honest guidelines to help them lose weight, but actually just want to find some simple diet or trick to follow.

Constructing an elaborate guideline might be something that a publicly funded organisation could do, but someone who wants to make money probably won't do it, because it would likely not sell too well.

April Fools: Announcing: Karma 2.0

I agree with others who commented here that the aesthetics of it aren't really that satisfying right now. But I think the system has the potential to be good overall, so I don't really want to turn it off. Maybe the differences should be less extreme?

This could be a good thing to try. Make it more subtle and also add more levels. I notice that my comments are the same size as Qiaochu's, which is a little strange, since he has nearly 10x my karma. Honestly, only the karma he earned by commenting on my posts should count. Can the mod team look into this?

On the Loss and Preservation of Knowledge

Interesting overall, but more examples would have been helpful.

The Costly Coordination Mechanism of Common Knowledge

Very good explanation.

I actually prefer Eliezer Yudkowsky's formulation of the PD in The True Prisoner's Dilemma. It makes it feel less like an interesting game-theoretic problem and more like one of the core flaws in the human condition that might one day end us all. But for this post, I think the normal formulation was fine.