
> 1%? Shouldn't your basic uncertainty over models and paradigms be great enough to increase that substantially?

I think it's about a 0.75 probability, conditional upon smarter-than-human AI being developed. Guess I'm kind of an optimist. TL;DR: I don't think it will be very difficult to impart your intentions to a sufficiently advanced machine.

I haven't seen any parts of GiveWell's analyses that involve looking for the right buzzwords. Of course, it's possible that certain buzzwords subconsciously manipulate people at GiveWell in certain ways, but the same can be said of any group, because every group has some sort of values.

> Why do you expect that to be true?

Because they generally emphasize these values and practices when others don't, and because they are part of a common tribe.

> How strongly? ("Ceteris paribus" could be consistent with an extremely weak effect.) Under what criterion for classifying people as EAs or non-EAs?

Somewhat weakly, but not extremely weakly. Obviously there is no single clear criterion; it's just a matter of people's philosophical values and individual commitment. At most, I think that being a solid EA is about as important as having a couple of additional years of relevant experience or schooling.

I do think that if you had a research-focused organization where everyone was an EA, it would be better to hire outsiders at the margin, because of the problems associated with homogeneity. (This wouldn't be the case for community-focused organizations.) I guess it just depends on where they are right now, which I'm not too sure about. If you're only going to have one person doing the work, e.g. with an EA fund, then it's better for it to be done by an EA.

I bet that most of the people who donated to GiveWell's top charities were, for all intents and purposes, assuming their effectiveness in the first place. From the donor end, assumptions were being made either way (and they must be; it's impractical to do that kind of evaluation entirely on one's own).

I think EA is something very distinct in itself. I do think that, ceteris paribus, it would be better to have a fund run by an EA than a fund not run by an EA. Firstly, I have a greater expectation for EAs to trust each other, engage in moral trades, be rational and charitable about each other's points of view, and maintain civil and constructive dialogue than I do for other people. And secondly, EA simply has the right values. It's a good culture to spread, which involves more individual responsibility and more philosophical clarity.

Right now it's embryonic enough that everything is tied closely together. I tentatively agree that that is not desirable. But ideally, growth of thoroughly EA institutions should lead to specialization and independence. This will lead to a much more interesting ecosystem than if the intellectual work is largely outsourced.

It seems to me that GiveWell has already acknowledged perfectly well that VillageReach is not a top effective charity. It also seems to me that there are lots of reasons one might take GiveWell's recommendations seriously, and that getting "particularly horrified" about their decision not to research exactly how much impact their wrong choice didn't have is a rather poor way to conduct any sort of inquiry into the accuracy of organizations' decisions.

> In fact, it seems to me that the less intelligent an organism is, the more easily its behavior can be approximated with a model that has a utility function!

Only because those organisms have fewer behaviors in general. If you put humans in an environment where their options and sensory inputs were as simple as those experienced by apes and cats, they would probably look like equally simple utility maximizers.
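
A quick toy sketch of what I mean (my own made-up formalization, purely illustrative): with only a couple of options, any pattern of choices is trivially consistent with some utility function, while a larger behavioral repertoire leaves room for choice patterns that no utility function can represent.

```python
from itertools import permutations

def has_utility_representation(choices):
    """Return True if the observed pairwise choices (winner, loser)
    are consistent with *some* utility function, i.e. with some
    strict total order over the items."""
    items = {x for pair in choices for x in pair}
    for ranking in permutations(items):
        rank = {x: i for i, x in enumerate(ranking)}
        # The chosen item must outrank the rejected one in every pair.
        if all(rank[won] < rank[lost] for won, lost in choices):
            return True
    return False

# With two options, any observed choice is trivially consistent.
print(has_utility_representation([("banana", "rock")]))  # True

# A richer repertoire can produce cycles no utility function fits.
print(has_utility_representation([("a", "b"), ("b", "c"), ("c", "a")]))  # False
```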

Kantian ethics: do not violate the categorical imperative. It's derived logically from the status of humans as rational autonomous moral agents. It leads to a society where people's rights and interests are respected.

Utilitarianism: maximize utility. It's derived logically from the goodness of pleasure and the badness of pain. It leads to a society where people suffer little and are very happy.

Virtue ethics: be a virtuous person. It's derived logically from the nature of the human being. It leads to a society where people act in accordance with moral ideals.

Etc.

> pigs strike a balance between the lower suffering, higher ecological impact of beef and the higher suffering, lower ecological impact of chicken.

This was my thinking for coming to the same conclusion, but I am not confident in it. Just because something minimaxes between two criteria (i.e., avoids the worst extreme on either one) doesn't mean that it minimizes overall expected harm.
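
A toy numerical illustration (the harm scores and weights are invented, purely for the sake of the arithmetic):

```python
# Hypothetical (suffering, ecological impact) scores per serving,
# on arbitrary 0-10 scales; invented numbers, just to illustrate.
options = {
    "beef":    (2, 9),
    "pig":     (5, 5),
    "chicken": (9, 2),
}

def expected_harm(scores, weights):
    # Overall expected harm as a weighted sum of the two criteria.
    return sum(w * s for w, s in zip(weights, scores))

# With equal weights, the balanced option happens to win:
# beef 11.0, pig 10.0, chicken 11.0
for name, scores in options.items():
    print(name, expected_harm(scores, (1.0, 1.0)))

# But if ecological impact is weighted much less than suffering,
# an "extreme" option minimizes total expected harm instead:
# beef 3.8, pig 6.0, chicken 9.4
for name, scores in options.items():
    print(name, expected_harm(scores, (1.0, 0.2)))
```

So whether pigs come out best depends entirely on the relative weights, which is exactly where my uncertainty lies.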
