Part One: Methodology: Why think that intuitions are reliable? What is reflective equilibrium, other than reflecting on our intuitions? If it is some process by which we balance first-order intuitions against general principles, why think this process is reliable? Metaethics: realism vs. error theory vs. expressivism?

Part Two: 2.6 I don't see the collapse - an axiology may be paired with different moralities, e.g. a satisficing morality or a maximizing morality. If all that is meant by the 'collapse' is that the right is a function of the good, then 'collapse' is a misleading word for it.

Part Four: 4.2 Taking actions that make the world better is different from taking actions that make the world best. Consequentialism says that only consequences matter - a controversial claim that hasn't been addressed.
4.4 This strawmans the deontologist. Deontologists differ from consequentialists in ways other than wanting to avoid dirtying their hands or feeling guilt: they may care about not using others as mere means, or about distinctions like doing/allowing and killing/letting die, which apply to some trolley cases and (purportedly) justify not producing the best consequences. More argument is needed to show that such commitments preclude morality from 'living in the world'.

Part Five: 5.4 Not obvious that different consequentialisms converge on most practical cases. Some people desire pain; some desire authenticity, achievement, relationships, etc. (and so would refuse the experience machine); some desire not to be cheated on, not to have their wills disregarded, etc. Hedonistic and preference-based theories can therefore issue different practical verdicts.

Part Seven: 7.3 Doesn't address the strongest form of the objection. A stronger form is: we know that certain acts or institutions are necessarily immoral (gladiatorial games, slavery); utilitarianism could (whether or not it actually does) require us to promote these; therefore utilitarianism is false. I like the utility monster as an example of this. The response in 7.5 to the utility monster case is bullet-biting - that should be the response in 7.3. The response that utilitarianism probably won't tell us to promote these is inadequate. The three responses in 7.4 (prior to the appeal to ideal rather than actual preferences) remake the same mistake.
7.6 Similar problem here. The response quibbles with contingent facts, but the force of the objection is that vicious, repugnant, petty, stupid, etc., preferences have no less weight in principle - that is, their status as such does not itself discount them.
7.7 The response misses the point. The objection is that it's hard to see how utilitarianism can accommodate the intuitive distinction between higher and lower pleasures. Sure, utilitarians have nothing against symphonies, but would a world with symphonies be best? (Would an FAI-generated world contain symphonies?)
7.9 Rather quick treatment of the demandingness objection. One relevant issue in the vicinity is that of agent-centered permissions - permissions to do less than the best (in consequentialist terms), e.g. to favor those with whom we have special relations. Many philosophers and laypeople alike believe in such permissions, so utilitarianism has a counterintuitive result here.

Suggestions for further content: (1) How are we to conceive of 'better' consequences? Perhaps any of the answers given by the aforementioned systems would suffice - pleasure, preference satisfaction, ideal preference satisfaction. But I'm not convinced these are practically/pragmatically equivalent. For instance, there may be different best methods for investigating what produces the most pleasure vs. what would best satisfy our ideal preferences, and so different practical recommendations. (2) What's our axiology? Is it total utilitarian, egalitarian, prioritarian, maximizing, satisficing, etc.? How do the interests of animals, future time slices, and future individuals weigh against present human interests? A total utilitarian approach seems to be advocated, but that faces its own set of problems (the Repugnant Conclusion, fanaticism, etc.).