I discharge objections 3 and 4 as situations where the problem is ill-defined. That is, the amount of knowledge one is supposed to have is implausible or unknown. And yes, I think the fat man case is a case of an ethical injunction. But doesn't that undermine the predictive power of consequentialism? Maybe not. I'm more concerned about the problems described below.

I do think you should act for a better outcome. What I disagree with is the completeness and transitivity of values. That is why utility is not quantifiable, so there is no calculation showing which action is right, and thus no best possible action. The problem is that action is highly chaotic (sensitive) with respect to non-rational variables, because there are actions between which it is impossible to decide, yet something has to be decided. Consider the first example here: I understand that you would choose the same in the first and second question. But which would you choose, A(=C) or B(=D)? The answer should be neither; instead, find a way to keep all 600 people alive. In the meantime, where that option is not possible, there is politics.

By the way, if you believe in utility maximization, explain Arrow's theorem to me. I think it disproves utilitarianism.
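The intransitivity worry behind Arrow's theorem can be made concrete with a Condorcet cycle. Here is a minimal sketch (the voter profiles are my own illustrative assumption, not from the comment above): three voters, each with a perfectly transitive ranking, whose pairwise majority vote is nonetheless cyclic.

```python
# Hypothetical illustration of a Condorcet cycle: every individual
# ranking is transitive, but majority aggregation is not.
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Pairwise majority comparisons cycle: A beats B, B beats C, yet C beats A,
# so there is no "best" option for the group to maximize toward.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"{x} beats {y}: {majority_prefers(x, y)}")
```

This is exactly the situation where no aggregate utility ordering exists, so "choose the action with the highest utility" gives no answer.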