
An extremely low prior probability of life arising would be an early Great Filter.

Done. Thank you for running these.

Check out the previous discussion Luke linked to: http://lesswrong.com/lw/c45/almost_every_moral_theory_can_be_represented_by_a/

It seems there's some open question about whether you can phrase deontological rules consequentially; that needs to be settled before this can be made formal. My first thought is that the formal version would say something like: "you can achieve an outcome that differs by only X%, using a translation function that takes rules and spits out a utility function which is only polynomially larger." It's not clear to me how to define a domain in such a way that you could actually compute that X%.
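To make that concrete, here is a minimal sketch of such a translation function in Python. The rule encoding as (predicate, penalty) pairs and the penalty weights are hypothetical illustrations, not a settled formalization:

```python
def rules_to_utility(rules, base_utility=lambda history: 0.0):
    """Turn a list of (predicate, penalty) rules into a utility function.

    Each rule is a predicate over an action history plus a penalty applied
    whenever the predicate is violated. The resulting utility function is
    only linearly larger than the rule list, comfortably within the
    "polynomially larger" bound above.
    """
    def utility(history):
        score = base_utility(history)
        for violated, penalty in rules:
            if violated(history):
                score -= penalty
        return score
    return utility

# Hypothetical usage: approximate a hard deontological prohibition by
# making its penalty dwarf any ordinary utility differences.
rules = [(lambda h: "murder" in h, 1e9)]
u = rules_to_utility(rules)
assert u(["murder"]) < u(["lie"])
```

The open question is how close this approximation gets, i.e., how to measure that X% over a realistic domain of action histories.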

...unfortunately, as much as I would like to see people discuss the moral landscape instead of the best way to describe it, I have very little time lately. :/

(Sorry for slow response. Super busy IRL.)

If a consequentialist says that murder is bad, they mean that it's bad if anybody does it.

Not necessarily. I'm not saying it makes much sense, but it's possible to construct a utility function that values agent X not having performed action Y, but doesn't care if agent Z performs the same action.
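As a toy illustration (the world encoding as (agent, action) pairs is invented for the example):

```python
# A utility function that penalizes agent X performing action Y,
# but is indifferent to agent Z performing the very same action.
def agent_indexed_utility(world):
    return -100 if ("X", "Y") in world else 0

assert agent_indexed_utility([("X", "Y")]) == -100
assert agent_indexed_utility([("Z", "Y")]) == 0  # same action, no penalty
```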

It is technically true that all of these ethical systems are equivalent, but saying which ethical system you use nonetheless carries a lot of meaning.

a) After reading Luke's link above, I'm still not certain if what I've said about them being (approximately) isomorphic is correct... b) Assuming my isomorphism claim is true enough, I'd claim that the "meaning" carried by your preferred ethical framework is just framing.

That is, (a) imagine that there's a fixed moral landscape. (b) Imagine there are three transcriptions of it, one in each framework. (c) Imagine agents would all agree on the moral landscape, but (d) in practice differ on the transcription they prefer. We can then pessimistically ascribe this difference to the agents preferring to make certain classes of moral problems difficult to think about (i.e., shoving them under the rug).

Deontology and virtue ethics don't care about getting things done.

I maintain that this is incorrect. The framework of virtue ethics could easily have the item "it is virtuous to be the sort of person who gets things done." And "Make things happen, or else" could be a deontological rule. (Just because most examples of these moral frameworks are lame doesn't mean that it's a problem with the framework as opposed to the implementation.)

If indeed the frameworks are isomorphic, then this is just another case of humans allowing their judgment to be affected by an issue's framing. Which demonstrates only that there is a bug in human brains.

I think so. I know they're commonly implemented without that feedback loop, but I don't see why that would be a necessary "feature".

Which is why I said "in the limit". But I think, if it is true that one can make reasonably close approximations in any framework, that's enough for the point to hold.

Are you saying that some consequentialist systems don't even have deontological approximations?

It seems like you can have rules of the form "Don't torture... unless by doing the torture you can prevent an even worse thing," which provides a checklist for comparing badness ...so I'm not convinced?
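One way to cash such a rule out as an explicit checklist; the badness scores here are placeholder assumptions, not a worked-out moral theory:

```python
# Hypothetical badness scale for comparing outcomes.
BADNESS = {"torture": 100, "city_destroyed": 10_000}

def may_torture(outcome_if_refused):
    """Permit torture only when refusing leads to something strictly worse."""
    return BADNESS.get(outcome_if_refused, 0) > BADNESS["torture"]

assert not may_torture("nothing")      # ordinary case: the rule forbids it
assert may_torture("city_destroyed")   # the exception clause fires
```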

How does it change the numbers if you condition on the fact that Alcor has already been around for 40 years?
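As a toy illustration of one way the conditioning could matter, assuming a Laplace rule-of-succession model rather than anything Alcor-specific:

```python
# Laplace's rule of succession: after n failure-free years, estimate the
# annual failure rate as 1/(n + 2). The model and the 50-year horizon are
# assumptions for illustration, not the parent comment's actual numbers.
def p_survive(years_survived, horizon):
    annual_failure = 1 / (years_survived + 2)
    return (1 - annual_failure) ** horizon

print(p_survive(0, 50))   # no track record: ~9e-16
print(p_survive(40, 50))  # conditioned on 40 years of survival: ~0.30
```

Under a memoryless constant-hazard model, by contrast, conditioning on the 40 years would change nothing about future survival.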
