cubefox

Comments

cubefox

I'm not sure what exactly you mean by "landing on", but I do indeed think that the concept of goodness is a fairly general and natural or broadly useful concept that many different intelligent species would naturally converge to introduce in their languages. Presumably some distinct human languages have introduced that concept independently as well. Goodness seems to be a generalization of the concept of altruism, which is, along with egoism, arguably also a very natural concept. Alternatively one could see ethics (morality) as a generalization of the concept of instrumental rationality (maximization of the sum/average of all utility functions rather than of one), which seems to be quite natural itself.
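Roughly, in symbols (my own sketch, glossing over how interpersonal comparison of utilities would work): instrumental rationality for a single agent $j$ picks actions by

\[ a^* = \arg\max_a U_j(a), \]

while the utilitarian generalization picks

\[ a^* = \arg\max_a \sum_i U_i(a) \quad \text{or the average } \tfrac{1}{N}\sum_i U_i(a), \]

summing over all individuals $i$ rather than privileging one utility function.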

But if you mean by "landing on" that different intelligent species would be equally motivated to be ethical in various respects, then that seems very unlikely. Intelligent animals living in social groups would likely care much more about other individuals than mostly solitary animals like octopuses. The natural group size also matters: humans care about themselves and immediate family members much more than about distant relatives, and even less about people with a very foreign language / culture / ethnicity.

the only two possible options for ethical theories are hedonistic utilitarianism and preference utilitarianism (and variations thereof).

There are many variants of these, and those cover basically all types of utilitarianism. Utilitarianism has so many facets that most plausible ethical theories (like deontology or contractualism) can probably be rephrased in roughly utilitarian terms. So I wouldn't count that as a major restriction.

cubefox

I don't understand your point about anticipated experience. If I believe some action is good, I anticipate that doing that action will produce evidence (experience) that is indicative of increased welfare. That is exactly unlike believing something to be "blegg". Regarding mathematical groups, whether or not we care about them for their usefulness in physics seems irrelevant to whether "group" has a specific meaning. Like, you may not care about horses, but you still anticipate a certain visual experience when someone tells you they bought you a horse and it's right outside. And for a group you'd anticipate that it turns out to satisfy associativity etc.
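For concreteness, these are the standard group axioms (textbook material, nothing specific to this discussion): a set $G$ with an operation $\cdot$ is a group iff

\[
\begin{aligned}
&\forall a,b,c \in G: && (a \cdot b) \cdot c = a \cdot (b \cdot c) && \text{(associativity)} \\
&\exists e \in G\ \forall a \in G: && e \cdot a = a \cdot e = a && \text{(identity)} \\
&\forall a \in G\ \exists a^{-1} \in G: && a \cdot a^{-1} = a^{-1} \cdot a = e && \text{(inverses)}
\end{aligned}
\]

So when someone calls a structure a "group", these are the checks you anticipate it will pass.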

cubefox

Well, Eliezer doesn't explicitly restrict his theory to humans as far as I can tell. More generally, forms of utilitarianism (be they hedonic or preference-oriented or some mixture) aren't a priori restricted to any species. The point is also that some sort of utility is treated as an input to the theory, not a part of the theory. That's no different for well-being (hedonic utilitarianism) than for preferences. I'm not sure why you seem to think so. The African savanna influenced what sort of things we enjoy or want, but these specifics don't matter for general theories like utilitarianism or extrapolated volition. Ethics recommends general things like making individuals happy or satisfying their (extrapolated) desires, but it doesn't recommend giving them, for example, specifically chocolate, just because they happen to like (want/enjoy) chocolate for contingent reasons.

Ethics, at least according to utilitarianism, is about maximizing some sort of aggregate utility. E.g. justice isn't just a thing humans happen to like; it refers to the aforementioned aggregate, which doesn't favor one individual over another. So while chocolate isn't part of ethics, fairness is. An analysis of "x is good" as "x maximizes the utility of Bob specifically" wouldn't capture the meaning of the term.
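To make the fairness point slightly more concrete (my own gloss): the aggregate $\sum_i U_i(a)$ is invariant under any permutation $\pi$ of the individuals, i.e. $\sum_i U_{\pi(i)}(a) = \sum_i U_i(a)$, so nobody's utility gets special weight. A rule like "maximize $U_{\mathrm{Bob}}(a)$" breaks exactly that symmetry, which is why it doesn't capture the meaning of "good".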

cubefox

There is a large difference between knowing the meaning of a word and knowing its definition. You know perfectly well how to use ordinary words like "knowledge" or "game"; in that sense you understand what they mean, yet you almost certainly don't know an adequate (necessary and sufficient) definition for them, i.e. one that doesn't suffer from counterexamples. In philosophy those are somewhat famous cases of words that are hard to define, but most words from natural language could serve as examples just as well.

That's not to say that definitions are useless, but they're not something we need when evaluating most object-level questions. Answering "Do you know where I left my keys?" doesn't require a definition of "knowledge". Answering "Is believing in ghosts irrational?" doesn't require a definition of "rationality". And answering "Is eating Bob's lunch bad?" doesn't require a definition of "bad".

The attempt to find such definitions is called philosophy, or conceptual analysis specifically. It helps with abstract reasoning by finding relations between concepts. For example, when asked explicitly, most people can't say how knowledge and belief relate to each other (I tried). Philosophers would reply that knowledge implies belief but not the other way round, or that belief is internal while knowledge is (partly) external. In some cases knowing this is kind of important, but usually it isn't.
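In symbols, the standard way to put that relation (nothing beyond what's stated above): $Kp \rightarrow Bp$ holds but $Bp \rightarrow Kp$ doesn't, and the "(partly) external" bit is that knowledge additionally requires truth, $Kp \rightarrow p$, while belief doesn't.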

What anticipated experiences come about from the belief that something is "good" or "bad"? This is the critical question, which I have not seen a satisfactory answer to by moral realists (Eliezer himself does have an answer to this on the basis of CEV, but that is a longer discussion for another time).

Well, why not try to answer it yourself? I'd say evidence for something being "good" is roughly anything that lets us expect it to increase general welfare, like people being happy or able to do what they want. I directionally agree with EY's extrapolated volition explication of goodness (I linked to it in a neighboring comment). As he mentions, there are several philosophers who have provided similar analyses.

Eliezer has a more recent metaethical theory (basically "x is good" = "x increases extrapolated volition") which is moral realist in a conventional way. He discusses it here. It's approximately a form of idealized preference utilitarianism.

the thing that most self-described moral realists actually believe, as opposed to the trivialities above—is that moral statements can be not just true but also that their truth is “universally accessible to reason and reflection” in a sense. That’s what you need for nostalgebraist’s attempted reductio ad absurdum

Well, the truth of something being "universally accessible to reason and reflection" would still just result in a belief, which is (per weak orthogonality) different in principle from a desire. And a desire would be needed for the reductio, otherwise we have just a psychopath AI that understands ethics perfectly well but doesn't care about it.

Almost all terms in natural language are vague, but that doesn't mean they are all ambiguous or somehow defective and in need of an explicit definition. We know what words mean, we can give examples, but we don't have definitions in our mind. Imagine you say that believing X is irrational, and I reply "I don't believe in 'rational realism', I think 'rational' is a vague term, can you give me a definition of 'rational' please?" That would be absurd. Of course I know what "rational" means; I just can't define it, but we humans can hardly define any natural language terms at all.

it's something like "stuff people do whom I want on my team" or "actions that make me feel positively toward someone". But it would require a lot more words to even start nailing down. And while that's a claim about reality, it's quite a complex, dependent, and therefore vague claim, so I'd be reluctant to call it moral realism.

That would indeed not count as moral realism; the form of anti-realism would probably be something similar to subjectivism ("x is good" ≈ "I like x") or expressivism ("x is good" ≈ "Yay x!").

But I don't think this can make reasonable sense of beliefs. That I believe something is good doesn't mean that I feel positive toward myself, or that I like it, or that I'm cheering for myself, or that I'm booing my past self if I changed my mind. Sometimes I may also just wonder whether something is good or bad (e.g. eating meat) which arguably makes no sense under those interpretations.

I don't think anyone needs to define what words used in ordinary language mean, because the validity of any attempted definition would itself have to be checked against the intuitive meaning of the word in common usage.

If good means "what you should do" then it's exactly the big claim Steve is arguing against.

I do think the meaning is indeed similar (except for supererogatory statements), but the argument isn't affected. For example, I can believe that I shouldn't eat meat, or that eating meat is bad, without being motivated to stop eating meat.

This is tangential to the point of the post, but "moral realism" is a much weaker claim than you seem to think. Moral realism only means that some moral claims are literally true. Popular uncontroversial examples: "torturing babies for fun is wrong" or "ceteris paribus, suffering is bad". It doesn't mean that someone is necessarily motivated by those claims if they believe they are true. It doesn't imply that anyone is motivated to be good just from believing that something is good. A psychopath can agree "yes, doing X is wrong, but I don't care about ethics" and shrug his shoulders. Moral realism doesn't require a necessary connection between beliefs and desires; it is compatible with the (weak) orthogonality thesis.

I'm still not sure what you want to say. It's a necessary property of the natural numbers that each of them can be reached by iterating the successor function from zero. That condition can't be expressed in first-order logic, so it can't be proved from the first-order axioms, and it holds in some models but not in others. It's like trying to define "cat" by stating that it's an animal: that's a necessary condition, but not a sufficient definition.
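To spell this out (standard facts about Peano arithmetic, phrased by me): the second-order induction axiom

\[ \forall P\, \big[\, P(0) \wedge \forall n\, (P(n) \rightarrow P(S(n))) \rightarrow \forall n\, P(n) \,\big] \]

quantifies over all properties $P$ and, with full second-order semantics, pins down the standard model up to isomorphism, which is what rules out elements not reachable from $0$ by iterating $S$. First-order PA can only assert the induction schema instance by instance, for properties definable by a first-order formula $\varphi$:

\[ \varphi(0) \wedge \forall n\, (\varphi(n) \rightarrow \varphi(S(n))) \rightarrow \forall n\, \varphi(n), \]

and by compactness there are nonstandard models satisfying every such instance that nevertheless contain elements no finite number of applications of $S$ reaches from $0$.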

What do you mean by "all the true properties"?
