I read

"action A increases the value of utility function U"

to mean (1) "the utility function U increases in value from action A". Did you mean (2) "under utility function U, action A increases (expected) value"? Or am I missing some distinction in terminology?

The alternative meaning (2) leads to "should" (much like "ought") being dependent on the utility function used. Normativity might suggest that we all share views on utility with fundamental similarities. In my mind at least, the usual controversy over whether utility functions (and the moral claims derived from them, i.e., "should"s and "ought"s) can be objectively true still remains.
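To make meaning (2) concrete, here is roughly how I would write it (my notation, not necessarily yours): the expected utility of action A is

$$EU(A) = \sum_{o} P(o \mid A)\, U(o),$$

and "action A increases (expected) value" would then mean that $EU(A)$ is higher than the expected utility of not taking A (or of the available alternatives).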

Edit: formatting.

First of all, thanks for the comment. You have really motivated me to read and think about this more -- starting with getting clearer on the meanings of "objective", "subjective", and "intrinsic". I apologise for any confusion caused by my incorrect use of terminology. I guess that is why Eliezer likes to taboo words. I hope you don't mind me persisting in trying to explain my view and using those "taboo" words.

Since I was talking about meta-ethical moral relativism, I hope that it was sufficiently clear that I was referring to moral values. What I meant by "objective values" was "objectively true moral values" or "objectively true intrinsic values".

The second sentence doesn't follow from the first.

The second sentence was an explanation of the first: not logically derived from the first sentence, but a part of the argument. I'll try to construct my arguments more linearly in future.

If I had to rephrase that passage I'd say:

If there are no agents to value something, intrinsically or extrinsically, then there is also no agent to act on those values. In the absence of agents to act, values are effectively meaningless. Therefore, I'm not convinced that there is objective truth in intrinsic or moral values.

However, the lack of meaningful values in the absence of agents hints at agents themselves being valuable. If value can only have meaning in the presence of an agent, then that agent probably has, at the very least, extrinsic/instrumental value. Even a paperclip maximiser would probably consider itself to have instrumental value, right?

If rational agents converge on their values, that is objective enough.

I think there is a difference between it being objectively true that, in certain circumstances, the values of rational agents converge, and it being objectively true that those values are moral. A rational agent can do really "bad" things if the beliefs and intrinsic values on which it is acting are "bad". Why else would anyone be scared of AI?

Analogy: one can accept that mathematical truth is objective (mathematicians will converge) without being a Platonist (believing mathematical truths have an existence separate from humans).

I accept the possibility of objective truth values. I'm not convinced that it is objectively true that the convergence of subjectively true moral values indicates objectively true moral values. As far as values go, moral values don't seem to be as amenable to rigorous proofs as formal mathematical theorems. We could say that intrinsic values seem to be analogous to mathematical axioms.

I find that hard to follow. If the test is rationally justifiable, and leads to uniform results, how is that not objective?

I'll have a go at clarifying that passage with the right(?) terminology:

Without the objective truth of intrinsic values, it might just be a matter of testing different sets of assumed intrinsic values until we find an "optimal" or acceptable convergent outcome.

Morality might be somewhat like an NP-hard optimisation problem: it might be objectively true that we get a certain result from a given test, but it's much harder to say that it is objectively true that we have solved the overall optimisation problem.

You seem to be using "objective" (having a truth value independent of individual humans) to mean what I would mean by "real" (having existence independent of humans).

Thanks for informing me that my use of the term "objective" was confused/confusing. I'll keep trying to improve the clarity of my communication and understanding of the terminology.

Hi everyone,

I'm a PhD student in artificial intelligence/robotics, though my work is related to computational neuroscience, and I have strong interests in philosophy of mind, meta-ethics and the "meaning of life". Though I feel that I should treat finishing my PhD as a personal priority, I like to think about these things. As such, I've been working on an explanation for consciousness and a blueprint for artificial general intelligence, and trying to conceive of a set of weighted values that can be applied to scientifically observable/measurable/calculable quantities, both of which have some implications for an explanation of the "meaning" of life.

At the center of the value system I'm working on is a broad notion of "information". Though still at preliminary stages, I'm considering a hierarchy of weights for the value of different types of information, and trying to determine how bad this is as a utility function. At the moment, I consider the preservation and creation of all information valuable; at an everyday level I try to translate this into learning and creating new knowledge and searching for unique, meaningful experiences.
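As a very rough sketch of the kind of thing I have in mind (the categories and weights below are placeholders I made up for illustration, not my actual proposal):

```python
# Rough sketch: a utility function as a weighted sum over "types" of information.
# The categories and weights are illustrative placeholders, not a worked-out proposal.
INFO_WEIGHTS = {
    "unique_experience": 3.0,      # novel, hard-to-replicate observations
    "new_knowledge": 2.0,          # knowledge learned or created
    "preserved_information": 1.0,  # existing information kept from being lost
}

def information_utility(quantities):
    """Return a scalar utility for a dict mapping information type -> amount."""
    return sum(INFO_WEIGHTS.get(kind, 0.0) * amount
               for kind, amount in quantities.items())

# Example: score a state of the world by how much of each type it contains.
state = {"unique_experience": 1.0, "new_knowledge": 2.5, "preserved_information": 10.0}
print(information_utility(state))  # 3.0*1.0 + 2.0*2.5 + 1.0*10.0 = 18.0
```

Of course, the hard part (and part of what I mean by "how bad this is as a utility function") is choosing the categories and weights, and deciding how the quantities could actually be measured.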

I've been aware of Less Wrong for years, though I haven't quite mustered the motivation to read all of the sequences. Nevertheless, I've lurked here on and off over that time and read lots of interesting discussions. I consider the ability to make rational decisions, and not be fooled by illogical arguments, important; though without a definite set of values and goals, any action is simply shooting in the dark.

I'm still coming to terms with the philosophical definitions of different positions and their implications, and the Stanford Encyclopedia of Philosophy seems like a more rounded account of the different viewpoints than the meta-ethics sequences. I think I might be better off first spending my time continuing to read the SEP and trying to make my own decisions, and then reading the meta-ethics sequences with that understanding of the philosophical background.

By the way, I can see your point that objections to moral anti-realism in this community may be somewhat motivated by the possibility that friendly AI becomes unprovable. As I understand it, any action can be "rational" if the value/utility function is arbitrary.

Sorry, I have only read selections of the sequences, and not many of the posts on metaethics. Though as far as I've gotten, I'm not convinced that the sequences really solve, or make obsolete, many of the deeper problems of moral philosophy.

The original post, and this one, seem to be running into the "is-ought" gap and moral relativism. Being unable to separate terminal values from biases is due to there being no truly objective terminal values. Despite Eliezer's objections, this is a fundamental problem for determining what terminal values or utility function we should use -- a task you and I are both interested in undertaking.

I hadn't come across the von Neumann-Morgenstern utility theorem before reading this post, thanks for drawing it to my attention.
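For anyone else who hadn't seen it, my rough understanding of the theorem: if an agent's preferences $\succeq$ over lotteries satisfy completeness, transitivity, continuity and independence, then there exists a utility function $u$ such that

$$L \succeq M \iff \mathbb{E}_L[u] \ge \mathbb{E}_M[u],$$

with $u$ unique up to positive affine transformation. In other words, being "rational" in the vNM sense just means acting as if maximising the expected value of some utility function; it says nothing about which utility function.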

Looking at moral philosophy through the lens of agents working with utility/value functions is an interesting exercise; it's something I'm still working on. In the long run, I think some deep thinking needs to be done about what we end up selecting as terminal values, and how we incorporate them into a utility function. (I hope that isn't stating something that is blindingly obvious.)

I guess where you might be headed is into meta-ethics. As I understand it, meta-ethics includes debates on moral relativism that are closely related to the existence of terminal/intrinsic values. Moral relativism asserts that all values are subjective (i.e., only the beliefs of individuals), rather than objective (i.e., universally true). So no practice or activity is inherently right or wrong; it is just the perception of people that makes it so. As you might imagine, this can be used as a defense of violent cultural practices (it could even be used in defense of baby-eating).

I tend to agree with the position of moral relativism; unfortunate though it may be, I'm not convinced there are things that are objectively valuable. I'm of the belief that if there are no agents to value something, then that something has effectively no value. That holds for people and their values too. That said, we do exist, and I think subjective values count for something.

Humanity has come to some degree of consensus over what should be valued, probably largely as a result of evolution and social conditioning. So from here, I think it mightn't be wasted effort to explore the selection of different intrinsic values.

Luke Muehlhauser has called morality an engineering problem, while Sam Harris has described morality as a landscape, i.e., the surface is the terminal value we are trying to maximize (Harris picked the well-being of conscious creatures) and societal practices are the variables. Though I don't know that well-being is the best terminal value, I like the idea of treating morality as an optimization problem. I think this is a reasonable way to view ethics. Without objective values, it might just be a matter of testing different sets of subjective terminal values until we find the optimum (and hopefully don't get trapped in a local maximum).
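To illustrate the local maximum worry with a toy example (the "landscape" function below is something I invented purely for illustration; it stands in for however we would score a set of values):

```python
import random

# Toy moral "landscape" with a local peak near x = -2 (height 4)
# and a higher global peak near x = 3 (height 9). Invented for illustration only.
def landscape(x):
    return -(x + 2) ** 2 + 4 if x < 0.5 else -(x - 3) ** 2 + 9

def hill_climb(x, step=0.1, iters=1000):
    """Greedy local search: move to a random nearby point only if it scores higher."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if landscape(candidate) > landscape(x):
            x = candidate
    return x

random.seed(0)
left = hill_climb(-3.0)   # gets stuck at the local peak near x = -2
right = hill_climb(1.0)   # finds the global peak near x = 3
print(round(landscape(left), 2), round(landscape(right), 2))  # roughly 4.0 and 9.0
```

Greedy local search from the left-hand start settles on the lower peak and never finds the higher one, which is the kind of trap I have in mind.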

Nevertheless, I think it's interesting to suppose that something is objectively valuable. It doesn't seem like a stretch to me to say that knowledge of what is objectively valuable would itself be objectively valuable, and that the search for that knowledge would probably also be objectively valuable. After all that, it would be somewhat ironic if it turned out that the universal objective values don't include the survival of life on Earth.