All of MindTheLeap's Comments + Replies

I read

"action A increases the value of utility function U"

to mean (1) "the utility function U increases in value from action A". Did you mean (2) "under utility function U, action A increases (expected) value"? Or am I missing some distinction in terminology?

The alternative meaning (2) leads to "should" (much like "ought") being dependent on the utility function used. Normativity might suggest that we all share views on utility that have fundamental similarities. In my mind at least, the usual controv...
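To pin down reading (2) concretely, here is a minimal Python sketch of "under utility function U, action A increases (expected) value": the agent scores an action by the expectation of U over the outcomes it might produce. The outcome labels, probabilities, and the particular U below are all hypothetical placeholders, just to make the reading precise:

```python
# Reading (2): "should do A" means A maximizes expected value under U.
# The outcome distributions and utility assignments are illustrative
# assumptions, not anyone's actual proposal.

def expected_utility(U, outcome_dist):
    """Expectation of utility function U over {outcome: probability}."""
    return sum(p * U(o) for o, p in outcome_dist.items())

# A toy utility function over three possible outcomes.
U = lambda outcome: {"good": 1.0, "neutral": 0.0, "bad": -1.0}[outcome]

# Each action induces a probability distribution over outcomes.
actions = {
    "A": {"good": 0.7, "neutral": 0.2, "bad": 0.1},
    "B": {"good": 0.2, "neutral": 0.6, "bad": 0.2},
}

best = max(actions, key=lambda a: expected_utility(U, actions[a]))
# Under this U, action "A" has the higher expected value, so "should"
# picks A; swap in a different U and "should" can flip -- which is
# exactly the dependence on the utility function discussed above.
```

Note that the arithmetic is trivial; the whole dispute lives in which U is used, which is why reading (2) makes "should" relative to it.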

First of all, thanks for the comment. You have really motivated me to read and think about this more -- starting with getting clearer on the meanings of "objective", "subjective", and "intrinsic". I apologise for any confusion caused by my incorrect use of terminology. I guess that is why Eliezer likes to taboo words. I hope you don't mind me persisting in trying to explain my view and using those "taboo" words.

Since I was talking about meta-ethical moral relativism, I hope that it was sufficiently clear that I was r...

That's what I like to hear! But there is no need for morality in the absence of agents. When agents are there, values will be there; when they are not, the absence of values doesn't matter. I don't require their values to converge; I require them to accept the truth of certain claims. This happens in real life: people say "I don't like X, but I respect your right to do it". The first part says X is a disvalue; the second is an override coming from rationality.

Hi everyone,

I'm a PhD student in artificial intelligence/robotics, though my work is related to computational neuroscience, and I have strong interests in philosophy of mind, meta-ethics and the "meaning of life". Though I feel that I should treat finishing my PhD as a personal priority, I like to think about these things. As such, I've been working on an explanation for consciousness and a blueprint for artificial general intelligence, and trying to conceive of a set of weighted values that can be applied to scientifically observable/measurable/...

I'm still coming to terms with the philosophical definitions of different positions and their implications, and the Stanford Encyclopedia of Philosophy seems like a more rounded account of the different viewpoints than the meta-ethics sequences. I think I might be better off first spending my time continuing to read the SEP and trying to make my own decisions, and then reading the meta-ethics sequences with that understanding of the philosophical background.

By the way, I can see your point that objections to moral anti-realism in this community may be som...

Sorry, I have only read selections of the sequences, and not many of the posts on metaethics. Though as far as I've gotten, I'm not convinced that the sequences really solve, or make obsolete, many of the deeper problems of moral philosophy.

The original post, and this one, seem to be running into the "is-ought" gap and moral relativism. Being unable to separate terminal values from biases is due to there being no truly objective terminal values. Despite Eliezer's objections, this is a fundamental problem for determining what terminal values or ...

I think this community vastly over-estimates its grip on meta-ethical concepts like moral realism or moral anti-realism (e.g. the hopelessly confused discussion in this thread). I don't think the meta-ethics sequence resolves these sorts of basic issues.

I hadn't come across the von Neumann-Morgenstern utility theorem before reading this post; thanks for drawing it to my attention.

Looking at Moral Philosophy through the lens of agents working with utility/value functions is an interesting exercise; it's something I'm still working on. In the long run, I think some deep thinking needs to be done about what we end up selecting as terminal values, and how we incorporate them into a utility function. (I hope that isn't stating something that is blindingly obvious.)
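One simple (and certainly not the only, or even an adequate) way to picture "incorporating terminal values into a utility function" is a weighted sum over per-value scores of an observable state. Everything below -- the value names, scoring functions, and weights -- is a hypothetical placeholder, just to make the idea concrete:

```python
# Hypothetical sketch: combining several "terminal values" into a single
# utility function as a weighted sum. The value names, scoring functions,
# and weights are illustrative assumptions, not a proposal.

def make_utility(weighted_values):
    """weighted_values: list of (weight, score_fn) pairs; returns U(state)."""
    def U(state):
        return sum(w * score(state) for w, score in weighted_values)
    return U

# Toy "terminal values" scored on an observable/measurable state dict.
wellbeing = lambda s: s["wellbeing"]
knowledge = lambda s: s["knowledge"]

U = make_utility([(0.7, wellbeing), (0.3, knowledge)])
state = {"wellbeing": 0.5, "knowledge": 1.0}
# U(state) = 0.7 * 0.5 + 0.3 * 1.0 = 0.65
```

The arithmetic of combining values is the easy part; the deep thinking mentioned above is about which values and weights go in, and whether a weighted sum is even the right aggregation.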

I guess where you might be headed is into Met... (read more)

The second sentence doesn't follow from the first. If rational agents converge on their values, that is objective enough. Analogy: one can accept that mathematical truth is objective (mathematicians will converge) without being a Platonist (mathematical truths have an existence separate from humans).

I find that hard to follow. If the test is rationally justifiable, and leads to uniform results, how is that not objective? You seem to be using "objective" (having a truth value independent of individual humans) to mean what I would mean by "real" (having existence independent of humans).
I'm assuming a lot of background in this post that you don't seem to have. Have you read the sequences, specifically the metaethics stuff? Moral philosophy on LW is decades (at the usual philosophical pace) ahead of what you would learn elsewhere, and a lot of the stuff you mentioned is considered solved or obsolete.