MindTheLeap
MindTheLeap has not written any posts yet.

First of all, thanks for the comment. You have really motivated me to read and think about this more -- starting with getting clearer on the meanings of "objective", "subjective", and "intrinsic". I apologise for any confusion caused by my incorrect use of terminology. I guess that is why Eliezer likes to taboo words. I hope you don't mind me persisting in trying to explain my view and using those "taboo" words.
Since I was talking about meta-ethical moral relativism, I hope that it was sufficiently clear that I was referring to moral values. What I meant by "objective values" was "objectively true moral values" or "objectively true intrinsic values".
... (read 416 more words →)

The second sentence doesn't
Hi everyone,
I'm a PhD student in artificial intelligence/robotics, though my work is related to computational neuroscience, and I have strong interests in philosophy of mind, meta-ethics, and the "meaning of life". Though I feel I should treat finishing my PhD as a personal priority, I like to think about these things. As such, I've been working on an explanation of consciousness and a blueprint for artificial general intelligence, and trying to conceive of a set of weighted values that can be applied to scientifically observable/measurable/calculable quantities. Both projects have some implications for an explanation of the "meaning" of life.
At the center of the value system I'm working on is a... (read more)
I'm still coming to terms with the philosophical definitions of different positions and their implications, and the Stanford Encyclopedia of Philosophy seems like a more rounded account of the different viewpoints than the meta-ethics sequences. I think I might be better off first continuing to read the SEP and trying to reach my own conclusions, and then reading the meta-ethics sequences with that philosophical background.
By the way, I can see your point that objections to moral anti-realism in this community may be somewhat motivated by the possibility that friendly AI becomes unprovable. As I understand it, any action can be "rational" if the value/utility function is arbitrary.
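To make that last point concrete, here is a minimal sketch (the action names and utility numbers are hypothetical, not from the original discussion): under the standard decision rule, "rational" just means "utility-maximising", so an arbitrarily chosen utility function can rationalise any action.

```python
# "Rational" choice is just argmax over a utility function, so an arbitrary
# utility function can make any action come out as the rational one.

def rational_choice(actions, utility):
    """Pick the action with the highest utility -- the standard decision rule."""
    return max(actions, key=utility)

actions = ["help", "do_nothing", "harm"]

# A utility function someone might endorse...
u_benevolent = {"help": 1.0, "do_nothing": 0.0, "harm": -1.0}
# ...and an arbitrary one constructed to favour a different action.
u_arbitrary = {"help": -1.0, "do_nothing": 0.0, "harm": 1.0}

print(rational_choice(actions, u_benevolent.get))  # help
print(rational_choice(actions, u_arbitrary.get))   # harm -- equally "rational"
```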
Sorry, I have only read selections of the sequences, and not many of the posts on metaethics. Though as far as I've gotten, I'm not convinced that the sequences really solve, or make obsolete, many of the deeper problems of moral philosophy.
The original post, and this one, seem to be running into the "is-ought" gap and moral relativism. Being unable to separate terminal values from biases is due to there being no truly objective terminal values. Despite Eliezer's objections, this is a fundamental problem for determining what terminal values or utility function we should use -- a task you and I are both interested in undertaking.
I hadn't come across the von Neumann-Morgenstern utility theorem before reading this post, thanks for drawing it to my attention.
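For readers who also hadn't met the theorem, here is a minimal sketch of what it buys you (the outcomes, probabilities, and utility numbers below are made up for illustration): if an agent's preferences over lotteries satisfy the VNM axioms, they can be represented by a utility function over outcomes, and the agent ranks lotteries by expected utility.

```python
# Illustration of the von Neumann-Morgenstern picture: given a utility
# function u over outcomes, lotteries are ranked by expected utility.

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

# Toy utilities; VNM utilities are unique only up to positive affine transforms.
u = {"win_100": 1.0, "win_50": 0.7, "win_0": 0.0}

lottery_a = {"win_100": 0.5, "win_0": 0.5}  # coin flip for 100 or nothing
lottery_b = {"win_50": 1.0}                 # 50 for sure

# The VNM agent prefers whichever lottery has higher expected utility.
print(expected_utility(lottery_a, u))  # 0.5
print(expected_utility(lottery_b, u))  # 0.7 -> prefers the sure thing
```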
Looking at moral philosophy through the lens of agents working with utility/value functions is an interesting exercise; it's something I'm still working on. In the long run, I think some deep thinking needs to be done about what we end up selecting as terminal values, and how we incorporate them into a utility function; one possible shape for that is sketched below. (I hope that isn't stating something that is blindingly obvious.)
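As a hedged sketch of what "incorporating terminal values into a utility function" might look like in the simplest case (the value names, weights, and state fields are hypothetical placeholders, not a proposal): score a world state on each terminal value and take a weighted sum.

```python
# Combine several terminal values, each scored on a world state, into a
# single utility function via a weighted sum.

terminal_values = {
    "wellbeing": (0.5, lambda state: state["avg_life_satisfaction"]),
    "knowledge": (0.3, lambda state: state["scientific_progress"]),
    "diversity": (0.2, lambda state: state["cultural_variety"]),
}

def utility(state):
    """Weighted sum of terminal-value scores for a world state."""
    return sum(w * score(state) for w, score in terminal_values.values())

state = {"avg_life_satisfaction": 0.8, "scientific_progress": 0.6,
         "cultural_variety": 0.9}
print(utility(state))  # 0.5*0.8 + 0.3*0.6 + 0.2*0.9 = 0.76
```

Even this toy version makes the relativism worry concrete: nothing inside the formalism fixes the weights or the list of values, so those choices have to come from somewhere else.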
I guess where you might be headed is into meta-ethics. As I understand it, meta-ethics includes debates on moral relativism that are closely related to the existence of terminal/intrinsic values.... (read more)
I read
to mean (1) "the utility function U increases in value from action A". Did you mean (2) "under utility function U, action A increases (expected) value"? Or am I missing some distinction in terminology?
The alternative meaning (2) leads to "should" (much like "ought") being dependent on the utility function used. Normativity might suggest that we all share views on utility that have fundamental similarities. In my mind at least, the usual controversy over whether utility functions (and the moral claims derived from them, i.e., "should"s and "ought"s) can be objectively true remains.
Edit: formatting.
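A small sketch of the distinction between the two readings, with hypothetical toy numbers: under reading (2), the utility function U is held fixed and the question is whether action A raises the expected value of the resulting state; reading (1) would instead be about A changing U itself (e.g., an agent modifying its own values), which is a different claim.

```python
# Reading (2): U is fixed; "you should do A" iff A has higher expected
# value under U than the alternatives. The states and probabilities here
# are toy placeholders.

def U(state):
    return state["paperclips"]  # a fixed, toy utility function

def expected_value(action, U, outcomes):
    """Expected utility of the states an action might lead to."""
    return sum(p * U(s) for p, s in outcomes[action])

outcomes = {
    "A":       [(0.9, {"paperclips": 10}), (0.1, {"paperclips": 0})],
    "nothing": [(1.0, {"paperclips": 2})],
}

print(expected_value("A", U, outcomes) >
      expected_value("nothing", U, outcomes))  # True: under U, you "should" do A
```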