Natural languages are full of ambiguity, and yes, that use sounds wrong because you're talking about a particular person.
And if you really wanted to say that it was Chris's money, how about "Chris lost Chris's money"? It sounds awkward to me because my English only allows use of "they" in the singular if it is an abstract person, not a particular real person.
I mean, it's not like "Chris lost his money" is unambiguous; it is not at all clear to me whether the "his" refers to Chris or someone else. That would probably be clear in discourse because of context.
Cowardly does not simply mean bad. Saying that sacrificing your life to achieve a goal is cowardly is nonsense.
They suppressed their fears; that's what bravery means. You can be brave doing horrible things. Mao was brave; Hitler was (at times) brave. You are falling into the halo effect, assuming that if an act was bad it must be bad in all ways.
I really wonder how this sort of result applies to cultures that don't expect everyone to have high self-esteem. Such as, say, Japan.
Well, visual programming of visual things is good, but that's just WYSIWYG.
Properly speaking, no, they are not part of math; they are part of Computer Science, i.e. a description of how computations actually happen in the real world.
That is the missing piece that determines what axioms to use.
See, I think you are misunderstanding his response. I mean, that is the only way I can interpret it to make sense.
Your insistence that it is not the right interpretation is very odd. I get that you don't want to trigger people's cooperation instincts, but that's the only framework in which talking about other beings makes sense.
The morality you are talking about is the human-now-extended morality (well, closer to the Less-Wrong-now-extended morality), in that it is the morality that results from extending from the values humans currently have. Now, you seem to have a categorization that needs to classify your own morality as different from others' in order to feel right about imposing it? So you categorize it as simply "morality," but your morality is not necessarily my morality, and so that categorization feels iffy to me. Now, it's certainly closer to mine than to the Babyeaters', but I have no proof it is the same. Calling it simply "Morality" papers over this.
I think I have a cooperation instinct that is pushing me toward the Super Happy future.
It feels better, but it is probably not what I would do in real life. Or I am more different from others than I give credit for.
Either I would become incapable of any action or choice, or I wouldn't change at all, or I would give up the abstract goals and gradually reclaim the concrete ones.
You know, I can't help but read this as a victory for humanity. Not a full victory, but I think the probability of some sort of interstellar civilization that isn't a dystopia is higher afterwards than before. If nothing else, we are more aware of the dangers of AI, and anything that does that and leaves a non-dystopian civilization capable of making useful AI is most likely a good thing by my utility function.
One thing that does bug me is that I do not value happiness as much as most people do. Maybe I'm just not as empathetic as most people? I mean, I actually hope that humanity is replaced by a descendant civilization/species that still values Truth and Beauty; I care a lot more whether they are successful than whether they are happy.
I wonder how much of the variance in preference between this and others could be explained by whether they are single (i.e. they don't have someone they love to the point of "I don't want to consider even trying to live with someone else") vs. those that do.
I would take it. I imagine I would be very unhappy for a few months. (It feels like it would take years, but that's a well-known bias.)
I assume "verthandi" is also not a coincidence.