All of Sailor Vulcan's Comments + Replies

1. On the deontology/virtue ethics vs consequentialism thing, you're right I don't know how I missed that, thanks!

1a. I'll have to think about that a bit more.

2. Well, if we were just going off of the four moralities I described, then I already named two examples where two of those moralities are unable to converge: a pure flourishing maximizer wouldn't want to mercy kill the human species, but a pure suffering minimizer would. A pure flourishing maximizer would be willing to have one person tortured forever if that was a necessary prer... (read more)

2. Again, there are plenty of counterexamples to the idea that human values have already converged. The idea behind e.g. "coherent extrapolated volition" is that (a) they might converge given more information, clearer thinking, and more opportunities for those with different values to discuss, and (b) we might find the result of that convergence acceptable even if it doesn't quite match our values now.

3. Again, I think there's a distinction you're missing when you talk about "removal of values" etc. Let's take your example: reading adult MLP fanfiction. Suppose the world is taken over by some being that doesn't value that. (As, I think, most humans don't.) What are the consequences for those people who do value it? Not necessarily anything awful, I suggest. Not valuing reading adult MLP fanfiction doesn't imply (e.g.) an implacable war against those who do. Why should it? It suffices that the being that takes over the world cares about people getting what they want; in that case, if some people like to write adult MLP fanfiction and some people like to read it, our hypothetical superpowerful overlord will likely prefer to let those people get on with it.

But, I hear you say, aren't those fanfiction works made of -- or at least stored in -- atoms that the Master of the Universe can use for something else? Sure, they are, and if there's literally nothing in the MotU's values to stop it repurposing them then it will. But there are plenty of things that can stop the MotU repurposing those atoms other than its own fondness for adult MLP fanfiction -- such as, I claim, a preference for people to get what they want. There might be circumstances in which the MotU does repurpose those atoms: perhaps there's something else it values vastly more that it can't get any other way. But the same is true right here in this universe, in which we're getting on OK. If your fanfiction is hosted on a server that ends up in a war zone, or a server owned by a company that gets sold to

1a. Deontology/virtue ethics is a special case of consequentialism. The reason for following deontological rules is that the consequences of following them almost always tend to be better than the consequences of not following them. The exceptions, where it is wiser not to follow deontological rules, are generally rare.

1b. Those are social mores, not morals. If a human is brainwashed into shutting down the forces of empathy and caring within themselves, then they can be argued into treating any social more as a ... (read more)

1. Neither deontology nor virtue ethics is a special case of consequentialism. Some people really, truly do believe that sometimes the action with worse consequences is better. There are, to be sure, ways for consequentialists sometimes to justify deontologists' rules, or indeed their policy of rule-following, on consequentialist grounds -- and for that matter there are ways to construct rule-based systems that justify consequentialism. ("The one moral rule is: Do whatever leads to maximal overall flourishing!") They are still deeply different ways of thinking about morality. You consider questions of sexual continence, honour, etc., "social mores, not morals", but I promise you there are people who think of such things as morals. You think such people have been "brainwashed", and perhaps they'd say the same about you; that's what moral divergence looks like.

2. I think that if what you wrote was intended to stand after "I think there is no convergence of moralities because ..." then it's missing a lot of steps. I should maybe repeat that I'm not asserting that there is convergence; quite likely there isn't. But I don't think anything you've said offers any strong reason to think that there isn't.

3. Once again, I think you are not being clear about the distinction between the things I labelled (i) and (ii), and I think it matters. And, more generally, it feels as if we are talking past one another: I get the impression that either you haven't understood what I'm saying, or you think I haven't understood what you're saying.

Let's be very concrete here. Pick some human being whose values you find generally admirable. Imagine that we put that person in charge of the world. We'll greatly increase their intelligence and knowledge, and fix any mental deficits that might make them screw up more than they need to, and somehow enable them to act consistently according to those admirable values (rather than, e.g., turning completely selfish once granted power, as real pe

1. Ask yourself, what sorts of things do we humans typically refer to as "morality" and what things do we NOT refer to as "morality"? There are clearly things that do not go in the morality bucket, like your favorite flavor of ice cream. But okay, what other things do you think go in the morality bucket and why?

2. Because a) the same sorts of arguments can be made in reverse. Just as Minnie or Maxie might come to accept Eye for an Eye on pragmatic grounds because it makes society as a whole better/less bad, Goldie might accept Maximize ... (read more)

1. Some varieties of moral thinking whose diversity doesn't seem to me to be captured by your eye-for-eye/golden-rule/max-flourish/min-suffer schema:

* For some people, morality is all about results ("consequentialists"). For some, it's all about following some moral code ("deontologists"). For some, it's all about what sort of person you are ("virtue ethicists"). Your Minnie and Maxie are clearly consequentialists; perhaps Ivan is a deontologist; it's hard to be sure what Goldie is; but these different outlooks can coexist with a wide variety of object-level moral preferences and your four certainly don't cover all the bases here.

* Your four all focus on moral issues surrounding _harming and benefiting_ people. Pretty much everyone does care about those things, but other very different things are important parts of some people's moral frameworks. For instance, some people believe in a god or gods and think _devotion to their god(s)_ more important than anything else; some people attach tremendous importance to various forms of sexual restraint (only within marriage! only between a man and a woman! only if it's done in a way that could in principle lead to babies! etc.); some people (perhaps this is part of where Ivan is coming from, but you can be quite Ivan-like by other means) have moral systems in which _honour_ is super-important and e.g. if someone insults you then you have to respond by taking them down as definitively as possible.

2. (You're answering with "Because ..." but I don't see what "why?" question I asked, either implicitly or explicitly, so at least one of us has misunderstood something here.) (a) I agree that there are lots of different ways in which convergence could happen, but I don't see why that in any way weakens the point that, one way or another, it _could_ happen. (b) It is certainly true that Maxie and Minnie, as they are now, disagree about some important things; again, that isn't new

Except that for humans, life is a journey, not a destination. If you make a maximize-flourishing optimizer, you would need to rigorously define what you meant by flourishing, which requires a rigorous definition of a general human utility function, which doesn't and cannot exist. Human values are instrumental all the way down. Some values are just more instrumental than others -- that is the mechanism which allows human values to be over 4d experiences rather than 3d states. I mean, what other mechanism could result in that for a human mind? This is a natu

... (read more)

Good point, I missed that; I will fix it later. More likely that effect would result from programming the AI with the overlap between those utility functions, but I'm not totally sure, so I'll have to think about it. I don't think that point is actually necessary for the crux of my argument, though. Like I said, I'll have to think about it. Right now it's almost 4am and I'm really sick.

In other words, people who win at offline life spend less time on the internet because they're spending more of their time offline. And since rationalists are largely an online community rather than an offline one, at least outside of the Bay Area, this results in rationalists dropping out of the conversation when they start winning. That's a surprisingly plausible alternative explanation. I'll have to think about this.

So everything we do in life is problem solving and therefore storytelling was originally a form of problem solving, and this explains the origin of storytelling how? This seems like saying "the sky is made of quarks, all matter is made of quarks. Therefore this explains the origins of the sky." But just saying "quarks!" doesn't tell you where the quarks are and where they're going and how far away they all are from each other in what directions. And the positions of all the many quarks involved are too many to keep track of them all individually with a hum

... (read more)

In some societies it might not be considered socially acceptable to want to punish someone merely because what they are doing will raise their social status. That sort of thing is dishonest because social status is reputational and meant to be earned. If someone tries to punish you for doing something to earn status, they probably did not come by their social status by honest means.

In societies where people think like that, I imagine no one would want to say "this act of altruism will increase their status and so should be punished", because that is a low

... (read more)

This. If Less Wrong had been introduced to an audience of self-improvement health buffs and business people instead of nerdy booksmart Harry Potter fans, things would have been drastically different. It is possible to become more effective at optimizing for other goals besides just truth. People here seem to naively assume that as long as they have enough sufficiently accurate information, everything else will simply fall into place and they'll do everything else right automatically, without needing to really practice or develop any other skills. I will be speaking more on this later.

1 · Дмитрий Зеленский · 2y
I would replace "introduced" with "sold" or "made interesting" here. It's not enough to introduce a group of people to something - unless their values are already in sync with said something's _appearance_ (and the appearance, aka elevator pitch, aka hook, is really important here), you would need to apply some marketing/Dark Arts/rhetorics/whatever-you-call-it to persuade them it's worth it. And, for all claims of "Rationalists should win", Yudkowsky2008 was too much of a rhetorics-hater (really, not noticing his own pattern of having the good teachers of Defence against the Dark Arts in Hogwarts themselves practicing Dark Arts (or, in case of Lupin, *being* Dark Arts)?) to perform that marketing, and thus the blog went on to attract people who already shared the values - nerdy booksmarts (note that a) to the best of my knowledge, HPMoR postdates the Sequences; b) Harry Potter isn't exactly a booksmart-choosing fandom, as is shown by many factors, including the gross proportion of "watched-the-films-never-read-the-books" fans against readers AND people who imagine Draco Malfoy to be a refined aristocrat whose behavior is, though not nice, perfectly calibrated, instead of the petty bully we see in both books and films AND - I should stop here before I go on a tangent; so I am not certain how much "Harry Potter fans" is relevant).

Except that you're using "useful to believe" as a criterion for determining whether something is true or not. Also, if you had developed the skills, qualities, attitudes, and habits necessary to handle the truth in a sane and healthy manner, you wouldn't need to believe in a God, because you would know how to live with the knowledge that there is no God and not be broken by it. If you truly had developed the ability to handle the truth safely, it wouldn't matter what the truth was, you'd be able to handle it regardless. That is to say, if a God does not exi

... (read more)

Thanks. Sorry for getting spooked. I have some major anxiety and insecurities and trust issues with other people, which sometimes rear their ugly heads and make things difficult for me. I would love to get some feedback from you on my story.


Thanks, I'd forgotten about that. I have done what you suggested, so I'll put the link back.

You should be wary of believing something because you think it's useful to believe it, rather than because it's true. For every useful untrue belief, it should be possible to get the same or greater benefit from believing something that's true instead, if you have developed the skills, qualities, attitudes, and habits necessary to handle the truth in a sane and healthy manner.

That's the thing. Unitarian-universalist churches accept everyone as members no matter what they believe. They don't require their members to have a particular belief system. So if you change your beliefs at any point you won't have to leave.

1 · Justin Vriend · 5y
I agree entirely. What I am arguing is that gods can be a part of the "skills, qualities, attitudes, and habits necessary to handle the truth in a sane and healthy manner." They don't require belief in any untruths, merely interpretation of the truth into a euphorically beautiful form. Nowhere in my post do I advise believing anything untrue, and nowhere do I advise deliberately ignoring true things.
Not every belief is about truth. You may hold X to be useful because it's useful, and Y to be true because it's true. The mistake is to automatically hold X to be true as well, or Y to be useful. Restating that as "X being useful is true" erases the distinction or invites unnecessary rigor. In the same vein, there are judgements of falsehood or necessity or need, and these are about falsehood or necessity or need and not about truth.

This is what fiction is for. A principle can be presented as a character which personifies it, to explain it better, but that's different from telling people that the character is a real person rather than a fictional symbol meant to demonstrate a point.

A good example of a fictional deity character meant to demonstrate a point but which is not meant to be believed as real would be the Goddess of Everything Else. I will admit that the Goddess of Everything Else represents rationalist/transhumanist values much better than the Christian deity. Reality is often

... (read more)
3 · Justin Vriend · 5y
I agree that it's important to be conservative when talking about gods. I would never tell anyone that God is a real person who exists within physical substrate (except in my own brain-meat). But, I also think that "fiction," while technically accurate, fails to capture the way in which I use God. While a "set" of anything is technically a fiction (in that it doesn't exist in the physical substrate), set theory can be a powerful tool. If you were to dismiss any given set as a fiction without first appreciating the details of its use, you would be losing important details. E: I'm not familiar with Unitarian Universalists. What's their credo?

Thanks! Your saying that makes me instantly like you too. :D

To some extent, but not all in one place. One of the main characters of the novelette series I'm writing, Bertie, is a self-insert at first, and some of his history plays into the plot. I've also posted about some of it on my blog.

SPOILER: Earthlings: People of the Dawn part 6 -- the Ancestor's Legacy version 2 is a depiction of a genuine utopian future, and at one point one of the characters finds out about Bertie's past psych problems in "Ancient Earth: the MMORPG" and there's a very stark contrast between the game's depiction of pre-rationalist Bertie an

... (read more)