I'm not sure it's important that my conclusions be "interesting". The point was that we needed a guideline (or set thereof), and as far as I know this need has not been previously met.
Once we agree on a set of guidelines, then I can go on to show examples of rational moral decisions -- or possibly not, in which case I update my understanding of reality.
Re ethical vs. other kinds: I'm inclined to agree. I was answering an argument that there is no such thing as a rational moral decision. Jack drew this distinction, not me. Yes, I took way too long...
Yes, I agree, it's a balancing act.
My take on references I don't get is either to ignore them, to ask someone ("hey, is this a reference to something? I don't get why they said that."), or possibly to Google it if it looks Googleable.
I don't think it should be a cause for penalty unless the references are so heavy that they interrupt the flow of the argument. It's possible that I did that, but I don't think I did.
Yes, that is quite true. However, as you can see, I was indeed discussing how to spot irrationality, potentially from quite a long way away.
Nobody likes me, everybody hates me, I'm gonna go eat worms...
I suppose it would be asking too much to suggest that if a sentence or phrase seems out of place or perhaps even surreal, readers could just assume it's a reference they don't get, and skip it?
If the resulting argument doesn't make sense, then there's a legit criticism to be made.
For what it's worth, here are the references. I'll add a link here from the main post.
I can certainly attempt that. I considered doing so originally, but thought it would be too much like "explaining the joke" (a process notorious for efficient removal of humor). I also had this idea that the references were so ubiquitous by now that they were borderline cliche. I'm glad to discover that this is not the case... I think.
I finally figured out what was going on, and fixed it. For some reason it got posted in "drafts" instead of on the site, and looking at the post while logged in gave no clue that this was the case.
Sorry about that!
The subjective part probably could have been shortened, but I thought it was at least partly necessary in order to give proper context, as in "why are you trying to define rationality when this whole web site is supposed to be about that?" or similar.
The question is, was it informative? If not, then how did it fail in that goal?
Maybe I should have started with the conclusions and then explained how I got there.
They were references -- Hitchhiker's Guide to the Galaxy and Monty Python, respectively. I didn't expect everyone to get them, and perhaps I should have taken them out, but the alternative seemed too damn serious and I thought it worth entertaining some people at the cost of leaving others (hopefully not many, in this crowd of geeks) scratching their heads.
I hope that clarifies. In general, if it seems surrealistic and out of place, it's probably a reference.
My main conclusions are, oddly enough, in the final section:
[paste]
I propose that the key elements of a rational conversation are (where "you" refers collectively to all participants):
I'm not sure I follow. Are you using "values" in the sense of "terminal values"? Or "instrumental values"? Or perhaps something else?
I don't think I have anything to add to your non-length-related points. Maybe that's just because you seem to be agreeing with me. You've spun my points out a little further, though, and I find myself in agreement with where you ended up, so that's a good sign that my argument is at least coherent enough to be understandable and possibly in accordance with reality. Yay. Now I have to go read the rest of the comments and find out why at least seven people thought it sucked...
Yes, it could have been shorter, and that would probably have been clearer.
It also could have been a lot longer; I was somewhat torn by the apparent inconsistency of demanding documentation of thought-processes while not documenting my own -- but I did manage to convince myself that if anyone actually questioned the conclusions, I could go into more detail. I cut out large chunks of it after deciding that this was a better strategy than trying to Explain All The Things.
It could probably have been shorter still, though -- I ended up arriving at some fairly ...
"Immense" wouldn't be "reasonable" unless the problem was of such magnitude as to call for an immense amount of research. That's why I qualify pretty much every requirement with that word.
See my comment about "internal" and "external" terminal values -- I think possibly that's where we're failing to communicate.
Internal terminal values don't have to be rational -- but external ones (goals for society) do, and need to take individual ones into account. Violating an individual internal TV causes suffering, which violates my proposed universal external TV.
For instance... if I'm a heterosexual male, then one of my terminal values might be to form a pair-bond with a female of my species. That's an internal terminal value. Thi...
You're basically advocating for redistributing wealth from part of the global upper class to part of the global middle class and ignoring those experiencing the most pain and the most injustice.
I've explained repeatedly -- perhaps not in this subthread, so I'll reiterate -- that I'm only proposing reallocating domestic resources within the US, not resources which would otherwise be spent on foreign aid of any kind. I don't see how that can be harmful to anyone except (possibly) the extremely rich people from whom the resources are being reallocated.
(Will respond to your other points in separate comments, to maximize topic-focus of any subsequent discussion.)
Your example exposes the flaw in the "destroy everything instantly and painlessly" pseudo-solution: the latter assumes that life contains more suffering than pleasure. (Euthanasia is only performed -- or argued for, anyway -- when the gain from continuing to live is believed to be outweighed by the suffering.)
I think this shows that there needs to be a term for pleasure/enjoyment in the formula...
...or perhaps a concept or word which equates to either suffering or pleasure depending on sign (+/-), and then we can simply say that we're trying to maximize that term -- where the exact aggregation function has yet to be determined, but we know it has a positive slope.
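To put that in symbols (a sketch only -- the names b_i and f are placeholders, not a settled formula): let b_i be the net balance for individual i, negative for suffering and positive for pleasure, and let f be the yet-to-be-determined aggregation function. The goal is then roughly

\[ \max \; W = \sum_i f(b_i), \qquad f'(x) > 0 \ \text{for all } x \]

The positive slope is the only property I'm claiming here; whether f is linear, convex, or something else entirely is exactly the open question.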
That seems related to what I was trying to get at with the placeholder-word "freedom" -- I was thinking of things like "freedom to explore" and "freedom to create new things" -- both of which seem highly related to "learning".
It looks like we're talking about two subtly different types of "terminal value", though: for society and for one's self. (Shall we call them "external" and "internal" TVs?)
I'm inclined to agree with your internal TV for "learning", but that doesn't mean t...
Are you saying that I have to be able to provide you an equation which produces a numeric value as an answer before I can argue that ethical decisions should be based on it?
But ok, a rephrase and expansion:
I propose that (a) the ultimate terminal value of every rational, compassionate human is to minimize aggregate involuntary discomfort as defined by the subjects of that discomfort, and (b) that no action or decision can be reasonably declared to be "wrong" unless it can at least be shown to cause significant amounts of such discomfort. (Can w...
It's true that there would be no further suffering once the destruction was complete.
This is a bit of an abstract point to argue over, but I'll give it a go...
I started out earlier arguing that the basis of all ethics was {minimizing suffering} and {maximizing freedom}; I later dropped the second term because it seemed like it might be more of a personal preference than a universal principle -- but perhaps it, or something like it, needs to be included in order to avoid the "destroy everything instantly and painlessly" solution.
That said, I think...
It's not what "we" -- the people making the decision or taking the action -- don't like; it's what those affected by the action don't like.
By particularism I mean that there are no moral principles and that the right action is entirely circumstance dependent.
So how do you rationally decide if an action is right or wrong? -- or are you saying you can't do this?
Also, just to be clear: you are saying that you do not believe rightness or wrongness of an action ultimately derives from whether or not it does harm? ("Harm" being the more common term; I tried to refine it a bit as "personally-defined suffering", but I think you're disagreeing with the larger idea -- not my ref...
Points 1 and 2:
I don't know. I admitted that this was an area where there might be individual disagreement; I don't know the exact nature of the fa() and fb() functions -- just that we want to minimize [my definition of] suffering and maximize freedom.
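In symbols, the most I'm committing to (with f_a and f_b as the placeholders above, and S and F standing for aggregate suffering and aggregate freedom) is something like

\[ \max \; J = f_b(F) - f_a(S), \qquad f_a' > 0, \; f_b' > 0 \]

and the actual shapes of f_a and f_b are exactly the part I don't know.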
Actually, on thinking about it, I'm thinking "freedom" is another one of those "shorthand" values, not a terminal value; I may personally want freedom, but other sentients might not. A golem, for example, would have no use for it (no comments from Pratchett readers, thank you). Nor would ...
You think you're disagreeing with me, but you're not; I would say that for you, death would be a kind of suffering -- the very worst kind, even.
I would also count the "wipe out all life" scenario as an extreme form of suffering. Anyone with any compassion would suffer in the mere knowledge that it was going to happen.
Much discussion about "minimization of suffering" etc. ensued from my first response to this comment, but I thought I should reiterate the point I was trying to make:
I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.
(Tentative definition: "suffering" is any kind of discomfort over which the subject has no control.)
All other values (from any part of the political continuum) -- "human rights", "justice", "fairness", "morality", "faith"...
It seems to me that the terminal values you list are really just means to an end, and that the end in question is similar to my own -- i.e. some combination of minimizing harm and maximizing freedom (to put it in terms which are a bit of an oversimplification).
For example: I also favor ethical pluralism (I'm not sure what "particularism" is), for the reasons that it leads to a more vibrant and creative society, whilst the opposite (which I guess would be suppressing or discouraging any but some "dominant" culture) leads to completely un...
A little follow-up... it looks like the major deregulatory change was the Telecommunications Act of 1996; the "freeing of the phone jack" took place in the early 1980s or late 1970s, and modular connectors (RJ11) were widespread by 1985, so either that was a result of earlier, less sweeping deregulation or else it was simply an industry response to advances in technology.
Amen to that... I remember when it was illegal to connect your own equipment to Phone Company wires, and telephones were hard-wired by Phone Company technicians.
The obvious flaw in the current situation, of course, is the regional monopolies -- slowly being undercut by competition from VoIP, but still: as it is, if I want wired phone service in this area, I have to deal with Verizon, and Verizon is evil.
This suggests to me that a little more regulation might be helpful -- but you seem to be suggesting that the lack of competition in the local phone market ...
[woozle] If the government doesn't provide it, just who is going to?
[mattnewport] Charities, family, friends, well meaning strangers...
So, why aren't they? How can we make this happen -- what process are you proposing by which we can achieve universal welfare supported entirely by such means?
You appear to be shifting the goalposts. You started out arguing that your main concern is to minimize suffering...
I didn't state the scope; you just assumed it was global. My goal remains as stated -- minimizing suffering -- but I am not arguing for any global...
I think it is a rational reason to oppose a role for government in providing it.
If the government doesn't provide it, just who is going to?
As I lack much of a nationalist instinct I am endlessly puzzled by the idea that we should draw arbitrary boundaries of care along national borders. If your concern is helping others you can do so with much greater efficiency by targeting that help where it is most needed, which is largely outside the US.
It's not a matter of loyalty, but of having the knowledge and resources to work with to make something possib...
I don't think BLoC has to be slippery, though of course in reality (with the current political system, anyway) it would become politicized. This is not a rational reason to oppose it, however.
I don't know if we can do it for everyone on Earth at the moment, though that is worth looking at and getting some numbers so we know where we are. I was proposing it for the US, since we are the huge outlier in this area; most other "developed" societies (including some less wealthy than the US) already have such a thing.
I would suggest a starting definitio...
Stop me if I'm misunderstanding the argument -- I won't have time to watch the video tonight, and have only read the quotes you excerpted -- but you seem to be posing "markets" against "universal BLoC" as mutually exclusive choices.
I am suggesting that this is a false dilemma; we have more than adequate resources to support socialism at the low end of the economic scale while allowing quite free markets at the upper end. If nobody can suffer horribly -- losing their house, their family, their ability to live adequately -- the risks of g...
I would suggest that it makes no sense to reward getting the right answer without documenting the process you used, because then nobody benefits from your discovery that this process leads (in at least that one case) to the right answer.
Similarly, I don't see the benefit of punishing someone for getting the wrong answer while sincerely trying to follow the right process. Perhaps a neutral response is appropriate, but we are still seeing a benefit from such failed attempts: we learn how the process can be misunderstood (because if the process is right, and ...
Actually, no, that's not quite my definition of suffering-minimization. This is an important issue to discuss, too, since different aggregation functions will produce significantly different final sums.
This is the issue I was getting at when I asked (in some other comment on another subthread) "is it better to take $1 each from 1000 poor people, or $1000 from one millionaire?" (That thread apparently got sucked into an attractor-conversation about libertarian values.)
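To show why the choice of aggregation function matters, here's a toy calculation (the log-wealth measure of suffering is purely illustrative -- one candidate function, not a claim about the right one):

```python
import math

def log_loss_suffering(wealth: float, loss: float) -> float:
    """Toy measure: suffering from losing `loss` dollars is the drop in
    log-wealth, so the same dollar loss hurts a poor person more."""
    return math.log(wealth) - math.log(wealth - loss)

# Take $1 each from 1000 people who have $100 to their name...
poor = 1000 * log_loss_suffering(wealth=100, loss=1)
# ...or take $1000 from one person with $1,000,000.
millionaire = log_loss_suffering(wealth=1_000_000, loss=1000)

print(f"1000 poor people: {poor:.4f}")        # ~10.05
print(f"one millionaire:  {millionaire:.4f}") # ~0.001
```

A naive dollar sum rates the two scenarios identically ($1000 either way); the log-wealth measure rates the first roughly ten thousand times worse. Which function is right is exactly what's up for debate.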
First, I'm inclined to think that suffering should be weighted more heavily than...
That's actually my main goal, at least now -- to be able to make rational decisions about political issues. This necessarily involves achieving some understanding of the methods by which voter perceptions are manipulated, but that is a means to an end.
In 2004, I thought it entirely possible that I was simply highly biased in some hitherto unnoticed way, and I wanted to come to some understanding of why half the country apparently thought Bush worthy of being in office at all, never mind thinking that he was a better choice than Kerry.
I was prepared to find...
The existence of conversational attractors is why I think any discussion tool needs to be hierarchical -- so any new topic can instantly be "quarantined" in its own space.
The LW comment system does this in theory -- every new comment can be the root of a new discussion -- but apparently in practice some of the same "problem behaviors" (as we say here in the High Energy Children Research Laboratory) still take place.
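For concreteness, the structure I have in mind is just a tree of comments where any reply can be labeled as a new topic -- a sketch of my own, not a description of how LW actually implements threading, and all the names here are invented:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Comment:
    """A node in a threaded discussion; any reply can root a new subthread."""
    author: str
    text: str
    topic: str
    replies: list[Comment] = field(default_factory=list)

    def reply(self, author: str, text: str, topic: str | None = None) -> Comment:
        """A reply tagged with a new topic is 'quarantined' in its own subtree."""
        child = Comment(author, text, topic or self.topic)
        self.replies.append(child)
        return child

root = Comment("A", "Proposed guidelines for rational debate...", "debate-methods")
# The tangent gets its own labeled subtree instead of derailing the parent:
tangent = root.reply("B", "But isn't this really about taxes?", topic="taxation")
```

The tree already gives every new topic its own space; nothing in the structure forces a conversation to drift.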
Moreover, I don't understand why it still happens. If you see the conversation going off in directions that aren't interest...
I probably should have inserted the word "practical" in that sentence. Bayesianism would seem to be formalized, but how practical is it for daily use? Is it possible to meaningfully (and with reasonable levels of observable objectivity) assign the necessary values needed by the Bayesian algorithm(s)?
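For reference, the machinery itself is tiny -- the practical difficulty is exactly where the input numbers come from. A minimal sketch (the probabilities and likelihood ratios below are invented for illustration):

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability via the odds form of Bayes' theorem:
    posterior_odds = prior_odds * P(evidence | H) / P(evidence | not-H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Claim H: "this policy reduces suffering."  Made-up numbers:
p = 0.50                  # prior: no opinion either way
p = bayes_update(p, 3.0)  # evidence 3x likelier if H is true  -> 0.75
p = bayes_update(p, 0.5)  # evidence 2x likelier if H is false -> 0.60
print(f"posterior: {p:.2f}")
```

Every likelihood ratio there is a judgment call, which is where the "reasonable levels of observable objectivity" question bites.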
More importantly, perhaps, would it be at least theoretically possible to write software to mediate the process of Bayesian discussion and analysis? If so, then I'm interested in trying to figure out how that might work. (I got pretty hopelessly lost ...
This seems a valid interpretation to me -- but is "wrongness" a one-dimensional concept?
A comment can be wrong in the sense of having incorrect information (as RobinZ points out) but right in the sense of arriving at correct conclusions based on that data -- in which case I would still count it as a valuable contribution, since it offers the chance to correct the data and, by extension, anyone who arrived at the same conclusion by believing the same incorrect data.
By the same token, a comment might include only true factual statements but arrive at ...
This is a good example of why we need a formalized process for debate -- so that irrelevant politicizations can be easily spotted before they grow into partisan rhetoric.
Part of the problem also may be that people often seem to have a hard time recognizing and responding to the actual content of an argument, rather than [what they perceive as] its implications.
For example (loosely based on the types of arguments you mention regarding Knox, but using a topic I'm more familiar with):
If I'm understanding correctly, "terminal values" are end-goals.
If we have different end-goals, we need to understand what they are. (Actually, we should understand what they are anyway -- but if they're different, it becomes particularly important to examine them and identify the differences.)
This seems related to a question that David Brin once suggested as a good one to bring up in political debate: Describe the sort of world you hope your preferred policies will create. ...or, in other words, describe your large-scale goals for our society...
"Terminal Values" are goals, then, and "Instrumental Values" are the methods used in an attempt to reach those goals. Does that sound right? So now I need to go reply to Jack again...
I'm trying to understand them in a rational context. Most years, that pattern made some kind of sense -- the major candidates were both lizards, neither one obviously better or worse than the other.
Continuing to vote along party lines after 3 years of experience with Bush, however, is a different beast. Either people were simply unaware of many of the key points (or possibly were aware of positive points unknown to anyone that I talked to), or else they were using an entirely different process for evaluating presidential fitness. In the former case, we have a problem; in the latter, something worthy of study.
So yes, liberals would consider voting for a Republican as a kind of treason.
Does he have data for this? I would vote for whoever seemed the most sensible, regardless of party. If Ron Paul had run against Obama, I would have had a much harder time deciding.
Also: I think what you're misunderstanding about the POV on the site is that I am prepared to rationally defend everything I have said there, and I am prepared to retract or alter it if I cannot do so. (Note that there are a few articles posted by others, and I don't necessarily agree with what they have said -- but if I have not responded, it means I also don't disagree strongly enough to bother. Maybe you do, and maybe I will too once the flaws are pointed out.)
Are you saying we should stop trying to bridge that gulf, or should I try to explain myself a different way?
I also think you're misunderstanding my criticism of Haidt. Yes, he has lots of data to support his claims -- but he rigged the experiments in the way he asked his questions, and he hasn't responded to the obvious flaws in his analysis.
Nor have you.
Exposition... disinformative?... contradiction... illogical, illogical... Norman, coordinate!