All of woozle's Comments + Replies

Exposition... disinformative?... contradiction... illogical, illogical... Norman, coordinate!

I'm not sure it's important that my conclusions be "interesting". The point was that we needed a guideline (or set thereof), and as far as I know this need has not been previously met.

Once we agree on a set of guidelines, then I can go on to show examples of rational moral decisions -- or possibly not, in which case I update my understanding of reality.

Re ethical vs. other kinds: I'm inclined to agree. I was answering an argument that there is no such thing as a rational moral decision. Jack drew this distinction, not me. Yes, I took way too long... (read more)

Yes, I agree, it's a balancing act.

My take on references I don't get is either to ignore them, to ask someone ("hey, is this a reference to something? I don't get why they said that."), or possibly to Google it if it looks Googleable.

I don't think it should be a cause for penalty unless the references are so heavy that they interrupt the flow of the argument. It's possible that I did that, but I don't think I did.

The problem is that the references have such a strained connection to what you're talking about that they are basically non sequiturs whether you understand them or not.

Yes, that is quite true. However, as you can see, I was indeed discussing how to spot irrationality, potentially from quite a long way away.

Nobody likes me, everybody hates me, I'm gonna go eat worms...

I suppose it would be asking too much to just suggest that if a sentence or phrase seems out of place or perhaps even surreal, that readers could just assume it's a reference they don't get, and skip it?

If the resulting argument doesn't make sense, then there's a legit criticism to be made.

But I like you!!! I like humans!!! It's just that I regard your expositions as disinformative.

For what it's worth, here are the references. I'll add a link here from the main post.

  • "Spot the Loonie!" was a Monty Python satire of a game show. I'm using it here to refer to the idea of being able to tell when someone's argument doesn't make sense.
  • "How to Identify the Essential Elements of Rationality from Quite a Long Way Away" refers to a Monty Python episode whose title was, I think, "How to Identify Different Types of Trees from Quite a Long Way Away".
  • "Seven and a Half Million Years Later" ref
... (read more)

I can certainly attempt that. I considered doing so originally, but thought it would be too much like "explaining the joke" (a process notorious for efficient removal of humor). I also had this idea that the references were so ubiquitous by now that they were borderline cliche. I'm glad to discover that this is not the case... I think.

Two years ago, I wouldn't have gotten the brontosaurus reference. I got it today only because last year someone happened to include "Anne Elk" in their reference and that provided enough context for a successful Google. There are no ubiquitous references. That said, cata has a point too, as do you with the thing about explaining jokes. Like everything else in successful communication, it comes down to a balancing act.

I finally figured out what was going on, and fixed it. For some reason it got posted in "drafts" instead of on the site, and looking at the post while logged in gave no clue that this was the case.

Sorry about that!

The subjective part probably could have been shortened, but I thought it was at least partly necessary in order to give proper context, as in "why are you trying to define rationality when this whole web site is supposed to be about that?" or similar.

The question is, was it informative? If not, then how did it fail in that goal?

Maybe I should have started with the conclusions and then explained how I got there.

I felt like I didn't get the informativeness I bargained for, somehow. Your list of requirements for a rational conversation and your definition of a moral rational decision seem reasonable, but straightforward; even after reading your long exposition, I didn't really find out why these are interesting definitions to arrive at.

EDIT: One caveat is that it's not totally clear to me where the line between "ethical" goals and other goals lies, if there is such a line. Consequently, I don't know how to distinguish between a moral rational decision and just a plain old rational decision. Are ethical goals ones that have a larger influence on other people?

(In particular, I didn't understand the point of contention in the comment thread you linked to, that prompted this post. It seems pretty obvious to me that rationality in a moral context is the same as rationality in any other context: making decisions that are best suited to fulfilling your goals. You never really did address his final question of "how can a terminal value be rational". My answer would be that it's nonsense to call a value rational or irrational.)

They were references -- Hitchhiker's Guide to the Galaxy and Monty Python, respectively. I didn't expect everyone to get them, and perhaps I should have taken them out, but the alternative seemed too damn serious and I thought it worth entertaining some people at the cost of leaving others (hopefully not many, in this crowd of geeks) scratching their heads.

I hope that clarifies. In general, if it seems surrealistic and out of place, it's probably a reference.

Even references need to be motivated by textual concerns. For example, if you had a post titled "Mostly Harmless" because it talked about the people of Earth but it did not say anything related to harmlessness or lack thereof, it would not be a good title.
Suggestion: Supply links explaining references. You can't achieve common knowledge unless you have common priors.

My main conclusions are, oddly enough, in the final section:


I propose that the key elements of a rational conversation are (where "you" refers collectively to all participants):

  • 1) you must use only documented reasoning processes: [1.1] using the best known process(es) for a given class of problem, [1.2] stating clearly which particular process(es) you use, and [1.3] documenting any new processes you use
  • 2) you must make every reasonable effort to verify that: [2.1] your inputs are reasonably accurate, and [2.2] there are no other reasoning processes
... (read more)
I much prefer the new version.
Good question. First attempt at a good answer: Because Gonzo journalism, if done well, can certainly be entertaining, but it generally sucks at being informative. And because, in this community, we generally prefer informative. Except when we don't. Second attempt at a good answer: Because Gonzo journalism is so rarely done well.

I'm not sure I follow. Are you using "values" in the sense of "terminal values"? Or "instrumental values"? Or perhaps something else?

I don't think I have anything to add to your non-length-related points. Maybe that's just because you seem to be agreeing with me. You've spun my points out a little further, though, and I find myself in agreement with where you ended up, so that's a good sign that my argument is at least coherent enough to be understandable and possibly in accordance with reality. Yay. Now I have to go read the rest of the comments and find out why at least seven people thought it sucked...

Yes, it could have been shorter, and that would probably have been clearer.

It also could have been a lot longer; I was somewhat torn by the apparent inconsistency of demanding documentation of thought-processes while not documenting my own -- but I did manage to convince myself that if anyone actually questioned the conclusions, I could go into more detail. I cut out large chunks of it after deciding that this was a better strategy than trying to Explain All The Things.

It could probably have been shorter still, though -- I ended up arriving at some fairly ... (read more)

"Immense" wouldn't be "reasonable" unless the problem was of such magnitude as to call for an immense amount of research. That's why I qualify pretty much every requirement with that word.

Here's my answer, finally... or a more complete answer, anyway.

It's not visible, I think you have to publish it.

See my comment about "internal" and "external" terminal values -- I think possibly that's where we're failing to communicate.

Internal terminal values don't have to be rational -- but external ones (goals for society) do, and need to take individual ones into account. Violating an individual internal TV causes suffering, which violates my proposed universal external TV.

For instance... if I'm a heterosexual male, then one of my terminal values might be to form a pair-bond with a female of my species. That's an internal terminal value. Thi... (read more)

I'm fine with that distinction but it doesn't change my point. Why do external terminal values have to be rational? What does it mean for a value to be rational? Can you just answer those two questions?

You're basically advocating for redistributing wealth from part of the global upper class to part of the global middle class and ignoring those experiencing the most pain and the most injustice.

I've explained repeatedly -- perhaps not in this subthread, so I'll reiterate -- that I'm only proposing reallocating domestic resources within the US, not resources which would otherwise be spent on foreign aid of any kind. I don't see how that can be harmful to anyone except (possibly) the extremely rich people from whom the resources are being reallocated.

(Will respond to your other points in separate comments, to maximize topic-focus of any subsequent discussion.)

Your example exposes the flaw in the "destroy everything instantly and painlessly" pseudo-solution: the latter assumes that life is more suffering than pleasure. (Euthanasia is only performed -- or argued for, anyway -- when the gain from continuing to live is believed to be outweighed by the suffering.)

I think this shows that there needs to be a term for pleasure/enjoyment in the formula...

...or perhaps a concept or word which equates to either suffering and pleasure depending on signage (+/-), and then we can simply say that we're trying to maximize that term -- where the exact aggregation function has yet to be determined, but we know it has a positive slope.
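A minimal sketch of that idea, assuming (purely for illustration) a plain sum as the aggregation function -- the comment only claims the real function has a positive slope, so any monotonically increasing aggregate would fit; the function name and the numbers are hypothetical:

```python
def aggregate_wellbeing(wellbeing_scores):
    """Sum of signed well-being terms: positive for pleasure,
    negative for suffering. A plain sum is just one candidate
    aggregation with a positive slope in every term."""
    return sum(wellbeing_scores)

print(aggregate_wellbeing([3, -1, 2]))  # pleasure outweighs suffering -> 4
print(aggregate_wellbeing([-5, 1]))     # net suffering -> -4
```

Under any such aggregate, the "destroy everything" pseudo-solution scores zero, which beats a negative total but loses to any world with net-positive terms.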

That seems related to what I was trying to get at with the placeholder-word "freedom" -- I was thinking of things like "freedom to explore" and "freedom to create new things" -- both of which seem highly related to "learning".

It looks like we're talking about two subtly different types of "terminal value", though: for society and for one's self. (Shall we call them "external" and "internal" TVs?)

I'm inclined to agree with your internal TV for "learning", but that doesn't mean t... (read more)

Are you saying that I have to be able to provide you an equation which produces a numeric value as an answer before I can argue that ethical decisions should be based on it?

But ok, a rephrase and expansion:

I propose that (a) the ultimate terminal value of every rational, compassionate human is to minimizing aggregate involuntary discomfort as defined by the subjects of that discomfort, and (b) that no action or decision can be reasonably declared to be "wrong" unless it can at least be shown to cause significant amounts of such discomfort. (Can w... (read more)

It's true that there would be no further suffering once the destruction was complete.

This is a bit of an abstract point to argue over, but I'll give it a go...

I started out earlier arguing that the basis of all ethics was {minimizing suffering} and {maximizing freedom}; I later dropped the second term because it seemed like it might be more of a personal preference than a universal principle -- but perhaps it, or something like it, needs to be included in order to avoid the "destroy everything instantly and painlessly" solution.

That said, I think... (read more)

The classic one is euthanasia.

It's not what "we" -- the people making the decision or taking the action -- don't like; it's what those affected by the action don't like.

By particularism I mean that there are no moral principles and that the right action is entirely circumstance dependent.

So how do you rationally decide if an action is right or wrong? -- or are you saying you can't do this?

Also, just to be clear: you are saying that you do not believe rightness or wrongness of an action ultimately derives from whether or not it does harm? ("Harm" being the more common term; I tried to refine it a bit as "personally-defined suffering", but I think you're disagreeing with the larger idea -- not my ref... (read more)

There is no such thing as "rationally deciding if an action is right or wrong". This has nothing to do with particularism; it's just a metaethical position. I don't know what can be rational or irrational about morality. Again though, I'm not a particularist: I do have principles I can apply if I don't have strong intuitions. A particularist only has her intuitions. I don't believe my own morality can be reduced to language about harm. I'm not sure what "ultimately derives" means but I suspect my answer is no. My morality happens to have a lot to do with harm (again, I'm a Haidtian liberal). But I don't think that makes my morality more rational than a morality that is less about harm. There is no such thing as a "rational" or "irrational" morality, only moralities I find silly or abhorrent.

If it's the case that you care about the rest of the world then I don't think you realize how non-ideal your prescriptions are. You're basically advocating for redistributing wealth from part of the global upper class to part of the global middle class and ignoring those experiencing the most pain and the most injustice. But of course it comes at the price of harming the rest of the world. You're advocating sacrificing political resources to pass legislation. Those resources are to some extent limited, which means you're decreasing the chances of, or at least delaying, changes in policy which would actually benefit the poorest. Moreover, social entitlements are notoriously impossible to overturn, which means you're putting all this capital in a place we can't take it from to give to the people who really need it. Shoot, at least the mega-rich are sometimes using their money to invest in developing countries. This doesn't even get us into preventing existential risk. Whenever you have a utility-like morality, using resources inefficiently is about as bad as actively doing harm.

None you'll agree with! You've already said your morality is about preventing harm! But like it or not th

Points 1 and 2:

I don't know. I admitted that this was an area where there might be individual disagreement; I don't know the exact nature of the fa() and fb() functions -- just that we want to minimize [my definition of] suffering and maximize freedom.

Actually, on thinking about it, I'm thinking "freedom" is another one of those "shorthand" values, not a terminal value; I may personally want freedom, but other sentients might not. A golem, for example, would have no use for it (no comments from Pratchett readers, thank you). Nor would ... (read more)

So you want to modify your original statement to something like: "I propose that the ultimate terminal value of every rational, compassionate human is to minimize [woozle's definition of] suffering (which woozle can't actually define but knows it when he sees it)"?

Your proposal seems to be phrased as a descriptive rather than normative statement ('the ultimate terminal value of every rational, compassionate human is' rather than 'should be'). As a descriptive statement this seems factually false unless you define 'rational, compassionate human' as 'human who aims to minimize woozle's definition of suffering'. As a normative statement it is merely an opinion, and one which I disagree with.

So I don't agree that minimizing suffering by any reasonable definition I can think of (I'm having to guess since you can't provide one) is or should be the terminal value of human beings in general or this human being in particular. Perhaps that means I am not rational or compassionate by your definition, but I am not entirely lacking in empathy -- I've been known to shed a tear when watching a movie and to feel compassion for other human beings.

Well, you need to make some effort to clarify your definition then. If killing someone to save them from an eternity of torture is an increase in suffering by your definition, what about preventing a potential someone from ever coming into existence? Death represents the cessation of suffering and the cessation of life, and is extreme suffering by your definition. Is abortion or contraception also a cause of great suffering due to the denial of a potential life? If not, why not?

So everyone shares your self-declared terminal value of minimizing suffering, but many of them don't know it because they are confused, brainwashed or evil? Is there any point in me debating with you since you appear to have defined my disagreement to be confusion or a form of psychopathy?
So, how much suffering would you say an unoccupied volume of space is subject to? A lump of nonliving matter? A self-consistent but non-instantiated hypothetical person?

You think you're disagreeing with me, but you're not; I would say that for you, death would be a kind of suffering -- the very worst kind, even.

I would also count the "wipe out all life" scenario as an extreme form of suffering. Anyone with any compassion would suffer in the mere knowledge that it was going to happen.

If you're going to define suffering as 'whatever we don't like,' including the possibility that it's different for everyone, then I agree with your assertion but question its usefulness.

Much discussion about "minimization of suffering" etc. ensued from my first response to this comment, but I thought I should reiterate the point I was trying to make:

I propose that the ultimate terminal value of every rational, compassionate human is to minimize suffering.

(Tentative definition: "suffering" is any kind of discomfort over which the subject has no control.)

All other values (from any part of the political continuum) -- "human rights", "justice", "fairness", "morality", "faith"... (read more)

Learning is a terminal value for me, which I hold irreducible to its instrumental advantages in contributing to my well-being.
I think you are wrong but I don't think you've even defined the goal clearly enough to point to exactly where. Some questions:

  • How do we weight individual contributions to suffering? Are all humans weighted equally? Do we consider animal suffering?
  • How do we measure suffering? Should we prefer to transfer suffering from those with a lower pain threshold to those with a greater tolerance?
  • How do you avoid the classic unfriendly AI problem of deciding to wipe out humanity to eliminate suffering?
  • Do you think that people actually generally act in accordance with this principle or only that they should? If the latter, to what extent do you think people currently do act in accordance with this value?

There are plenty of other problems with the idea of minimizing suffering as the one true terminal value but I'd like to know your answers to these questions first.
I disagree. I'll take suffering rather than death any day, thank-you-very-much. Furthermore, I have reason to believe that, if I were offered the opportunity to instantaneously and painlessly wipe out all life in the universe, many compassionate humans would support my decision not to do so, despite all the suffering which is thereby allowed to continue.

It seems to me that the terminal values you list are really just means to an end, and that the end in question is similar to my own -- i.e. some combination of minimizing harm and maximizing freedom (to put it in terms which are a bit of an oversimplification).

For example: I also favor ethical pluralism (I'm not sure what "particularism" is), for the reasons that it leads to a more vibrant and creative society, whilst the opposite (which I guess would be suppressing or discouraging any but some "dominant" culture) leads to completely un... (read more)

This is my fault. I don't mean multiculturalism or political pluralism. I really do mean pluralism about terminal values. By particularism I mean that there are no moral principles and that the right action is entirely circumstance dependent. Note that I'm not actually a particularist since I did give you moral principles. I would say that I am a value pluralist.

But I'm explicitly denying this. For example, I am a cosmopolitan. In your discussion with Matt you've said that for now you care about helping poor Americans, not the rest of the world. But this is totally antithetical to my terminal values. I would vastly prefer to spend political and economic capital to get rid of agricultural subsidies in the developed world, liberalize as many immigration and trade laws as I can and test strategies for economic development. Whether or not the American working class has cheap health care really is quite insignificant to me by comparison.

Now, when I say I have a terminal value of fairness I really do mean it. I mean I would sacrifice utility or increase overall suffering in some circumstances in order to make the world more fair. I would do the same to make the world more free and the same to make the world more honest in some situations. I would do things that furthered the happiness of my friends and family but increased your suffering (nothing personal). I don't know what gives you reason to deny any of this.

Now you're just begging the question. My whole point this entire time is that there is no reason for morality to always be about harm. Indeed, there is no reason for morality to ever be about harm except that we make it so. I frankly don't even understand the application of the word "rationality" as we use it here to values. Unless you have a third meaning for the word your usage here is just a cate

A little follow-up... it looks like the major deregulatory change was the Telecommunications Act of 1996; the "freeing of the phone jack" took place in the early 1980s or late 1970s, and modular connectors (RJ11) were widespread by 1985, so either that was a result of earlier, less sweeping deregulation or else it was simply an industry response to advances in technology.

Amen to that... I remember when it was illegal to connect your own equipment to Phone Company wires, and telephones were hard-wired by Phone Company technicians.

The obvious flaw in the current situation, of course, is the regional monopolies -- slowly being undercut by competition from VoIP, but still: as it is, if I want wired phone service in this area, I have to deal with Verizon, and Verizon is evil.

This suggests to me that a little more regulation might be helpful -- but you seem to be suggesting that the lack of competition in the local phone market ... (read more)

An example of the type of special-interest driven regulation presented as consumer protection that I'm talking about is the established phone companies trying to use the E911 regulations to hamper VOIP companies that threaten their monopolies. This type of regulatory capture is very common.

[woozle] If the government doesn't provide it, just who is going to?

[mattnewport] Charities, family, friends, well meaning strangers...

So, why aren't they? How can we make this happen -- what process are you proposing by which we can achieve universal welfare supported entirely by such means?

You appear to be shifting the goalposts. You started out arguing that your main concern is to minimize suffering...

I didn't state the scope; you just assumed it was global. My goal remains as stated -- minimizing suffering -- but I am not arguing for any global... (read more)

I don't have time to reply to your whole post right now (I'll try to give a fuller response later) but telecom deregulation is the first example that springs to mind of (imperfect but) largely successful deregulation.

I think it is a rational reason to oppose a role for government in providing it.

If the government doesn't provide it, just who is going to?

As I lack much of a nationalist instinct I am endlessly puzzled by the idea that we should draw arbitrary boundaries of care along national borders. If your concern is helping others you can do so with much greater efficiency by targeting that help where it is most needed, which is largely outside the US.

It's not a matter of loyalty, but of having the knowledge and resources to work with to make something possib... (read more)

Charities, family, friends, well meaning strangers... The desire to help others does not exist because of government. There might be more or fewer resources devoted to charity in the absence of government intervention; I haven't seen much evidence either way. Libertarians commonly argue that private charity is more effective than government welfare and that it makes for a healthier society (a typical example of this case is the first such argument I found on Google). Now you can certainly dispute these claims but you talk as if you are not even aware that such alternative arguments exist.

You appear to be shifting the goalposts. You started out arguing that your main concern is to minimize suffering. Now you are saying that because that is an unrealistic goal you instead think it is more important to make people who are already relatively well off by global standards (poor Americans) better off than it is to minimize suffering of the global poor. If your goal is really to minimize human suffering I don't see how you can argue that guaranteeing housing and healthcare for Americans is a more effective approach than anti-malarial medications, vaccines or antibiotics for African children.

Subtly different question. It is true that many wealthy countries also have relatively generous welfare systems (particularly in Europe) but they have been able to afford these systems because they were already relatively wealthy. Studies that find negative effects are generally looking at relative growth rates but the difficulty of properly controlling such studies makes them somewhat inconclusive.

I've been down this road before in discussions of this nature and they usually degenerate into people throwing links to studies back and forth that neither side really has taken the time to read in detail. The discussion usually just derails into arguing about why this or that study is not adequately controlled. I think it is fair

I don't think BLoC has to be slippery, though of course in reality (with the current political system, anyway) it would become politicized. This is not a rational reason to oppose it, however.

I don't know if we can do it for everyone on Earth at the moment, though that is worth looking at and getting some numbers so we know where we are. I was proposing it for the US, since we are the huge outlier in this area; most other "developed" societies (including some less wealthy than the US) already have such a thing.

I would suggest a starting definitio... (read more)

I think it is a rational reason to oppose a role for government in providing it. Governments are bad enough at providing well defined services; a poorly defined goal exacerbates the problem. Just as the vague threat of terrorism provides a cover for ever increasing government encroachment on civil liberties, the vague promise of an ever-rising 'basic level of comfort' provides cover for ever increasing government encroachment on economic liberties.

As I lack much of a nationalist instinct I am endlessly puzzled by the idea that we should draw arbitrary boundaries of care along national borders. If your concern is helping others you can do so with much greater efficiency by targeting that help where it is most needed, which is largely outside the US.

Incidentally, I don't think it is a coincidence that many developed countries with advanced welfare states are less wealthy than the US. The difficulties of making economic comparisons across countries with different cultures and histories make it hard to draw firm conclusions from the data on these differences, but some of it is highly suggestive. The claim that it doesn't seem to hurt their overall wealth at all is highly controversial. Due to the difficulties of controlling for other factors it is always possible to explain away the wealth differences in the data, but there are suggestive trends. I don't really want to get into throwing studies back and forth, but saying 'it doesn't seem to hurt their overall wealth at all' suggests either ignorance of the relevant data or unjustified confidence in interpretation of it.

Those who gain early advantage inevitably using their power to take down other players sounds like a description of the current corporatist system in the US to me, where incumbents use their political influence to buy state protection from competition. You appear to be ignorant of the kinds of problems highlighted by public choice theory

Stop me if I'm misunderstanding the argument -- I won't have time to watch the video tonight, and have only read the quotes you excerpted -- but you seem to be posing "markets" against "universal BLoC" as mutually exclusive choices.

I am suggesting that this is a false dilemma; we have more than adequate resources to support socialism at the low end of the economic scale while allowing quite free markets at the upper end. If nobody can suffer horribly -- losing their house, their family, their ability to live adequately -- the risks of g... (read more)

Your 'universal basic level of comfort' seems an awfully slippery concept to me. I imagine the average American's idea of what it is differs rather markedly from that of someone living in rural Africa. Both would differ from that of a medieval peasant. That's somewhat beside the point though.

The reason we can support an unprecedented human population with, on average, a level of health, comfort and material well-being that is historically high is that markets are extremely good at allocating resources efficiently and at encouraging and spreading innovations. This efficiency stems in large part from the way that a market economy rewards success and not good intentions. Profits tend to flow to those who can most effectively produce goods or services valued by other market participants. Hayek's point is that this can lead to a distribution of wealth that offends many people's natural sense of justice, but that attempts to enforce a more 'just' distribution tend to backfire in all kinds of ways, not least of which is through a reduction in the very efficiency we rely on to maintain our standard of living.

Part of the problem is that I believe this reflects an overly static view of the way the economy functions and neglects the effects of changes in incentives on individual behaviour and, in time, on societal norms. The idea of a 'culture of dependency' reflects these types of concern. Moral hazard doesn't only affect too-big-to-fail banks.

This ties in with my earlier point about defining a 'basic level of comfort'. I believe Hayek was actually supportive of some level of unemployment insurance. The tremendous inequalities between nations complicate the politics of this issue -- many people in the developed world feel they are entitled to a basic level of comfort when unemployed that exceeds the level of comfort of productive workers in the developing world, and this has consequences for the politics of free trade, immigration and foreign aid. I'm not sure exactly what Hayek's

I would suggest that it makes no sense to reward getting the right answer without documenting the process you used, because then nobody benefits from your discovery that this process leads (in at least that one case) to the right answer.

Similarly, I don't see the benefit of punishing someone for getting the wrong answer while sincerely trying to follow the right process. Perhaps a neutral response is appropriate, but we are still seeing a benefit from such failed attempts: we learn how the process can be misunderstood (because if the process is right, and ... (read more)

Actually, no, that's not quite my definition of suffering-minimization. This is an important issue to discuss, too, since different aggregation functions will produce significantly different final sums.

This is the issue I was getting at when I asked (in some other comment on another subthread) "is it better to take $1 each from 1000 poor people, or $1000 from one millionaire?" (That thread apparently got sucked into an attractor-conversation about libertarian values.)

First, I'm inclined to think that suffering should be weighted more heavily than... (read more)

Part of the justification is in the discussion with Hayek I linked here [].

That's actually my main goal, at least now -- to be able to make rational decisions about political issues. This necessarily involves achieving some understanding of the methods by which voter perceptions are manipulated, but that is a means to an end.

In 2004, I thought it entirely possible that I was simply highly biased in some hitherto unnoticed way, and I wanted to come to some understanding of why half the country apparently thought Bush worthy of being in office at all, never mind thinking that he was a better choice than Kerry.

I was prepared to find... (read more)

The existence of conversational attractors is why I think any discussion tool needs to be hierarchical -- so any new topic can instantly be "quarantined" in its own space.

The LW comment system does this in theory -- every new comment can be the root of a new discussion -- but apparently in practice some of the same "problem behaviors" (as we say here in the High Energy Children Research Laboratory) still take place.

Moreover, I don't understand why it still happens. If you see the conversation going off in directions that aren't interest... (read more)
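The "quarantine" idea above amounts to treating a discussion as a tree, where any reply can root its own subthread. As a purely illustrative sketch (all names hypothetical, not a description of the LW codebase), the core data structure might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    """A comment node; any reply can become the root of its own
    quarantined subthread, as the hierarchical-tool idea suggests."""
    text: str
    replies: list = field(default_factory=list)

    def reply(self, text: str) -> "Comment":
        child = Comment(text)
        self.replies.append(child)
        return child  # tangents continue under this node, not the root

root = Comment("Original topic")
tangent = root.reply("New topic raised mid-discussion")  # quarantined
tangent.reply("Tangent continues here, leaving the root thread clean")
print(len(root.replies), len(tangent.replies))  # 1 1
```

The point of the sketch is only that the structure makes quarantining free; whether participants actually use it that way is the behavioral problem described above.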

I probably should have inserted the word "practical" in that sentence. Bayesianism would seem to be formalized, but how practical is it for daily use? Is it possible to meaningfully (and with reasonable levels of observable objectivity) assign the necessary values needed by the Bayesian algorithm(s)?

More importantly, perhaps, would it be at least theoretically possible to write software to mediate the process of Bayesian discussion and analysis? If so, then I'm interested in trying to figure out how that might work. (I got pretty hopelessly lost ... (read more)
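For concreteness, here is a minimal sketch (with purely illustrative numbers, not a proposal for the actual tool) of the one computation any such mediating software would ultimately have to perform, namely Bayes' rule itself:

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical assignment: start a claim at 50%, and suppose the
# offered evidence is three times likelier if the claim is true.
posterior = bayes_update(0.5, 0.6, 0.2)
print(round(posterior, 2))  # 0.75
```

The arithmetic is trivial; the hard problem raised above is assigning those three input numbers with any observable objectivity.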

This seems a valid interpretation to me -- but is "wrongness" a one-dimensional concept?

A comment can be wrong in the sense of containing incorrect information (as RobinZ points out) but right in the sense of arriving at correct conclusions from that data -- in which case I would still count it as a valuable contribution, since it offers the chance to correct the data, and by extension to correct anyone who arrived at the same conclusion by believing it.

By the same token, a comment might include only true factual statements but arrive at ... (read more)

I don't think that is in keeping with the overall goals of this site. You should get points for winning [] (making true statements), not for effort. "If you fail to achieve a correct answer, it is futile to protest that you acted with propriety." This doesn't necessarily mean instantly downvoting anyone who is confused, but it does mean that I'm not inclined to award upvotes for well-meaning but wrong comments.

Yes. Commenters should assume their comments will be read by multiple people and so should make a reasonable effort to check their facts before posting. A few minutes spent fact-checking any uncertain claims, to avoid wasted time on the part of readers, is something I expect of commenters here, and punishing factual inaccuracies with a downvote signals that expectation. 'Reasonable effort' is obviously somewhat open to interpretation, but if one's readers can find evidence of factual inaccuracy in a minute or two of googling then one has failed to clear the bar.

This is a good example of why we need a formalized process for debate -- so that irrelevant politicizations can be easily spotted before they grow into partisan rhetoric.

Part of the problem also may be that people often seem to have a hard time recognizing and responding to the actual content of an argument, rather than [what they perceive as] its implications.

For example (loosely based on the types of arguments you mention regarding Knox, but using a topic I'm more familiar with):

  • [me] Bush was really awful.
  • [fictional commenter] You're just saying that
... (read more)
I've always felt that a valid use of the karma system is to vote up things that you believe are less wrong and vote down things that you believe to be more wrong.
At the risk of harping on what is after all a major theme of this site, we do in fact have one -- it's called Bayesianism. How should a debate look? Well, here is how I think it should begin, at least []. (Still waiting to see how this will work, if Rolf ever does decide to go through with it.)

In fact, let's try to consider your example from a Bayesian perspective:

(A) Bush was really awful.
(B) You're just saying that because you're a liberal, and liberals hate Bush.

Now, of course, you're right that (B) "doesn't address" (A) -- in the sense that (A) and (B) could both be true. But suppose instead that the conversation proceeded in the following way:

(A) Bush was really awful.
(B') No he wasn't.

In this case (B') directly contradicts (A), which is about the most extreme form of "addressing" there is. Yet this hardly seems an improvement. The reason is that, at least for Bayesians, the purpose of such a conversation is not to arrive at logical contradictions; it's to arrive at accurate beliefs.

You'll notice, in this example, that (A) itself isn't much of an argument; it just consists of a statement of the speaker's belief. The actual implied argument is something like this:

(A1) I say that Bush was really awful.
(A2) Something I say is likely to be true.
(A3) Therefore, it is likely that Bush was really awful.

The response,

(B) You're just saying that because you're a liberal, and liberals hate Bush.

should in turn be analyzed like this:

(B1) You belong to a set of people ("liberals") whose emotions tend to get in the way of their forming accurate beliefs.
(B2) As a consequence, (A2) is likely to be false.
(B3) You have therefore failed to convince me of (A3).

So why are political arguments dangerous? Basically, because people tend to say (A) and (B) (or (A) and (B')) -- which are widely recognized tribal-affiliation signals -- rather than (A1)-(A3) and (B1)-(B3)
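To make the (A2)/(B2) step concrete, here is a small numeric sketch (all probabilities invented for illustration): the (B) response works not by contradicting the claim but by lowering the speaker-reliability term, which weakens the update toward (A3):

```python
def posterior_given_assertion(prior: float, reliability: float) -> float:
    """P(claim | speaker asserts it), treating the assertion as evidence
    whose strength is the speaker's reliability; 0.5 = no information."""
    p_assert_if_true = reliability
    p_assert_if_false = 1 - reliability
    num = prior * p_assert_if_true
    return num / (num + (1 - prior) * p_assert_if_false)

# (A2) granted: a fairly reliable speaker moves us substantially.
print(round(posterior_given_assertion(0.5, 0.8), 2))   # 0.8
# (B2) accepted: an emotion-driven speaker barely moves us at all.
print(round(posterior_given_assertion(0.5, 0.55), 2))  # 0.55
```

On this toy model, whether (B) succeeds depends entirely on whether the reliability discount in (B1) is itself justified, which is exactly the question tribal signaling never engages.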
How Google translation works []: "In practice, languages are used to say the same things over and over again."

How potentially informative conversations go redundant []

These attractors happen both because they're easy conversation and because they're useful for propagandists to set up []

I'm not sure that the karma system needs to be redesigned -- there's a limit to how much you can say with a number. It might help to have a "that was fun" category, but I think part of the point of karma is that it's easy to do, and having a bunch of karma categories might mean that people won't use it at all or will spend a lot of time fiddling with the categories.

We may have reached the point in this group where enough of us can recognize and defuse those conversations which merely wander around the usual flowchart, and encourage people to add information.

If I'm understanding correctly, "terminal values" are end-goals.

If we have different end-goals, we need to understand what they are. (Actually, we should understand what they are anyway -- but if they're different, it becomes particularly important to examine them and identify the differences.)

This seems related to a question that David Brin once suggested as a good one to bring up in political debate: Describe the sort of world do you hope your preferred policies will create. ...or, in other words, describe your large-scale goals for our society... (read more)

I don't think any bumper sticker successfully encapsulates my terminal values. I'm highly sympathetic to ethical pluralism and particularism. I value fairness and happiness (politically I'm a cosmopolitan Rawlsian liberal), with additional values of freedom and honesty which under certain conditions can trump fairness and happiness. I also value the existence of what I would recognize as humanity, and limiting the possibility of the destruction of humanity can sometimes trump all of the above. My values are weighted toward myself, my family and friends. It's possible all of these things could be reduced to more fundamental values; I'm not sure. There are cases where I have no good procedure for evaluating which outcome is more desirable.

It is worth noting, if you think these are rationally justifiable somehow, that maximizing two different values is going to leave you with an incomplete function in some circumstances: some options will minimize suffering but fail to maximize freedom, and vice versa.

If you were looking for people here with different values, see above (though I don't know how much we differ). But note that the people here are going to have heavy overlap on values for semi-obvious reasons. And there are people out there who assign intrinsic moral relevance to national borders, race, religion, sexual purity, tradition etc. Do you still deny that?
By 'minimize suffering' I assume you mean some kind of utilitarian conception of minimizing aggregate suffering, equally weighted across all humans (and perhaps extended to include animals in some way). If so, this would be one area where we differ. Like most humans, I don't apply equal weighting to all other individuals' utilities. I don't expect other people's weightings to match my own, nor do I think it would be better if we all aimed to agree on a unique set of weightings. I care more about minimizing the suffering of my family and friends than I do about some random stranger, an animal, a serial killer, a child molester or a politician. I do not think this is a problem.

"Terminal Values" are goals, then, and "Instrumental Values" are the methods used in an attempt to reach those goals. Does that sound right? So now I need to go reply to Jack again...

I'm trying to understand them in a rational context. Most years, that pattern made some kind of sense -- the major candidates were both lizards, neither one obviously better or worse than the other.

Continuing to vote along party lines after 3 years of experience with Bush, however, is a different beast. Either people were simply unaware of many of the key points (or possibly were aware of positive points unknown to anyone that I talked to), or else they were using an entirely different process for evaluating presidential fitness. In the former case, we have a problem; in the latter, something worthy of study.

I think your method works better as an attempt to engage in politics without having your mind killed (avoiding the mistakes that are typical of the political world) than as a way to explain real-world political outcomes.

If you want a more detailed explanation of a particular election than the structural account in my last comment, I'd offer something like this: In 2004, the campaign involved a lot of noise and harsh criticism of both candidates, and it wasn't easy to filter out the accurate, damning criticisms of Bush from the rest. This would be especially hard for voters who were inclined to trust Bush over Kerry, and the post-9/11 rally-around-the-flag effect (along with the tendency for Republicans to be more trusted on national defense and patriotism) meant that a lot of voters at least started out with an inclination to trust Bush, especially on the salient issue of national defense. Plus, many of the bad things about Bush also cast the country in a bad light, which meant that voters' natural defensiveness would kick in.

The focus here is on voters' perceptions, trying to analyze them like a social scientist, rather than on more rigorously evaluating the content of political arguments.

So yes, liberals would consider voting for a Republican a kind of treason.

Does he have data for this? I would vote for whoever seemed the most sensible, regardless of party. If Ron Paul had run against Obama, I would have had a much harder time deciding.

Also: I think what you're misunderstanding about the POV on the site is that I am prepared to rationally defend everything I have said there, and I am prepared to retract or alter it if I cannot do so. (Note that there are a few articles posted by others, and I don't necessarily agree with what they have said -- but if I have not responded, it means I also don't disagree strongly enough to bother. Maybe you do, and maybe I will too once the flaws are pointed out.)

No, it really doesn't. "Naively optimised self interest" suggests (a), and libertarianism is almost irrelevant to the question. Maybe if the question were "should people be coerced into (b) independently of any contract (formal or implicit) with the owner?"
If you think libertarianism argues that (a) is the correct and proper action, then you don't understand libertarianism. I'm not even sure how you'd arrive at the idea that it does. I'm guessing that you are trying to make some kind of analogy between libertarian attitudes to government and libertarian attitudes to individual interactions, but that you are assuming ideas about government that libertarians do not share.

As for the natural disaster scenario, the basis of libertarian ethics is that people should not be compelled to do anything by force. Voluntary charity is perfectly compatible with libertarianism, and indeed libertarians often believe that voluntary charity is a much more satisfactory solution to most of the social problems that governments currently take it upon themselves to address.
You don't seem like someone well-acquainted with the relevant literature. If a policy seems obviously correct, and doesn't involve coercing someone else into doing things against their will, then Libertarianism (at least, read as roughly equivalent to Lockean classical liberalism) won't tell you not to do it. A lot of libertarians are very enthusiastic about charity and philanthropy; they are less enthusiastic about being forced into it at gunpoint. Is there any point to having this conversation here?

Are you saying we should stop trying to bridge that gulf, or should I try to explain myself a different way?

No, I'm in favour of attempts to bridge the gulf, and the fact that you are posting here is a promising sign that it might be possible. I'm reluctant to engage further based on what I've seen of your writing on your site so far, however -- time is a limited resource, and I fear that the value I would gain from engaging with you is not worth the investment. Your comments in this thread, however, have not exhibited the level of partisan blindness that has worried me on your site, so there may be hope.

I also think you're misunderstanding my criticism of Haidt. Yes, he has lots of data to support his claims -- but he rigged the experiments in the way he asked his questions, and he hasn't responded to the obvious flaws in his analysis.

Nor have you.

!?!?!??! What evidence have you for this? Note that the theory wasn't designed to say anything about politics. It was designed to describe cross-cultural moral differences in different parts of the world; only later was it applied to the American culture wars.
He's been criticized by some libertarians for neglecting them as a political group and they have raised similar concerns. His reply is here [].