The issue at hand is not whether the "logic" was valid. (Incidentally, you are disputing the logical validity of an informal insinuation whose implication, that Scott's views on HBD were influenced by Murray's works, appears to be factually true, even though the hinted connection is merely probable.)

The issues at hand are:

1. whether it is a justified "weapon" to use in a conflict of this sort

2. whether the deed is itself immoral beyond what is implied by "minor sin"

That is an unrealistic and thoroughly unworkable expectation.

World models are pre-conscious. We may be conscious of verbalised predictions that follow from our world models, and of various cognitive processes that involve visualisation (in the form of imagery, inner monologue, etc.), since these give rise to qualia. We do not, however, possess direct awareness of the actual gears-level structures of our world models; we must get at these through (often difficult) inference.

When learning about any sufficiently complex phenomenon, such as pretty much any aspect of psychology or sociology, there are simply too many gears for it to be possible to identify all of them; many are bound to remain implicit and only be noticed when specifically brought into dispute. This is not to say that there can be no standard by which to expect "theory gurus" to prove themselves not to be frauds. For example, if they have unusual worldviews, they should be able to pinpoint examples (real or invented) that illustrate some causal mechanism to which other worldviews give insufficient attention. They should be able to broadly outline how this mechanism relates to their worldview, and why it cannot be adequately accounted for by competing worldviews. This is already quite sufficient, as it opens up the possibility for interlocutors to propose alternate views of the mechanism being discussed and show how these can, after all, be reconciled with worldviews other than the one proposed by the theorist.

Alternatively, they should be able to prove their merit in some other way: showing their insight into political theory by successfully enacting political change, into crowd psychology by being successful propagandists, into psychology and/or anthropology by writing great novels with a wide variety of realistic characters from various walks of life, etc.

But expecting them to be able to explicate to you the gears of their models is somewhat akin to expecting a generative image AI to explain its inner workings to you. It's a fundamentally unreasonable request, all the more so because you have a tendency to dismiss people as bluffing whenever they can't follow you into statistical territory so esoteric that there are probably fewer than a thousand people in the world who could.

The trouble is that even checking the steelman with the other person does not avoid the failure modes I am talking about. In fact, some moments ago, I made slight changes to the post to include a bit where the interlocutor presents a proposed steelman and you reject it. I included this because many redditors objected that such confirmation is by definition part of steelmanning (though none of the cited definitions actually included this criterion), and I wanted to show that it makes no difference to my argument whether the interlocutor asks for confirmation of the steelman or you become aware of it by some other mechanism. What is relevant is only that you somehow learn of the steelman attempt, reject it as inadequate, and try to redirect your interlocutor back to the actual argument you made. The precise social forms by which this happens (the ideal being something like "would the following be an acceptable steelman [...]") are only dressing, not substance.

I have in fact had a very long email conversation spanning several months with another LessWronger who kept constructing would-be steelmen of my argument that I kept having to correct.

As it was a private conversation, I cannot give too many details, but I can try to summarize the general gist.

This user and I are part of a shared IRL social network, which I have been feeling increasingly alienated from, but which I cannot simply leave without severe consequences. The trouble is that this social network generally treats me with extreme condescension, disdain, patronisation, etc., and that I am constrained in my ability to fight back in my usual manner. I am not so concerned about the underlying contempt, except for its part in creating the objectionable behaviour. It seems to me that they must subconsciously hold extreme contempt for me, but since I do not respect their judgement of me, my self-esteem is not harmed by this knowledge. The real problem is that situations where I am treated with contempt and cannot defend myself from it, but must remain polite and simply take it, provide a kind of evidence to my autonomous unconscious status-tracking processes (what JBP claims to be the function of the serotonergic system, though I don't know if this is true at all), and that this evidence is not as easily overridden by my own contempt for their poor judgement as my conscious reasoning about their disdain for me is.

I repeatedly explained to this LessWrong user that the issue is that these situations provide evidence of contempt for me, and that since I am constrained in my ability to talk back, they also provide systematically false evidence about my level of self-respect and about how I deserve to be treated. Speaking somewhat metaphorically, you could say that this social network is inadvertently using black magic against me and that I want them to stop. It might seem that this position could be easily explained, and indeed that is how it seemed to me at the outset of the conversation, but it was complicated by the need to demonstrate that I was in fact being treated contemptuously, and that I was in fact being constrained in my ability to defend myself against it. It was not enough to give specific examples of the treatment, because that led my interlocutor to overly narrow abstractions, so I had to point out that the specific instances of contemptuous treatment demonstrated the existence of an underlying contempt, and that this underlying contempt should a priori be expected to generate a large variety of contemptuous behaviour. This in turn led to a very tedious argument over whether that underlying contempt exists at all, where it would have come from, and so on.

Anyway, I eventually approached another member of this social network and tried to explain my predicament. It was tricky, because I had to accuse him of an underlying contempt giving rise to a pattern of disrespectful behaviour, but also explain that it was the behaviour itself I was objecting to and not the underlying contempt, all without telling him explicitly that I do not respect his judgement. Astonishingly, I actually made a lot of progress anyway.

Well, that didn't last long, because the LW user in question took it upon himself to attempt to fix the schism, and told this man that if I am objecting to a pattern of disrespectful behaviour, then it is unreasonable to assume that I am objecting to the evidence of disrespect rather than to the underlying disrespect itself. You will notice that this is the exact 180-degree opposite of my actual position. It also cut off my chance of making any further progress with the man in question, since it is now, to my eyes, impossible to explain what I actually object to without telling him outright that I have no respect for his judgement.

I am sure he thought he was being reasonable. After all, absent the context, it would seem like a perfectly reasonable observation. But as there were other problems with his behaviour that made it seem smug and self-righteous to me, and as the whole conversation up to that point had already been so maddening and led to so much disaster (it seems in fact to have played a major part in causing extreme mental harm to someone who was quite close to me), I decided to cut my losses and not pursue it any further, except for scolding him for what seemed to me like the breach of an oath he had given earlier.

Anyway, the point is not to generalise too much from this example. What I described in the post was actually inspired by other scenarios. The point of telling you this story is simply that even if you are presented with the interlocutor's proposed steelman and given a chance to reject it, this does not save you, and the conversation can still go on for literally months without getting out of the trap I described. I have had other examples of this trap being highly persistent, even with people who were more consistent in explicitly asking for confirmation of each proposed steelman, but what was special about this case was that it was the only one that lasted for literally months with hundreds of emails, that my interlocutor started out with a stated intent to see the conversation through to the end, and that my interlocutor was a fairly prolific LessWrong commenter and poster, whom I would rate as being at least in the top 5%, and probably the top 1%, of LessWrongers by intelligence.

I should mention for transparency that the LessWrong user in question did not state outright that he was steelmanning me, but having been around this community for a long time, I think I am able to tell which behaviours are born of an attempt to steelman, or, more broadly, which spring from the general culture of steelmanning and of habituation to a steelman-esque mode of discourse. As my post indicated, I think steelmanning is a reasonable way to reach a more expedient resolution between people who broadly speaking "share base realities", but as someone with views that are highly heterodox relative to the dominant worldviews on LessWrong, I can say that my own experience of steelmanning has been that it is one of the nastiest forms of argumentation I know of.

I focused on the practice of steelmanning as emblematic of a whole approach to thinking about good faith that I believe is wrongheaded more generally, not only as it pertains to steelmanning. In hindsight, I should have stated this explicitly. I considered doing so, but decided to make it the subject of a subsequent post, failing to notice that a more in-depth post about the abstract pattern would not preclude a brief mention in this post that steelmanning is only one instance of a more general pattern I am trying to critique.

The pattern is simply to focus excessively on specific behaviours and arguments as being in bad faith, while paying insufficient attention to the emotional drivers of bad faith, which also tend to make people go into denial about their own bad faith.

Indeed, that was the purpose of steelmanning in its original form, as it was pioneered on Slate Star Codex.

Interestingly, when I posted it on r/slatestarcodex, a lot of people started basically screaming at me that I was strawmanning the concept of steelmanning, because a steelman by definition requires that the person being steelmanned accept the proposed steelman as accurate. Hence, your comment provides me with some fresh relief and assures me that there is still a vestige left of the rationalist community I used to know.

I wrote my article mostly with regard to how I see the term colloquially used today. I intended it as one of several posts demonstrating a general pattern of bad-faith argumentation that disguises itself as exceptionally good faith.

But setting all that aside, I think my critique still substantially applies to the concept in its original form. It is still the case, for example, that superficial mistakes will tend to be corrected automatically just from the general circulation of ideas within a community, and that the really persistent errors have to do with deeper distortions in the underlying worldview. 

Worldviews are, however, basically analogous to scientific paradigms as described by Thomas Kuhn. People do not adopt a complicated worldview without it seeming vividly correct from at least some angle, however parochial that angle might be. Hence, the only correct way to resolve a deep conflict between worldviews is through the acquisition of a broader perspective that subsumes both. Of course, either worldview, or both, may be a mixture of real patterns coupled with a bunch of propaganda, but in such a case, the worldview that subsumes both should ideally be able to explain why that propaganda was created and why it seems vividly believable to its adherents.

At first glance, this might not seem to pose much of a problem for the practice of steelmanning in its original form, because in many cases it will seem like you can completely subsume the "grain of truth" from the other perspective into your own without any substantial conflict. But that would basically classify it as a "superficial improvement", the kind that is bound to happen automatically just from the general circulation of ideas, and therefore less important than the less inevitable improvements. If an improvement of this sort is not inevitable, however, that indicates that your current social network cannot generate the improvement on its own, but can only generate it through confrontations with conflicting worldviews from outside the network. That in turn means that your existing worldview cannot properly explain the grain of truth in the opposing view, since it could not predict it in advance; and so there is more to learn from this outside perspective than can be learned by straightforwardly integrating its apparent grain of truth.

This is basically the same pattern I am describing in the post, but just removed from the context of conversations between individuals, and instead applied to confrontations between different social networks with low-ish overlap. The argument is substantially the same, only less concrete.

No, the reasoning generalises to those fields too. The problem driving those fields' need to measure cognitive abilities is excessive bureaucratisation and the lack of a sensible top-down structure with responsibilities and duties running in both directions. A wise and mature person can get a solid impression of an interviewee's mental capacities from a short interview, and can even find out a lot of useful details that an IQ test will not cover: mental health, maturity, and the capacity to handle responsibility, for example.

Or consider it from another angle: suppose I know someone to be brilliant and extremely capable, but when taking an IQ test, they only score 130 or so. What am I supposed to do with this information? Granted, such a mismatch is pretty rare; normally the score would simply reflect my estimation of their brilliance, in which case it adds no new information. But if the score does not match the person's actual capabilities as I have been able to infer them, I am simply left with the conclusion that IQ is not a particularly useful metric for my purposes. It may be highly accurate, but experienced human judgement is considerably more accurate still.

Of course, individualised judgements of this sort are vulnerable to various failure modes, which is why large corporations and organisations like the military are interested in giving IQ tests instead. But this is often a result of regulatory barriers or other hindrances to simply requiring your job interviewers to avoid those failure modes and holding them accountable for doing so, with the risk of demotion or termination if their department becomes corrupt and/or grossly incompetent.

This issue is not particular to race politics. It is a much more general matter of fractal monarchy vs procedural bureaucracy.

Edit: or, if you want a more libertarian-friendly version, it is a general matter of subsidiarity vs totalitarianism.

The measuring project is symptomatic of scientism and is part of what needs to be corrected.

That is what I meant when I said that the HBD crowd is reminiscent of utilitarian technocracy and progressive-era eugenics. The correct way of handling race politics is to take an inventory of the current situation by doing case studies and field research, and to develop a no-bullshit commonsense executive-minded attitude for how to go about improving the conditions of racial minorities from where they're currently at.

Obviously, more policing is needed, so as to finally give black business-owners in black areas a break and let them develop without being pestered by shoplifters, riots, etc. Affirmative action is not working, and nor is the whole paradigm of equity politics. Antidiscrimination legislation was what crushed black business districts that had been flourishing prior to the sixties.

Whether the races are theoretically equal in their genetic potential or not is utterly irrelevant. The plain fact is that they are not equal at present, and that is not something you need statistics in order to notice. If you are a utopian, then your project is to make them achieve their full potential as constrained by genetics in some distant future, and if they are genetically equal, then that means you want equal outcomes at some point. But this is a ridiculous way of thinking, because it extrapolates your policy goals unreasonably far into the future, never mind that genetic inequalities do not constrain long-term outcomes in a world that is rapidly advancing in genetic engineering tech.

The scientistic, statistics-driven approach is clearly the wrong tool for the job, as we can see just by looking at the outcomes it has achieved. It is necessary instead to have human minds thinking reasonably about the issue, rather than trying to replace human reason with statistics "carried on by steam", as Carlyle put it. These human minds should not be evaluating policies by whether they can theoretically be extrapolated to some utopian outcome in the distant future, but simply by whether they actually improve things for racial minorities. This is one case where we could all learn something from Keynes' famous remark that "in the long run, we are all dead".

In short: scientism is the issue, and statistics by steam are part of it. Your insistence on the measurement project over discussing the real issues is why you do not have much success with these people. You are inadvertently perpetuating the very same stigma on informal reasoning about weighty matters that is the cause of the issue.

They are not doing it in order to troll their political opponents. They are doing it out of scientism and loyalty to Enlightenment aesthetics of reason and rationality, which just so happens to entail an extremely toxic stigma against informal reasoning about weighty matters.

The second option, trying to uncover the real origin of the conclusion, is obviously the best of the three. It is also the most in line with canonical works like Is That Your True Rejection?

But it belongs to the older paradigm of rationalist thinking, the one that sought to examine motivated cognition and discover the underlying emotional drives (ideally with delicate sensitivity), whereas the new paradigm merely stigmatizes motivated cognition and inadvertently imposes a cultural standard of performativity, in which we are all supposed to pretend that our thinking is unmotivated. The problems with present rationalist culture would stand out like a glowing neon sign to old-school LessWrongers, but unfortunately there are not many of them left.

And, again, it is not "false pretenses" to engage in a discussion with more than one goal in mind and not explicitly lay out all one's goals in advance.

It saddens me that LessWrong has reached such a state that it is now widespread behaviour to strawman the hell out of someone's position and then double down when called on it.

What I think is both rude and counterproductive is focusing on what sort of person the other person is, as opposed to what they have done and are doing. In this particular thread the rot begins with "thus flattering your narcissism".

But the problem is at the level of his character, not any given behaviour. I have already explained this in one of my replies to tailcalled: if he simply learns to stay away from one type of narcissistic community, he will still be drawn in by communities where narcissism manifests in ways other than the one he has been "immunized" to, so to speak. Likewise with the concrete behaviours: if he learns to avoid some toxic behaviours, the underlying toxicity will simply manifest in other toxic behaviours. I am not saying there is therefore no point in calling out the toxic behaviours, but the only point in doing so is to use them as pointers to the underlying problem. If I just get him to recognise a particular pattern of behaviour, then I will have misidentified the pattern to him and might as well have done nothing. The issue is specifically that he is a horrible person and needs to realise it so he can begin practising virtue (this being, of course, a moral philosophy that LessWrongers are generally averse to, but you can see the result).

And then we get "you've added one more way to feel above it all and congratulate yourself on it" and "your few genuine displays of good faith" and "goal-oriented towards making you appear as the sensible moderate" and "you have a profound proclivity for bullshitting" and so forth.

All of these criticise behaviours rather than character and thus fit your professed criterion. Yet you made no specific complaint about them, because what you actually take issue with is simply my harshness and directness.

I think this sort of comment is basically never helpful

It is the only thing that is ever helpful when an improvement to the underlying character is what is called for.
