Cornelius Dybdahl

It is part Ayn Rand, part Curtis Yarvin. Ultimately it all comes from Thomas Carlyle anyway.

And there is no need to limit yourself to potential obligations. Unless you have an exceedingly blessed life, there should be no shortage of friends and loved ones in need of help.

That does not even come close to cancelling out the reduced ability to get a detailed view of the impact, let alone the much less honest motivations behind such giving. 

And lives are not of equal value. Even if you think they have equal innate value, surely you can recognise that a comparatively shorter third-world life with worse prospects for intellectual and artistic development and greater likelihood of abject poverty is much less valuable (even if only due to circumstances) than the lives of people you are surrounded with, and surely you will also recognise that it is the latter that form the basis for your intuitions about the value of life.

By giving your "charity" (actually, the word "charity" stems from Latin caritas, meaning care, as in giving to people you care about, whereas "altruism" is cognate with alter, meaning basically otherism, and in practice meaning giving to people you don't care about) to less worthwhile recipients, you are behaving in an anti-meritocratic way and cheapening your act of giving.

Moreover, people obviously don't have equal innate value, and there is a distinct correlation between earning potential and being a utility monster, which at least partially cancels out the effect of diminishing marginal utility.

And the whole reason people care so much about morality is that the moral virtues and shortcomings of your friends and associates are going to have a huge impact on your life. If you're redirecting the virtue by giving money to random foreigners, you are basically defaulting on the debt to your friends. One of your closest friends could wind up in deep trouble and need as much help as he can possibly get. He will need virtuous friends he can rely on, and any money you have given to some third worlders you will never meet is money you cannot give to a friend in need. Therefore, any giving to Effective Altruism is inherently unjust and disloyal. By all means, be charitable and give what you can. But not to strangers.

Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora.

That's a lot closer to the truth than you might think. There are plenty of lines going from the Fabian Society (and from Trotsky, for that matter) into the rationalist diaspora. On the other hand, there is very little influence from eg. Henry Regnery or Oswald Spengler.

“A real charter city hasn’t been tried!” I reply.

Lee Kuan Yew's Singapore is close enough, surely.

“Real socialism hasn’t been tried either!” the Effective Samaritan quips back. “Every attempt has always been co-opted by ruling elites who used it for their own ends. The closest we’ve gotten is Scandinavia which now has the world’s highest standards of living, even if not entirely socialist it’s gotta count for something!”

This argument sounds a lot more Trotskyist than Fabian to me, but it is worth noting that said ruling elites have both been nominally socialist and been widely supported by socialists throughout the world. The same cannot be said of charter cities and their socialist opposition.

For every logical inference I make, they make the opposite. Every thoughtful prior of mine, they consider to be baseless prejudice. My modus ponens, their modus tollens.

Because your priors are baseless prejudices. The Whig infighting between liberals and socialists is one of many cases where both sides are awful and each side is almost exactly right about the other side. Your example about StarCraft shows that you are prone to using baseless prejudices as your priors, and other parts of your post show that you are indeed doing the very same thing when it comes to politics.

Of all the possible intellectuals I was exposed to, surely it is suspicious that the ones whose conclusions matched my already held beliefs were the ones who stuck.

Your evaluation of both, as well as your selection of opposition (Whig opposition in the form of socialism, rather than Tory opposition in the form of eg. paleoconservatism), shows that your priors on this point are basically theological, or more precisely, eschatological. You implicitly see history as progressing along a course of growing wisdom, increasing emancipation, and widening empathy (Peter Singer's Ever-Expanding Circle). It is simply a residue from your Christian culture. The socialist is also a Christian at heart, but being of a somewhat more dramatic disposition, he doesn't think of history as a steady upwards march to greater insight, but as a series of dramatic conflicts that resolve with the good guys winning.

(unless of course he is a Trotskyist, in which case we are perpetually at a turning point where history could go either way; towards communism or towards fascism)

Yet, the combined efforts of our charity has added up to exactly nothing! I want to yell at the Samaritan whose efforts have invalidated all of mine. Why are they so hellbent on tearing down all the beauty I want to create? Surely we can do better than this.

Sure, I can tell you how to do better: focus your efforts on improving institutions and societies that you are close to and very knowledgeable about. You can do a much better job here, and the resultant proliferation of healthy institutions will, as a pleasant side effect, spread much more prosperity in the third world than effective altruism ever will.

This is the position taken by sensible people (eg. paleocons), and notably not by revolutionaries and utopian technocrats. This is fortunate because it gives the latter a local handicap and enables good, judicious people to achieve at least some success in creating sound institutions and propagating genuine wisdom. This fundamental asymmetry is the reason why there is any functional infrastructure left anywhere, despite the utopian factions far outnumbering the realists.

We both believe in doing the most good, whatever that means, and we both believe in using evidence to inform our decision making.

No, you actually don't. If your intentions really were that good, they would lead you naturally to the right conclusions, but as Robin Hanson has pointed out, even Effective Altruism is still ultimately about virtue signalling, though perhaps directed at yourself. Sorta like HJPEV's desperate effort to be a good person after the sorting hat's warning to him. This is a case of Effective Altruists being mistaken about what their own driving motives actually are.

For us to collaborate we need to agree on some basic principles which, when followed, produces knowledge that can fit into both our existing worldviews.

The correct principle is this: fix things locally (where it is easier and where you can better track the actual results) before you decide to take over the world. There are a lot of local things that need fixing. This way, if your philosophy works, your own community, nation, etc. will flourish, and if it doesn't work, it will fall apart. Interestingly, most EAs are a lot more risk-averse when it comes to their own backyard than when it comes to some random country in Africa.

To minimize the chance of statistical noise or incorrect inference polluting our conclusions, we create experiments with randomly chosen intervention and control groups, so we are sure the intervention is causally connected to the outcome.

This precludes a priori any plans that involve looking far ahead, reacting judiciously to circumstances as they arise, or creating institutions that people self-select into. In the latter case, using comparable geographical areas would introduce a whole host of confounders, but having both the intervention and control groups be in an overlapping area would change the nature of the experiment, because the structure of the social networks that result would be quite different. Basically, the statistical method you propose has technocratic policymaking built into its assumptions, and so it is not surprising that it will wind up favouring liberal technocracy. You have simply found another way of using a baseless prejudice as your prior.
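To make the self-selection point concrete, here is a minimal toy simulation (my own illustration, not anything from the original post; the `motivation` trait and all the numbers are hypothetical stand-ins for whatever makes people opt in): random assignment recovers the true effect of an intervention, while a naive comparison of joiners to non-joiners does not.

```python
# Toy simulation: with random assignment, the difference in group means
# recovers the true causal effect, but when people self-select into the
# "institution", an unobserved trait (here called "motivation", a
# hypothetical stand-in for any confounder) inflates the naive comparison.
import random
from statistics import mean

random.seed(0)
N = 100_000
TRUE_EFFECT = 1.0  # the intervention's actual causal effect

def outcome(motivation: float, treated: bool) -> float:
    """Outcome = unobserved trait + treatment effect + noise."""
    return motivation + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)

people = [random.gauss(0, 1) for _ in range(N)]  # unobserved motivation

# Randomized assignment: treatment is independent of motivation.
rct = [(m, random.random() < 0.5) for m in people]
rct_est = (mean(outcome(m, True) for m, t in rct if t)
           - mean(outcome(m, False) for m, t in rct if not t))

# Self-selection: the motivated disproportionately opt in.
sel = [(m, random.random() < (0.8 if m > 0 else 0.2)) for m in people]
sel_est = (mean(outcome(m, True) for m, t in sel if t)
           - mean(outcome(m, False) for m, t in sel if not t))

print(f"randomized estimate:    {rct_est:.2f}  (true effect = {TRUE_EFFECT})")
print(f"self-selected estimate: {sel_est:.2f}  (confounded upward)")
```

The self-selected estimate comes out roughly double the true effect here, which is the sense in which a method built around randomized assignment cannot evaluate institutions whose whole design depends on who chooses to join them.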

But this is the most telling paragraph:

Like my beliefs about Starcraft, it seems so arbitrary. Had my initial instinct been the opposite, maybe I would have breezed past Hanson’s contrarian nonsense to one day discover truth and beauty reading Piketty.

Read both. The marginal clarity you will get from immersing yourself still deeper into your native canon is enormously outweighed by the clarity you can get from familiarising yourself with more canons. Of course, Piketty is really just another branch of the same canon, with Piketty and Hanson being practically cousins, intellectually. Compare Friedrich List to see the point.

My initial instinct was social democracy. Later I became a communist, then, after exposure to LessWrong, I became a libertarian. Now I'm a monarchist, and it occurs to me in hindsight that social democracy, communism, and libertarianism are all profoundly Protestant ideologies, and that what I thought was me being widely read was actually still me being narrow-minded and parochial.

The issue at hand is not whether the "logic" was valid (incidentally, you are disputing the logical validity of an informal insinuation whose implication appears to be factually true, despite the hinted connection — that Scott's views on HBD were influenced by Murray's works — being merely probable).

The issues at hand are:

1. whether it is a justified "weapon" to use in a conflict of this sort

2. whether the deed is itself immoral beyond what is implied by "minor sin"

That is an unrealistic and thoroughly unworkable expectation.

World models are pre-conscious. We may be conscious of verbalised predictions that follow from our world models, and of various cognitive processes that involve visualisation (in the form of imagery, inner monologue, etc.), since these give rise to qualia. We do not, however, possess direct awareness of the actual gear-level structures of our world models, but must get at these through (often difficult) inference.

When learning about any sufficiently complex phenomenon, such as pretty much any aspect of psychology or sociology, there are simply too many gears for it to be possible to identify all of them; a lot of them are bound to remain implicit and only be noticed when specifically brought into dispute. This is not to say that there can be no standard by which to expect "theory gurus" to prove themselves not to be frauds. For example, if they have unusual worldviews, they should be able to pinpoint examples (real or invented) that illustrate some causal mechanism that other worldviews give insufficient attention to. They should be able to broadly outline how this mechanism relates to their worldview, and how it cannot be adequately accounted for by competing worldviews. This is already quite sufficient, as it opens up the possibility for interlocutors to propose alternate views of the mechanism being discussed and to show how it can, after all, be reconciled with worldviews other than the one proposed by the theorist.

Alternatively, they should be able to prove their merit in some other way: showing their insight into political theory by successfully enacting political change, into crowd psychology by being successful propagandists, into psychology and/or anthropology by writing great novels with a wide variety of realistic characters from various walks of life, etc.

But expecting them to be able to explicate to you the gears of their models is somewhat akin to expecting a generative image AI to explain its inner workings to you. It's a fundamentally unreasonable request, all the more so because you have a tendency to dismiss people as bluffing whenever they can't follow you into statistical territory so esoteric that there are probably fewer than a thousand people in the world who could.

Trouble is that even checking the steelman with the other person does not avoid the failure modes I am talking about. In fact, some moments ago, I made slight changes to the post to include a bit where the interlocutor presents a proposed steelman and you reject it. I included this because many redditors objected that this is by definition part of steelmanning (though none of the cited definitions actually included this criterion), and I wanted to show that it makes no difference at all to my argument whether the interlocutor asks for confirmation of the steelman or you become aware of it by some other mechanism. What's relevant is only that you somehow learn of the steelman attempt, reject it as inadequate, and try to redirect your interlocutor back to the actual argument you made. The precise social forms by which this happens (the ideal being something like "would the following be an acceptable steelman [...]") are only dressing, not substance.

I have in fact had a very long email conversation spanning several months with another LessWronger who kept constructing would-be steelmen of my argument that I kept having to correct.

As it was a private conversation, I cannot give too many details, but I can try to summarise the general gist.

I and this user are part of a shared IRL social network, which I have been feeling increasingly alienated from, but which I cannot simply leave without severe consequences. Trouble is that this social network generally treats me with extreme condescension, disdain, patronisation, etc., and that I am constrained in my ability to fight back in my usual manner. I am not so concerned about the underlying contempt, except for its part in creating the objectionable behaviour. It seems to me that they must subconsciously have extreme contempt for me, but since I do not respect their judgement of me, my self-esteem is not harmed by this knowledge. The real problem is that situations where I am treated with contempt and cannot defend myself from it, but must remain polite and simply take it, provide a kind of evidence to my autonomous unconscious status-tracking processes (what JBP claims to be the function of the serotoninergic system, though idk if this is true at all), and that evidence is not as easily overridden by my contempt for their poor judgement as my conscious reasoning about their disdain for me is.

I repeatedly explained to this LessWrong user that the issue is that these situations provide evidence of contempt for me, and that since I am constrained in my ability to talk back, they also provide systematically false evidence about my level of self-respect and about how I deserve to be treated. Speaking somewhat metaphorically, you could say that this social network is inadvertently using black magic against me and that I want them to stop. It might seem that this position could be easily explained, and indeed that was how it seemed to me too at the outset of the conversation, but it was complicated by the need to demonstrate that I was in fact being treated contemptuously, and that I was in fact being constrained in my ability to defend myself against it. It was not enough to give specific examples of the treatment, because that led my interlocutor to overly narrow abstractions, so I had to point out that the specific instances of contemptuous treatment demonstrated the existence of an underlying contempt, and that this underlying contempt should a priori be expected to generate a large variety of contemptuous behaviour. This in turn led to a very tedious argument over whether that underlying contempt exists at all, where it would've come from, etc.

Anyway, I eventually approached another member of this social network and tried to explain my predicament. It was tricky, because I had to accuse him of an underlying contempt giving rise to a pattern of disrespectful behaviour, but also explain that it was the behaviour itself I was objecting to and not the underlying contempt, all without telling him explicitly that I do not respect his judgement. Astonishingly, I actually made a lot of progress anyway.

Well, that didn't last long, because the LW user in question took it upon himself to attempt to fix the schism, and told this man that if I am objecting to a pattern of disrespectful behaviour, then it is unreasonable to assume that I am objecting to the evidence of disrespect rather than the underlying disrespect itself. You will notice that this is the exact 180-degree opposite of my actual position. It also had the effect of cutting off my chance at making any further progress with the man in question, since it is now, to my eyes, impossible to explain what I actually object to without telling him outright that I have no respect for his judgement.

I am sure he thought he was being reasonable. After all, absent the context, it would seem like a perfectly reasonable observation. But as there were other problems with his behaviour that made it seem smug and self-righteous to me, and as the whole conversation up to that point had already been so maddening and led to so much disaster (it seems in fact to have played a major part in causing extreme mental harm to someone who was quite close to me), I decided to cut my losses and not pursue it any further, except for scolding him for what seemed to me like the breach of an oath he had given earlier.

Anyway, the point is not to generalise too much from this example. What I described in the post was actually inspired by other scenarios. The point of telling you this story is simply that even if you are presented with the interlocutor's proposed steelman and given a chance to reject it, this does not save you, and the conversation can still go on for literally months without getting out of the trap I described. I have had other examples of this trap being highly persistent, even with people who were more consistent in explicitly asking for confirmation of each proposed steelman. What was special about this case was that it was the only one that lasted for literally months with hundreds of emails, that my interlocutor started out with a stated intent to see the conversation through to the end, and that my interlocutor was a fairly prolific LessWrong commenter and poster, whom I would rate as being at least in the top 5% and probably the top 1% of smartest LessWrongers.

I should mention for transparency that the LessWrong user in question did not state outright that he was steelmanning me, but having been around this community for a long time, I think I am able to tell which behaviours are born of an attempt to steelman, or more broadly, which behaviours spring from the general culture of steelmanning and of being habituated to a steelman-esque mode of discourse. As my post indicated, I think steelmanning is a reasonable way to get to a more expedient resolution between people who broadly speaking "share base realities", but as someone with views that are highly heterodox relative to the dominant worldviews on LessWrong, I can say that my own experience with steelmanning has been that it is one of the nastiest forms of argumentation I know of.

I focused on the practice of steelmanning as emblematic of a whole approach to thinking about good faith that I believe is wrongheaded more generally, not only as it pertains to steelmanning. In hindsight, I should have stated this. I considered doing so, but decided to make it the subject of a subsequent post, and I didn't notice that making a more in-depth post about the abstract pattern would not have precluded me from briefly mentioning in this post that steelmanning is only one instance of a more general pattern I am trying to critique.

The pattern is simply to focus excessively on behaviours and specific arguments as being in bad faith, while paying insufficient attention to the emotional drivers of bad faith, which also tend to make people go into denial about their bad faith.

Indeed, that was the purpose of steelmanning in its original form, as it was pioneered on Slate Star Codex.

Interestingly, when I posted it on r/slatestarcodex, a lot of people started basically screaming at me that I was strawmanning the concept of steelmanning, because a steelman by definition requires that the person you're steelmanning accepts the proposed steelman as accurate. Hence, your comment provides me some fresh relief and assures me that there is still a vestige left of the rationalist community I used to know.

I wrote my article mostly about how I see the word colloquially used today. I intended it as one of several posts demonstrating a general pattern of bad-faith argumentation that disguises itself as exceptionally good faith.

But setting all that aside, I think my critique still substantially applies to the concept in its original form. It is still the case, for example, that superficial mistakes will tend to be corrected automatically just from the general circulation of ideas within a community, and that the really persistent errors have to do with deeper distortions in the underlying worldview. 

Worldviews are, however, basically analogous to scientific paradigms as described by Thomas Kuhn. People do not adopt a complicated worldview without it seeming vividly correct from at least some angle, however parochial that angle might be. Hence, the only correct way to resolve a deep conflict between worldviews is by the acquisition of a broader perspective that subsumes both. Of course, either worldview, or both, may be a mixture of real patterns coupled with a bunch of propaganda, but in such a case, the worldview that subsumes both should ideally be able to explain why that propaganda was created and why it seems vividly believable to its adherents.

At first glance, this might not seem to pose much of a problem for the practice of steelmanning in its original form, because in many cases it will seem like you can completely subsume the "grain of truth" from the other perspective into your own without any substantial conflict. But that would basically classify it as a "superficial improvement", the kind that is bound to happen automatically just from the general circulation of ideas, and therefore less important than the less inevitable improvements. And if an improvement of this sort is not inevitable, that indicates your current social network cannot generate the improvement on its own, but can only generate it through confrontations with conflicting worldviews from outside. That in turn means your existing worldview cannot properly explain the grain of truth from the opposing view, since it could not predict it in advance, and so there is more to learn from this outside perspective than can be learned by straightforwardly integrating its apparent grain of truth.

This is basically the same pattern I am describing in the post, but just removed from the context of conversations between individuals, and instead applied to confrontations between different social networks with low-ish overlap. The argument is substantially the same, only less concrete.

No, the reasoning generalises to those fields too. What drives those areas' need to measure cognitive abilities is excessive bureaucratisation and the lack of a sensible top-down structure with responsibilities and duties in both directions. A wise and mature person can get a solid impression of an interviewee's mental capacities from a short interview, and can even find out a lot of useful details that are not going to be covered by an IQ test: mental health, maturity, and capacity to handle responsibility, for example.

Or consider it from another angle: suppose I know someone to be brilliant and extremely capable, but when taking an IQ test, they only score 130 or so. What am I supposed to do with this information? Granted, this is pretty rare; normally the score would match my estimation of their brilliance, in which case it adds no new information. But if the score does not match the person's actual capabilities as I have been able to infer them, I am simply left with the conclusion that IQ is not a particularly useful metric for my purposes. It may be highly accurate, but experienced human judgement is considerably more accurate still.

Of course, individualised judgements of this sort are vulnerable to various failure modes, which is why large corporations and organisations like the military are interested in giving IQ tests instead. But this is often a result of regulatory barriers or other hindrances to simply requiring your job interviewers to avoid those failure modes and holding them accountable, with the risk of demotion or termination if their department becomes corrupt and/or grossly incompetent.

This issue is not particular to race politics. It is a much more general matter of fractal monarchy vs procedural bureaucracy.

Edit: or, if you want a more libertarian-friendly version, it is a general matter of subsidiarity vs totalitarianism.

The measuring project is symptomatic of scientism and is part of what needs to be corrected.

That is what I meant when I said that the HBD crowd is reminiscent of utilitarian technocracy and progressive-era eugenics. The correct way of handling race politics is to take an inventory of the current situation by doing case studies and field research, and to develop a no-bullshit commonsense executive-minded attitude for how to go about improving the conditions of racial minorities from where they're currently at.

Obviously, more policing is needed, so as to finally give black business-owners in black areas a break and let them develop without being pestered by shoplifters, riots, etc. Affirmative action is not working, nor is the whole paradigm of equity politics. Antidiscrimination legislation was what crushed black business districts that had been flourishing prior to the sixties.

Whether the races are theoretically equal in their genetic potential or not is utterly irrelevant. The plain fact is that they are not equal at present, and that is not something you need statistics in order to notice. If you are a utopian, then your project is to make them achieve their full potential as constrained by genetics in some distant future, and if they are genetically equal, then that means you want equal outcomes at some point. But this is a ridiculous way of thinking, because it extrapolates your policy goals unreasonably far into the future, never mind that genetic inequalities do not constrain long-term outcomes in a world that is rapidly advancing in genetic engineering tech.

The scientistic, statistics-driven approach is clearly the wrong tool for the job, as we can see just by looking at the outcomes it has achieved. It is necessary instead to have human minds thinking reasonably about the issue, rather than trying to replace human reason with statistics "carried on by steam", as Carlyle put it. These human minds should not be evaluating policies by whether they can theoretically be extrapolated to some utopian outcome in the distant future, but simply by whether they actually improve things for racial minorities or not. This is one case where we could all learn something from Keynes' famous remark that "in the long run, we are all dead".

In short: scientism is the issue, and statistics by steam are part of it. Your insistence on the measurement project over discussing the real issues is why you do not have much success with these people. You are inadvertently perpetuating the very same stigma on informal reasoning about weighty matters that is the cause of the issue.

They are not doing it in order to troll their political opponents. They are doing it out of scientism and loyalty to enlightenment aesthetics of reason and rationality, which just so happens to entail an extremely toxic stigma against informal reasoning about weighty matters.
