JenniferRM

The Rationalists of the 1950s (and before) also called themselves “Rationalists”

Yeah. The communist associations of past iterations of "rationalist" schools or communities are one of the biggest piles of skulls I know about, and I try to always keep them in mind.

Wikipedia uses this URL about Stalin, Wells, Shaw, and the Holodomor as a citation to argue that, in fact, many of them were either duped fools who denied the Holodomor, or something worse. Quoting from the source there:

Shaw had met Stalin in the Kremlin on 29 July 1931, and the Soviet leader had facilitated his tour of Russia in which he was able to observe, at least to his own satisfaction, that the statements being circulated about the famine in the Ukraine were merely rumours. He had seen that the peasants had plenty of food. In fact the famine had notoriously been caused by Stalin in his desperation to achieve the goals of his five-year plan. An estimated ten million people, mostly Ukrainians, died of starvation.

As someone who flirts with identifying as part of some kind of "rationalist" community, I find the actions of Shaw to be complicatedly troubling, and to disrupt "easy clean identification".

Either I feel I must disavow Shaw, people like Shaw, and their gross and terrible political errors that related to some of the biggest issues and tragedies of their era, or else I must say that Shaw is still somehow a tolerably acceptable human to imagine collaborating with in limited ways, in spite of his manifest flaws.

(From within Judeo-Christian philosophic frames this doesn't seem super hard. The story is simply that all humans are quite bad by default, and it is rare and lucky for us to rise above our normal brokenness, and so any big non-monstrous action a human performs is nearly pure bonus, and worthy of at least some praise no matter what other bad things are co-occurring in the soul of any given person.)

Shaw's kind of error also troubles me when I imagine that there might be some deep substructure to reasoning and philosophy such that he and I somehow share a philosophy. If he did that while holding a philosophy like mine, and if "beliefs cause behavior" (rather than beliefs mostly just being confabulated rationalizations after behaviors have already occurred), then I find myself somewhat worried about the foundations of my own philosophy, and about what horrible things it might cause me to "accidentally" endorse or promote through my own actions.

Maybe there is some way to use Shaw's failure as a test case, and somehow trace the causality of his thinking to his bad actions, and then find any analogous flaws in myself and perform cautious self modification until analogous flaws are unlikely to exist?  But that seems like a BIG project. I'm not sure my life is long enough to learn all the necessary facts and reasoning carefully enough to carry a project like that to adequate completion.

Thus the practical upshot, for me, is to be open to "fearing to tread" even more than normal until or unless there are pretty subjectively clear reasons to advance.

Also, my acknowledged limitations lead me to feel a minor duty to sometimes point out obviously evil things that my mental stance can't help but see as pretty darn evil? Not all of them. Just really really big and important and obvious ones.

My current working test case for this is the FDA, which I suspect should be legislatively gutted.

Maybe I'm wrong? Maybe in saying "FDA delenda est" semi-regularly I'm making a "Shaw and the Holodomor level error" by doing the opposite of what is good? 

It seems virtuous, then, to at least emit such an idea every so often, when I actually really can't help but believe in and directly see a certain evil, and see if anyone can offer coherent disagreement or agreement and thereby either (1) help fix the world by reducing that particular evil or else (2) help me get a better calibrated moral compass.  Either way it seems like it would be good?

Also, in general, I feel that it is a good practice to, minimally, acknowledge the skulls so that I know that "ideas and identities and tendencies similar to mine" might have, in the past, led to bad places.

To hide or flinch from the fact that former-"people calling themselves rationalists" were sometimes pretty bad at the biggest questions of suffering and happiness, or good and evil, seems like... like... probably not what someone who was good at virtue epistemology would do?  So, I probably shouldn't flinch. Probably.

The Rationalists of the 1950s (and before) also called themselves “Rationalists”

That quote is metal as hell <3

It might not be actually true, or actually good advice... but it is metal as hell :-)

Omicron Post #4

I think I'm in that 2% slice, and my feeling is that this position arises from:

  1. Having a moderately coherent and relatively rare theory of "benevolent government and the valid use of state power" that focuses on public goods and equilibria and subsidiarity and so on.
  2. Having a relatively rare belief that vaccinated people seem much more likely to get asymptomatically infected and to have lower mortality BUT also noting that vaccines do NOT prevent infectiousness and probably cannot push R0 below 1.0.

Thus, I consider covid vaccines primarily a private good that selfishly helps the person who gets the vaccine while turning them into a walking death machine to the unvaccinated.

They get a better outcome from infection (probably, assuming the (likely larger) side effects of boosters aren't worse than the (likely more mild) version of the disease, etc, etc) but such vaccine takers DO NOT protect their neighbor's neighbor's neighbor's neighbor's neighbor's neighbor from "risk of covid at all"...

...and thus my theory of government says that a benevolent government will not force people into medical procedures of this sort.

A competently benevolent government wouldn't mandate a "probably merely selfishly good thing" in a blanket way, that prevents individuals from opting out based on private/local/particular reasoning (such that it might NOT be selfishly beneficial for some SPECIFIC individuals and where for those individuals the policy amounts to some horrific "government hurts you and violates your body autonomy for no reason at all" bullshit).

Like abortion should be legal and rare, I think? Because body autonomy! The argument against abortion is that the fetus also has a body, and should (perhaps) be protected from infanticide in a way that outweighs the question of the body autonomy of the fetus's mother. But a vaccine mandate for a vaccine that only selfishly benefits the vaccinated person, without much reducing infectiousness, is a violation of medical body autonomy with even less of a compensating possible-other-life-saved.  Vaccine mandates (for vaccines that don't push R0 lower than 1) are probably worse than outlawing abortion, from the perspective of moral and legal principles that seem pretty coherent, at least to me.

I think many many many people are gripped by a delusion that vaccines drop R0 enough that with high enough vaccination rates covid will be eradicated and life will go back to normal.

(Maybe they are right? But I think they are wrong. I think R0 doesn't go down with higher vaccinations enough to matter for the central binary question.)
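(To make the shape of that claim concrete: here is a toy back-of-the-envelope sketch, not a real epidemiological model. It uses a simple homogeneous-mixing formula, and the parameter names and numbers are my own illustrative assumptions, not estimates.)

```python
# Toy homogeneous-mixing sketch: how much vaccination would need to cut onward
# transmission before the effective reproduction number drops below 1.
# All names and numbers here are illustrative assumptions, not real estimates.
def r_effective(r0, coverage, transmission_reduction):
    """R_eff = R0 scaled by the fraction of onward transmission removed by vaccination."""
    return r0 * (1 - coverage * transmission_reduction)

# With an assumed Delta-like R0 of ~5, even 80% coverage of a vaccine that cuts
# onward transmission by 40% leaves R_eff well above 1:
print(r_effective(r0=5.0, coverage=0.8, transmission_reduction=0.4))  # 3.4
print(r_effective(r0=5.0, coverage=0.8, transmission_reduction=0.9))  # 1.4
```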

Then, slightly separately, a lot of people think that the daddy/government can make them do whatever it wants so long as daddy/government verbally says "this is for your own good" even if that's a lie about what is objectively good for some individual people.

A key point here is that my theory of government says that IF there is a real public good here, like an ability to pass a law, throw some people in jail for violating the law, and then having a phase change in the public sphere leading to a dramatically better life for essentially everyone else such that the whole thing is a huge Kaldor-Hicks improvement... THEN that would be a just use of government coercion.  

This general formula potentially explains why stealing gets you thrown in jail. Also why cars speeding fast enough in the wrong place gets you thrown in jail. You want "a world where bikes can be left in front yards and kids can safely cross the streets" and throwing people who break these equilibria in jail protects the equilibria.

I don't see how "infecting people with a lab created disease" is vastly different from speeding or stealing? Harm is harm. Negligence (or full on mens rea) is negligence (or full on mens rea).

If vaccines prevented transmission enough to matter in general, then vaccines COULD be mandated. But the decision here should trigger (or not) on a "per vaccine" basis that takes into account the effects on R0.

Then, sociologically, most people haven't even heard of Pareto efficiency (much less Kaldor-Hicks) and most people think these vaccines are a public good that will eventually end the nightmare.

So I guess... you could test my "theory of opinion" by explaining "classical liberal political economy" and "bleak viral epidemiology" to lots of people, and then, as a post test, see if the 2% slice of the population grows to 3% or 4% maybe?  

If lots of people learn these two things, and lots of people start opposing mandates for the current set of vaccines, that would confirm my theory. I guess you could also falsify my theory if anti-mandate sentiment rose (somehow?) without any corresponding cause related to a big educational program around "libertarian epidemiology"?

I have heard of Kaldor-Hicks efficiency AND ALSO I think the nightmare will "stop" only when the virus evolves to be enough-less-nightmarish that it seems no worse than the flu.

But note! My model is that the virus is in charge.  And "covid" will in some sense happen forever, and the situation we're in right now is plausibly the beginning of the new normal that will never really stop. 

Hopefully milder and milder variants evolve as covid learns to stop "playing with its food", and things sorta/eventually become "over" in the sense that the deaths and disabilities fall to background levels of biosadness? But that's the only realistic hope left?

And I wish I was more hopeful, but I'm focusing on "what's probably true" instead of "what's comforting" :-/

I guess hypothetically in 2022 or 2024 politicians could run on (and win with?) a proposal to "totally and completely revamp all of the medical industry from top to bottom in a dramatic enough way that actual disease eradication is possible, such as by deleting the FDA, and quickly constructing a public disease testing framework with new better tests under a new regulatory system, that quickly tests everyone who wants a test every single day they are willing to voluntarily spit into a cup, and then do automated tracing with data science over the voluntary test data, and then impose involuntary quarantine on the infectious in judiciously limited but adequate ways, and just generally make covid stop existing in <NationalRegion> from a big push, and then also have adequate testing be required to get in and out of the country at every border and port, and so on with every coherent and sane step that, altogether, would make covid (and in fact ALL infectious disease in the long run) simply stop being a problem in any part of the world with governmental biocompetence."

But this will also probably not happen because we live in the real world, which is not a world with good politicians or wise voters or competently benevolent cultural elites. 

We are bioincompetent. We could have eradicated syphilis for example, and we chose not to.  Syphilis mostly affects black communities, and the US medical system doesn't competently care about black communities. We suck. Covid is our due, based on our specific failures. Covid is our nemesis.

The view from the 2% slice says: lean back, hunker down, and enjoy the ride. It's gonna suck, but at least, if you survive (and assuming the singularity doesn't happen, and grandkids are still a thing 40 years from now, etc, etc, etc) then you can come out of it with a story about generalized civilizational inadequacy to tell your grandkids.

Speaking of Stag Hunts

Yeah! This is great. This is the kind of detailed grounded cooperative reality that really happens sometimes :-)

Speaking of Stag Hunts

Mechanistically... since stag hunt is in the title of the post... it seems like you're saying that any one person committing enough of these epistemic sins to count as playing rabbit (rather than stag) would mean that all of lesswrong fails at the stag hunt, right?

And it might be the case that a single person failing to play stag could consist of them committing even just a single one of these sins? (This is the weakest point in my mechanistic model, perhaps?)

Also, what you're calling "projection" there is not the standard model of projection I think? And my understanding is that the standard model of projection is sort of explicitly something people can't choose not to do, by default. In the standard model of projection it takes a lot of emotional and intellectual work for a person to realize that they are blaming others for problems that are really inside themselves :-(

(For myself, I try not to assume I even know what's happening in my own head, because experimentally, it seems like humans in general lack high quality introspective access to their own behavior and cognition.)

The practical upshot here, to me, is that if the models you're advocating here are true, then it seems to me like lesswrong will inevitably fail at "hunting stags".

...

And yet it also seems like you're exhorting people to stop committing these sins and exhorting them moreover to punitively downvote people according to these standards because if LW voters become extremely judgemental like this then... maybe we will eventually all play stag and thus eventually, as a group, catch a stag?

So under the models that you seem to me to have offered, the (numerous individual) costs won't buy any (group) benefits? I think? 

There will always inevitably be a fly in the ointment... a grain of sand in the chip fab... a student among the masters... and so the stag hunt will always fail unless it occurs in extreme isolation with a very small number of moving parts of very high quality?

And yet lesswrong will hopefully always have an influx of new people who are imperfect, but learning and getting better!

And that's (in my book) quite good... even if it means we will always fail at hunting stags.

...

The thing I think that's good about lesswrong has almost nothing to do with bringing down a stag on this actual website.

Instead, the thing I think is good about lesswrong has to do with creating a stable pipeline of friendly people who are all, over time, getting a little better at thinking, so they can "do more good thinking" in their lives, and businesses, and non-profits, and perhaps from within government offices, and so on.

I'm (I hope) realistically hoping for lots of little improvements, in relative isolation, based on cross-fertilization among cool people, with tolerance for error, and sharing of ideas, and polishing stuff over time... Not from one big leap based on purified perfect cooperation (which is impossible anyway for large groups).

You're against "engaging in, and tolerating/applauding" lots and lots of stuff, while I think that most of the actual goodness arises specifically from our tolerant engagement of people making incremental progress, and giving them applause for any such incremental improvements, despite our numerous inevitable imperfections.

Am I missing something? What?

Speaking of Stag Hunts

This word "fucky" is not native to my idiolect, but I've heard it from Berkeley folks in the last year or two. Some of the "fuckiness" of the dynamic might be reduced if tapping out as a respectable move in a conversation.

I'm trying not to tap out of this conversation, but I have limited minutes and so my responses are likely to be delayed by hours or days. 

I see Duncan as suffering, and confused, and I fear that in his confusion (to try to reduce his suffering), he might damage virtues of lesswrong that I appreciate, but he might not. 

If I get voted down, or not upvoted, I don't care. My goal is to somehow help Duncan, and maybe help him be less confused and not suffer, and also to not be interested in "damaging lesswrong".

I think Duncan is strongly attached to his attempt to normatively move LW, and I admire the energy he is willing to bring to these efforts. He cares, and he gives because he cares, I think? Probably?

Maybe he's trying to respond to every response as a potential "cost of doing the great work" which he is willing to shoulder?  But... I would expect him to get a sore shoulder eventually :-(

If "the general audience" is the causal locus through which a person's speech act might accomplish something (rather than really actually wanting primarily to change your direct interlocutor's mind (who you are speaking to "in front of the audience")) then tapping out of a conversation might "make the original thesis seem to the audience to have less justification" and then, if the audience's brains were the thing truly of value to you, you might refuse to tap out?

This is a real stress. It can take lots and lots of minutes to respond to everything.

Sometimes problems are so constrained that the solution set is empty, and in this case it might be that "the minutes being too few" is the ultimate constraint? This is one of the reasons that I like high bandwidth stuff, like "being in the same room with a whiteboard nearby". It is hard for me to math very well in the absence of shared scratchspace for diagrams.

Other options (that sometimes work) include PMs, or phone calls, or IRC-then-post-the-logs as a mutually endorsed summary. I'm coming in 6 days late here, and skipped breakfast to compose this (and several other responses), and my next ping might not be for another couple days. C'est la vie <3

Speaking of Stag Hunts

If you look at some of the neighboring text, I have some mathematical arguments about what the chances are for N people to all independently play "stag" such that no one plays rabbit and everyone gets the "stag reward".

If 3 people flip coins, all three coins come up "stag" quite often (one time in eight). If a "stag" is worth roughly 8 times as much as a rabbit, you could still sanely "play stag hunt" with 2 other people whose skill at stag was "50% of the time they are perfect".

But if they are less skilled than that, or there are more of them, the stag had better be very very very valuable.

If 1000 people flip coins then "pure stag" comes up with probability about 9.33x10^-302 (roughly one in every 10^301 tries). Thus, de facto, stag hunts fail at large N except for one of those "dumb and dumber" kind of things where you hear the one possible coin pattern that gives the stag reward and treat this as good news and say "so you're telling me there's a chance!"

I think stag hunts are one of those places where the exact same formal mathematical model gives wildly different pragmatic results depending on N, and the probability of success, and the value of the stag... and you have to actually do the math, not rely on emotions and hunches to get the right result via the wisdom of one's brainstem and subconscious and feelings and so on.
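(Here is that arithmetic as a small sketch. The function names are my own, and the coin-flip framing and the 8x stag value are just the toy numbers from above.)

```python
# Toy arithmetic for the coin-flip stag hunt described above.
def p_all_stag(n_players, p_stag_each):
    """Probability that every player independently manages to 'play stag'."""
    return p_stag_each ** n_players

def expected_stag_value(n_players, p_stag_each, stag_value):
    """Expected payoff of attempting the hunt (the stag pays off only if everyone succeeds)."""
    return p_all_stag(n_players, p_stag_each) * stag_value

print(p_all_stag(3, 0.5))                          # 0.125
print(expected_stag_value(3, 0.5, stag_value=8))   # 1.0 -> break-even with one rabbit
print(p_all_stag(1000, 0.5))                       # ~9.33e-302 -> effectively never
```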

Speaking of Stag Hunts

I see that you have, in fact, caught me in a simplification that is not consistent with literally everything you said. 

I apologize for over-simplifying, maybe I should have added "primarily" and/or "currently" to make it more literally true.

In my defense, and to potentially advance the conversation, you also did say this, and I quoted it rather than paraphrasing because I wanted to not put words in your mouth while you were in a potentially adversarial mood... maybe looking to score points for unfairness?

What I'm getting out of LessWrong these days is readership.  It's a great place to come and share my thoughts, and have them be seen by people—smart and perceptive people, for the most part, who will take those thoughts seriously, and supply me with new thoughts in return, many of which I honestly wouldn't have ever come to on my own.

My model here is that this is your self-identified "revealed preference" for actually being here right now.

Also, in my experience, revealed preferences are very very very important signals about the reality of situations and the reality of people.

This plausible self-described revealed preference of yours suggests to me that you see yourself as more of a teacher than a student. More of a producer than a consumer. (This would be OK in my book. I explicitly acknowledge that I see myself as more of a teacher than a student round these parts. I'm not accusing you of something bad here, in my own normative frame, though perhaps you feel it as an attack because you have different values and norms than I do?)

It is fully possible, I guess, (and you would be able to say this much better than I) that you would actually rather be a student than a teacher?

And it might be that you see this as being impossible until or unless LW moves from a rabbit equilibrium to a stag equilibrium?

...

There's an interesting possible equivocation here.

(1) "Duncan growing as a rationalist as much and fast as he (can/should/does?) (really?) want does in fact require a rabbit-to-stag nash equilibrium shift among all of lesswrong".

(2) "Duncan growing as a rationalist as much as and fast as he wants does seems to him to require a rabbit-to-stag nash equilibrium shift among all of lesswrong... which might then logically universally require removing literally every rabbit player from the game, either by conversion to playing stag or banning".

These are very similar. I like having them separate so that I can agree and disagree with you <3

Also, consider then a third idea:

(3) A rabbit-to-stag Nash equilibrium shift among all of lesswrong is wildly infeasible because of new arrivals, and the large number of people in-and-around lesswrong, and the complexity of the normative demands that would be made on all these people, and various other reasons.

I think that you probably think 1 and 2 are true and 3 is false.

I think that 2 is true, and 3 is true.

Because I think 3 is true, I think your implicit(?) proposals would likely be very costly up front while having no particularly large benefits on the backend (despite hopes/promises of late arriving large benefits). 

Because I think 2 is true, I think you're motivated to attempt this wildly infeasible plan and thereby cause harm to something I care about.

In my opinion, if 1 is really true, then you should give up on lesswrong as being able to meet this need, and also give up on any group that is similarly large and lacking in modular sub-communities, and lacking in gates, and lacking in an adequate intake curricula with post tests that truly measure mastery, and so on. 

If you need growth as a rationalist to be happy, AND its current shape (vis-a-vis stag hunts etc) means this website is a place that can't meet that need, THEN (maybe?) you need to get those needs met somewhere else.

For what it's worth, I think that 1 is false for many many people, and probably it is also false for you.

I don't think you should leave, I just think you should be less interested in a "pro-stag-hunting jihad" and then I think you should get the need (that was prompting your stag hunting call) met in some new way.

I think that lesswrong as it currently exists has a shockingly high discourse level compared to most of the rest of the internet, and I think that this is already sufficient to arm people with the tools they need to read the material, think about it, try it, and start catching really really big rabbits (that is, coming to make truly a part of them some new and true and very useful ideas), and then give rabbit hunting reports, and to share rabbit hunting techniques, and so on. There's a virtuous cycle here potentially!

In my opinion, such a "skill building in rabbit hunting techniques" sort of rationality... is all that can be done in an environment like this.

Also I think this kind of teaching environment is less available in many places, and so it isn't that this place is bad for not offering more, it is more that it is only "better by comparison to many alternatives" while still failing to hit the ideal. (And maybe you just yearn really hard for something more ideal.)

So in my model (where 2 is true) "because 1 is false for many (and maybe even for you)" and 3 is true... therefore your whole stag hunt concept, applied here, suggests to me that you're "low key seeking to gain social permission" from lesswrong to drive out the rabbit hunters and silence the rabbit hunting teachers and make this place wildly different.

I think it would de facto (even if this is not what you intend) become a more normal (and normally bad) "place on the internet" full of people semi-mindlessly shrieking at each other by default.

If I might offer a new idea that builds on the above material: lesswrong is actually a pretty darn good hub for quite a few smaller but similar subcultures.

These subcultures often enable larger quantities of shared normative material, to be shared with much higher density in that little contextual bubble than is possible in larger and more porous discourse environments.  

In my mind, Lesswrong itself has a potential function here as being a place to learn that the other subcultures exist, and/or audition for entry or invitation, and so on. This auditioning/discovery role seems, to me, highly compatible with the "rabbit hunting rationality improvement" function.

In my model, you could have a more valuable-for-others role here on lesswrong if you were more inclined to tolerantly teach without demanding a "level" that was required-at-all to meet your particular educational needs.

To restate: if you have needs that are not being met, perhaps you could treat this website as a staging area and audition space for more specific and more demanding subcultures that take lesswrong's canon for granted while also tolerating and even encouraging variations... because it certainly isn't the case that lesswrong is perfect.

(There's a larger moral thing here: to use lesswrong in a pure way like this might harm lesswrong as all the best people sublimate away to better small communities. I think such people should sometimes return and give back so that lesswrong (in pure "smart person mental elbow grease" and also in memetic diversity) stays, over longer periods of time, on a trajectory of "getting less wrong over time"... though I don't know how to get this to happen for sure in a way that makes it a Pareto improvement for returnees and noobs and so on. The institution design challenge here feels like an interesting thing to talk about maybe? Or maybe not <3)

...

So I think that Dragon Army could have been the place that worked the way you wanted it to work, and I can imagine different Everett branches off in the counter-factual distance where Dragon Army started formalizing itself and maybe doing security work for third parties, and so there might be versions of Earth "out there" where Dragon Army is now a mercenary contracting firm with 1000s of employees who are committed to exactly the stag hunting norms that you personally think are correct.

Personally, I would not join that group, but in the spirit of live-and-let-live I wouldn't complain about it until or unless someone hired that firm to "impose costs" on me... then I would fight back. Also, however, I could imagine sometimes wanting to hire that firm for some things. Violence in service to the maintenance of norms is not always bad... it is just often the "last refuge of the incompetent".

In the meantime, if some of the officers of that mercenary firm that you could have counter-factually started still sometimes hung out on Lesswrong, and were polite and tolerant and helped people build their rabbit hunting skills (or find subcultures that help them develop whatever other skills might only be possible to develop in groups) then that would be fine with me...

...so long as they don't damage the "good hubness" of lesswrong itself while doing so (which in my mind is distinct from not damaging lesswrong's explicitly epistemic norms because having well ordered values is part of not being wrong, and values are sometimes in conflict, and that is often ok... indeed it might be a critical requirement for positive-sum Pareto-improving cooperation in a world full of conservation laws).

Speaking of Stag Hunts

Thank you for this great comment. I feel bad not engaging with Duncan directly, but maybe I can engage with your model of him? :-)

I agree that Duncan wouldn't agree with my restatement of what he might be saying. 

What I attributed to him was a critical part (that I object to) of the entailment of the gestalt of his stance or frame or whatever. My hope was that his giant list of varying attributes of statements and conversational motivations could be condensed into a concept with a clean intensive definition other than a mushy conflation of "badness" and "irrational". For me these things are very very different and I'll say much more about this below.

One hope I had was that he would vigorously deny that he was advocating anything like what I mentioned by making clear that, say, he wasn't going to wander around (or have large groups of people wander around) saying "I don't like X produced by P and so let's impose costs (ie sanctions (ie punishments)) on P and on all X-like things, and if we do this search-and-punish move super hard, on literally every instance, then next time maybe we won't have to hunt rabbits, and we won't have to cringe and we won't have to feel angry at everyone else for game-theoretically forcing 'me and all of us' to hunt measly rabbits by ourselves because of the presence of a handful of defecting defectors who should... have costs imposed on them... so they evaporate away to somewhere that doesn't bother me or us".

However, from what I can tell, he did NOT deny any of it? In a sibling comment he says:

Completely ignoring the assertion I made, with substantial effort and detail, that it's bad right now, and not getting better.  Refusing to engage with it at all.  Refusing to grant it even the dignity of a hypothesis.

But the thing is, the reason I'm not engaging with his hypothesis is that I don't even know what his hypothesis is, other than trivially obvious things that have always been true, but which it has always been polite to mostly ignore?

Things have never been particularly good, is that really "a hypothesis"? Is there more to it than "things are bad and getting worse"? The hard part isn't saying "things are imperfect". 

The hard part, as I understand it, is figuring out a cheap and efficient solution that actually works, and that works systematically, in ways that anyone can use once they "get the trick", like how anyone can use arithmetic. He doesn't propose any specific coherent solution that I can see? It is like he wants to offer an affirmative case, but he's only listing harms (and boy does he stir people up on the harms), and then he doesn't have a causal theory of the systematic cause of the harms in the status quo, and he doesn't have a specific plan to fix them, and he doesn't demonstrate that the plan mechanistically links to the harms in the status quo. So if you just grant the harms... that leaves him with a blank check to write more detailed plans that are consistent with the gestalt frame that he's offered? And I think this gestalt frame is poorly grounded, and likely to authorize much that is bad.

Speaking of models, I like this as the beginning of a thoughtful distinction:

my model of Duncan predicts that there are some people on LW whose presence here is motivated (at least significantly in part) by wanting to grow as a rationalist, and also that there are some people on LW whose presence here is only negligibly motivated by that particular desire, if at all.

I'm not sure if Duncan agrees with this, but I agree with it, and relevantly I think it is likely that neither Duncan nor I consider ourselves in the first category. I think both of us see ourselves as "doctors around these parts" rather than "patients"? Then I take Duncan's advocacy to move in the direction of a prescription, and his prescription sounds to me like bleeding the patient with leeches. It sounds like a recipe for malpractice.

Maybe he thinks of himself as being around here more as a patient or as a student, but, this seems to be his self-reported revealed preference for being here:

What I'm getting out of LessWrong these days is readership.  It's a great place to come and share my thoughts, and have them be seen by people—smart and perceptive people, for the most part, who will take those thoughts seriously, and supply me with new thoughts in return, many of which I honestly wouldn't have ever come to on my own.

(By contrast I'm still taking the temperature of the place, and thinking about whether it is useful to my larger goals, and trying to be mostly friendly and helpful while I do so. My larger goals involve working out a way to effectively professionalize "algorithmic ethics" (which was my last job title) and get the idea of it to be something that can systematically cause pro-social technology to come about, for small groups of technologists, like lab workers and programmers who are very smart, such that an algorithmic ethicist could help them systematically not cause technological catastrophes before they explode/escape/consume or otherwise "do bad things" to the world, and instead cause things like green revolutions, over and over.)

So I think that neither of us (neither me nor Duncan) really expects to "grow as Rationalists" here because of "the curriculum"? Instead we seem to me to both have theories of what a good curriculum looks like, and... his curriculum leaves me aghast, and so I'm trying to just say that even if it might cut against his presumptively validly selfish goals for and around this website.

Stepping forward, this feels accurate to me:

My model of Duncan further predicts that both of these groups, sharing the common vice of being human, will at least occasionally produce epistemic violations; but model!Duncan predicts that the first group, when called out for this, is more likely to make an attempt to shift their thinking towards the epistemic ideal, whereas the second group's likelihood of doing this is significantly lower.

So my objection here is simply that I don't think that "shifting one's epistemics closer to the ideal" is a universal solvent, nor even a single coherent unique ideal.

The core point is that agency is not simply about beliefs, it is also about values. 

Values can be objective: the objective needs for energy, for atoms to put into shapes to make up the body of the agent, for safety from predators and disease, etc.  Also, as planning becomes more complex, instrumentally valuable things (like capital investments) are subject to laws of value (related to logistics and option pricing and so on) and if you get your values wrong, that's another way to be a dysfunctional agent. 

VNM rationality (which, if it is not in the canon of rationality right now, then the canon of rationality is bad) isn't just about probabilities being Bayesian; it is also about expected values being linearly orderable and having no privileged zero, for example.

Most of my professional work over the last 4 years has not hinged on having too little Bayes. Most of it has hinged on having too little mechanism design, and too little appreciation for the depths of Coase's theorem, and too little appreciation for the sheer joyous magic of humans being good and happy and healthy humans with each other, who value and care about each other FIRST and then USE epistemology to make our attempts at caring work better.

Over in that other sibling comment Duncan is yelling at me for committing logical fallacies, and he is ignoring that I implied he was bad and said that if we're banning the bad people maybe we should ban him. That was not nice of me at all. I tried to be clear about this sort of thing here:

On human decency and normative grounds: The thing you should be objecting to is that I directly implied that you personally might not be "sane and good" because your advice seemed to be violating ideas about conflict and economics that seem normative to me.

But he just... ignored it? Why didn't he ask for an apology? Is he OK? Does he not think of people on this website as people who owe each other decent treatment?

My thesis statement, at the outset, such as it was:

This post makes me kind of uncomfortable and I feel like the locus is in... bad boundaries maybe? Maybe an orientation towards conflict, essentializing, and incentive design? 

So like... the lack of an ability to acknowledge his own validly selfish emotional needs... the lack of a request for an apology... these are related parts of what feels weird to me.

I feel like a lot of people's problems aren't rationality, as such... like knowing how to do modus tollens or knowing how to model and then subtract out the effects of "nuisance variables"... the main problem is that truth is a gift we give to those we care about, and we often don't care about each other enough to give this gift.

To return to your comments on moral judgements:

Note also that this model makes no assumption that epistemic violations ("errors") are in any way equivalent to "defection", intentional or otherwise. Assuming intent is not necessary; epistemic violations occur by default across the whole population, so there is no need to make additional assumptions about intent.

I don't understand why "intent" arises here, except possibly if it is interacting with some folk theory about punishment and concepts like mens rea?

"Defecting" is just "enacting the strategy that causes the net outcome for the participants to be lower than otherwise for reasons partly explainable by locally selfish reasons". You look at the rows you control and find the best for you. Then you look at the columns and worry about what's the best for others. Then maybe you change your row in reaction. Robots can do this without intent. Chessbots are automated zero sum defectors (and the only reason we like them is that the game itself is fun, because it can be fun to practice hating and harming in small local doses (because play is often a safe version of violence)).

People don't have to know that they are doing this to do this. If a person violates quarantine protocols that are selfishly costly, they are probably not intending to spread disease into previously clean areas where mitigation practices could be low cost. They only intend to, like... "get back to their kids who are on the other side of the quarantine barrier" (or whatever). The millions of people whose health in later months they put at risk are probably "incidental" and/or "unintentional" to their violation of quarantine procedures.

People can easily be modeled as "just robots" who "just do things mechanistically" (without imagining alternatives, or doing math, or running an inner simulator, or otherwise trying to take all the likely consequences into account and imagine themselves personally responsible for everything under their causal influence, and so on).

Not having mens rea, in my book, does NOT mean they should necessarily be protected, if their automatic behaviors hurt others.

I think this is really really important, and that "theories about mens rea" are a kind of thoughtless crux that separates me (who has thought about it a lot) from a lot of naive people who have relatively lower quality theories of justice.  

The less intent there is, the worse it is from an easy/cheap harms reduction perspective.

At least with a conscious villain you can bribe them to stop. In many cases I would prefer a clean honest villain. "Things" (fools, robots, animals, whatever) running on pure automatic pilot can't be negotiated with :-(

...

Also, Duncan seems very very attached to the game-theory "stag hunt" thing? Like over in a cousin comment he says:

In part, this is because a major claim of the OP is "LessWrong has a canon; there's an essay for each of the core things (like strawmanning, or double cruxing, or stag hunts)."

(I kind of want to drop this, because it involves psychologizing, and even when I privately have detailed psychological theories that make high quality predictions that other people will do bad things, I try not to project them, because maybe I'm wrong, and maybe there's a chance for them to stop being broken, but:

I think of "stag hunt" as a "Duncan thing" strongly linked to the whole Dragon Army experiment and not "a part of the lesswrong canon". 

Double cruxing is something I've been doing for 20 years, but not under that name. I know that CFAR got really into it as a "named technique", but they never put that on LW in a highly formal way that I managed to see, so it is more part of a "CFAR canon" than a "Lesswrong canon" in my mind?

And so far as I'm aware "strawmanning" isn't even a rationalist thing... it's something from old school "critical thinking and debate and rhetoric" content? The rationalist version is to "steelman" one's opponents, who are assumed to need help making their point, which might actually be good, but so far poorly expressed by one's interlocutor.

I am consciously lowering my steelmanning of Duncan's position. My objection is to his frame in this case. Like I think he's making mistakes, and it would help him to drop some of his current frames, and it would make lesswrong a safer place to think and talk if he didn't try to impose these frames as a justification for meddling with other people, including potentially me and people I admire.)

...

Pivoting a bit, since he is so into the game theory of stag hunts... my understanding is that in a 2-person Stag Hunt a single member of the team playing rabbit causes both to fail to "get the benefit", so it becomes essential to get perfect behavior from literally everyone. The key difference with a prisoner's dilemma is that "non-defection (to get the higher outcome)" is a Nash equilibrium, because playing different things is even worse for each of the two players than playing any similar move.

A group of 5 playing stag hunt, with a history of all playing stag, loves their equilibrium and wants to protect it and each probably has a detailed mental model of all the others to keep it that way, and this is something humans do instinctively, and it is great.

But what about N>5? Suppose you are in a stag hunt where each of N persons has probability P of failing at the hunt, and "accidentally playing rabbit". Then everyone gets a bad outcome with probability (1-(1-P)^N). So almost any non-trivial value of N causes group failure.

If you see that you're in a stag hunt with 2000 people: you fucking play rabbit! That's it. That's what you do. 

Even if the chances of each person succeeding is 99.9% and you have 2000 in a stag hunt... the hunt succeeds with probability 13.52% and that stag had better be really really really really valuable. Mostly it fails, even with that sort of superhuman success rate. 

But there's practically NOTHING that humans can do with better than maybe a 98% success rate. Once you take a realistic 2% chance of individual human failure into account, with 2000 people in your stag hunt you get roughly a 2.83x10^-18 chance (about 1 in 3.5x10^17) of a successful stag hunt.
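(Here's the same plug-and-chug as a few lines of illustrative code, so anyone can try their own N and P; the function name is just my own.)

```python
# The (1-P)^N calculation from above, as runnable arithmetic.
def p_group_success(n, p_individual_failure):
    """Chance that all n hunters play stag, if each fails independently with probability p."""
    return (1 - p_individual_failure) ** n

print(p_group_success(2000, 0.001))  # ~0.1352   -> 13.52% even at 99.9% individual reliability
print(p_group_success(2000, 0.02))   # ~2.83e-18 -> a realistic 2% failure rate
```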

If you are in a stag hunt like this, it is socially and morally and humanistically correct to announce this fact. You don't play rabbit secretly (because that hurts people who didn't get the memo). 

You tell everyone that you're playing rabbit, even if they're going to get angry at you for doing so, because you care about them.

You give them the gift of truth because you care about them, even if it gets you yelled at and causes people with dysfunctional emotional attachments to attack you.

And you teach people rabbit hunting skills, so that they get big rabbits, because you care about them.

And if someone says "we're in a stag hunt that's essentially statistically impossible to win and the right answer is to impose costs on everyone hunting rabbit" that is the act of someone who is either evil or dumb.

And I'd rather have a villain, who knows they are engaged in evil, because at least I can bribe the villain to stop being evil. 

You mostly can't bribe idiots, more's the pity.

Note that at no point does this model necessitate the frequent banning of users. Bans (or other forms of moderator action) may be one way to achieve the desired outcome, but model!Duncan thinks that the ideal process ought to be much more organic than this--which is why model!Duncan thinks the real Duncan kept gesturing to karma and voting patterns in his original post, despite there being a frame (which I read you, Jennifer, as endorsing) where karma is simply a number.

I think maybe your model of Duncan isn't doing the math and reacting to it sanely? 

Maybe by "stag hunt" your model of Duncan means "the thing in his head that 'stag hunt' is a metonym for" and it this phrase does not have a gears level model with numbers (backed by math that one plug-and-chug), driving its conclusions in clear ways, like long division leads clearly to a specific result at the end?

An actual piece of the rationalist canon is "shut up and multiply" and this seems to be something that your model of Duncan is simply not doing about his own conceptual hobby horse?

I might be wrong about the object level math. I might be wrong about what you think Duncan thinks. I might be wrong about Duncan himself. I might be wrong to object to Duncan's frame.

But I currently don't think I am wrong, and I care about you and Duncan and me and humans in general, and so it seemed like the morally correct (and also epistemically hygienic) thing to do is to flag my strong hunch (which seems wildly discrepant compared to Duncan's hunches, as far as I understand them) about how best to make lesswrong a nurturing and safe environment for people to intellectually grow while working on ideas with potentially large pro-social impacts.

Duncan is a special case. I'm not treating him like a student, I'm treating him like an equal who should be able to manage himself and his own emotions and his own valid selfish needs and the maintenance of boundaries for getting these things, and then, to this hoped-for-equal, I'm saying that something he is proposing seems likely to be harmful to a thing that is large and valuable. Because of mens rea, because of Dunbar's Number, because of "the importance of N to stag hunt predictions", and so on.

Speaking of Stag Hunts

"Black and white thinking" is another name for a reasonably well defined cognitive tendency that often occurs in proximity to reasonably common mental problems.

Part of the reason "the fallacy of gray" is a thing that happens is that advice like that can be a useful and healthy thing for people who are genuinely not thinking in a great way. 

Adding gray to the palette can be a helpful baby step in actual practice.

Then very very similar words to this helpful advice can also be used to "merely score debate points" on people who have a point about "X is good and Y is bad". This is where the "fallacy" occurs... but I don't think the fallacy would occur if it didn't have the "plausible cover" that arises from the helpful version. 

A typical fallacy of gray says something like "everything is gray, therefore lets take no action and stop worrying about this stuff entirely".

One possible difference, that distinguishes "better gray" from "worse gray" is whether you're advocating for fewer than 2 or more than 2 categories.

Compare: "instead of two categories (black and white), how about more than two categories (black and white and gray), or maybe even five (pure black, dark gray, gray, light gray, pure white), or how about we calculate the actual value of the alternatives with actual axiological math which in some sense gives us infinite categories... oh! and even better the math might be consistent with various standards like VNM rationality and Kelly and so on... this is starting to sound hard... let's only do this for the really important questions maybe, otherwise we might get bogged down in calculations and never do anything... <does quick math> <acts!>"

My list of "reasons to vote up or down" was provided partly for this reason. 

I wanted to be clear that comments could be compared, and if better comments had lower scores than worse comments that implied that the quantitative processes of summing up a bunch of votes might not be super well calibrated, and could be improved via saner aggregate behavior. 

Also the raw score is likely less important than the relative score. 

Also, numerous factors are relevant and different factors can cut in opposite ways... it depends on framing, and different people bring different frames, and that's probably OK. 

I often have more than one frame in my head at the same time, and it is kinda annoying, but I think maybe it helps me make fewer mistakes? Sometimes? I hope?

Phrasings like "And why would a good and sane person ever [...]" seem to prepare to mark individuals for rejection. And again it has a question word but doesn't read like a question.

It was a purposefully pointed and slightly unfair question. I didn't predict that Duncan would be able to answer it well (though I hoped he would chill out, give a good answer, and then we could high five, or something).

If he answered in various bad ways (that I feared/predicted), then I was ready with secondary and tertiary criticisms.

I wasn't expecting him to just totally dodge it.

To answer my own question: cops are an example of people who can be good and sane even though they go around hurting people.

However, cops do this mostly only while wearing a certain uniform, while adhering to written standards, and while under the supervision of elected officials who are also following written standards. Also, all the written standards were written by still other people who were elected, and the relevant texts are available for anyone to read. Also, courts have examined many many real world examples, and made judgement calls, with copious commentary, illustrating how the written guidelines can be applied to various complex situations.

The people cops hurt, when they are doing "a good job imposing costs on bad behavior" are people who are committing relatively well defined crimes that judges and juries and so on would agree are bad, and which violate definitions written by people who were elected, etc.

My general theory here is that vigilantism (and many other ways of organizing herds of humans) is relatively bad, and "rights-respecting rule of law" (generated by the formal consent of the governed) is the best succinct formula I know of for virtuous people to engage in virtuous self rule.

In general, I think governors should be very very very very careful about imposing costs and imposing sanctions for unclear reasons rather than providing public infrastructure and granting clear freedoms.
