All of hg00's Comments + Replies

Speaking of Stag Hunts

There's a major challenge in all of this: I see any norms you introduce as additional tools that can be abused to win -- just selectively call out your opponents for alleged violations to discredit them.

I think this is usually done subconsciously -- people are more motivated to find issues with arguments they disagree with.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

It seems like you wanted me to respond to this comment, so I'll write a quick reply.

Now for the rub: I think anyone working on AI alignment (or any technical question of comparable difficulty) mustn't exhibit this attitude with respect to [the thing they're working on]. If you have a problem where you're not able to achieve high confidence in your own models of something (relative to competing ambient models), you're not going to be able to follow your own thoughts far enough to do good work--not without being interrupted by thoughts like "But if I multi

... (read more)

Thanks for the reply.

But it seems like maybe you're proposing that people self-deceive in order to get themselves confident enough to explore the ramifications of a particular hypothesis. I think we should be a bit skeptical of intentional self-deception.

I want to clarify that this is not my proposal, and to the extent that it had been someone's proposal, I would be approximately as wary about it as you are. I think self-deception is quite bad on average, and even on occasions when it's good, that fact isn't predictable in advance, making choosing to s... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Separately, I don't think the MIRI/CFAR associated social circle is a cult.

Nor do I. (I've donated money to at least one of those organizations.) [Edit: I think they might be too tribal for their own good -- many groups are -- but the word "cult" seems too strong.]

I do think MIRI/CFAR is to some degree an "internet tribe". You've probably noticed that those can be pathological.

Anyway, you're writing a lot of words here. There's plenty of space to propose or cite a specific norm, explain why you think it's a generally good norm, and explain why Ilya ... (read more)

Basically I'm getting more of an "ostracize him!" vibe than a "how can we keep the garden clean?" vibe -- you were pretending to do the second one in your earlier comment, but I think the cursing here makes it clear that your true intention is more like the first.

I didn't respond to this earlier, but I think I'd also like to flag here that I don't appreciate this (inaccurate) attempt to impute intentions to me. I will state it outright: your reading of my intention is incorrect, and also seems to me to be based on a very flimsy reasoning process.

(To expand... (read more)

dxu (3mo, 4 karma): Okay, sure. I think LW should (and for the most part, does) have a norm against personal attacks. I think LW should also (and again, for the most part, does) have a norm against low-effort sniping. I think Ilya's comment[ing pattern] runs afoul of both of these norms (and does so rather obviously to boot), neither of which (I claim) is "suspiciously specific" in the way you describe.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

It's not obvious to me that Ilya meant his comment as aggressively as you took it. We're all primates and it can be useful to be reminded of that, even if we're primates that go to space sometimes. Asking yourself "would I be responding similar to how I'm responding now if I was, in fact, in a cult" seems potentially useful. It's also worth remembering that people coded as good aren't always good.

Your comment was less crass than Ilya's, but it felt like you were slipping "we all agree my opponent is a clear norm violator" into a larger argument without ... (read more)

So, there are a number of things I want to say to this. It might first be meaningful to establish the following, however:

Asking yourself "would I be responding similar to how I'm responding now if I was, in fact, in a cult" seems potentially useful.

I don't think I'm in a cult. (Separately, I don't think the MIRI/CFAR associated social circle is a cult.)

The reason I include the qualifier "separately" is because, in my case, these are very much two separate claims: I do not live in the Bay Area or any other rationalist community "hot spot", I have had (t... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

It is an invitation to turn the comments section into something like a factionalized battleground

If you want to avoid letting a comments section descend into a factionalized battleground, you also might want to avoid saying that people "would not much be missed" if they are banned. From my perspective, you're now at about Ilya's level, but with a lot more words (and a lot more people in your faction).

From my perspective, the commenters here have, with very few exceptions, performed admirably at not turning this thread into a factionalized battleground. (Note that my use of "admirably" here is only in relation to my already-high expectations for LW users; in the context of the broader Internet a more proper adverb might be "incredibly".) You may note, for example, that prior to my comment, Ilya's comment had not received a single response, indicating that no one found his bait worth biting on. Given this, I was (and remain) quite confident that my state... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I am interested in the fact that you find the comment so cult-y though, because I didn't pick that up.

It's a fairly incoherent comment which argues that we shouldn't work to overcome our biases or engage with people outside our group, with strawmanning that seems really flimsy... and it has a bunch of upvotes. Seems like curiosity, argument, and humility are out, and hubris is in.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Thanks, this is encouraging.

I think mostly everyone agrees with this, and has tried, and in practice, we keep hitting "inferential distance" shaped walls, and become discouraged, and (partially) give up.

I've found that an unexpected benefit of trying to explain my thinking and overcome the inferential distance is that I think of arguments which change my mind. Just having another person to bounce ideas off of causes me to look at things differently, which sometimes produces new insights. See also the book passage I quoted here.

Scott Garrabrant (3mo, 5 karma): Note that I think the form of inferential distance is often about trying to communicate across different ontologies. Sometimes a person will even correctly get the arguments of their discussion partner to the point where they can internally inhabit that point of view, but it is still hard to get the argument to dialogue productively with your other views because the two viewpoints have such different ontologies.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

which in turn I fundamentally see as a consequence of epistemic learned helplessness run rampant

Not sure I follow. It seems to me that the position you're pushing, that learning from people who disagree is prohibitively costly, is the one that goes with learned helplessness. ("We've tried it before, we encountered inferential distances, we gave up.")

Suppose there are two execs at an org on the verge of building AGI. One says "MIRI seems wrong for many reasons, but we should try and talk to them anyways to see what we learn." The other says "Nah, tha... (read more)

So I think my orientation on seeking out disagreement is roughly as follows. (This is going to be a rant I write in the middle of the night, so might be a little incoherent.)

There are two distinct tasks: 1) generating new useful hypotheses/tools, and 2) selecting between existing hypotheses/filtering out bad hypotheses.

There are a bunch of things that make people good at both these tasks simultaneously. Further, each of these tasks is partially helpful for doing the other. However, I still think of them as mostly distinct tasks. 

I think skill at these t... (read more)

Scott Garrabrant (3mo, 7 karma): I believe they are saying that cheering for seeking out disagreement is learned helplessness as opposed to doing a cost-benefit analysis about seeking out disagreement. I am not sure I get that part either. I was also confused reading the comment, thinking that maybe they copied the wrong paragraph, and meant the 2nd paragraph. I am interested in the fact that you find the comment so cult-y though, because I didn't pick that up.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I'm not sure I agree with Jessica's interpretation of Eliezer's tweets, but I do think they illustrate an important point about MIRI: MIRI can't seem to decide if it's an advocacy org or a research org.

"if you actually knew how deep neural networks were solving your important mission-critical problems, you'd never stop screaming" is frankly evidence-free hyperbole, of the same sort activist groups use (e.g. "taxation is theft"). People like Chris Olah have studied how neural nets solve problems a lot, and I've never heard of them screaming about what they... (read more)

As a tiny, mostly-uninformed data point, I read "if you realized how bad taxation is for the economy, you'd never stop screaming" as having a very different vibe from Eliezer's tweet, because he didn't use the word "bad". I know it's a small difference, but it hits different. Something in his tweet was amusing because it felt like it was pointing to a presumably neutral thing and making it scary, whereas saying the same thing about a clearly moralistic point seems like it's doing a different thing.

Again - a very minor point here, just wanted to throw it in.

MIRI can't seem to decide if it's an advocacy org or a research org.

MIRI is a research org. It is not an advocacy org. It is not even close. You can tell by the fact that it basically hasn't said anything for the last 4 years. Eliezer's personal twitter account does not make MIRI an advocacy org.

(I recognize this isn't addressing your actual point. I just found the frame frustrating.)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

The most natural shared interest for a group united by "taking seriously the idea that you are a computation" seems like computational neuroscience, but that's not on your list, nor do I recall it being covered in the sequences. If we were to tell 5 random philosophically inclined STEM PhD students to write a lit review on "taking seriously the idea that you are a computation" (giving them that phrase and nothing else), I'm quite doubtful we would see any sort of convergence towards the set of topics you allude to (Haskell, anthropics, mathematical logic)... (read more)

I notice I like "you are an algorithm" better than "you are a computation", since "computation" feels like it could point to a specific instantiation of an algorithm, and I think that algorithm as opposed to instantiation of an algorithm is an important part of it.

I agree that the phrase "taking seriously the idea that you are a computation" does not directly point at the cluster, but I still think it is a natural cluster. I think that computational neuroscience is in fact high up on the list of things I expect less wrongers to be interested in. To the extent that they are not as interested in it as other things, I think it is because it is too hard to actually get much that feels like algorithmic structure from neuroscience.

I think that the interest in anthropics is related to the fact that computations are the kin... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Something I try to keep in mind about critics is that people who deeply disagree with you are also not usually very invested in what you're doing, so from their perspective there isn't much of an incentive to put effort into their criticism. But in theory, the people who disagree with you the most are also the ones you can learn the most from.

You want to be the sort of person where if you're raised Christian, and an atheist casually criticizes Christianity, you don't reject the criticism immediately because "they didn't even take the time to read the Bible!"

Rob Bensinger (3mo, 4 karma): I think I have a lot less (true, useful, action-relevant) stuff to learn from a random fundamentalist Christian than from Carl Shulman, even though I disagree vastly more with the fundamentalist than I do with Carl.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Thanks. After thinking for a bit... it doesn't seem to me that Topher frobnitzes Scott, so indeed Eliezer's reaction seems inappropriately strong. Publishing emails that someone requested (and was not promised) privacy for is not an act of sadism.

philh (3mo, 5 karma): I believe the idea was not that this was an act of frobnitzing, but that:

  • Topher is someone who openly frobnitzes.

  • Now he's done this, which is bad.

  • It is unsurprising that someone who openly frobnitzes does other bad things too.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

It seems quite possible to me that the philosophical stance + mathematical taste you're describing aren't "natural kinds" (e.g. the topics you listed don't actually have a ton in common, besides being popular MIRI-sphere topics).

If that's the case, selecting for people with the described philosophical stance + mathematical taste could basically be selecting for "people with little resistance to MIRI's organizational narrative" (people who have formed their opinions about math + philosophy based on opinions common in/around MIRI).

selection on philosophica

... (read more)

I don't want to speak for/about MIRI here, but I think that I personally do the "patting each other on the back for how right we all are" more than I endorse doing it. I think the "we" is less likely to be MIRI, and more likely to be a larger group that includes people like Paul.

I agree that it would be really really great if MIRI can interact with and learn from different views. I think mostly everyone agrees with this, and has tried, and in practice, we keep hitting "inferential distance" shaped walls, and become discouraged, and (partially) give up. To ... (read more)

If that's the case, selecting for people with the described philosophical stance + mathematical taste could basically be selecting for "people with little resistance to MIRI's organizational narrative"

 

So, I do think that MIRI hiring does select for people with "little resistance to MIRI's organizational narrative," through the channel of "You have less mental resistance to narratives you agree with" and "You are more likely to work for an organization when you agree with their narrative." 

I think that additionally people have a score on "mental ... (read more)

It sounds like you're saying that at MIRI, you approximate a potential hire's philosophical competence by checking to see how much they agree with you on philosophy. That doesn't seem great for group epistemics?

 

I did not mean to imply that MIRI does this any more than e.g. philosophy academia. 

When you don't have sufficient objective things to use to judge competence, you end up having to use agreement as a proxy for competence. This is because when you understand a mistake, you can filter for people who do not make that mistake, but when you do... (read more)

It seems quite possible to me that the philosophical stance + mathematical taste you're describing aren't "natural kinds" (e.g. the topics you listed don't actually have a ton in common, besides being popular MIRI-sphere topics).

 

So, I believe that the philosophical stance is a natural kind. I can try to describe it better, but note that I won't be able to point at it perfectly:

I would describe it as "taking seriously the idea that you are a computation[Edit: an algorithm]." (As opposed to a collection of atoms, or a location in spacetime, or a Christ... (read more)

throwaway46237896 (3mo, 4 karma): The rant (now somewhat redacted) can be found here [https://www.facebook.com/yudkowsky/posts/10159408250519228], in response to the leaked emails of Scott more-or-less outright endorsing people like Steve Sailer [https://imgur.com/a/gWeIK6c] re: "HBD". There was a major backlash against Scott at the time, resulting in the departure of many longtime members of the community (including me), and Eliezer's post was in response to that. It opened with: ...which is, to put it mildly, absurd.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

These claims seem rather extreme and unsupported to me:

  • "Lots of upper middle class adults hardly know how to have conversations..."

  • "the average workplace [is] more than 1/10th as damaging to most employees’ basic human capacities, compared to Leverage_2018-2019."

I suggest if you write a toplevel post, you search for evidence for/against them.

Elaborating a bit on my reasons for skepticism:

  • It seems like for the past 10+ years, you've been mostly interacting with people in CFAR-adjacent contexts. I'm not sure what your source of knowledge is on "

... (read more)
Douglas_Knight (2mo, 9 karma): A couple books suggesting that white collar workplaces are more traumatic than blue collar ones are Moral Mazes (cited by Jessica) and Bullshit Jobs.

RE: "Lots of upper middle class adults hardly know how to have conversations..."

I will let Anna speak for herself, but I have evidence of my own to bring... maybe not directly about the thing she's saying but nearby things. 

  • I have noticed friends who jumped up to upper middle class status due to suddenly coming into a lot of wealth (prob from crypto stuff). I noticed that their conversations got worse (from my POV). 
    • In particular: They were more self-preoccupied. They discussed more banal things. They spent a lot of time optimizing things that mo
... (read more)
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom.

If psychosis is caused by an underlying physiological/biochemical process, wouldn't that suggest that e.g. exposure to Leverage Research wouldn't be a cause of it?

If being part of Leverage is causing less reality-b... (read more)

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

Does anyone have thoughts about avoiding failure modes of this sort?

Especially in the "least convenient possible world" where some of the bullet points are actually true -- like, if we're disseminating principles for wannabe AI Manhattan Projects, and we're optimizing the principles for the possibility that one of the wannabe AI Manhattan Projects is the real deal, what principles should we disseminate?


Most of my ideas are around "staying grounded" -- spend significant time hanging out with "normies" who don't buy into your worldview, maintain your sense... (read more)

Does anyone have thoughts about avoiding failure modes of this sort?

Meredith from Status451 here. I've been through a few psychotic episodes of my own, often with paranoid features, for reasons wholly unrelated to anything being discussed at the object-level here; they're unpleasant enough, both while they're going on and while cleaning up the mess afterward, that I have strong incentives to figure out how to avoid these kinds of failure modes! The patterns I've noticed are, of course, only from my own experience, but maybe relating them will be helpful.

  • In
... (read more)

IMO, a large number of mental health professionals simply aren't a good fit for high-intelligence people having philosophical crises. People know this and intuitively avoid the large hassle and expense of sorting through many bad matches. Finding solid people to refer to who are not otherwise associated with the community in any way would be helpful.

I do think that encouraging people to stay in contact with their family and work to have good relationships is very useful. Family can provide a form of grounding that having small talk with normies while going dancing or pursuing other hobbies doesn't provide.

When deciding whether a personal development group is culty, I think a good test is to ask whether the group's work led to the average member having better or worse relationships with their parents.

Avi (3mo, 9 karma): I agree, and think it's important to 'stay grounded' in the 'normal world' if you're involved in any sort of intense organization or endeavor. You've made some great suggestions. I would also suggest that having a spouse who preferably isn't too involved, or involved at all, and maybe even some kids, is another commonality among people who find it easier to avoid going too far down these rabbit holes. Also, having a family is positive in countless other ways, and what I consider part of the 'good life' for most people.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

The community still seems in the middle of sensemaking around Leverage

Understanding how other parts of the community were similar/dissimilar to Leverage seems valuable from a sensemaking point of view.

Lots of parts of the post sort of implicitly present things as important, or ask you to draw conclusions without explicitly pointing out those conclusions.

I think you may be asking your reader to draw the conclusion that this is a dishonest way to write, without explicitly pointing out that conclusion :-) Personally, I see nothing wrong with presenting only observations.

Common knowledge about Leverage Research 1.0

The rationalist community did in fact have to have such conversations about Eliezer over the years, and (IMO) mostly concluded that he actively wants to just sit in a comfortable cave and produce FAI progress with his team, and so he delegates any social authority/power he gains to trusted others, making him a safer weirdo leader figure than most.

Was this conversation held publicly on a non-Eliezer-influenced online forum?

I think there's a pretty big difference -- from accounts I've read about Leverage, the "Leverage community" had non-public conversations about Geoff as well, and they concluded he was a great guy.

Review: The End Is Always Near

https://xkcd.com/808/

If it worked, militaries would be using it to... Ditch boot camp and instead harden soldiers for battle by having them participate in group therapy

I think you're both too credulous of the research you linked, and extrapolating from it too confidently.

Jimrandomh's Shortform

The advice I've heard is to eat a variety of fruits and vegetables of different colors to get a variety of antioxidants in your diet.

Until recently, the thinking had been that the more antioxidants, the less oxidative stress, because all of those lonely electrons would quickly get paired up before they had the chance to start mucking things up in our cells. But that thinking has changed.

Drs. Cleva Villanueva and Robert Kross published a 2012 review titled “Antioxidant-Induced Stress” in the International Journal of Molecular Sciences. We spoke via Sky

... (read more)
[Link] Musk's non-missing mood

An interesting missing mood I've observed in discussions of AI safety: When a new idea for achieving safe AI is proposed, you might expect that people concerned with AI risk would show a glimmer of eager curiosity. Perhaps the AI safety problem is actually solvable!

But I've pretty much never observed this. A more common reaction seems to be a sort of an uneasy defensiveness, sometimes in combination with changing the subject.

Another response I occasionally see is someone mentioning a potential problem in a manner that practically sounds like they are rebu... (read more)

[comment deleted] (6mo, 2 karma)
[Prediction] What war between the USA and China would look like in 2050

Hiding an aircraft carrier battle group on the open sea isn't possible.

This think tank disagrees.

Can we hold intellectuals to similar public standards as athletes?

Somewhere I read that a big reason IQ tests aren't all that popular is because when they were first introduced, lots of intellectuals took them and didn't score all that high.  I'm hoping prediction markets don't meet a similar fate.

steven0461 (1y, 6 karma): Relatedly, the term "superforecasting" is already politicized to death [https://twitter.com/bbcpolitics/status/1229707564696424455] in the UK.
ozziegooen (1y, 8 karma): There's a funny thing about new signaling mechanisms. If they disagree with old ones, then at least some people who did well in the old ones will complain (loudly). If they perfectly agree with old ones, then they provide no evaluative value. In general, introducing new signaling mechanisms is challenging, very much for this reason. If they can last though, then eventually those in power will be ones who did well on them, so these people will champion them vs. future endeavors. So they can have lasting lock-in. It's more of a reason to work hard to get them right.
Universal Eudaimonia

It's fiction ¯\_(ツ)_/¯

I guess I'll say a few words in defense of doing something like this... Supposing we're taking an ethically consequentialist stance.  In that case, the only purpose of punishment, basically, is to serve as a deterrent.  But in our glorious posthuman future, nanobots will step in before anyone is allowed to get hurt, and crimes will be impossible to commit.  So deterrence is no longer necessary and the only reason to punish people is due to spite.  But if people are feeling spiteful towards one another on Eudaimonia... (read more)

Rationality and Climate Change

For some thoughts on how climate change stacks up against other world-scale issues, see this.

Universal Eudaimonia

Yep. Good thing a real AI would come up with a much better idea! :)

chirag03k (1y, 2 karma): I'm confused -- please forgive me if this is a dumb comment, this is my first contribution. What was the purpose of the post if the idea was, on its own, not durable enough to stand? I'm genuinely confused on how this would avoid harming the 'good' people in the short term. How does this post expand our thoughts of AI if it "would come up with a better idea"? I'm not trying to criticize you (hence why I didn't downvote this post). I just want to better understand its intention so that I can understand LW better. Thanks
Needed: AI infohazard policy

It seems to me that under ideal circumstances, once we think we've invented FAI, before we turn it on, we share the design with a lot of trustworthy people we think might be able to identify problems.  I think it's good to have the design be as secret as possible at that point, because that allows the trustworthy people to scrutinize it at their leisure.  I do think the people involved in the design are liable to attract attention--keeping this "FAI review project" secret will be harder than keeping the design itself secret.  (It's easier to... (read more)

EA Relationship Status

Dating is a project that can easily suck up a lot of time and attention, and the benefits seem really dubious (I know someone who had their life ruined by a bad divorce).

I would be interested in the opposite question: Why *would* an EA try and find someone to marry? I'm not trying to be snarky, I genuinely want to hear why in case I should change my strategy. The only reason I can think of is if you're a patient longtermist and you think your kids are more likely to be EAs.

jefftk (1y, 2 karma): While we don't have controlled studies, married people do tend to be happier. Overall, many married people find the companionship of having another person to spend their life with fulfilling, rewarding, comforting, and generally very positive. There is a risk of an unhappy marriage, but there is also a risk of missing out on what could be a really important relationship. You could consider talking to people a few decades older than you who seem like the kind of people you might be in a few decades time, and asking whether they're married and how they feel about it?
Open & Welcome Thread - June 2020

I spent some time reading about the situation in Venezuela, and from what I remember, a big reason people are stuck there is simply that the bureaucracy for processing passports is extremely slow/dysfunctional (and lack of a passport presents a barrier for achieving a legal immigration status in any other country). So it might be worthwhile to renew your passport more regularly than is strictly necessary, so you always have at least a 5 year buffer on it say, in case we see the same kind of institutional dysfunction. (Much less effort than acquiring a se... (read more)

Open & Welcome Thread - July 2020

Worth noting that we have at least one high-karma user who is liable to troll us with any privileges granted to high-karma users.

Do Women Like Assholes?
I was always nice and considerate, and it didn’t work until I figured out how to filter for women who are themselves lovely and kind.

Does anyone have practical tips on finding lonely single women who are lovely and kind? I've always assumed that these were universally attractive attributes, and thus there would be much more competition for such women.

Most reliable news sources?
Answer by hg00 (Jun 06, 2020, 2 karma)

The Financial Times, maybe FiveThirtyEight

hedonometer.org is a quick way to check if something big has happened

Pablo (2y, 4 karma): I like FiveThirtyEight, but it's not the sort of publication you can refer to for "what happened in the past 3 days" (except for very specific events like 'how much Trump's popularity changed in the intervening period'). I second the Financial Times recommendation.
Open & Welcome Thread - June 2020

Permanent residency (as opposed to citizenship) is a budget option. For example, for Panama, I believe if you're a citizen of one of 50 nations on their "Friendly Nations" list, you can obtain permanent residency by depositing $10K in a Panamanian bank account. If I recall correctly, Paraguay's permanent residency has similar prerequisites ($5K deposit required) and is the easiest to maintain--you just need to be visiting the country every 3 years.

The Chilling Effect of Confiscation

I think this is the best argument I've seen in favor of mask seizure / media misrepresentation on this:

https://www.reddit.com/r/slatestarcodex/comments/g5yh64/us_federal_government_seizing_ppe_to_what_end/fo6pyel/

jefftk (2y, 3 karma): That comment doesn't address seizures and their incentive effects at all?
Judgment, Punishment, and the Information-Suppression Field

Downvoted because I don't want LW to be the kind of place where people casually make inflammatory political claims, in a way that seems to assume this is something we all know and agree with, without any supporting evidence.

Said Achmiz (2y, 3 karma): I also downvoted for precisely this reason. I agree with Ben Pace's take [https://www.lesswrong.com/posts/LfPYqcECjz9hJmdNE/judgment-punishment-and-the-information-suppression-field#aEuKasketxsH75Wwi], but not with his voting decision, though I entirely agree that this post is strong-upvote-worthy with this sort of thing removed.
Ben Pace (2y): *nods* I agree that the opening has one line which is both off-topic and predictably distracting (https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer). I strong-upvoted because I found the model in the rest of the post to be helpful and quite accurate.
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists

Nice post. I think one thing which can be described in this framework is a kind of "distributed circular reasoning". The argument is made that "we know sharing evidence for Blue positions causes harmful effects due to Green positions A, B, and C", but the widespread acceptance of Green positions A, B, and C itself rests on the fact that evidence for Green positions is shared much more readily than evidence for Blue positions.

wizzwizz4 (2y): https://ncase.me/loopy/
Religion as Goodhart

The trouble is that tradition is undocumented code, so you aren't sure what is safe to change when circumstances change.
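The "undocumented code" analogy can be made concrete with a toy sketch (entirely hypothetical; the function names and scenario are invented for illustration). The retry loop below looks like dead weight you could safely delete, but it quietly guards against a failure mode nobody wrote down -- the same way a tradition can encode a hard-won lesson whose original rationale has been lost:

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

def flaky_fetch():
    """Simulates a service that fails transiently about half the time."""
    if random.random() < 0.5:
        raise ConnectionError("transient failure")
    return "data"

def fetch_with_tradition(retries=3):
    # This retry loop is the undocumented "tradition": remove it and the
    # code still *looks* correct, but it breaks under conditions the
    # original authors encountered and we no longer remember.
    for _ in range(retries):
        try:
            return flaky_fetch()
        except ConnectionError:
            continue
    raise ConnectionError("all retries failed")

print(fetch_with_tradition())
```

The point of the sketch: without a comment explaining *why* the loop exists, a later maintainer can't tell whether it's safe to remove when circumstances change -- exactly the problem with inherited tradition.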

Self-consciousness wants to make everything about itself
Seems like a bad comparison, since, as an atheist, you don't accept the Bible's truth, so the things the preacher is saying are basically spam from your perspective. There's also no need to feel self-conscious or defend your good-person-ness to this preacher, as you don't accept the premises he's arguing from.

Yes, and the preacher doesn't ask me about my premises before attempting to impose their values on me. Even if I share some or all of the preacher's premises, they're trying to force a strong conclusion about m... (read more)

Self-consciousness wants to make everything about itself

I think I see a motte and bailey around what it means to be a good person. Notice at the beginning of the post, we've got statements like

Anita reassured Susan that her comments were not directed at her personally

...

they spent the duration of the meeting consoling Susan, reassuring her that she was not at fault

And by the end, we've got statements like

it's quite hard to actually stop participating in racism... In societies with structural racism, ethical behavior requires skillfully and consciously reducing harm

...

almost every person's b
... (read more)
jessicata (3y): Seems like a bad comparison, since, as an atheist, you don't accept the Bible's truth, so the things the preacher is saying are basically spam from your perspective. There's also no need to feel self-conscious or defend your good-person-ness to this preacher, as you don't accept the premises he's arguing from. It's a different situation if you do accept the truth of the Bible. In that case, if the preacher has good Biblical evidence that you're doing bad things and can't stop without God's grace, that would be worth listening to, and shutting down the preacher by asserting that you're a "good person" is illegitimate. Of course, you may be concerned that the preacher is misinterpreting the Bible in order to illegitimately gain power over people. That would be an issue of epistemology and of legitimacy. You may be able to resolve this by doing your own Biblical scholarship and conversing with the preacher if it seems he has relevant ideas.
Self-consciousness wants to make everything about itself

Maybe you're right, I haven't seen it used much in practice. Feel free to replace "Something like Nonviolent Communication" with "Advice for getting along with people" in that sentence.

Self-consciousness wants to make everything about itself

Agreed. Also, remember that conversations are not always about facts. Oftentimes they are about the relative status of the participants. Something like Nonviolent Communication might seem like tone policing, but through a status lens, it could be seen as a practice where you stop struggling for higher status with your conversation partner and instead treat them compassionately as an equal.

Said Achmiz (3y): It has been my experience that NVC is used exclusively as a means of making status plays. Perhaps it may be used otherwise, but if so, I have not seen it.
The Relationship Between Hierarchy and Wealth

Interesting post. I think it might be useful to examine the intuition that hierarchy is undesirable, though.

It seems like you might want to separate out equality in terms of power from equality in terms of welfare. Most of the benefits from hierarchy seem to be from power inequality (let the people who are the most knowledgable and the most competent make important decisions). Most of the costs come in the form of welfare inequality (decision-makers co-opting resources for themselves). (The best argument against this frame would probably be something a... (read more)

Raemon (3y): I think a background belief (based on some half-remembered writings of Sarah) is that power inequality often fairly directly causes welfare inequality. (fake edit: she links to an older post of hers that talks about the physiological effects of status regulation among mammals: https://srconstantin.wordpress.com/2017/09/12/patriarchy-is-the-problem/)
Reverse Doomsday Argument is hitting preppers hard

If you're willing to go back more than 70 years, in the US at least, the math suggests prepping is a good strategy:

https://medium.com/s/story/the-surprisingly-solid-mathematical-case-of-the-tin-foil-hat-gun-prepper-15fce7d10437

“She Wanted It”

+1 for this. It's tremendously refreshing to see someone engage the opposing position on a controversial issue in good faith. I hope you don't regret writing it.

Would your model predict that if we surveyed fans of *50 Shades of Grey*, they have experienced traumatic abuse at a rate higher than the baseline? This seems like a surprising but testable prediction.

Personally, I think your story might be accurate for your peer group, but that your peer group is also highly non-representative of the population at large. There is very wide variation ... (read more)

You Are Being Underpaid

https://kenrockwell.com/business/two-hour-rule.htm

ialdabaoth (4y): Something that always baffled me - all of this was regularly cited for why otherwise productive employees were fired. And everything was also done by unproductive employees, who never got caught for it. I could never quite figure out the rules for who gets punished for slacking off vs. who gets rewarded for it.