What's something you believe that would get negative karma if earnestly expressed in a normal LessWrong conversation? Write it in quotes. Vote on the meta-claim "would get negative karma" using ✔️/X, where ✔️ = yes, this would get negative karma, and X = no, this would get positive or nonnegative karma.
"Eliezer Yudkowsky being deeply irrational in some specific ways, and yet being very popular here, has always been, and continues to be, much of why the community is less effective than it could be at the things he's interested in. If he wants to become a good influence on the world, he should be more humble and curious, and more willing to brave the gauntlet of posting on this website, rather than hiding in his twitter safe space."
Intentionally saying the inflammatory version I'd normally soften.
(ninja edit: I also think he's importantly right on important things; I'm an IABIED-pilled person at the moment. I just also think he should engage more regularly with the frontier of research that IABIED-pilled people put out.)
Well, I do think he could stand to be slightly more diplomatic; not enough to stop saying people are being foolish, but are you seriously saying that Seb fucking Krier is a midwit? Like, he's being a fool, he's not thinking carefully, these are actions he's taking, but "midwit" just sounds like Yudkowsky has spent too much time on Twitter. This isn't actually the behavior I want Yudkowsky to change most, though his abrasive style definitely has something to do with my objection to how he processes others' claims; I also think being abrasive when necessary is important and good, and one should just say what one thinks unless it's actually unsafe to do so. But I think being brave enough to be abrasive, and then continuing to be abrasive if and only if you are actually not convinced by objections, might be closer to what I want.
My actual complaint isn't centrally about whether he posts here, I guess. The central example of why I think something is wrong is that IABIED seems to use more metaphor than it should. His ontology feels out of date. If he's right, and I sure do think he is, then I wish he were able to explain why he's right in terms that are more reliably technically insightful.
Idk...
Nah, I want him to do slightly less of what he does and slightly more of trying to keep up with research, because I think it would make his communication more likely to land with technical people. This is not a fully general request; I think he has a specific blind spot about underrating the value of skimming technical work that isn't immediately obviously relevant, or that is in the wrong ontology to immediately bear on what he's doing. And generally keeping up with subfields that feel like they should produce relevant insights even if they haven't yet. Being able to speak their latest language when telling them why one thinks they're making a mistake.
It is a somewhat general claim; this is just an example. But like, I'd hope for a specific kind of research-flavored curiosity to come from being slightly more humble.
"A rationalist community better at following its ideals would be explicitly antifascist/antiracist/antisexist/etc, and explicitly exclusionary of many fascists/racists/reactionaries/etc it currently tolerates. The community's current norms around political tolerance and neutrality are more rooted in exclusion trauma, upper-middle-class conflict avoidance norms, and a desire to protect politically valent false beliefs from scrutiny, rather than any aid those norms bring to the community's rationality."
I've been intending to write a more careful and less provocative version of this up as a post or sequence of posts for a while, so I figured I would post the basic thesis in this somewhat safer thread to get the ball rolling. Apologies for doing the more inflammatory version first; hopefully I'll find time to write the more careful version sometime in the next few months.
"Buddhism has been damaging to the epistemics of everyone in this sphere. Buddhism was only ever privileged as a hypothesis due to background SF/Bay-Area spiritualism rather than real merit.
Buddhist materials are explicitly selected for reshaping how you think within their frames. This makes engaging with them like joining a minor cult to learn its social skills. Some people can extract the useful parts without buying in, but they are notably underrepresented in any discussion (some selection effects, of course). The default assumption should be that you won't, especially since the topic is treated without notable suspicion. Most other religions are massively safer to practice for a few years, though not without their risks, as they rely more on ritual than on mental molding, and more on argumentation for their Rightness. You're already primed to notice flaws in arguments. Buddhism operates more directly on your mindset, framing, and probably even values, since humans are not idealized agents where those are separate.
Meditation is useful, and probably doesn't result in a lot of the central and surrounding Buddhist thought. However, just like joining a cult or playing a gacha game, you should be skeptical of Budd...
Meta: not that much contentful stuff gets negative karma in isolation, only as a response, IMO. Negative karma is way more likely for things that respond in a way people think is bad/unreasonable than for things that are just unreasonable statements in isolation.
I'm genuinely unsure about the voting, but:
"(In most relevant senses, with substantial translation work and ontological sophistication) God is good and real, (some) religion is both good and true relative to what we have, and many of the classic "new atheist" arguments are bad and the religious counterarguments are largely correct, and LessWrongers (as well as many others) are in the final evaluation being irrational in their allergies to this, and humanity would benefit from investing to make good conceptual progress on this. See https://tsvibt.github.io/theory/index_What_is_God_.html "
AI existential risks, especially extinction risks from a longtermist perspective, are now way overfunded compared to better-futures work. Longtermism, properly interpreted, agrees with the common view among the general public that sub-existential catastrophes that collapse civilization are at least as important as risks that kill everybody, and are more important to prevent in practice than extinction risks.
One major upshot of this is that bio-threats, wars that could collapse civilization entirely, or other threats that kill off a large fraction of the population but don't cause extinction, especially those coming from AI, are quite a bit more important to prevent than classical AI risk scenarios, and probably deserve more funding than current AI safety work gets.
Related to this, the maxipok heuristic is a bad guide to action: the expected (and quite likely the actual) distribution of futures is nowhere near as dichotomous as some people think, and because the probability of AGI this century is quite high, it's quite likely that the effects of non-existential interventions will persist.
A better heuristic is to instead focus on a wider portfolio of grand challenges, which were defined in the artic...
"Lesswrong community underestimates the risk of nuclear ww3 and overestimates the chance of humanity extinction due to AI".
"quantum mechanics is probably important to the structure of agency/the mind in some way we don't understand yet".
The counting arguments for misalignment, even if they were correct, do not show that AI safety is as difficult as some groups like MIRI claim, absent other very contestable premises that we could attempt to make false.
"People often submit incredibly epistemically rude and short-sighted comments on forums, but they deceive people into upvoting them by putting on a veneer of politeness. 'John, I feel like you've got a nail in your head.' they say. 'Your conclusion is wrong so you must not have thought of this thing you explicitly mentioned in your post.'"
Posting things that are adjacent in frame but imply beliefs more associated with AI Ethics or the normie crowd. E.g., let's say someone does a deep dive into John Rawls's A Theory of Justice (a fictional example, but I've seen similar) and doesn't preface it by relating it to some sort of decision theory or similar; it is often assumed that it is not meant for the LW community because it doesn't make the connections clear enough. I'm not sure this is only a bad thing, but sometimes I find that it signals a lack of good faith in accepting other people's frames?
Separate them out lol, that way I can more clearly disagree with one of your statements while agreeing with the other ;). Well, I mean, I disagree with both :D