[note: the following is essentially an expanded version of this LessWrong comment on whether appeals to consequences are normative in discourse. I am exasperated that this is even up for debate, but I figure that making the argumentation here explicit is helpful]

Carter and Quinn are discussing charitable matters in the town square, with a few onlookers.

Carter: "So, this local charity, People Against Drowning Puppies (PADP), is nominally opposed to drowning puppies."

Quinn: "Of course."

Carter: "And they said they'd saved 2170 puppies last year, whereas their total spending was $1.2 million, so they estimate they save one puppy per $553."

Quinn: "Sounds about right."

Carter: "So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies."

Quinn: "Hold it right there. Regardless of whether that's true, it's bad to say that."

Carter: "That's an appeal to consequences, well-known to be a logical fallacy."

Quinn: "Is that really a fallacy, though? If saying something has bad consequences, isn't it normative not to say it?"

Carter: "Well, for my own personal decisionmaking, I'm broadly a consequentialist, so, yes."

Quinn: "Well, it follows that appeals to consequences are valid."

Carter: "It isn't logically valid. If saying something has bad consequences, that doesn't make it false."

Quinn: "But it is decision-theoretically compelling, right?"

Carter: "In theory, if it could be proven, yes. But, you haven't offered any proof, just a statement that it's bad."

Quinn: "Okay, let's discuss that. My argument is: PADP is a good charity. Therefore, they should be getting more donations. Saying that they didn't save as many puppies as they claimed they did, in public (as you just did), is going to result in them getting fewer donations. Therefore, your saying that they didn't save as many puppies as they claimed to is bad, and is causing more puppies to drown."

Carter: "While I could spend more effort to refute that argument, I'll initially note that you only took into account a single effect (people donating less to PADP) and neglected other effects (such as people having more accurate beliefs about how charities work)."

Quinn: "Still, you have to admit that my case is plausible, and that some onlookers are convinced."

Carter: "Yes, it's plausible, in that I don't have a full refutation, and my models have a lot of uncertainty. This gets into some complicated decision theory and sociological modeling. I'm afraid we've gotten sidetracked from the relatively clear conversation, about how many puppies PADP saved, to a relatively unclear one, about the decision theory of making actual charity effectiveness clear to the public."

Quinn: "Well, sure, we're into the weeds now, but this is important! If it's actually bad to say what you said, it's important that this is widely recognized, so that we can have fewer... mistakes like that."

Carter: "That's correct, but I feel like I might be getting trolled. Anyway, I think you're shooting the messenger: when I started criticizing PADP, you turned around and made the criticism about my saying it, directing attention away from PADP's possible fraudulent activity."

Quinn: "You still haven't refuted my argument. If you don't do so, I win by default."

Carter: "I'd really rather that we just outlaw appeals to consequences, but, fine, as long as we're here, I'm going to do this, and it'll be a learning experience for everyone involved. First, you said that PADP is a good charity. Why do you think this?"

Quinn: "Well, I know the people there and they seem nice and hardworking."

Carter: "But, they said they saved over 2000 puppies last year, when they actually only saved 138, indicating some important dishonesty and ineffectiveness going on."

Quinn: "Allegedly, according to your calculations. Anyway, saying that is bad, as I've already argued."

Carter: "Hold up! We're in the middle of evaluating your argument that saying that is bad! You can't use the conclusion of this argument in the course of proving it! That's circular reasoning!"

Quinn: "Fine. Let's try something else. You said they're being dishonest. But, I know them, and they wouldn't tell a lie, consciously, although it's possible that they might have some motivated reasoning, which is totally different. It's really uncivil to call them dishonest like that. If everyone called people dishonest as readily as you do, that would lead to an all-out rhetorical war..."

Carter: "God damn it. You're making another appeal to consequences."

Quinn: "Yes, because I think appeals to consequences are normative."

Carter: "Look, at the start of this conversation, your argument was that saying PADP only saved 138 puppies is bad."

Quinn: "Yes."

Carter: "And now you're in the course of arguing that it's bad."

Quinn: "Yes."

Carter: "Whether it's bad is a matter of fact."

Quinn: "Yes."

Carter: "So we have to be trying to get the right answer, when we're determining whether it's bad."

Quinn: "Yes."

Carter: "And, while appeals to consequences may be decision theoretically compelling, they don't directly bear on the facts."

Quinn: "Yes."

Carter: "So we shouldn't have appeals to consequences in conversations about whether the consequences of saying something are bad."

Quinn: "Why not?"

Carter: "Because we're trying to get to the truth."

Quinn: "But aren't we also trying to avoid all-out rhetorical wars, and puppies drowning?"

Carter: "If we want to do those things, we have to do them by getting to the truth."

Quinn: "The truth, according to your opinion-"

Carter: "God damn it, you just keep trolling me, so we never get to discuss the actual facts. God damn it. Fuck you."

Quinn: "Now you're just spouting insults. That's really irresponsible, given that I just accused you of doing something bad, and causing more puppies to drown."

Carter: "You just keep controlling the conversation by OODA looping faster than me, though. I can't refute your argument, because you appeal to consequences again in the middle of the refutation. And then we go another step down the ladder, and never get to the truth."

Quinn: "So what do you expect me to do? Let you insult well-reputed animal welfare workers by calling them dishonest?"

Carter: "Yes! I'm modeling the PADP situation using decision-theoretic models, which require me to represent the knowledge states and optimization pressures exerted by different agents (both conscious and unconscious), including when these optimization pressures are towards deception, and even when this deception is unconscious!"

Quinn: "Sounds like a bunch of nerd talk. Can you speak more plainly?"

Carter: "I'm modeling the actual facts of how PADP operates and how effective they are, not just how well-liked the people are."

Quinn: "Wow, that's a strawman."

Carter: "Look, how do you think arguments are supposed to work, exactly? Whoever is best at claiming that their opponent's argumentation is evil wins?"

Quinn: "Sure, isn't that the same thing as who's making better arguments?"

Carter: "If we argue by proving our statements are true, we reach the truth, and thereby reach the good. If we argue by proving each other evil, we reach neither the truth nor the good."

Quinn: "In this case, though, we're talking about drowning puppies. Surely, the good in this case is causing fewer puppies to drown, and directing more resources to the people saving them."

Carter: "That's under contention, though! If PADP is lying about how many puppies they're saving, they're making the epistemology of the puppy-saving field worse, leading to fewer puppies being saved. And, they're taking money away from the next-best-looking charity, which is probably more effective if, unlike PADP, they're not lying."

Quinn: "How do you know that, though? How do you know the money wouldn't go to things other than saving drowning puppies if it weren't for PADP?"

Carter: "I don't know that. My guess is that the money might go to other animal welfare charities that claim high cost-effectiveness."

Quinn: "PADP is quite effective, though. Even if your calculations are right, they save about one puppy per $10,000. That's pretty good."

Carter: "That's not even that impressive, but even if their direct work is relatively effective, they're destroying the epistemology of the puppy-saving field by lying. So effectiveness basically caps out there instead of getting better due to better epistemology."

Quinn: "What an exaggeration. There are lots of other charities that have misleading marketing (which is totally not the same thing as lying). PADP isn't singlehandedly destroying anything, except instances of puppies drowning."

Carter: "I'm beginning to think that the difference between us is that I'm anti-lying, whereas you're pro-lying."

Quinn: "Look, I'm only in favor of lying when it has good consequences. That makes me different from pro-lying scoundrels."

Carter: "But you have really sloppy reasoning about whether lying, in fact, has good consequences. Your arguments for doing so, when you lie, are made of Swiss cheese."

Quinn: "Well, I can't deductively prove anything about the real world, so I'm using the most relevant considerations I can."

Carter: "But you're using reasoning processes that systematically protect certain cached facts from updates, and use those cached facts to justify not updating. This was very clear when you used outright circular reasoning: you used the cached fact that denigrating PADP is bad to justify terminating my argument that it wasn't bad to denigrate them. Also, you said the PADP people were nice and hardworking as a reason I shouldn't accuse them of dishonesty... but the fact that PADP saved far fewer puppies than they claimed casts doubt on those claims, and on their relevance to PADP's effectiveness. You didn't update when I first told you that fact; instead, you started committing rhetorical violence against me."

Quinn: "Hmm. Let me see if I'm getting this right. So, you think I have false cached facts in my mind, such as PADP being a good charity."

Carter: "Correct."

Quinn: "And you think those cached facts tend to protect themselves from being updated."

Carter: "Correct."

Quinn: "And you think they protect themselves from updates by generating bad consequences of making the update, such as fewer people donating to PADP."

Carter: "Correct."

Quinn: "So you want to outlaw appeals to consequences, so facts have to get acknowledged, and these self-reinforcing loops go away."

Carter: "Correct."

Quinn: "That makes sense from your perspective. But, why should I think my beliefs are wrong, and that I have lots of bad self-protecting cached facts?"

Carter: "If everyone were as willing as you to lie, the history books would be full of convenient stories, the newspapers would be parts of the matrix, the schools would be teaching propaganda, and so on. You'd have no reason to trust your own arguments that speaking the truth is bad."

Quinn: "Well, I guess that makes sense. Even though I lie in the name of good values, not everyone agrees on values or beliefs, so they'll lie to promote their own values according to their own beliefs."

Carter: "Exactly. So you should expect that, as a reflection of your lying to the world, the world lies back to you. So your head is full of lies, like the 'PADP is effective and run by good people' one."

Quinn: "Even if that's true, what could I possibly do about it?"

Carter: "You could start by not making appeals to consequences. When someone is arguing that a belief of yours is wrong, listen to the argument at the object level, instead of jumping to the question of whether saying the relevant arguments out loud is a good idea, which is a much harder question."

Quinn: "But how do I prevent actually bad consequences from happening?"

Carter: "If your head is full of lies, you can't really trust ad-hoc object-level arguments against speech, like 'saying PADP didn't save very many puppies is bad because PADP is a good charity'. You can instead think about what discourse norms lead to the truth being revealed, and which lead to it being obscured. We've seen, during this conversation, that appeals to consequences tend to obscure the truth. And so, if we share the goal of reaching the truth together, we can agree not to do those."

Quinn: "That still doesn't answer my question. What about things that are actually bad, like privacy violations?"

Carter: "It does seem plausible that there should be some discourse norms that protect privacy, so that some facts aren't revealed, if such norms have good consequences overall. Perhaps some topics, such as individual people's sex lives, are considered to be banned topics (in at least some spaces), unless the person consents."

Quinn: "Isn't that an appeal to consequences, though?"

Carter: "Not really. Deciding what privacy norms are best requires thinking about consequences. But, once those norms have been decided on, it is no longer necessary to prove that privacy violations are bad during discussions. There's a simple norm to appeal to, which says some things are out of bounds for discussion. And, these exceptions can be made without allowing appeals to consequences in full generality."

Quinn: "Okay, so we still have something like appeals to consequences at the level of norms, but not at the level of individual arguments."

Carter: "Exactly."

Quinn: "Does this mean I have to say a relevant true fact, even if I think it's bad to say it?"

Carter: "No. Those situations happen frequently, and while some radical honesty practitioners try not to suppress any impulse to say something true, this practice is probably a bad idea for a lot of people. So, of course you can evaluate consequences in your head before deciding to say something."

Quinn: "So, in summary: if we're going to have suppression of some facts being said out loud, we should have that through either clear norms designed with consequences (including consequences for epistemology) in mind, or individuals deciding not to say things, but otherwise our norms should be protecting true speech, and outlawing appeals to consequences."

Carter: "Yes, that's exactly right! I'm glad we came to agreement on this."

Comments (82)

The motivating example for this post is whether you should say "So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies", with Quinn arguing that you shouldn't say it because saying it has bad consequences. The problem is, saying this has very clearly good consequences, which means trying to use it as a tool for figuring out what you think of appeals to consequences sets up your intuitions to confuse you.

(It has clearly good consequences because "how much money goes to PADP right now" is far less important than "building a culture of caring about the actual effectiveness of organizations and truly trying to find/make the best ones". Plus if, say, Animal Charity Evaluators trusted this higher number of puppies saved and it had led them to recommend PADP as one of their top charities, then that would mean displacing funds that could have gone to more effective animal charities. The whole Effective Altruism project is about trying to figure out how to get the biggest positive impact, and you can't do this if you declare discussing negative information about organizations off limits

[...]

The post would be a lot clearer if it had a motivating example that really did have bad consequences, all things considered.

The extreme case would be a scientific discovery which enabled anyone to destroy the world, such as the supernova thing in Three Worlds Collide or the thought experiment that Bostrom discusses in The Vulnerable World Hypothesis:

So let us consider a counterfactual history in which Szilard invents nuclear fission and realizes that a nuclear bomb could be made with a piece of glass, a metal object, and a battery arranged in a particular configuration. What happens next? Szilard becomes gravely concerned. He sees that his discovery must be kept secret at all costs. But how? His insight is bound to occur to others. He could talk to a few of his physicist friends, the ones most likely to stumble upon the idea, and try to persuade them not to publish anything on nuclear chain reactions or on any of the reasoning steps leading up to the dangerous discovery. (That is what Szilard did in actual history.)

[...] Soon, figuring out how to initiate a nuclear chain reaction with pieces of metal, glass, and electricity will no longer take genius but will be within reach of any STEM student with an inventive mindset.

Note, I'm not arguing for a positive obligation to always inform everyone (see the last few lines of the dialogue); it's important for people to use their discernment sometimes. But, in the case you mentioned, if your study really did find that a vaccine caused autism, by the logic of the dialogue, that casts doubt on the "vaccines don't cause autism and antivaxxers are wrong and harmful" belief. (Maybe you're not the only one who has found that vaccines cause autism, and other researchers are hiding it too.) So, you should at least update that belief on the new evidence before evaluating consequences. (It could be that, even after considering this, the new study is likely to be a fluke, and discerning researchers will share the new study in an academic community without going to the press.)

My main objection is that the post is built around a case where Quinn is very wrong in their initial "bad consequences" claim, and that this leads people to have misleading intuitions. I was trying to propose an alternative situation where the "bad consequences" claim was true or closer to true, but where Quinn would still be wrong to suggest Carter shouldn't describe what they'd found.

(Also, for what it's worth, I find the Quinn character's argumentative approach very frustrating to read. This makes it hard to take anything that character describes seriously.)

Instead of Quinn admitting lying is sometimes good, I wish he had said something like:

“PADP is widely considered a good charity by smart people who we trust. So we have a prior on it being good. You’ve discovered some apparent evidence that it’s bad. So now we have to combine the prior and the evidence, and we end up with some percent confidence that they’re bad.
If this is 90% confidence they’re bad, go ahead. What if it’s more like 55%? What’s the right action to take if you’re 55% sure a charity is incompetent and dishonest (but 45% chance you misinterpreted the evidence)? Should you call them out on it? That’s good in the world where you’re right, but might disproportionately tarnish their reputation in the world where you're wrong. It seems like if you’re 55% sure, you have a tough call. You might want to try something like bringing up your concerns privately with close friends and only going public if they share your opinion, or asking the charity first and only going public if they can’t explain themselves. Or you might want to try bringing up your concerns in a nonconfrontational way, more like ‘Can anyone figure out what’s going on with PADP’s math?’ rather than ‘PADP
[...]
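The "tough call" at 55% confidence can be made concrete with a toy expected-value calculation. This is my own sketch, not the commenter's; the utility numbers (+10 for exposing real dishonesty, 15 lost for unfairly tarnishing an honest charity) are invented purely for illustration:

```python
def ev_call_out(p_right, benefit_if_right, harm_if_wrong):
    """Expected value of publicly calling out the charity."""
    return p_right * benefit_if_right - (1 - p_right) * harm_if_wrong

# Hypothetical stakes: exposing real dishonesty is worth +10;
# unfairly tarnishing an honest charity costs 15.
print(round(ev_call_out(0.90, 10, 15), 2))  # 7.5 -> clearly worth saying
print(round(ev_call_out(0.55, 10, 15), 2))  # -1.25 -> slightly net-negative: a tough call
```

Under these made-up stakes, the sign of the answer flips between 90% and 55% confidence, which is exactly why the intermediate case calls for the softer moves suggested above.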
Wei Dai:
Part of this is pretty close to what I wrote in the actual debate. The part about climate science is new though and I'd like to see a response to it.

The part about climate science seems like a pretty bog-standard outside view argument, which in turn means I find it largely uncompelling. Yes, there are people who are so stupid, they can only be saved from their own stupidity by executing an epistemic maneuver that works regardless of the intelligence of the person executing it. This does not thereby imply that everyone should execute the same maneuver, including people who are not that stupid, and therefore not in need of saving. If someone out there is so incompetent that they mistakenly perceive themselves as competent, then they are already lost, and the fact that an illegal (from the perspective of normative probability theory) epistemic maneuver exists which would save them if they executed it, does not thereby make that maneuver a normatively good move. (And even if it were, it's not as though the people who would actually benefit from said maneuver are going to execute it--the whole reason that such people are loudly, confidently mistaken is that they don't take the outside view seriously.)

In short: there is simply no principled justification for modesty-based arguments, and--though it may be somewhat impolite to say--I a

[...]

The bar that is set for appeals to consequences implies the sort of equilibrium world you'll end up in. Erring on the side of a higher bar is better, because it is hard to move in the other direction: epistemic standards tend to slide in the face of local incentives.

I also want to note an argumentative tactic that occurs on the tacit level whereby people will push you into a state where you need to expend more energy on average per truth bit than they do, so they eventually win by attrition. Related to evaporative cooling. The subjective experience of this feels like talking to the cops. You sense that no big wins are available (because they have their bottom line) but big losses are, so you stop talking. If you've encountered this dynamic, you recognize things like this

> "You still haven't refuted my argument. If you don't do so, I win by default."

as part of the supporting framework for the dynamic, and it will make you very angry... which others will then use as part of the dynamic, which makes you angry, which...

> "When someone is arguing that a belief of yours is wrong, listen to the argument at the object level, instead of jumping to the question of whether saying the relevant arguments out loud is a good idea, which is a much harder question."

It seems to me that the key issue here is the need for both public and private conversational spaces.

In public spaces, arguments are soldiers. They have to be, because others treat them that way, and because there are actual policies that we're all fighting / negotiating over. In those contexts it is reasonable (I don't know if it is correct or not) to constrain what things you say, even if they're true, because of their consequences. It is often the case that one piece of information, though true, taken out of context, does more harm than good, and often conveying the whole informational context to a large group of people is all but impossible.

But we need to be able to figure out which policies to support, somehow, separately from supporting them on this political battlefield. We also need private spaces, where we can think and our initial thoughts can be isolated from their possible consequences, or we won't be able to think freely.

It seems like Carter thinks they are having a private conversation, in a private space, and Quinn thinks they're having a public conversation in a public space.

(Strong-upvoted for making something explicit that is more often tacitly assumed. Seriously, this is an incredibly useful comment; thanks!!)

> In public spaces, arguments are soldiers. They have to be, because others treat them that way, and because there are actual policies that we're all fighting / negotiating over.

Can you unpack what you mean by "have to be" in more detail? What happens if you just report your actual reasoning (even if your voice trembles)? (I mean that as a literal what-if question, not a rhetorical one. If you want, I can talk about how I would answer this in a future comment.)

I can imagine creatures living in a hyper-Malthusian Nash equilibrium where the slightest deviation from the optimal negotiating stance dictated by the incentives just gets you instantly killed and replaced with someone else who will follow the incentives. In this world, if being honest isn't the optimal negotiating stance, then honesty is just suicide. Do you think this is a realistic description of life for present-day humans? Why or why not? (This is kind of a leading question on my part. Sorry.)

> But we need to be able to figure out which policies to support, somehow, separately from

[...]
Eli Tyre:
Huh. Can you say why?
You're clearly and explicitly advocating for a policy I think is abhorrent. This is really valuable, because it gives me a chance to argue that the policy is abhorrent, and potentially change your mind (or those of others in the audience who agree with the policy). I want to make sure you get socially-rewarded for clearly and explicitly advocating for the abhorrent policy (thus the strong-upvote, "thanks!!", &c.), because if you were to get punished instead, you might think, "Whoops, better not say that in public so clearly", and then secretly keep on using the abhorrent policy. Obviously—and this really should just go without saying—just because I think you're advocating something abhorrent doesn't mean I think you're abhorrent. People make mistakes! Making mistakes is OK as long as there exists enough optimization pressure to eventually correct mistakes. If we're honest with each other about our reasoning, then we can help correct each other's mistakes! If we're honest with each other about our reasoning in public, then even people who aren't already our closest trusted friends can help us correct our mistakes!
Eli Tyre:
Well, I think the main thing is that this depends on onlookers having the ability, attention, and motivation to follow the actual complexity of your reasoning, which is often a quite unreasonable assumption. Usually, onlookers are going to round off what you're saying to something simpler. Sometimes your audience has the resources to actually get on the same page with you, but that is not the default. If you're not taking that dynamic into account, then you're just shooting yourself in the foot. Many of the things that I believe are nuanced, and nuance doesn't travel well in the public sphere, where people will overhear one sentence out of context (for instance), and then tell their friends what "I believe." So tact requires that I don't say those things, in most contexts. To be clear, I make a point to be honest, and I am not suggesting that you should ever outright lie. This does not seem right to me, so it seems like one of us is missing the other somehow.

Okay, I was getting too metaphorical with the encyclopedia; sorry about that. The proposition I actually want to defend is, "Private deliberation is extremely dependent on public information." This seems obviously true to me. When you get together with your trusted friends in private to decide which policies to support, that discussion is mostly going to draw on evidence and arguments that you've heard in public discourse, rather than things you've directly seen and verified for yourself. But if everyone in Society is, like you, simplifying their public arguments in order to minimize their social "attack surface", then the information you bring to your private discussion is based on fear-based simplifications, rather than the best reasoning humanity has to offer.

In the grandparent comment, the text "report your actual reasoning" is a link to the Sequences post "A Rational Argument", which you've probably read. I recommend re-reading it.

If you omit evidence against your preferred conclusion, people can't take your reasoning at face value anymore: if you first write at the bottom of a piece of paper, "... and therefore, Policy P is the best," it doesn't matter what you write on the l

[...]
(I'm not sure this comment is precisely a reply to the previous one, or more of a general reply to "things Zack has been saying for the past 6 months".)

I notice that I basically by this point agree with some kind of "something about the overton window of norms should change in the direction Zack is pushing in", but it seems... like you're pushing more for an abstract principle than a concrete change, and I'm not sure how to evaluate it. I'd find it helpful if you got more specific about what you're pushing for.

I'd summarize my high-level understanding of the push you're making as:

1. "Geez, the appropriate mood for 'hmm, communicating openly and honestly in public seems hard' is not 'whelp, I guess we can't do that then'. Especially if we're going to call ourselves rationalists."
2. Any time that mood seems to be cropping up or underlying someone's decision procedure, it should be pushed back against.

[is that a fair high level summary?]

I think I have basically come to agree with (or at least take quite seriously) point #1 (this is a change from 6 months ago). There are some fine details about where I still disagree with something about your approach, and about what exactly my previous and new positions are/were. But I think those are (for now) more distracting than helpful.

My question is: what precise things do you want changed from the status quo? (I think it's important to point at missing moods, but implementing a missing mood requires actually operationalizing it into actions of some sort.) I think I'd have an easier time interacting with this if I understood better what exact action policies you're pushing for.

I see roughly two levels of things one might operationalize:

1. Individual Action – things that individuals should be trying to do (and, if you're a participant on LessWrong or similar spaces, the "price for entry" should be something like "you agree that you are supposed to be trying to do this thing")
2. Norm Enforcement – things that people should be

> you're pushing more for an abstract principle than a concrete change

I mean, the abstract principle that matters is of the kind that can be proved as a theorem rather than merely "pushed for." If a lawful physical process results in the states of physical system A becoming correlated with the states of system B, and likewise system B and system C, then observations of the state of system C are evidence about the state of system A. I'm claiming this as technical knowledge, not a handwaved philosophical intuition; I can write literal computer programs that exhibit this kind of evidential-entanglement relationship.

Notably, the process whereby you can use your observations about C to help make better predictions about A doesn't work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
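The entanglement claim in the two paragraphs above can be illustrated with a toy simulation. This is my own sketch, not the commenter's actual program; the fair-coin state for A and the 10% noise rate on the B-to-C channel are assumptions made for the example. When B honestly relays the state of A, observing C moves your estimate of A; when B always reports whatever makes it look good, C carries no information about A:

```python
import random

def p_a_given_c(honest_b, n=100_000, seed=0):
    """Estimate P(A=1 | C=1) when C is a noisy observation of B's report about A."""
    rng = random.Random(seed)
    a1_and_c1 = c1 = 0
    for _ in range(n):
        a = rng.randint(0, 1)                   # state of system A: fair coin
        b = a if honest_b else 1                # B relays A, or always "looks good"
        c = b if rng.random() < 0.9 else 1 - b  # C observes B through a 10%-noise channel
        if c == 1:
            c1 += 1
            a1_and_c1 += a
    return a1_and_c1 / c1

print(round(p_a_given_c(honest_b=True), 2))   # ~0.9: C is strong evidence about A
print(round(p_a_given_c(honest_b=False), 2))  # ~0.5: C tells you nothing about A
```

With an honest B, the posterior P(A=1 | C=1) rises from the 0.5 prior to about 0.9; with a self-serving B, it stays at the prior, which is the precise sense in which lying breaks the A–B–C evidence chain.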

> Any time that mood seems to be cropping up or underlying someone's decision procedure, it should be pushed back against.

The word "should" definitely doesn't belong here. Like, that's definitely a fair description of the push I'm making. Because I actually feel that way. But obviously, other people shouldn't passionately advocate for open

[...]
The unpacked "should" I imagined you implying was more like "If you do not feel it is important to have open/honest discourse, you are probably making a mistake, i.e. it's likely that you're not noticing the damage you're doing, and if you really reflected on it honestly you'd probably..."

That part is technical knowledge (and so is the related "the observation process doesn't work [well] if system B is systematically distorting things in some way, whether intentional or not"). And I definitely agree with that part, and expect Eli does too, and generally don't think it's where the disagreement lives.

But you seem to have strongly implied, if not outright stated, that this isn't just an interesting technical fact that exists in isolation; it implies an optimal (or at least improved) policy that individuals and groups can implement to improve their truthseeking capability. This implies we (at least, rationalists with roughly similar background assumptions as you) should be doing something differently than we currently are. And, like, it actually matters what that thing is.

There is some fact of the matter about what sorts of interacting systems can make the best predictions and models. There is a (I suspect different) fact of the matter of what the optimal systems you can implement on humans look like, and yet another quite different fact of the matter of what improvements are possible on LessWrong-in-particular given our starting conditions, and what is the best way to coordinate on them. They certainly don't seem like they're going to come about by accident.

There is a fact of the matter of what happens if you push for "thick skin" and saying what you mean without regard for politeness – maybe it results in a community that converges on truth faster (by some combination of distorting less when you speak, or by spending less effort on communication or listening). Or maybe it results in a community that converges on truth slower, because it selected more for peo
Matt Goldenberg
This seems to me like you're saying "people shouldn't have to advocate for being open and honest because people should be open and honest." And then the question becomes... if you think it's true that people should be open and honest, do you have policy proposals that help that become true?

Not really? The concept of a "policy proposal" seems to presuppose control over some powerful central decision node, which I don't think is true of me. This is a forum website. I write things. Maybe someone reads them. Maybe they learn something. Maybe me and the people who are better at open and honest discourse preferentially collaborate with each other (and ignore people who we can detect are playing a different game), have systematically better ideas, and newcomers tend to imitate our ways in a process of cultural evolution.

I separated out the question of "stuff individuals should do unilaterally" from "norm enforcement" because it seems like at least some stuff doesn't require any central decision nodes.

In particular, while "don't lie" is an easy injunction to follow, "account for systematic distortions in what you say" is actually quite computationally hard, because there are a lot of distortions with different mechanisms and different places one might intervene on one's thought process and/or communication process. "Publicly say literally every inconvenient thing you think of" probably isn't what you meant (or maybe it was?), and it might cause you to end up having a harder time thinking inconvenient thoughts

I'm asking because I'm actually interested in improving on this dimension.

Some current best guesses of mine, at least for my own values, are:

* "Practice noticing heretical thoughts I think and actually notice what things you can't say, without obligating yourself to say them, so that you don't accidentally train yourself not to think them"
* "Practice noticing opportunities to exhibit social courage, either in low stakes situations, or important situations. Allocate some additional attention towards practicing social courage as a skill/muscle" (it's unclear to me how much to prioritize this, because there are two separate potential models here: 'social/epistemic courage is a muscle' versus 'social/epistemic courage is a resource you can spend, but you risk using up people's willingness to listen to you', as well as a third consideration that "most things one might be courageous about actually aren't important and you'll end up spending a lot of effort on things that don't matter")

But, I am interested in what you actually do within your own frame/value setup.
Matt Goldenberg
I'm more interested, as the person who has been the powerful central decision node at multiple times in my life and will likely be in the future (and as someone who is interested in institution design in general), in whether you have suggestions for how to make this work in new or existing institutions. For instance, some of the ideas I've shared elsewhere on radical transparency norms seem like one way to go about this. I think cultural evolution and the marketplace of ideas seems like a good idea, but memetics unfortunately selects for other things than just truth, and relying on memetics to propagate truth norms (if indeed the propagation of truth norms is good) feels insufficient.
Matt Goldenberg
I would love to see a summary of which particular arguments of Zach's changed your mind, and how your view changed over time.
Most of the harm here comes not from public discourse being filtered in itself, but from people updating on filtered public discourse as if it were unfiltered. This makes me think it's better to get people to realize that public discourse isn't going to contain all the arguments than to get them to include all the arguments in public discourse.
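(To make the "updating on filtered discourse as if it were unfiltered" failure mode concrete, here is a toy simulation; everything in it, the coin, the numbers, the variable names, is invented for illustration and is not from the original comment. Reports about a coin are censored so that only heads-reports survive. A reader who treats the surviving reports as an unfiltered sample goes badly wrong, while a reader who models the filter stays calibrated.)

```python
# Toy illustration (hypothetical setup): censored reports about a coin.
# A "naive" reader updates on the surviving reports as if nothing had been
# filtered out; a "filter-aware" reader models the censorship.
import random

random.seed(0)

TRUE_P = 0.3        # the coin's actual heads-probability
N_FLIPS = 10_000    # total flips that actually happened

flips = [random.random() < TRUE_P for _ in range(N_FLIPS)]

# The filter: tails-reports are suppressed; only heads-reports get published.
published = [f for f in flips if f]

# Naive estimate: treat the published reports as an unfiltered sample.
# Every published report is heads, so this is always 1.0.
naive_estimate = sum(published) / len(published)

# Filter-aware estimate: the reader knows N_FLIPS flips occurred and that
# every unpublished flip was a suppressed tails-report.
aware_estimate = len(published) / N_FLIPS

print(f"naive: {naive_estimate:.2f}, filter-aware: {aware_estimate:.2f}")
```

The filtering itself destroys no information for the filter-aware reader, who recovers roughly the true heads-rate; the damage lives entirely in the naive update rule, which is the distinction being drawn here.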

I agree that that's much less bad—but "better"? "Better"!? By what standard? What assumptions are you invoking without stating them?

I should clarify: I'm not saying submitting to censorship is never the right thing to do. If we live in Amazontopia, and there's a man with a gun on the streetcorner who shoots anyone who says anything bad about Jeff Bezos, then indeed, I would not say anything bad about Jeff Bezos—in this specific (silly) hypothetical scenario with that specific threat model.

But ordinarily, when we try to figure out which cognitive algorithms are "better" (efficiently produce accurate maps, or successful plans), we tend to assume a "fair" problem class unless otherwise specified. The theory of "rational thought, except you get punished if you think about elephants" is strictly more complicated than the theory of "rational thought." Even if we lived in a world where robots with MRI machines who punish elephant-thoughts were not unheard of and needed to be planned for, it would be pedagogically weird to treat that as the central case.

I hold "discourse algorithms" to the same standard: we need to figure out how to think together in the simple, unconstrained case before

we need to figure out how to think together

This is probably not the crux of our disagreement, but I think we already understand perfectly well how to think together and we're limited by temperament rather than understanding. I agree that if we're trying to think about how to think together we can treat no censorship as the default case.

worthless cowards

If cowardice means fear of personal consequences, this doesn't ring true as an ad hominem. Speaking without any filter is fun and satisfying and consistent with a rationalist pro-truth self-image and other-image. The reason why I mostly don't do it is because I'd feel guilt about harming the discourse. This motivation doesn't disappear in cases where I feel safe from personal consequences, e.g. because of anonymity.

who just assume as if it were a law of nature that discourse is impossible

I don't know how you want me to respond to this. Obviously I think my sense that real discourse on fraught topics is impossible is based on extensively observing attempts at real discourse on fraught topics being fake. I suspect your sense that real discourse is possible is caused by you underestimating how far re... (read more)

The reason why I mostly don't do it is because I'd feel guilt about harming the discourse

Woah, can you explain this part in more detail?! Harming the discourse how, specifically? If you have thoughts, and your thoughts are correct, how does explaining your correct thoughts make things worse?

Consider the idea that the prospect of advanced AI implies the returns from stopping global warming are much smaller than you might otherwise think. I think this is a perfectly correct point, but I'm also willing to never make it, because a lot of people will respond by updating against the prospect of advanced AI, and I care a lot more about people having correct opinions on advanced AI than on the returns from stopping global warming.

I want to distinguish between "harming the discourse" and "harming my faction in a marketing war."

When I say that public discourse is really important, what I mean is that if you tell the truth in public about what you believe and why (possibly investing a lot of effort and using a lot of hyperlinks to bridge the inferential distance), then other people who aren't already your closest trusted friends have the opportunity to learn from the arguments and evidence that actually convinced you, combine it with their own knowledge, and potentially make better decisions. ("Discourse" might not be the right word here—the concept I want to point to includes unilateral truthtelling, as on a blog with no comment section, or where your immediate interlocutor doesn't "reciprocate" in good faith, but someone in the audience might learn something.)

If you think other people can't process arguments at all, but that you can, how do you account for your own existence? For myself: I'm smart, but I'm not that smart (IQ ~130). The Sequences were life-changingly great, but I was still interested in philosophy and argument before that. Our little robot cult does not have a monopoly on reasoning itself.


I want to agree with the general point here, but I find it breaking down in some of the cases I'm considering. I think the underlying generator is something like "communication is a two-way street", and it makes sense to not just emit sentences that compile and evaluate to 'true' in my ontology, but that I expect to compile and evaluate to approximately what I wanted to convey in their ontology. Does that fall into 'harming my faction in a marketing war' according to you?

No, I agree that authors should write in language that their audience will understand. I'm trying to make a distinction between having intent to inform (giving the audience information that they can use to think with) vs. persuasion (trying to exert control over the audience's conclusion). Consider this generalization of a comment upthread—

Consider the idea that X implies Y. I think this is a perfectly correct point, but I'm also willing to never make it, because a lot of people will respond by concluding that not-X, because they're emotionally attached to not-Y, and I care a lot more about people having correct beliefs about the truth value of X than Y.

This makes perfect sense as part of a consequentialist algorithm for maximizing the number of people who believe X. The algorithm works just as well, and for the same reasons whether X = "superintelligence is an existential risk" and Y = "returns from stopping global warming are smaller than you might otherwise think" (when many audience members have global warming "cause-area loyalty"), or whether X = "you should drink Coke" and Y = "returns from drinking Pepsi are smaller than you might otherwise think" (when many audience mem


"Intent to inform" jibes with my sense of it much more than "tell the truth."

On reflection, I think the 'epistemic peer' thing is close but not entirely right. Definitely if I think Bob "can't handle the truth" about climate change, and so I only talk about AI with Bob, then I'm deciding that Bob isn't an epistemic peer. But if I have only a short conversation with Bob, then there's a Gricean implication point that saying X implicitly means I thought it was more relevant to say than Y, or is complete, or so on, and so there are whole topics that might be undiscussed because I don't want to send the implicit message that my short thoughts on the matter are complete enough to reconstruct my position or that this topic is more relevant than other topics.


More broadly, I note that I often see "the discourse" used as a term of derision, I think because it is (currently) something more like a marketing war than an open exchange of information. Or, like a market left to its own devices, it has Goodharted on marketing. It is unclear to me whether it's better to abandon it (like, for example, not caring about what people think on Twitter) or attempt to recapture it (by pushing for the sorts of 'public goods' and savvy customers that cause markets to Goodhart less on marketing).

Sharing reasoning is obviously normally good, but we obviously live in a world with lots of causally important actors who don't always respond rationally to arguments, and there are cases like the grandparent comment when one is justified in worrying that an argument would make people stupid in a particular way, and one can avoid this problem by not making the argument, and doing so is importantly different from filtering out arguments for causing a justified update against one's side, and is even more importantly different from anything similar to what pops into people's minds when they hear "psychological manipulation". If I'm worried that someone with a finger on some sort of hypertech button may avoid learning about some crucial set of thoughts about what circumstances it's good to press hypertech buttons under because they've always vaguely heard that set of thoughts is disreputable and so never looked into it, I don't think your last paragraph is a fair response to that. I think I should tap out of this discussion because I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging, but let's still talk some time.
That's fair. Let me scratch "psychologically manipulate", edit to "persuade", and refer to my reply to Vaniver and Ben Hoffman's "The Humility Argument for Honesty" (also the first link in the grandparent) for the case that generic persuasion techniques are (counterintuitively!) Actually Bad. I don't think it's the long-form medium so much as it is the fact that I am on a personal vindictive rampage against appeals-to-consequences lately. You should take my vindictiveness into account if you think it's biasing me!
This agrees with Carter: Carter is arguing that appeals to consequences should be disallowed at the level of discourse norms, including public discourse norms. That is, in public, "but saying that has bad consequences!" is considered invalid. It's better to fight on a battlefield with good rules than one with bad rules.
Eli Tyre
Hmm... something about that seems not quite right to me. I'm going to see if I can draw out why.

The thing at stake for Quinn_Eli is not whether or not this kind of argument is "invalid". It's whether or not she has the affordance to make a friendly, if sometimes forceful, bid to bring this conversation into a private space, to avoid collateral damage.

(Sometimes, of course, the damage won't be collateral. If in private discussion, Quinn concludes, to the best of her ability to reason, that, in fact, it would be good if fewer people donated to PADP, she might then give that argument in public. And if others make bids to, say, explore that privately, at that stage, she might respond, "No. I am specifically arguing that onlookers should donate less to PADP (or think that decreasing their donations is a reasonable outcome of this argument). That isn't accidental collateral damage. It's the thing that's at stake for me right now.")

I don't know if you already agree with what I'm saying here. . . .

I don't think we get to pick the rules of the battlefield. The rules of the battlefield are defined only by what causes one to win. Nature alone chooses the rules.

Bidding to move to a private space isn't necessarily bad but at the same time it's not an argument. "I want to take this private" doesn't argue for any object-level position.

It seems that the text of what you're saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don't think you actually believe that. Perhaps you've given up on affecting them, though.

("What wins" is underdetermined, given that choice is involved in what wins; you can't extrapolate from two-player zero-sum games (where there's basically one best strategy) to multi-player zero-sum games (where there isn't, at least due to coalitional dynamics implying a "weaker" player can win by getting more supporters).)

It seems that the text of what you're saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don't think you actually believe that.

How much agency we have is inversely proportional to how many other actors are in a space. I think it's quite achievable (though it requires a bit of coordination) to establish good norms for a space with 100 people. It's still achievable, but... probably at least (10x?) as hard, to establish good norms for 1000 people.

But "public searchable internet" immediately puts things in a context with at least millions if not billions of potentially relevant actors, many of whom don't know anything about your norms. I'm still actually fairly optimistic about making important improvements to this space, but those improvements will have a lot of constraints for anyone with major goals that affect the world-stage.

Eli Tyre
Yes. This, exactly. Thank you for putting it so succinctly.
Eli Tyre
Furthermore, you have a lot more ability to enforce norms regarding what people say, as opposed to norms about how people interpret what people say.
Eli Tyre
I do think that it is possible and often correct to push for some discourse norms over others. I will often reward moves that I think are good, and will sometimes challenge moves that I think are harmful to our collective epistemology.

But I don't think that I have much ability to "choose" how other people will respond to my speech acts. The world is a lot bigger than me, and it would be imprudent to mis-model the fact that, for instance, many people will not or cannot follow some forms of argument, but will just round what you're saying to the closest thing that they can understand. And that this can sometimes cause damage. (I think that you must agree with this? Or maybe you think that you should refuse to engage in groups where the collective epistemology can't track nuanced argument? I don't think I'm getting you yet.)

I absolutely agree. I think the main thing I want to stand for here is both that obviously the consequences of believing or saying a statement have no bearing on its truth value (except in unusual self-fulfilling-prophecy edge cases), and that it is often reasonable to say "Hey man, I don't think you should say that here in this context where bystanders will overhear you." I'm afraid that those two are being conflated, or that one is being confused for the other (not in this dialogue, but in the world).

To be clear, I'm not sure that I'm disagreeing with you. I do have the feeling that we are missing each other somehow.
Yes, and Carter is arguing in a context where it's easy to shift the discourse norms, since there are few people present in the conversation. LW doesn't have that many active users; it's possible to write posts arguing for discourse norms, sometimes to convince moderators they are good, etc.

Sure, and also "that's just your opinion, man, so I'll keep talking" is often a valid response to that. It's important not to bias towards saying exposing information is risky while hiding it is not.
I think you meant 'do not think'?
Eli Tyre
Yep. Fixed.
Eli Tyre
Notably, many other commenters seem to be implicitly or explicitly pointing to the private vs. public distinction.

Well, I certainly agree with the position you’re defending. Yet I can’t help but feel that the arguments in the OP lack… a certain concentrated force, which I feel this topic greatly deserves.

Without disagreeing, necessarily, with anything you say, here is my own attempt, in two (more or less independent) parts.

The citadel of truth

If the truth is precious, its pursuit must be unburdened by such considerations as “what will happen if we say this”. This is impractical, in the general case. You may not be interested in consequences, after all, but the consequences are quite interested in you…

There is, however, one way out of the quagmire of consequential anxiety. Let there be a place around which a firewall of epistemology is erected. Let appeals to consequences outside that citadel, be banned within its walls. Let no one say: “if we say such a thing, why, think what might happen, out there, in the wider world!”. Yes, if you say this thing out there, perhaps unfortunate consequences may follow out there. But we are not speaking out there; so long as we speak in here, to each other, let us consider it irrelevant what effects our words may produce upon the world outside. In here, we c

Note: I had originally intended to write a response post to this called "Building the Citadel of Truth", basically arguing: "Yup, the Citadel of Truth sounds great. Let's build it. Here are my thoughts about the constraints and design principles that would need to go into constructing it" For various reasons I didn't do that at the time (I think shortly afterwards I sort of burned out on the overall surrounding discourse). I might still do that someday.  I touch upon the issues in this comment, which seems worth quoting here for now:

It seems like a quite desirable property to be able to talk freely about which local orgs and people deserve money and prestige – but I don't currently know of robust game mechanics that will actually, reliably enable this in any environment where I don't personally know and trust each person.

There should not be any “local orgs” inside the citadel; and if the people who participate in the citadel also happen to, together, constitute various other orgs… well, first of all, that’s quite a bad sign; but in any case discussions of them, and whether they deserve money and so on, should not take place inside the citadel.

If this is not obvious, then I have not communicated the concept effectively. I urge you to once again consider this part:

Any among us who have something to protect, in the world beyond the citadel, may wish to take the truths we find, and apply them to that outside world, and discuss these things with others who feel as they do. In these discussions, of plans and strategies for acting upon the wider world, the consequences of their words, for that world, may be of the utmost importance. But if so, to have such discussions, these

Okay, yeah, the thing I'm thinking about is definitely different from the thing you're thinking about, and I'll refrain from referring to my thing as "The Citadel of Truth".

[Edit: the thing-in-my-head still has a focus on "within the citadel-esque-thing, the primary sacred value is the truth, because to actually Use Truth to Do Desirable Things you need to actually Focus On Truth For Its Own Sake", and yes, this is a bit contradictory, and I'm not 100% sure how to resolve the contradiction. But a citadel that's just focused on truth without paying attention to how that truth will actually get applied to anything, that doesn't attempt to resolve the contradiction, doesn't seem very interesting to me. That's not the hard part.]

I do think this is plausibly quite relevant to The Thing I'm Thinking Of, independent of whether it's relevant to The Thing You're Thinking Of. Will think on that a bit.

I'm left with sort of a confused "what problem is your conception of the Citadel actually trying to solve", though? The two main problems that the status quo faces AFAICT (i.e. if you put down a flag and say "Truth!" and then some people show up and start talking, but nonetheless find that their talk isn't always truthtracking) are:

* There might be people Out There who dislike what you say, and harm or impose costs on you in some way
* There might be people In the Conversation who have some kind of stake in the conversation, who are motivated to warp it.

I... think the first one is relatively straightforward. (There are two primary strategies I can see, of either "deciding not to care" or "being somewhat private / obfuscated". I think the latter is a better strategy, but if you're precommitting to "literally just focus on truth with no optimization towards being able to use that truth later" I think the former strategy probably works fine.)

For the second problem... well, if your solution is to filter/arrange things such that the citadel just doesn't have Local Polit
Said Achmiz
Note that these problems are not separate, but in fact are inextricably linked. This is because people Out There can come In Here (and will absolutely attempt to do so, in proportion to how successful your Citadel becomes), and also people In Here may decide to interact with social forces Out There.

Indeed, it does not. Nor is it meant to.

Figuring out the truth. Note, as per my other comment, that we currently do not have any institutions that have just that as their goal. Really—none. (If you think that this claim is obviously wrong, then, as usual: provide examples!)
I'm not sure our models here are that different. What I'd argue (not sure if you'd disagree) is something like:

We have no institutions whose sole goal is to figure out the truth, but the reason for this is that to be an "institution" (as opposed to some random collection of people just quietly figuring stuff out) you need some kind of mechanism for maintaining the institution, and this inevitably ends up instantiating its own version of Local Politics even if it initially didn't have such a thing.

I don't have clear examples, no, but my guess is that there are, in fact, various small citadels throughout the world, but any citadel that's successful enough for both of us to have heard of it was necessarily successful enough to attract attention from Powers That Be. Wikipedia and Academic Science both come to mind as institutions that have their own politics, but which I (suspect) still do okay-ish at generating little pocket-citadels that succeed at focusing on whatever subset of truthseeking they've specialized in – individual departments, projects, or research groups. The trouble lies in the outside world distinguishing which pockets are generating "real truth" and which are not (because any institution that became known as a distinguishing tool would probably become corrupted).

Perhaps one core disagreement here is about which problem is 'actually impossible'?

* I say, you don't have the option of avoiding Local Politics, so the task is figuring out how to minimize the damage that local politics can do to epistemics (possibly aided by forking off private bubbles that are mostly inert to outsiders, thinking on their own, but reporting their findings periodically)
* You say... something like 'local politics is so toxic that the task must be to figure out a way to avoid it'?

Does that sound right?
Said Achmiz
Well, roughly. I don’t think it’s possible to entirely avoid “local politics”, in a totally literal sense, because any interaction of people within any group will end up being ‘politics’ in some sense. But, certainly my view is closer to the latter than to the former, yes. Basically, it’s just what I said in this earlier comment. To put it another way: if you already have “local politics”, you’re starting out with a disadvantage so crippling that there’s no point in even trying to build any “citadel of truth”.
Said Achmiz
I do not think there is any way to resolve the contradiction. It seems clear to me that just as no man may serve two masters, no organization may serve two goals. "What you are willing to trade off, may end up traded away". And ultimately, you will sacrifice your pursuit of truth, if what you are actually pursuing is something else—because there will come a time when your actual goal turns out (in that situation, at that time, in that moment) to not be best served by pursuing Truth, for its own sake or otherwise. And then your Citadel will not even be a Citadel of Truth And Something Else, but only a Citadel of Something Else, And Not Truth At All.
I think there's still some highly technical apparent-contradiction-resolution to do in the other direction: in a monist physical universe, you can't quite say, "only Truth matters, not consequences", because that just amounts to caring about the consequence of there existing a physical system that implements correct epistemology: the map is part of the territory. To be clear, I think almost everyone who brings this up outside the context of AI design is being incredibly intellectually dishonest. ("It'd be irrational to say that—we'd lose funding! And if we lose funding, then we can't pursue Truth!") But I want to avoid falling into the trap of letting the forceful rhetoric I need to defend against bad-faith appeals-to-consequences, obscure my view of actually substantive philosophy problems.
Said Achmiz
Everything else you said aside… It is the hard part. It really, really is. If you doubt this, witness the fact that we currently have no such institutions.

[speaking for myself, not for any organization]

If this is an allegory against appeals to consequences generally, well and good.

If there's some actual question about whether wrong cost effectiveness numbers are being promoted, could people please talk about those numbers specifically so we can all have a try at working out if that's really going on? E.g. this post made a similar claim to what's implied in this allegory, but it was helpful that it used concrete examples so people could work out whether they agreed (and, in that case, identify factual errors).

This is an allegory. While I didn't have any particular real-world example in mind, my dialogue-generation was influenced by a time I had seen appeals to consequences in EA; see EA Has A Lying Problem and this comment thread. So this was one of the more salient cases of a plausible moral case for shutting down true speech.

I think this is strawmanning the appeal to consequences argument, by mixing up private beliefs and public statements, and by ending with a pretty superficial agreement on rule-consequentialism without exploring how to pick which rules (among one for improving private beliefs, one for sharing relevant true information and one for suppressing harmful information) applies.

The participants never actually attempt to resolve the truth about puppies saved per dollar, calling the whole thing into question – both whether their agreement is real and whether it's the right thing. Many of these discussions should include a recitation of the Litany of Tarski (https://wiki.lesswrong.com/wiki/Litany_of_Tarski), and a direct exploration of whether it's beliefs (private) or publication (impacting presumed-less-rational agents) that is at issue.

In any case, appeals to consequences at the meta/rule level still HAS to be grounded in appeals to consequences at the actual object consequence level. A rule that has so many exceptions that it's mostly wrong is actively harmful. My objection to the objection to "appeal to consequences" is that the REAL objection is to bad epistemology of consequence... (read more)

Carter is a mistake theorist, Quinn is a conflict theorist. At no point does Quinn ever talk about truth, or about anything, really. His words are weapons to achieve an end by whatever means possible. There is no more meaning in them than in a fist. Carter's meta-mistake is to believe that he is arguing with someone. Quinn is not arguing; he is in a fist fight.

Quinn: “Hold it right there. Regardless of whether that’s true, it’s bad to say that.”

Carter: “That’s an appeal to consequences, well-known to be a logical fallacy.”

The link in Carter's statement leads to a page that clearly contradicts Carter's claim:

In logic, appeal to consequences refers only to arguments that assert a conclusion's truth value (true or false) without regard to the formal preservation of the truth from the premises; appeal to consequences does not refer to arguments that address a premise's consequential desirability (good or bad, or right or wrong) instead of its truth value.

It sounds to me like Jessica is using "appeal to consequences" expansively, to include not just "X has bad consequences so you should not believe X" but also "saying X has bad consequences so you should not say X"?

Yes. In practice, if people are discouraged from saying X on the basis that it might be bad to say it, then the discourse goes on believing not-X. So, the discourse itself makes an invalid step that's analogous to an appeal to consequences: "if it's bad for us to think X is true, then it's false".

Be careful with unstated assumptions about belief aggregation. "The discourse" doesn't have beliefs. People have beliefs, and discourse is one of the mechanisms for sharing and aligning those beliefs. It helps a lot to give names to the people you're worried about, to make it super-clear whether you're talking about your beliefs, your current conversational partner's beliefs, or the beliefs of other people who hear a summary from one of you. If Alice discourages Bob from saying X, then Charlie might go on believing not-X. This is a very different concern from Bob being worried about believing a false not-X if not allowed to discuss the possibility. Both concerns are valid, IMO, but they have different thresholds of importance and different trade-offs to make in resolution.
In a math conversation, people are going to say and possibly write down a bunch of beliefs, and make arguments that some beliefs follow from each other. The conversation itself could be represented as a transcript of beliefs and arguments. The beliefs in this transcript are what I mean by "the discourse's beliefs".

Summary: I'm aware of a lot of real debates that inspired this dialogue. It seems that in those real cases, disagreement with, or criticism of, the public claims of various professional organizations in effective altruism or AI risk has repeatedly been interpreted as a blanket refusal to honestly engage with the claims being made. Instead of a good-faith effort to resolve these disputes over public accusations of lying, repeat accusations, and the justifications for them, are made into l...

This is a fictional dialogue demonstrating a meta-level point about how discourse works, and your comment is pretty off-topic. If you want to comment on my AI timelines post, do that (although you haven't read it so I don't even know which of my content you're trying to comment on).
I think that if a given "meta-level point" has obvious ties to existing object-level discussions, then attempting to suppress the object-level points when they're raised in response is pretty disingenuous. (What I would actually prefer is for the person making the meta-level point to be the same person pointing out the object-level connection, complete with "and here is why I feel this meta-level point is relevant to the object level". If the original poster doesn't do that, then it does indeed make comments on the object-level issues seem "off-topic", a fact which ought to be laid at the feet of the original poster for not making the connection explicit, rather than at the feet of the commenter, who correctly perceived the implications.)

Now, perhaps it's the case that your post actually had nothing to do with the conversations surrounding EA or whatever. (I find this improbable, but that's neither here nor there.) If so, then you as a writer ought to have picked a different example, one with fewer resemblances to the ongoing discussion. (The example Jeff gave in his top-level comment, for instance, is not only clearer and more effective at conveying your "meta-level point", but also bears significantly less resemblance to the controversy around EA.)

The fact that the example you chose so obviously references existing discussions that multiple commenters pointed it out is evidence that either (a) you intended for that to happen, or (b) you really didn't put a lot of thought into picking a good example.
I shouldn't have to argue about the object-level political consequences of 1+4=5 in a post arguing exactly that. This is the analytic/synthetic distinction, logical uncertainty, etc. Yes, I could have picked a better, less political example, as recommended in Politics is the Mind-Killer; in retrospect, that would have caused less confusion. Anyway, Evan has the option of commenting on my AI timelines post, an open thread, a top-level post, shortform, etc.
In metaphysical conflicts, people don't win by coming up with the best evidence; they win by controlling what gets counted as evidence. By default, memeplexes gain stability by creating an environment in which evidence against them can't be taken seriously. Arguments that EA has failed to actually measure the things it claims are worth measuring should be taken very seriously on their face, since that is core to its claims of moral obligation (which is itself a bad frame, but that's a less serious issue).

So, in summary: if we’re going to have suppression of some facts being said out loud, we should have that through either clear norms designed with consequences (including consequences for epistemology) in mind, or individuals deciding not to say things, but otherwise our norms should be protecting true speech, and outlawing appeals to consequences.

  1. Are you happy with a LW with multiple norm sets, where this is one of the norm sets you can choose?

  2. What's your plan if communities or sub-communities with these norms don't draw enough participants to bec

...
1. Yes. 2. Think about why that is and adjust strategy and norms correspondingly. (Sorry that's underspecified, but it actually depends on the reasons). I don't know what happened to LW1, but it did have pretty high intellectual generativity for a while.
I think Wei Dai said that too elsewhere. When each of you says "intellectual generativity", do you mean the site as a whole (posts + discussions), or specifically that the discussions in the comments were more generative? The other question is whether you can quantitatively state some factor by which LW1 was more generative than LW2. If it was only 2x, that would suggest less generativity per person/comment than current LW, since old LW had much more than double the number of users and comments. If it was 10x, then LW1 was qualitatively better in some way. (I'd expect the output to be a right-tailed distribution over individuals. LW2 could be less generative than LW1 because the top N users who produced 80% of the value left, so it's not really about the raw number of users/comments. The most interesting scenario would be if it were all the same people, but they were being less generative.)
The site as a whole. I wasn't around in early LW, so this is hard for me to estimate. My very, very rough guess is 5x. (Note, IMO the recent good content is disproportionately written by people willing to talk about adversarial optimization patterns in a somewhat-forceful way despite pressures to be diplomatic)
I have noted this as well, and I find it worrisome. Many recent interesting conversations are more about social and interpersonal communication / alignment than about personal or theoretical rationality and decision-making. I like it, because they are actually interesting topics. But I worry that they're crowding out, or hiding, a painful decline in more core rationality discussions. I don't worry that they're too close to politics (I think they are close to politics, but narrow enough that the standard problems they fall prey to come more from trying to skate around the issue than from being direct). I had not framed them as "adversarial optimization patterns", mostly because they seriously bury that lede. A direct acknowledgement would be useful that almost all groups of more than one person (and, in some models, even an individual human) contain multiple simultaneous games, with very different payout matrices and equilibria, which impact other games. Values start out divergent, and this can't be assumed away for any part of reality.
This is maybe half or more of what Robin Hanson wrote about back when it was still all on overcomingbias.com.
Yeah, granted that it's going to be rough. 5x seems consistent with the raw activity numbers though. Eyeballing it, seems like 4x more active in terms of comments and commenters. Number of posts is pretty close.
One of my current beliefs, based on skimming older posts periodically (esp. since recommendations), is that a lot of the old comments just weren't that good. Not sure about posts.

So if evidence against X is being suppressed, then people's belief in X is unreliable, so it can't justify suppressing evidence against X. That's a great argument for free speech, thanks! Do you know if it's been stated before?

This doesn’t seem quite right to me.

Consider this example:

“Evidence against the Holocaust is being suppressed[1]. Therefore people’s belief in the Holocaust is unreliable. And so we cannot justify suppressing Holocaust denial by appealing to the (alleged) fact of the Holocaust having occurred.”

Something is wrong here, it seems to me. Not with the conclusion, mind you, the policy proposal, as it were; that part is all right. But the logic feels odd, don’t you think?

I don’t have a full account, yet, of cases like this, but it seems to me that some of the relevant considerations are as follows. Firstly, we previously undertook a comprehensive project (or multiple such) to determine the truth of the matter, which operated under no such restrictions as we now defend, and came to conclusions which cannot be denied. Secondly, we have people whose belief in the facts of the matter come from personal experience, and are not at all contingent on (nor even alterable by) any evidence we may or may not now present. Thirdly, as the question is one of historical fact, no new evidence may be generated; previously unknown but existing evidence may be uncovered, or currently known evidence may be sh

...
Yes, having strong unfiltered evidence for X can justify suppressing evidence against X. But if suppression is already in effect, and someone doesn't already have unfiltered evidence, I'm not sure where they'd get any. So the share of voters who can justify suppression will decrease over time.

"If we want to do those things, we have to do them by getting to the truth"

This seems fair if it focuses on the rationalist strategy of trying to interface with the world, and on how truth is essential to that. However, it's probably not literally true, in that there are probably Dark Arts and such which provide those specific sought goods at outrageous prices. "Have to" in this context means "within the options we have created for ourselves", not "it is not possible to produce the effect via other means".

Carter states that t...