All of LVSN's Comments + Replies

Split and Commit

Ten billion times Yes.

Or! This idea sounds superficially reasonable and even (per the appendix) gets praise from a few people, but is actually useless or harmful. Currently working out a hypothesis for how that could be the case...

Sasha Chapin on bad social norms in rationality/EA

It seems like any cultural prospiracy to increase standards to exceptional levels, which I see as a good thing, would be quickly branded as 'toxic' by this outlook. It is a matter of contextual objection-solving whether or not large parts of you can be worse than a [desirably [normatively basic]] standard. If it is toxic, then it is not obvious to me that toxicity is bad, and if toxicity must be bad, then it is not clear to me that you can, in fair order, sneak in that connotation by characterizing rationalist self-standards as toxic.

The OP provides examples to illustrate what they mean by an overly extreme standard. They also say that many EA/rationalist principles are good, and that there’s less toxicity in these communities than in others.

Might be good to taboo “toxicity.” My definition is “behavior in the name of a worthy goal that doesn’t really deliver that goal, but makes the inflictor of the toxicity feel better or get a selfish benefit in the short run, while causing problems and bad feelings for others.”

For example, a trainer berating trainees in the name of high standards af... (read more)

This critique seems to rely on a misreading of the post. The author isn’t saying the rationality community has exceptionally toxic social norms.

I’m not mentioning these communities because I think they’re extra toxic or anything, by the way. They’re probably less toxic than the average group, and a lot of their principles are great.

Rather that goals, even worthy goals, can result in certain toxic social dynamics that no one would endorse explicitly:

Sometimes—often—these forbidden thoughts/actions aren’t even contrary to the explicit values. They just

... (read more)
Kaj_Sotala (+5, 17d): I read it not as saying that having high standards would be bad by itself, but that the toxicity is about a specific dynamic where the standards become interpreted as disallowing even things that have nothing to do with the standards themselves. E.g. nothing about having high standards for rationality requires one to look down on art.
Alexander (+3, 17d): You make good points. Toxicity is relative to some standard. A set of social norms that are considered toxic from the perspective of, say, a postmodern studies department (where absolute non-offensiveness is prime), might be perfectly healthy from the perspective of a physics research department (where objectivity is prime). It’s important to ask, “Toxic according to who, and with respect to what?” Emile Durkheim asked his readers to imagine what would happen in a “society of saints.” There would still be sinners because “faults which appear venial to the layman” would there create scandal.

Taking a simplified model

But surely there are not *only* differences, right? Some features of sub-Dunbar groups generalize to super-Dunbar groups. I want to know the full Venn diagram; otherwise I would lose a tool which may on average be more useful (e.g. there may be more useful similarities for my particular interests than there are for your interests).

Depositions and Rationality

The mindset being employed here is extremely insensitive to the evidence that, actually, things are complex and people aren't just "being evasive".

My impression is that when rationalists make objections, they tend not to explicitly distinguish between correcting failure and revealing possible improvements. 

If A is abstractly true, and B is 
1. abstractly true 
2. superficially contradictory with A
3. true in a more relevant way most of the time to most people

I expect rationalists who want to prioritize B to speak as if issuing corrections to people who focus on A, instead of being open-minded that there's good reason for A in unrecognized/rare(ly considered) but necessarily existing conte... (read more)

Is there a decision theory which works as follows? (Is there much literature on decision theories given iterated dilemmas/situations?)

If my actual utility doesn't match my expected utility, something went wrong.
Whatever my past self could have done better in this kind of situation in order to make actual utility match the expected utility is what I should do right now. If the patch (aka lesson) mysteriously works, why it works isn't an urgent matter to attend to, although further optimization may be possible if the nature of the patch (aka lesson) is better understood.
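
To make this concrete, here is a minimal Python sketch of the patch rule described above. The agent class, situation labels, and numbers are hypothetical illustrations of the idea, not an established decision theory from the literature:

```python
from collections import defaultdict

class PatchingAgent:
    """Applies lessons from past expected-vs-actual utility mismatches."""

    def __init__(self):
        # situation_type -> accumulated adjustment ("patch" aka lesson)
        self.patches = defaultdict(float)

    def estimate(self, situation_type, expected_utility):
        # Apply whatever lesson past selves learned in this kind of situation.
        return expected_utility + self.patches[situation_type]

    def update(self, situation_type, expected_utility, actual_utility):
        # If actual utility didn't match expected utility, something went wrong:
        # store the discrepancy as a patch. Why the patch works isn't urgent.
        self.patches[situation_type] += actual_utility - expected_utility

agent = PatchingAgent()
agent.update("iterated_dilemma", expected_utility=10.0, actual_utility=6.0)
# On the next encounter with this kind of situation, the naive estimate is corrected:
print(agent.estimate("iterated_dilemma", expected_utility=10.0))  # 6.0
```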

Tell the Truth

That which can be destroyed by abstract truths might also be abstractly true. 

Only when you are dealing with claims which represent fully formalized intuitions does it apply that 'that which can be destroyed by the truth (is false and therefore) should be.'

Abstract imperatives like "don't be a dick" and "be cool to each other" are important to remember even if you have a very good formalization, because you basically never know if you've really formalized the full set of intuitions, or if you've only formalized some parts of the set of intuitions whic... (read more)

Tell the Truth

You are a hero to me.

Tell the Truth

https://imgs.xkcd.com/comics/sheeple.png

My impression is that we would be in relative heaven by now if this image realistically represented the thoughts of most people as it implicitly intends to. Most people would rather prevent legibility of comparison.

Is nuking women and children cheaper than firebombing them?

Depravity is not a real problem. 

Anyways I'm confused by your initial reaction. I'll pretend you said something other than depravity; I'll pretend you mentioned some kind of actual real problem, like non-[meta-wanted] unwanted suffering. 

Just measure the suffering and do the calculation. 

I understand one's uncertainty about how much (non-[meta-wanted] unwanted) suffering a human life is worth, as well as one's uncertainty about how much money is worth how much suffering.

But the global facts of your conditional preferences don't go away just ... (read more)

Is nuking women and children cheaper than firebombing them?

You know what else is depraved? Kissing. You're literally putting orifices against orifices. Also homosexuality is depraved. But thank god cost-benefit analysis wins out sometimes over "waah waah, depravity".

Richard_Kennaway (+2, 1mo): You said that, not me.

In this shortform, I want to introduce a concept of government structure I've been thinking of called representative omnilateralism. I do not know if this system works and I do not claim that it does. I'm asking for imagination here.

Representative (subagent) omnilateralism: A system under which one person or a group of people tries to reconcile the interests of all people(/subagents) in a nation (instead of just the majority of the nation) into one plan that satisfies all people(/subagents)*

I think "representative democracy" is an ambiguous term which can... (read more)

Is nuking women and children cheaper than firebombing them?

The confusion here is in the word "cost". In the context of lsusr's post, costs and cheapness are framed in terms of monetary costs and cheapness, yet I ask: why not consider moral costs as real, decision-critical costs? Then seek to reduce all decision-critical costs, whether moral, instrumental, or otherwise.

Richard_Kennaway (+2, 2mo): Because money is bounded, but depravity is not.
Robin (+3, 2mo): Then you run into the big problem of how to measure moral cost. There will be situations where you can minimise monetary cost by increasing the moral cost. To minimise for both you need to put a price tag on morality in dollars. How much does a dead civilian cost?
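
Robin's point can be made concrete: once you put a dollar price on moral harms, minimizing both costs collapses into ordinary minimization of a single total. A minimal sketch, where every figure and the price per death are placeholder assumptions:

```python
def total_cost(monetary_cost: float, civilian_deaths: int, price_per_death: float) -> float:
    # Treat moral cost as just another decision-critical term in the objective.
    return monetary_cost + civilian_deaths * price_per_death

# Hypothetical numbers, chosen only to illustrate the comparison:
options = {
    "option_a": total_cost(50e6, 100_000, price_per_death=1e7),
    "option_b": total_cost(20e6, 80_000, price_per_death=1e7),
}
print(min(options, key=options.get))  # cheapest once moral cost is priced in
```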

I want to DM about rationality. Direct messaging is not subject to so many incentive pressures. Please DM me. Please let me be free. 

Please DM me please DM me please DM me please DM me * 36

I'm looking for someone who I can share my half-baked rambly thoughts with. Shortform makes me feel terrible. 

Please DM me; let me be free; please DM me; let me be free * 105

"When a bailey is actually true, there are better ways to show it - in those cases they ARE in the motte."

Endorsed.

Just because there are mottes and baileys, doesn't mean the baileys are wrong; they may just be less defensible in everyday non-ideal speech situations.

Dagon (+2, 3mo): Some examples would help here. If a common motte is the best argument for a bailey, that IS some evidence that the bailey is wrong (or at least unsupportable in the application being promoted). When a bailey is actually true, there are better ways to show it - in those cases they ARE in the motte.

To whatever extent culture must pass through biology (e.g. retinas, eardrums, stomach) before having an effect on a person, and to whatever extent culture is invented through biological means, cultural inputs are entirely compatible with biological determinism.

Deadlines: X Smooth, Y Sharp

Recently an acquaintance told me we had to be leaving at "4:00 PM sharp." 

Knowing of the planning fallacy, I asked "Sharp? Uh-oh. As a primate animal, I naturally tend not to be very good with sharp deadlines, though your information is useful. Could you tell me when we're leaving smooth?"

"Smooth? What do you mean?"

"Smooth as opposed to sharp. Like, suppose you told me the time I should be aiming to be ready for in order to compensate for the fact that humans are bad at estimating time costs. Let's say you wanted to create ... (read more)

Kids Learn by Copying

I don't take it for granted that saying something very beautiful but doing something contradictorily ugly and cynicism-inducing is less insane, nor, if it is necessarily sane, do I take it for granted that sanity is the thing we should be striving for in that case.

Kids Learn by Copying

These two possibilities are not mutually exclusive; talking is a thing that people do. The correct answer is that it's the latter case (verbal theory) as an instance of the former category of cases (cases where people copy the behavior of others, such as fashions of thinking and talking).

I'm also not very sure that removing the ability to negotiate theories of objectivity or fairness, which are naturally controversial subjects, would make people more peaceful on average given it as a limiting condition on the development of culture starting with the first appearance of any human communication; I expect it would make world histories more violent on average to remove such an ability.

What is normally called common sense is not common sense. Common sense is the sense that is actually common. Idealized common sense (which, I shall elaborate, is the union of the set of thoughts you would have to be carefully trying to be common-sensical in order to make salient in your mind and the set of natural common-sense thoughts) should be called something other than common sense, because making a wide-sweeping mental search over possible ways of being common-sensical is not common, even if a general deference and post-hoc accountability to the concept of common sense may be common.

My response to it is: What makes you think it is naive idiocy? It seems like naive intelligence if anything. Even if the literal belief is false, that doesn't make it a stupid thing to act as if true. If everyone acted as if it were true, it would certainly be a stag-hunt scenario! And the benefits are still much worthwhile even if the other does not perfectly cooperate. 

Stupid uncritical intolerant people will think you look childish and impertinent, but intelligent people will notice you're being bullied and you're still tolerating your interlocutor... (read more)

JBlack (+1, 3mo): Yes, I can see some benefits to responding to straw-manning as if it were misphrased enquiry. I do think that at least 90% of the occasions in which straw-manning happens, it isn't. Most of the times I see it happen are in scenarios where curious enquiry about the difference is not a plausible motivation. In my experience it has nearly always happened where both sides are trying to win some sort of debate, usually with an audience. That aside, the proposed mechanism for straw-manning was that it is a particular kind of mistake, so I would expect to see at least some significant fraction of cases where the same kind of enquiry was intended, and the mistake was not made. I haven't observed any significant fraction of such cases in the situations where I have seen straw-man arguments used. I agree that the fictional example I wrote does have a tone that implies that there is no difference between my caricature and your position. That matches the majority of cases where I see straw-man arguments being used. We could discuss the special case of straw-manning where such implication isn't present, but I think that would reduce the practical scope to near zero.

Lately I've been thinking about what God would want from me, because I think the idea was a good influence on my life. Here's a list in progress of some things I think would characterize God's wants and judgments:

1. God would want you to know the truth
2. If you find yourself flinching at knowledge of serious risk factors (e.g. of your character or moral plans), God would urgently want to speak with you about it
3. Resist the pull of natural wrongness
3.1. Consider all of the options which are such that you would have to be looking for the obvious/common sen
... (read more)

I am convinced that moral principles are contributory rather than absolute [https://plato.stanford.edu/entries/moral-particularism/]. I don't like the term 'particularist'; it sounds like a matter of arbitration when you put it that way; I am very reasonable about what considerations I allow to contribute to my moral judgments. I would prefer to call my morality contributist. I wonder if it makes sense to say that utilitarians are a subset of contributists.

I found the Defeasible Reasoning SEP page because I found this thing [https://philarchive.org/archive/GRECMH] talking about defeasible reasoning, which I found because I googled 'contextualist Bayesian'.

Googling 'McCarthy Logic of Circumscription' brought me here [https://en.wikipedia.org/wiki/Circumscription_(logic)]; very neat.

Interesting stuff from the Stanford Encyclopedia of Philosophy:

2.8 Occam’s Razor and the Assumption of a “Closed World”

Prediction always involves an element of defeasibility. If one predicts what will, or what would, under some hypothesis, happen, one must presume that there are no unknown factors that might interfere with those factors and conditions that are known. Any prediction can be upset by such unanticipated interventions. Prediction thus proceeds from the assumption that the situation as modeled constitutes a closed world: that nothing outside tha... (read more)

In defense of strawmanning: there's nothing wrong with wanting to check if someone else is making a mistake. If you forget to frame it as a question (e.g. "Just wanna make sure: what's the difference between what you're thinking and the thinking of my more obviously bad, made-up person who speaks similarly to you?"), then the natural way it comes out will sound accusatory, as in our typical conception of strawmanning. 

I think most people strawman because it's shorthand for this kind of attempt to check, but then they're also unaware that they're ju... (read more)

JBlack (+1, 3mo): Please don't take the following as seriously as it appears: Obviously this is a straw-man presentation of your thesis, yet follows the form of the "nothing wrong with wanting to check" alternative. Is it actually any better with the question than without? My suspicion is that there isn't a lot of difference. I guess the true test will be to wait and see whether this comment thread devolves into acrimonious and hostility-filled polarisation.

Three Principles to Writing Original Nonfiction

I was never a fan of this advice to remove all reference to the self when making a statement. If you think everything is broken or complicated and you don't think you have strong reasons to think you're doing any better than average, why pretend that everything is fine and we can just be authorities on the way that things are rather than how they impressed us as being?

My English teacher took points off every time I explained things as though from a perspective as humble and precarious as honest, good epistemics require me to report; using terms like "I thi... (read more)

Kids Learn by Copying

Most people talk a lot about how they hate hypocrites. Hypocrites say you're supposed to do one thing, and then they do another, and people don't like that. I can understand admitting that it is hard to live in accordance with your stated standards, but people shouldn't lie that they believe there is anything plausibly contextually good about a standard when they don't actually believe there is anything plausibly contextually good about the standard. Otherwise you can't hold people accountable to the standards that both of you say you think have something ... (read more)

jaspax (+3, 3mo): I don't think I disagree with this, except to note that it's rarely the case that social standards are explicitly, consciously hypocritical. More often, people simply don't notice the conflict between stated and actual standards. Where I differ from many people is that, in case of a conflict between actual and stated standards of behavior, the correct thing to do is to endorse and formalize the actual standard, rather than trying to enforce the stated standard. This is because the stated standard, by virtue of never having actually been put into practice, is frequently insane if you try to actually practice it.

Kids Learn by Copying

Would it be better or worse if someone's takeaway from this post was that no one should reason about what makes a course of action or policy better or worse? That they should just copy other people?

What if copying other people meant burning suspected witches alive? What if some people who burn witches aren't really sure about the correctness of what they're doing, they care about that kind of thing, and yet they profess great certainty that their acts are in accordance with correct values? Should I not try to play to the part of them which is uncertain in ... (read more)

Viliam (+2, 3mo): As a historical fact, did people actually burn witches because they were copying someone else's behavior, or because they had some verbal theory for why that is the right thing to do?

Rafael Harth's Shortform

If you think it's very important to consider all the possible adjacent interpretations of a proposition as stated before making up your mind, it can be useful to mark your initial agreement with each interpretation as only a small divergence from total uncertainty (the uncertainty representing your uncertainty about whether you'll come up with better interpretations of the thing you think you're confident about), across however many interpretations there are, before you consider more ambitious numbers like 90%. 

If you always do this and you wind up being wron... (read more)
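
A minimal sketch of that habit, assuming (hypothetically) one tracked credence per candidate interpretation, each starting only slightly away from total uncertainty:

```python
# Start every interpretation barely away from 0.5; the residual uncertainty
# stands in for better interpretations you haven't generated yet.
interpretations = {
    "literal reading": 0.52,
    "charitable reading": 0.55,
    "some better reading I haven't thought of yet": 0.50,
}

# Only after surveying the interpretations would more ambitious numbers
# like 0.90 be considered for any single one of them.
for reading, credence in interpretations.items():
    print(f"{reading}: {credence:.2f}")
```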

One thing to say about negation is that often the model uncertainty is concentrated in the negation. Any probability estimate, say of A (vs. not-A) always has a third option: MU="(Model Uncertainty) I'm confused, maybe the question doesn't make sense, maybe A isn't a coherent claim, maybe the concepts I used aren't the right concepts to use, maybe I didn't think of a possibility, etc. etc.". 

I tend to think of writing my propositions in notepad like
A: 75%
B: 34%
C: 60%

And so on. Are you telling me that "~A: 75%" means not only that ~A has a 75% likeliho... (read more)

My shortform post yesterday about proposition negations, could I get some discussion on that? Please DM me if you like! I need to know if and where there's been good discussion about how Bayesian estimate tracking relates with negation! I need to know if I'm looking at it the wrong way!

Measure (+1, 3mo): The short answer is that if you have a set of mutually exclusive hypotheses, such as a proposition and its negation, then your credences for them should sum to unity. Conditional on MBTI's assumptions, a 55% credence for T-type does indeed imply a 45% credence for F-type. If I think people will misunderstand me due to connotations, then I need to be more clear when expressing myself, but that doesn't change my internal beliefs.
JBlack (+1, 3mo): What sort of discussion are you looking for? Negation is fairly straightforward in classical propositional logic, predicate logic, and probability (the bases for Bayesian reasoning). If questions about personality types are implicitly tied to some particular model, then the proposition "A has personality type X" really means "A has personality type X in model M", which in turn usually boils down to "A will have (or had) particular ranges of scores in M's associated personality test under the prescribed conditions for administering it". How does negation come into such a discussion? Maybe you want to talk about the differences between negating various parts of that proposition versus negating the whole thing? I'm not sure.
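
A minimal sketch of the point both replies make: for a well-posed proposition, credences for A and ~A sum to one, so the notepad format in the comment above already fixes the negations (reusing those hypothetical credences):

```python
def negation(p: float) -> float:
    # P(~A) = 1 - P(A) for a well-posed proposition A.
    return 1.0 - p

notepad = {"A": 0.75, "B": 0.34, "C": 0.60}
for name, p in notepad.items():
    print(f"{name}: {p:.0%}   ~{name}: {negation(p):.0%}")

# Conditional on MBTI's assumption that thinker/feeler are exhaustive and
# mutually exclusive, a 55% credence in "thinker" implies 45% in "feeler".
```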

Does thinking that A is 45% likely mean that you think the negation of A is 5% likely, or 55% likely? Don't answer that; the negation is 55% likely.

But we can imagine making a judgment about someone's personality. One human person accepts MBTI's framework that thinking and feeling are mutually exclusive personalities, so when they write that someone has a 55% chance of being a thinker type, they make an implicit not-tracked judgment that they have an almost 45% chance of being a feeler type AND not a thinker, but a rational Bayesian is not so silly of cour... (read more)

TekhneMakre (+1, 3mo): I read this twice and can't pick up on what you're thinking. You could focus your attention on your question and write more from within it (e.g., vague gesturing from different angles; toy problems / examples; partial formalizations; etc.). One thing to say about negation is that often the model uncertainty is concentrated in the negation. Any probability estimate, say of A (vs. not-A) always has a third option: MU="(Model Uncertainty) I'm confused, maybe the question doesn't make sense, maybe A isn't a coherent claim, maybe the concepts I used aren't the right concepts to use, maybe I didn't think of a possibility, etc. etc.". Probability theory still makes sense, you can always ask e.g. "what am I seeing right now" and other pretty answerable questions, and use those as grounding of vaguer claims if necessary. But the point is, if usually A is a specific claim like "the virus spreads with R>2", then the negation not-A could naturally be taken to mean "the virus spreads with R≤2, or the question is ill-defined, e.g. because R is very different in different places or something, or bakes in a confusion, e.g. there's no virus or even there's no such thing as a virus". Then not-A is "getting extra probability" from the model uncertainty, vs A which seems to be a positive statement (it posits a state of affairs).

Rafael Harth's Shortform

Someone (Tyler Cowen?) said that most people ought to assign much lower confidences to their beliefs, like 52% instead of 99% or whatever.

oops I have just gained the foundational insight for allowing myself to be converted to (explicit probability-tracking-style) Bayesianism; thank you for that

I always thought "belief is when you think something is significantly more likely than not; like 90%, or 75%, or 66%." No; even just having 2% more confidence is a huge difference given how weak existing evidence is.

If one really rational debate-enjoyer thinks A is 2% l... (read more)

JBlack (+3, 3mo): To me, 0.02 is a comparatively tiny difference between likelihood of a proposition and its negation. If P(A) = 0.51 and P(~A) = 0.49 then almost every decision I make based on A will give almost equal weight to whether it is true or false, and the cognitive process of working through implications on either side are essentially identical to the case P(A) = 0.49 and P(~A) = 0.51. The outcome of the decision will also be the same very frequently, since outcomes are usually unbalanced. It takes quite a bit of contriving to arrange a situation where there is any meaningful difference between P(A) = 0.51 and P(A) = 0.49 for some real-world proposition A.
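
A toy illustration of JBlack's point that unbalanced outcomes usually swamp a 0.02 shift in credence; the payoffs below are arbitrary placeholders:

```python
def expected_value(p_a: float, payoff_if_a: float, payoff_if_not_a: float) -> float:
    return p_a * payoff_if_a + (1 - p_a) * payoff_if_not_a

for p in (0.51, 0.49):
    ev_act = expected_value(p, payoff_if_a=100.0, payoff_if_not_a=-10.0)
    choice = "act" if ev_act > 0.0 else "pass"  # baseline of doing nothing: 0
    print(f"P(A)={p}: EV(act)={ev_act:.1f} -> {choice}")
# Both credences lead to "act": the decision doesn't flip at the margin.
```
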
Can you control the past?

I just love this quote. (And, I need it in isolation so I can hyperlink to it.)

"When I step back in Newcomb’s case, I don’t feel especially attached to the idea that it the way, the only “rational” choice (though I admit I feel this non-attachment less in perfect twin prisoner’s dilemmas, where defecting just seems to me pretty crazy). Rather, it feels like my conviction about one-boxing start to bypass debates about what’s “rational” or “irrational.” Faced with the boxes, I don’t feel like I’m asking myself “what’s the rational choice?” I feel like I’m, w... (read more)

The Death of Behavioral Economics

(surprised) No way!! I bought that book three months ago, at the recommendation of no one. I haven't read it yet, but it's good to see that I have made a good investment on my own judgment.

Some subset of those who agree that 'when two people disagree, only one of them can be right' and of the people who agree that A, where A := 'when two people disagree, they can both be right' such that A ≠ A' and A' := 'when two people "disagree," they might not disagree, and they can both be right', do not have a disagreement that cashes out as differences in anticipated experiences, and therefore may only superficially disagree.

Note 1: in order for this to be unambiguously true, 'anticipated experiences' necessarily includes anticipated experiences given counterf... (read more)