Nominated Posts for the 2018 Review

Mythic Mode · 112 points · 2y · 1 nomination
Making yourself small · 207 points · 2y · 1 nomination
Unknown Knowns · 108 points · 1y · 2 nominations

2018 Review Discussion

Act of Charity
169 points · 1y · 7 min read

(Cross-posted from my blog)

The stories and information posted here are artistic works of fiction and falsehood. Only a fool would take anything posted here as fact.

—Anonymous

Act I.

Carl walked through the downtown. He came across a charity stall. The charity worker at the stall called out, "Food for the Africans. Helps with local autonomy and environmental sustainability. Have a heart and help them out." Carl glanced at the stall's poster. Along with pictures of emaciated children, it displayed infographics about how global warming would cause problems for African communities' food p

... (Read more)

[this is a review by the author]

I think what this post was doing was pretty important (colliding two quite different perspectives). In general there is a thing where there is a "clueless / naive" perspective and a "loser / sociopath / zero-sum / predatory" perspective that usually hides itself from the clueless perspective (with some assistance from the clueless perspective; consider the "see no evil, hear no evil, speak no evil" mindset, a strategy for staying naive). And there are lots of difficulties in trying to establish communication. And the dial

... (Read more)

Disclaimer: This post was written time-boxed to 2 hours because I think LessWrong can still understand and improve upon it; please don't judge me harshly for it.

Summary: I am generally dismayed that many people seem to think or assume that only three levels of social metacognition matter ("Alex knows that Bailey knows that Charlie knows X"), or otherwise seem generally averse to unrolling those levels. This post is intended to point out (1) how the higher levels systematically get distilled and chunked into smaller working memory elements through social learning, which leads to... (Read more)

romeostevensit · 13 points · 10h · Review: I'm generally in favor of public praise and private criticism, but this post really rubbed me the wrong way. To me it reads as a group of neurotic people getting together to try to get out of neuroticism by being even more neurotic at each other. Or that, in a quest to avoid interacting with the layer of intentions, they go arbitrarily deep on the recursion stack at the algorithmic/strategy layer of understanding. I'm also really bothered by calling a series of reactions spread over time levels of meta. Actually going meta would be paying attention to the structure of the back and forth rather than the individual steps in the back and forth.
Ben Pace · 4 points · 9h: Huh, I am surprised that this was your response, because I got quite a lot out of the post. I think this post has a true description of a key part of what's going on. The key insight is that your working memory is limited to a few slots, and that if you have more slots you'll be able to see a few more levels of modelling of modelling of modelling, etc. I think the descriptions are accurate portrayals of what causally happened.

I think that, especially in the modern tech era, a lot of norm violations come down to having very different assumptions about background context, leading to mixed signals; having common norms for carefully and slowly making a lot of the background assumptions explicit can lead to resolving problems that otherwise would be intractable, or would be resolved with a lot more violent force. You can't say "make everything explicit", but this post helps set out a framework for making certain important things explicit. I agree that there are other skills needed here, but this one feels pretty key to me (and not just like a few bells and whistles that are distracting from the real substance of what needs to happen). Am curious to know more of your thoughts.

I think the post imagines something like a multi-person stack trace. In reality, backwards-facing introspection winds up confabulating, and there's no limit to how many epicycles can be added with multiple parties confabulating.

Anti-social Punishment
187 points · 1y · 5 min read

This is a cross-post from 250bpm.com.

Introduction

There's a trope among the Slovak intellectual elite depicting the average Slovak as living in a village, sitting in a local pub, drinking Borovička, criticizing everyone and everything but not willing to lift a finger to improve things. Moreover, it is assumed that if you actually tried to make things better, said individual would throw dirt at you and place obstacles in your way.

I always assumed that this caricature was silly. It was partly because I have a soft spot for Slovak rural life but mainly because such behavior makes absolutely no sense fro

... (Read more)

Author here.

In hindsight, I still feel that the phenomenon is interesting and a potentially important topic to look into. I am not aware of any attempt to replicate it or dive deeper, though.

As for my attempt to explain the psychology underlying the phenomenon, I am not entirely happy with it. It's based only on introspection and lacks sound game-theoretic backing.

By the way, there's one interesting explanation I've read somewhere in the meantime (unfortunately, I don't remember the source):

Cooperation may incur different costs on different participants. If y

... (Read more)
romeostevensit · 7 points · 10h · Review: I think awareness of this effect is tremendously important. Your immune system needs to fight cancer (mindless, unregulated replication) in order for you to function and pursue any goal with a lower time preference than the mindless replicators. But what's even worse than cancer is a disease that co-opts the immune system, leading to a lowered ability to fight off infections in general. People who care about the future are concerned about non-value-aligned replication outcompeting human values. But they should also be concerned about agentic processes that specifically undermine the ability to do low-time-preference work, i.e. antisocial punishers and the things that lead them to exist and flourish.
jmh · 3 points · 3d: I'll lump two thoughts in here; one relates to SilentCat, the other elsewhere. Like others I think this is a great insight and should be looked at by the authors, or other interested social scientists. I think it relates to a question I ask myself from time to time, though generally don't get too far in answering: where do we draw the line between public and private spheres of action? I don't think that is a fixed/static division over time, and it seems to have important implications for public policy. I'm tempted to say it might line up with the above proposed efficiency division, but I'm not sure. The overall results and some of the other comments also made me wonder about the role of history, particularly as most of these locations seem to have been former USSR members. I'm just wondering if perhaps the cultural legacy would support the behavior: innocent people were just as likely to be punished for what might be the actions of others attempting to make everyone's lives better (but often, I suspect, viewed as a threat to the authorities and government powers).
Give praise
147 points · 2y · 1 min read

The dominant model about status in LW seems to be one of relative influence. Necessarily, it's zero-sum. So we throw up our hands and accept that half the community is just going to run a deficit.

Here's a different take: status in the sense of worth. Here's a set of things we like, or here's a set of problems for you to solve, and if you do, you will pass the bar and we will grant you personhood and take you seriously and allow you onto the ark when the world comes crumbling. Worth is positive-sum.

I think both models are useful, but only one of these models underlies the em... (Read more)

mingyuan · 26 points · 5h · Review: I have several problems with including this in the 2018 review. The first is that it's community-navel-gaze-y: if it's not the kind of thing we allow on the frontpage because of concerns about newcomers seeing a bunch of in-group discussion, then it seems like we definitely wouldn't want it to be in a semi-public-facing book, either.

The second is that I've found most discussion of the concept of 'status' in rationalist circles to be pretty uniformly unproductive, and maybe even counterproductive. People generally only discuss 'status' when they're feeling a lack of it, which means that discussions around the subject are often fraught and can be a bit of an echo chamber. I have not personally found any post about status to be enlightening or to have changed the way I think.

My other concerns have to do with specific parts of the post: This is unsubstantiated and confusing in a whole host of ways. First, what is 'worth' supposed to mean? Toon seems to say it means something along the lines of "we will grant you personhood and take you seriously and allow you onto the ark when the world comes crumbling." If I had to sum this idea up in one word I would call it 'acceptance'.

Second, "worth is generated by praise" doesn't square with my experience at all. I tend to think I'm fairly well-calibrated when it comes to my own abilities, so when someone gives me praise that I don't think I deserve, that doesn't generate any value for me (I just think the person is wrong/miscalibrated). Praise is also not what I need when I'm burned out or upset; I need people to help me solve my problems, not give me vacuous words of encouragement. Also, to be more general about it, giving children too much praise can harm them just as much as giving them too little.

I'm having trouble putting my finger on exactly what else about this claim feels wrong to me, but the two points I covered are definitely not all of it. It just really rubs me the wrong way. I see lots of rational
Raemon · 3 points · 5h: This sounds plausible, but in a domain as fuzzy as this having some kind of citation would be good.

Yeah, good point, I don't have a citation handy for that so I just deleted it. Doesn't really change anything about my argument.

CuriousMeta · 3 points · 3d: My impression is that in-group status is always, inherently zero-sum. While the influence/worth distinction may be a relevant one, I think it'd be relative worth that satisfies status-as-social-need. Praise certainly meets other emotional needs, though, and it may well be rational to have more of it.

Epistemic Status: Simple point, supported by anecdotes and a straightforward model, not yet validated in any rigorous sense I know of, but IMO worth a quick reflection to see if it might be helpful to you.

A curious thing I've noticed: among the friends whose inner monologues I get to hear, the most self-sacrificing ones are frequently worried they are being too selfish, the loudest ones are constantly afraid they are not being heard, the most introverted ones are regularly terrified that they're claiming more than their share of the conversation, the most assertive ones are always su... (Read more)

The LW team is encouraging authors to review their own posts, so:

In retrospect, I think this post set out to do a small thing, and did it well. This isn't a grand concept or a vast inferential distance, it's just a reframe that I think is valuable for many people to try for themselves.

I still bring up this concept quite a lot when I'm trying to help people through their psychological troubles, and when I'm reflecting on my own.

I don't know whether the post belongs in the Best of 2018, but I'm proud of it.

Towards a New Impact Measure
111 points (Ω 20) · 1y · 32 min read

In which I propose a closed-form solution to low impact, increasing corrigibility and seemingly taking major steps to neutralize basic AI drives 1 (self-improvement), 5 (self-protectiveness), and 6 (acquisition of resources).

Previously: Worrying about the Vase: Whitelisting, Overcoming Clinginess in Impact Measures, Impact Measure Desiderata

To be used inside an advanced agent, an impact measure... must capture so much variance that there is no clever strategy whereby an advanced agent can produce some special type of variance that evades the measure.
~ Safe Impact Measure

If we have a safe impa... (Read more)

Gurkenglas · 2 points · 20h: If it is capable of becoming more able to maximize its utility function, does it then not already have that ability to maximize its utility function? Do you propose that we reward it only for those plans that pay off after only one "action"?
TurnTrout · 2 points · 16h: Not quite. I'm proposing penalizing it for gaining power, a la my recent post [https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/6DuJxY8X45Sco4bS2]. There's a big difference between "able to get 10 return from my current vantage point" and "I've taken over the planet and can ensure I get 100 return with high probability". We're penalizing it for increasing its ability like that (concretely, see Conservative Agency [https://arxiv.org/abs/1902.09725] for an analogous formalization; if none of this makes sense yet, wait till the end of Reframing Impact).
Gurkenglas · 2 points · 7h: Assessing its ability to attain various utilities after an action requires that you surgically replace its utility function with a different one in a world it has impacted. How do you stop it from messing with the interface, such as by passing its power to a subagent to make your surgery do nothing?

It doesn’t require anything like that. Check out the formalization in the linked paper!
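To make "penalize it for gaining power" concrete, here is a minimal sketch of an attainable-utility-style penalty, loosely in the spirit of the Conservative Agency formalization; the auxiliary reward set, Q-values, and no-op action are hypothetical stand-ins, not the paper's exact setup:

    # Sketch: penalize actions that change what the agent *could* achieve.
    # q_values is a hypothetical list of dicts, one per auxiliary reward
    # function, mapping each action to its attainable value from here.
    NOOP = "noop"  # designated do-nothing action

    def aup_penalty(q_values, action):
        """Average absolute change in attainable value vs. doing nothing."""
        return sum(abs(q[action] - q[NOOP]) for q in q_values) / len(q_values)

    def penalized_reward(task_reward, q_values, action, lam=1.0):
        # lam trades off task reward against the impact penalty
        return task_reward[action] - lam * aup_penalty(q_values, action)

On this sketch, "taking over the planet" raises the agent's attainable value for almost every auxiliary reward at once, so it is heavily penalized even when it also maximizes the task reward.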

Requisite background: high school level programming and calculus. Explanation of backprop is included, skim it if you know it. This was originally written as the first half of a post on organizational scaling, but can be read standalone.

Backpropagation

If you’ve taken a calculus class, you’ve probably differentiated functions given as short closed-form expressions. But if you want to do math on a computer (e.g. for machine learning) then you’ll need to differentiate functions like

function f(x):      # Just wait, the variable names get worse…
    a = x^(1/2)     # Step 1
... (Read more)

There are two separate lenses through which I view the idea of competitive markets as backpropagation.

First, it's an example of the real meat of economics. Many people - including economists - think of economics as studying human markets and exchange. But the theory of economics is, to a large extent, general theory of distributed optimization. When we understand on a gut level that "price = derivative", and markets are just implementing backprop, it makes a lot more sense that things like markets would show up in other fields - e.g. AI or b... (Read more)
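A minimal sketch of the "price = derivative" reading (the production function and names here are made up for illustration, not taken from the post): reverse-mode differentiation assigns each intermediate good a "price", computed backward from the prices of the goods made from it.

    import math

    def forward(x):
        # Forward pass: produce intermediate goods a, b from raw input x.
        a = math.sqrt(x)   # step 1
        b = math.exp(a)    # step 2
        y = a * b          # step 3: final output
        return a, b, y

    def input_price(x):
        # Backward pass: the "price" of each good is dy/d(good).
        a, b, y = forward(x)
        p_y = 1.0                            # numeraire: output worth 1 per unit
        p_b = p_y * a                        # dy/db
        p_a = p_y * b + p_b * math.exp(a)    # dy/da, via both uses of a
        return p_a * 0.5 / math.sqrt(x)      # chain rule: da/dx = 1/(2*sqrt(x))

The number this returns is both the derivative dy/dx and what a competitive market would pay for a marginal unit of x, given the prices of everything produced from it.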

Unknown Knowns
108 points · 1y · 1 min read

Previously (Marginal Revolution): Gambling Can Save Science

A study was done to attempt to replicate 21 studies published in Science and Nature. 

Beforehand, prediction markets were used to see which studies would be predicted to replicate with what probability. The results were as follows (from the original paper):

Fig. 4: Prediction market and survey beliefs.

The prediction market beliefs and the survey beliefs of replicating (from treatment 2 for measuring beliefs; see the Supplementary Methods for details and Supplementary Fig. 6 for the results from treatment 1) are shown. The replication

... (Read more)
Bucky · 27 points · 8h · Review: Tldr: I don’t think that this post stands up to close scrutiny, although there may be unknown knowns anyway. This is partly due to a couple of things in the original paper which I think are a bit misleading for the purposes of analysing the markets.

The unknown knowns claim is based on 3 patterns in the data:

  • “The mean prediction market belief of replication is 63.4%, the survey mean was 60.6% and the final result was 61.9%. That’s impressive all around.”
  • “Every study that would replicate traded at a higher probability of success than every study that would fail to replicate.”
  • “None of the studies that failed to replicate came close to replicating, so there was a ‘clean cut’ in the underlying scientific reality.”

Taking these in reverse order:

CLEAN CUT IN RESULTS

I don’t think that there is as clear a distinction between successful and unsuccessful replications as stated in the OP: "None of the studies that failed to replicate came close to replicating." This assertion is based on a statement in the paper: “Second, among the unsuccessful replications, there was essentially no evidence for the original finding. The average relative effect size was very close to zero for the eight findings that failed to replicate according to the statistical significance criterion.”

However this doesn’t necessarily support the claim of a dichotomy: the average being close to 0 doesn’t imply that all the results were close to 0, nor that every successful replication passed cleanly. If you ignore the colours, this graph [https://www.google.com/search?q=Evaluating+the+replicability+of+social+science+experiments+in+Nature+and+Science+between+2010+and+2015&client=firefox-b-e&sxsrf=ACYBGNTYW4E9bxi1hGV1rqF4YN6QlpycnQ:1575990095532&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjk58DOrKvmAhXCAewKHUJGA24Q_AUoAXoECA0QAw&biw=1282&bih=911#imgrc=H8VD34zCINDxCM:] from the paper suggests that the normalised effect sizes are more of a continuum than a clean cut (central section b is relevant chart

This is awesome :) Thank you very much for reading through it all and writing down your thoughts and conclusions.
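Bucky's second quoted pattern ("every study that would replicate traded at a higher probability of success than every study that would fail") amounts to a claim of perfect rank separation, which is easy to state as a check; the prices and outcomes below are made up for illustration, not the paper's data:

    # Sketch: did every replicating study trade above every failed one?
    def perfectly_separated(prices, replicated):
        worst_success = min(p for p, r in zip(prices, replicated) if r)
        best_failure = max(p for p, r in zip(prices, replicated) if not r)
        return worst_success > best_failure

    prices     = [0.82, 0.71, 0.66, 0.44, 0.31, 0.23]   # hypothetical
    replicated = [True, True, True, False, False, False]
    print(perfectly_separated(prices, replicated))       # True: a "clean cut"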

Epistemic status: pretty confident. Based on several years of meditation experience combined with various pieces of Buddhist theory as popularized in various sources, including but not limited to books like The Mind Illuminated, Mastering the Core Teachings of the Buddha, and The Seeing That Frees; also discussions with other people who have practiced meditation, and scatterings of cognitive psychology papers that relate to the topic. The part that I’m the least confident of is the long-term nature of enlightenment; I’m speculating on what comes next based on what I’ve experienced, but have no... (Read more)

On a small point, maybe it would be helpful to use a more natural term than 'defusion', e.g. 'detachment' (if that expresses it clearly), or perhaps something like 'objectivity'.

It seems better to avoid the confusion of introducing a new technical term if something can be expressed just as well with a familiar one.

Affordance Widths
147 points · 2y · 2 min read

This article was originally a post on my tumblr. I'm in the process of moving most of these kinds of thoughts and discussions here.


Okay. There’s a social interaction concept that I’ve tried to convey multiple times in multiple conversations, so I’m going to just go ahead and make a graph.

I’m calling this concept “Affordance Widths”.

Let’s say there’s some behavior {B} that people can do more of, or less of. And everyone agrees that if you don’t do enough of the behavior, bad thing {X} happens; but if you do too much of the behavior, bad thing {Y} happens.

Now, let’s say we have five differ... (Read more)

I would like to see a post on this concept included in the best of 2018, but I also agree that there are reputational risks given the author. I'd like to suggest a possible compromise: perhaps we could include the concept, but write our own explanation of it instead of including this article?

mr-hire · 2 points · 4d: I don't think those quite get as specific or easy to talk about as this term. For instance, the concept of "society isn't made nice for humans" is not new, but having Moloch and inadequate equilibria as concepts still pushed forward the discourse.
Raemon · 3 points · 3d: Nod. And in particular, I saw this post as something like taking the concept of 'privilege' and fleshing out the gears of one particular facet of it. (Privilege also being a concept that's interwoven with some broader narratives or political maneuvering that I don't fully endorse, but that I've nonetheless found quite useful.)
mr-hire · 4 points · 3d: Yes, I didn't frame the post in those terms, but you doing so made a bunch of things click for me. One of the conversations I had recently made me realize my affordance widths with risk-taking for money were much different from others', because I don't need tons of money for health issues and my parents can and will support me when worst comes to worst (and I don't have to accept something like abuse to get their help). This made me really conscious of my privilege around money.
The Intelligent Social Web
194 points · 2y · 12 min read

Follow-up to: Fake Frameworks, Kenshō

Related to: Slack, Newcomblike Problems are the Norm

Previously titled: The Real-World Omega (see these comments)

I’d like to offer a fake framework here. It’s a little silly, and not fully justified, but it keeps producing meaningful results in my life when I use it. Some of my own personal examples are:

  • Overcoming a crushing depression
  • Learning how to set aside my “performance mode” and be more authentic and vulnerable when I want to be
  • Shifting my attachment style from anxious-preoccupied to mostly secure
  • Fixing a lifelong problem where I love athletics but I
... (Read more)
CuriousMeta · 1 point · 1d: Powerful improv metaphor. Powerful post. The trickiness of roles that involve disidentification with specific roles, or with the concept of roles in general, must not be underestimated. That's especially true for roles that seem to be opposed to the prevalent social structure. I'm also reminded of Transactional Analysis, in particular Games and Life Scripts.
Jacobian · 36 points · 2d · Review: In my opinion, the biggest shift in the study of rationality since the Sequences were published was a change in focus from "bad math" biases (anchoring, availability, base rate neglect etc.) to socially-driven biases. And with good reason: while a crash course in Bayes' Law can alleviate many of the issues with intuitive math, group politics are a deep and inextricable part of everything our brains do.

There has been a lot of great writing describing the issue, like Scott’s essays on ingroups and outgroups [https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/] and Robin Hanson’s theory of signaling [http://elephantinthebrain.com/]. There are excellent posts summarizing the problem of socially-driven bias on a high level, like Kevin Simler’s post on crony beliefs [https://meltingasphalt.com/crony-beliefs/]. But The Intelligent Social Web offers something that all of the above don’t: a lens that looks into the very heart of social reality, makes you feel its power on an immediate and intuitive level, and gives you the tools to actually manipulate and change your reaction to it.

Valentine’s structure of treating this as a “fake framework” is invaluable in this context. A high-level rigorous description of social reality doesn’t really empower you to do anything about it. But seeing social interactions as an improv scene, while not literally true, offers actionable insight.

The specific examples in the post hit very close to home for me, like the example of one’s family tugging a person back into their old role. I noticed that I quite often lose my temper around my parents, something that happens basically never around my wife or friends. I realized that much of it is caused by a role conflict with my father about who gets to be the “authority” on living well. I further recognized that my temper is triggered by “should” statements, even innocuous ones like “you should have the Cabernet with this dish” over dinner. Seeing these interact
In my opinion, the biggest shift in the study of rationality since the Sequences were published was a change in focus from "bad math" biases (anchoring, availability, base rate neglect etc.) to socially-driven biases.

Funny enough, when I did a reread of the Sequences, I saw a huge number of little ways EY was pointing to various socially-driven biases, which I'd missed the first time around. I think it might have been a framing thing: because it didn't feel like those bits were the main point of the essays, I smashed them all into "don't be dumb/conformist" (a previous notion I could round them off to).

Also great review.

This is a cross-post from http://250bpm.com/blog:128.

Introduction

In the past I've reviewed Eliezer Yudkowsky's "Inadequate Equilibria" book. My main complaint was that while it explains the problem of suboptimal Nash equilibria very well, it doesn't propose any solutions. Instead, it says that we should be aware of such coordination failures and we should expect ourselves to fare better than the official institutions in such cases. What Yudkowsky is saying (if I understand him correctly) is that given that the treatment of short bowel syndrome in babies is stuck in an inadequate eq... (Read more)

Author here.

I still believe this article is an important addition to the discussion of inadequate equilibria. While Scott Alexander's Moloch post and Eliezer Yudkowsky's book are great introductions to and discussions of the topic, both of them fail, in my opinion, to convey the sheer complexity of the problem as it occurs in the real world. That, I think, results in readers thinking about the issue in simple Malthusian or naive game-theoretic terms and eventually despairing about the inescapability of suboptimal Nash equilibria.

What I try to present is a world

... (Read more)

[Epistemic status: Pretty good, but I make no claim this is original]

A neglected gem from Less Wrong: Why The Tails Come Apart, by commenter Thrasymachus. It explains why even when two variables are strongly correlated, the most extreme value of one will rarely be the most extreme value of the other. Take these graphs of grip strength vs. arm strength and reading score vs. writing score:

In a pinch, the second graph can also serve as a rough map of Afghanistan

Grip strength is strongly correlated with arm strength. But the person with the strongest arm doesn’t have the strongest grip. He’s up th... (Read more)
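The post's core claim is easy to check numerically. A minimal sketch (the correlation and population size are made up for illustration): even under a strong correlation, the top scorer on one variable is rarely the top scorer on the other.

    import numpy as np

    rng = np.random.default_rng(0)
    rho, n, trials = 0.9, 10_000, 200   # hypothetical correlation, population size

    hits = 0
    for _ in range(trials):
        x = rng.standard_normal(n)                                   # e.g. arm strength
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)   # e.g. grip strength
        hits += int(np.argmax(x) == np.argmax(y))

    print(f"strongest arm also has strongest grip in {hits / trials:.0%} of runs")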

This essay defines and clearly explains an important property of human moral intuitions: the divergence of possible extrapolations outside the part of the state space we're used to thinking about. This property is a challenge in moral philosophy that has implications for AI alignment and for long-term or "extreme" thinking in effective altruism. Although I don't think it was especially novel to me personally, it is valuable to have a solid reference for explaining this concept.

Robustness to Scale
166 points (Ω 7) · 2y · 1 min read

I want to quickly draw attention to a concept in AI alignment: Robustness to Scale. Briefly, you want your proposal for an AI to be robust (or at least fail gracefully) to changes in its level of capabilities. I discuss three different types of robustness to scale: robustness to scaling up, robustness to scaling down, and robustness to relative scale.

The purpose of this post is to communicate, not to persuade. It may be that we want to bite the bullet of the strongest form of robustness to scale, and build an AGI that is simply not robust to scale, but if we do, we should at least realize that... (Read more)

This essay makes a valuable contribution to the vocabulary we use to discuss and think about AI risk. Building a common vocabulary like this is very important for productive knowledge transmission and debate, and makes it easier to think clearly about the subject.

This is the first in a series of posts about lessons from my experiences in World of Warcraft. I’ve been talking about this stuff for a long time—in forum comments, in IRC conversations, etc.—and this series is my attempt to make it all a bit more legible. I’ve added footnotes to explain some of the jargon, but if anything remains incomprehensible, let me know in the comments.


World of Warcraft, especially WoW raiding[1], is very much a game of numbers and details.

At first, in the very early days of WoW, people didn’t necessarily appreciate this very well, nor did they have any good way to us

... (Read more)
Raemon · 3 points · 1d: Reading this thread in the future, I find myself kinda wishing for ways comment threads like this could be auto-collapsed or resolved or something after reaching their conclusion.
Said Achmiz · 2 points · 1d: Agreed, that would be a nice feature. The trick would be to have a good way of identifying such “now totally irrelevant, except for esoteric academic reasons” threads that wouldn’t run into any controversy or require non-trivial moderator attention.
Raemon · 3 points · 1d: The latest version of the "offtopic comment" feature that the team had chatted about was a "collapse" feature, where some comments are just forcibly collapsed with a flag, and this is just a generic tool that admins and some authors have access to. It doesn't really require anything automatic; when you notice such a thread, you can close it. (It would still appear in the comment list, just collapsed as if it had low karma, possibly with a reason displayed.)

Yes, that is exactly the sort of thing I had in mind, which would clearly be open to all sorts of, perhaps not “abuse”, but at least controversial application. It seems to me that it would be useful to differentiate threads like this one we are discussing now, where nothing “on-topic” is really being discussed and no one has, nor could have, any strong feelings about it. (This is not to say that the general-purpose tool you’re talking about would not also be useful; very plausibly it would.)

A LessWrong Crypto Autopsy
276 points · 2y · 3 min read

Wei Dai, one of the first people Satoshi Nakamoto contacted about Bitcoin, was a frequent Less Wrong contributor. So was Hal Finney, the first person besides Satoshi to make a Bitcoin transaction.

The first mention of Bitcoin on Less Wrong, a post called Making Money With Bitcoin, was in early 2011 - when it was worth 91 cents. Gwern predicted that it could someday be worth "upwards of $10,000 a bitcoin". He also quoted Moldbug, who advised that:

If Bitcoin becomes the new global monetary system, one bitcoin purchased today (for 90 cents, last time I checked) will make you a very wealt
... (Read more)

I wrote about this post extensively as part of my essay on Rationalist self-improvement. The general idea of this post is excellent: gathering data for a clever natural experiment of whether Rationalists actually win. Unfortunately, the analysis itself is very lacking and is not very data-driven.

The core result is: 15% of SSC readers who were referred by LessWrong made over $1,000 in crypto, 3% made $100,000. These quantities require quantitative analysis: Is 15%/3% a lot or a little compared to matched groups like the Silicon Valley or Libertarian blogosp... (Read more)

One of the most common difficulties faced in discussions is when the parties involved have different beliefs as to what the scope of the discussion should be. In particular, John Nerst identifies two styles of conversation as follows:

  • Decoupling norms: It is considered eminently reasonable to require the truth of your claims to be considered in isolation, free of any potential implications. An insistence on raising these issues despite a decoupling request is often seen as sloppy thinking or an attempt to deflect.
  • Contextualising norms: It is considered eminently reasonable to expect certain con
... (Read more)

It occurs to me that "free speech", "heterodoxy", and "decoupling vs contextualising" are all related to intelligence vs virtue signaling. In particular, if you want to do or see more intelligence signaling, then you should support free speech and decoupling norms. If you want to do or see more virtue signaling, then you should support contextualising norms and restrictions on free speech. Heterodox ideas tend to be better (more useful) for intelligence signaling and orthodox ideas better for virtue signaling. (Hopefully this is obvious once pointed out, b

... (Read more)
Raemon · 8 points · 2d · Review: This post seems to be making a few claims, which I think can be evaluated separately:

1) Decoupling norms exist
2) Contextualizing norms exist
3) Decoupling and contextualizing norms are useful to think of as opposites, either as a dichotomy or a spectrum (i.e. there are enough people using those norms that it's a useful way to carve up the discussion-landscape)

There's a range of "strong"/"weak" versions of these claims: decoupling and/or contextualization might be principled norms that some people explicitly endorse, or they might just be clusters of tendencies people have sometimes. In the comments of his response post [https://www.lesswrong.com/posts/GSz8SrKFfW7fJK2wN/relevance-norms-or-gricean-implicature-queers-the-decoupling#y4aB6N4PwcA5ixv6d], Zack Davis noted: And, reading that, I think it may actually be the opposite: there is a general factor of "decoupling", not contextualizing. By default people are using language for a bunch of reasons all jumbled together, and it's a relatively small set of people who have the deliberately-decouple tendency, skill and/or norm of checking individual statements to see if they make sense.

Upon reflection, this is actually more in line with the original Nerst article, which used the terms "Low Decoupling" and "High Decoupling", which less strongly conveys the idea of "contextualizer" being a coherent thing. On the other hand, Nerst's original post does make some claims about Klein being the sort of person (a journalist) who is "definitively a contextualizer, as opposed to just 'not a decoupler'".

Although they're interwoven, I think it might be worth distinguishing some subclaims here (not necessarily made by Nerst or Leong, but I think implied and worth thinking about):

  • There exists a class of general storytelling contextualists
  • There exist PR-people/politicians/activists who wield contextual practice as a tool or weapon
  • There exist "principled contextualizers" who try to evenly come to good
Chris_Leong · 2 points · 2d: I really don't like the term "jumbled", as some people would likely object much more to being labelled as jumbled than as a contextualiser. The rest of this comment makes some good points, but sometimes less is more. I do want to edit this article, but I think I'll mostly engage with Zack's points and reread the article.
Raemon · 3 points · 2d: The OP comment was optimizing for improving my understanding of the domain more than direct advice on how to change the post. (I'm not necessarily expecting the points and confusions there to resolve within the next month. It's possible that you'll reflect on it a bit and then figure out a slightly different orientation to the post that distills the various concepts into a new form. Another possible outcome is that you leave the post as-is for now, and then in another year or two, after mulling things over, someone writes a new post doing a somewhat different thing that becomes the new referent. Or it might just turn out that my current epistemic state wasn't that useful. Or other things.)

Re: "jumbled". I think there's sort of a two-step process that goes into naming things (ironically, or appropriately, the steps map directly onto the post): first figuring out "okay, what actually is this phenomenon, and what name most accurately describes it?" and then, separately, "okay, what sort of names are reliably going to make people angry and distract from the original topic if you apply them to people, and are there alternative names that cleave closely to the truth?" (My process for generating names that risk offending is something like a multi-step Babble and Prune, where I generate names aiming to satisfice on "a good explanation of the true phenomenon" and "not likely to be unnecessarily distracting", until I have a name that satisfies both criteria.)

I haven't tried generating a maximally good name for "jumbled" yet since I wasn't sure this was even carving reality the right way. But it's not an accident that 'jumbled' is more likely to offend people than 'contextualized'. I do, in fact, think worse of people who have jumbled communication than deliberately contextualized communication. (Compare "virtue signalling", which is an important term but is basically an insult except among people who have some kind of principled understanding that "Yup, it

(Cross-posted from Facebook.)

0.

Tl;dr: There's a similarity between these three concepts:

  • A locally valid proof step in mathematics is one that, in general, produces only true statements from true statements. This is a property of a single step, irrespective of whether the final conclusion is true or false.
  • There's such a thing as a bad argument even for a good conclusion. In order to arrive at sane answers to questions of fact and policy, we need to be curious about whether arguments are good or bad, independently of their conclusions. The rules against fallacies must be enforced even
... (Read more)

I think about this post a lot, and sometimes in conjunction with my own post on common knowledge.

As well as being a referent for when I think about fairness, it also ties in with how I think about LessWrong, Arbital, and communal online endeavours for truth. The key line is:

For civilization to hold together, we need to make coordinated steps away from Nash equilibria in lockstep.

You can think of Wikipedia as being a set of communally editable web pages where the content of the page is constrained to be that which we can easily gain common knowledge of its

... (Read more)

Recently someone pointed out to me that there was no good canonical post that explained the use of common knowledge in society. Since I wanted to be able to link to such a post, I decided to try to write it.

The epistemic status of this post is that I hoped to provide an explanation for a standard, mainstream idea, in a concrete way that could be broadly understood rather than in a mathematical/logical fashion, and so the definitions should all be correct, though the examples in the latter half are more speculative and likely contain some inaccuracies.

Let's start with a puzzle. What do the... (Read more)

This is my own post. I continue to talk and think a lot about the world from the perspective of solving coordination problems, where facilitating people's ability to build common knowledge is one of the central tools. I'm very glad I wrote the post; it made a lot of my own thinking more rigorous and clear.

In the post 'Four layers of Intellectual Conversation', Eliezer says that both the writer of an idea, and the person writing a critique of that idea, need to expect to have to publicly defend what they say at least one time. Otherwise they can write something stupid and never lose status because they don't have to respond to the criticism.

I was wondering about where this sort of dialogue happens in academia. I have been told by many people that current journals are quite terrible, but I've also heard a romantic notion that science (especially physics and math) used to be mo... (Read more)

I wrote this post, and at the time I just wrote it because... well, I thought I'd be able to write a post with a grand conclusion about how science used to check the truth, and then point to how it changed. But I was so surprised to find that the journals had not one sentence of criticism in them at all. So I wrote it up as a question post instead, framing my failure to answer the question as 'partial work' that 'helped define the question'.

In retrospect, I'm really glad I wrote the post, because it is a clear datapoint about how science does not work. I have

... (Read more)