Nominated Posts for the 2019 Review

Posts need at least 2 nominations to continue into the Review Phase.
Nominate posts that you have personally found useful and important.

2019 Review Discussion

Previously: Keeping Beliefs Cruxy


When disagreements persist despite lengthy good-faith communication, it may not just be about factual disagreements – it could be due to people operating in entirely different frames — different ways of seeing, thinking and/or communicating.

If you can’t notice when this is happening, or you don’t have the skills to navigate it, you may waste a lot of time.

Examples of Broad Frames

Gears-oriented Frames

Bob and Alice’s conversation is about cause and effect. Neither of them is planning to take direct actions based on their conversation; they’re each just interested in understanding a particular domain better.

Bob has a model of the domain that includes gears A, B, C and D. Alice has a model that includes gears C, D and F. They’re able to exchange information,...

Pages 4-5 of my copy of The Strategy of Conflict define two terms:

  • Pure Conflict: In which the goals of the players are opposed completely (as in Eliezer's "The True Prisoner's Dilemma")
  • Bargaining: In which the goals of the players are somehow aligned so that making trades is better for everyone

Schelling goes on to argue (again, just on page 5) that most "Pure Conflicts" are actually not, and that people can do better by bargaining instead. Then, he creates a spectrum from Conflict games to Bargaining games, setting the stage for the fram... (read more)
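
To make Schelling's distinction concrete, here is a minimal sketch in Python (toy payoff numbers of my own, not from the book) contrasting a pure-conflict game, where nothing can be traded, with a bargaining game, where an outcome exists that both players prefer:

```python
# Toy illustration of Schelling's distinction (payoff numbers are invented for
# illustration, not taken from the book). Each game maps (row action, col action)
# to (row payoff, col payoff).

pure_conflict = {  # zero-sum: every gain for one player is a loss for the other
    ("A", "A"): (1, -1), ("A", "B"): (-1, 1),
    ("B", "A"): (-1, 1), ("B", "B"): (1, -1),
}

bargaining = {  # mixed-motive: mutual cooperation beats mutual defection for both
    ("cooperate", "cooperate"): (3, 3), ("cooperate", "defect"): (0, 4),
    ("defect", "cooperate"): (4, 0), ("defect", "defect"): (1, 1),
}

def has_mutual_improvement(game):
    """True if some outcome Pareto-dominates another outcome,
    i.e. there is room to bargain rather than purely fight."""
    outcomes = list(game.values())
    return any(a[0] > b[0] and a[1] > b[1] for a in outcomes for b in outcomes)

print(has_mutual_improvement(pure_conflict))  # False: nothing to trade
print(has_mutual_improvement(bargaining))     # True: both prefer (3, 3) to (1, 1)
```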

This post is based on chapter 15 of Uri Alon’s book An Introduction to Systems Biology: Design Principles of Biological Circuits. See the book for more details and citations; see here for a review of most of the rest of the book.

Fun fact: biological systems are highly modular, at multiple different scales. This can be quantified and verified statistically, e.g. by mapping out protein networks and algorithmically partitioning them into parts, then comparing the connectivity of the parts. It can also be seen more qualitatively in everyday biological work: proteins have subunits which retain their function when fused to other proteins, receptor circuits can be swapped out to make bacteria follow different chemical gradients, manipulating specific genes can turn a fly’s antennae into legs, organs perform specific...
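
As a rough sketch of the "map out a network and algorithmically partition it" step, here is a toy example using networkx's modularity-based community detection; the graph itself is invented for illustration, whereas a real analysis would use measured protein-interaction data:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy "interaction network": two densely connected clusters joined by one edge.
# (Invented for illustration; real work would use measured protein networks.)
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2), (2, 3),   # cluster A
                  (4, 5), (4, 6), (5, 6), (6, 7),   # cluster B
                  (3, 4)])                           # single bridge edge

communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])   # e.g. [[0, 1, 2, 3], [4, 5, 6, 7]]
print(modularity(G, communities))         # high score indicates modular structure
```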

There's another form of modular variation with e.g. eating besides just modularly varying foods in the environment, namely: we don't eat all the time. Instead we have some periods where we eat and other periods where we do other stuff. I wonder if this also contributes to modularity, e.g. it makes it necessary to be able to "activate" and "deactivate" an eating mode while the rest of the body just does whatever.

Reply to: Meta-Honesty: Firming Up Honesty Around Its Edge-Cases

Eliezer Yudkowsky, listing advantages of a "wizard's oath" ethical code of "Don't say things that are literally false", writes—

Repeatedly asking yourself of every sentence you say aloud to another person, "Is this statement actually and literally true?", helps you build a skill for navigating out of your internal smog of not-quite-truths.

I mean, that's one hypothesis about the psychological effects of adopting the wizard's code.

A potential problem with this is that human natural language contains a lot of ambiguity. Words can be used in many ways depending on context. Even the specification "literally" in "literally false" is less useful than it initially appears when you consider that the way people ordinarily speak when they're being truthful is actually pretty dense

...
Dacyn · 17d · 1 point: Maybe I am missing the point, but since you do know all the information which you do, in fact, know, wouldn't behaving as if you do just mean behaving... the way in which you behave? In which case, isn't the puzzle meaningless? On the other hand, if we understand the meaning of the puzzle to be illuminated by elriggs' first reply to it, we could rephrase it (or rather its negation) as follows: I would answer this question as "yes", but with a further appeal to honesty in my reasoning: I think that sometimes the inferential distance between you and the people around you is so great that the only way you can try to bridge it is by putting yourself into a role that they can understand. I can give more details over PM but am reluctant to share publicly.
Said Achmiz · 17d · 2 points: This is true in the same technically-correct-but-useless sense that it’s true to say something like “choosing what to do is impossible, since you will in fact do whatever you end up doing”. Unless we believe in substance dualism, or magic, or what have you, we have to conclude that our actions are determined, right? So when you do something, it’s impossible for you to have done something different! Well, ok, but having declared that, we do still have to figure out what to have for dinner, and which outfit to wear to the party, and whether to accept that job offer or not. Neither do I think that talk of “playing roles” is very illuminating here. For a better treatment of the topic, see this recent comment by Viliam [https://www.lesswrong.com/posts/YGB5f7kioXDXNAwLx/privacy-and-manipulation-1?commentId=5Dy3yYiZ2E3WgokAN].
Dacyn · 16d · 1 point: OK, fair enough. So you are asking something like "is it ever ethical to keep a secret?" I would argue yes, because different people are entitled to different parts of your psyche. E.g. what I am willing to share on the internet is different from what I am willing to share in real life. Or am I missing something again?

Or am I missing something again?

Perhaps. Consider this scenario:

Your best friend Carol owns a pastry shop. One day you learn that her store manager, Dave, is embezzling large sums of money from the business. What do you do?

Silly question, obvious answer: tell Carol at once! Indeed, failing to do so would be a betrayal—later, when Carol has to close the shop and file for bankruptcy, her beloved business ruined, and she learns that you knew of Dave’s treachery and said nothing—how can you face her? The friendship is over. It’s quite clear: if you know tha... (read more)

The stereotyped image of AI catastrophe is a powerful, malicious AI system that takes its creators by surprise and quickly achieves a decisive advantage over the rest of humanity.

I think this is probably not what failure will look like, and I want to try to paint a more realistic picture. I’ll tell the story in two parts:

  • Part I: machine learning will increase our ability to “get what we can measure,” which could cause a slow-rolling catastrophe. ("Going out with a whimper.")
  • Part II: ML training, like competitive economies or natural ecosystems, can give rise to “greedy” patterns that try to expand their own influence. Such patterns can ultimately dominate the behavior of a system and cause sudden breakdowns. ("Going out with a bang," an instance of optimization daemons.)

I...

A more recent clarification from Paul Christiano, on how Part 1 might get locked in / how it relates to concerns about misaligned, power-seeking AI:

I also consider catastrophic versions of "you get what you measure" to be a subset/framing/whatever of "misaligned power-seeking." I think misaligned power-seeking is the main way the problem is locked in.

Previously: Online discussion is better than pre-publication peer review, Disincentives for participating on LW/AF

Recently I've noticed a cognitive dissonance in myself, where I can see that my best ideas have come from participating on various mailing lists and forums (such as cypherpunks, extropians, SL4, everything-list, LessWrong and AI Alignment Forum), and I've received a certain amount of recognition as a result, but when someone asks me what I actually do as an "independent researcher", I'm embarrassed to say that I mostly comment on other people's posts, participate in online discussions, and occasionally a new idea pops into my head and I write it down as a blog/forum post of my own. I guess that's because I imagine it doesn't fit most people's image of what a researcher's

...

There's nothing that explicitly prevents people from distilling such discussions into subsequent posts or papers. If people aren't doing that, or are doing that less than they should, that could potentially be solved as a problem that's separate from "should more people be doing FP or traditional research?"

  1. Doing these types of summaries feels like a good place to start out if you are new to doing FP. It is a fairly straightforward task, but provides a lot of value, and helps you grow skills and reputation that will help you when you do more independent wo
... (read more)

Lately I've come to think of human civilization as largely built on the backs of intelligence and virtue signaling. In other words, civilization depends very much on the positive side effects of (not necessarily conscious) intelligence and virtue signaling, as channeled by various institutions. As evolutionary psychologist Geoffrey Miller says, "it’s all signaling all the way down."

A question I'm trying to figure out now is, what determines the relative proportions of intelligence vs virtue signaling? (Miller argued that intelligence signaling can be considered a kind of virtue signaling, but that seems debatable to me, and in any case, for ease of discussion I'll use "virtue signaling" to mean "other kinds of virtue signaling besides intelligence signaling".) It seems that if you get too much of one type

...
Answer by J F · Dec 24, 2021 · 1 point

My sociology intro course covered Pierre Bourdieu's definition of social capital, and a large part of that is signalling class and power.

Cross-posted from Putanumonit.


Basketballism

Imagine that tomorrow everyone on the planet forgets the concept of training basketball skill.

The next day everyone is as good at basketball as they were the previous day, but this talent is assumed to be fixed. No one expects their performance to change over time. No one teaches basketball, although many people continue to play the game for fun.

Geneticists explain that some people are born with better hand-eye coordination and are able to shoot a basketball accurately. Economists explain that highly-paid NBA players have a stronger incentive to hit shots, which explains their improved performance. Psychologists note that people who take more jump shots each day hit a higher percentage and theorize a principal factor of basketball affinity that influences both desire and skill at...

The obvious candidate for the Rationalist Fosbury flop is the development of a good forecasting environment/software/culture/theory, etc.

Suppose the following:

1. Your intelligence is directly proportional to how many useful things you know.

2. Your intelligence increases when you learn things and decreases as the world changes and the things you know go out-of-date.

How quickly the things you know become irrelevant is directly proportional to how many relevant things you know, and therefore proportional to your intelligence I, and inversely proportional to the typical lifetime T of the things you know. Let's use l to denote your rate of learning. Put this together and we get a differential equation: dI/dt = l − I/T.

If we measure intelligence in units of "facts you know" then the proportionality becomes an equality.

The solution to this first order differential equation is an exponential function.

We must solve for the constant of integration. For convenience let's declare that your intelligence is ...
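
A minimal numerical sketch of this model (variable names and parameter values are mine, assuming intelligence starts at zero): with learning rate l and fact lifetime T, the equation dI/dt = l − I/T has the exponential solution I(t) = l·T·(1 − e^(−t/T)), which saturates at l·T.

```python
import math

l = 10.0   # learning rate: facts learned per year (invented number)
T = 5.0    # typical lifetime of a fact, in years (invented number)

def intelligence(t, l=l, T=T):
    """Closed-form solution of dI/dt = l - I/T with I(0) = 0."""
    return l * T * (1 - math.exp(-t / T))

# Sanity check: numerically integrate the same ODE and compare with the formula.
I, dt = 0.0, 0.001
for step in range(int(20 / dt)):        # integrate out to roughly t = 20 years
    I += (l - I / T) * dt
print(round(I, 2), round(intelligence(20), 2))   # both approach l*T = 50
```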

Raemon · 1mo · 4 points: A year later I still wish this post had a title that made it easier to remember the core point.

If the thesis in Unlocking the Emotional Brain (UtEB) is even half-right, it may be one of the most important books that I have read. Written by the psychotherapists Bruce Ecker, Robin Ticic and Laurel Hulley, it claims to offer a neuroscience-grounded, comprehensive model of how effective therapy works. In so doing, it also happens to formulate its theory in terms of belief updating, helping explain how the brain models the world and what kinds of techniques allow us to actually change our minds. Furthermore, if UtEB is correct, it also explains why rationalist techniques such as Internal Double Crux [1 2 3] work.

UtEB’s premise is that much if not most of our behavior is driven by emotional learning. Intense emotions generate unconscious predictive models of how...

David Althaus · 1mo · 6 points: Thanks a lot for this post (and the whole sequence), Kaj! I found it very helpful already. Below is a question I first wanted to ask you via PM, but others might also benefit from an elaboration on this.

You describe the second step of the erasure sequence as follows (emphasis mine):

> Activating, at the same time, the contradictory belief and having the experience of simultaneously believing in two different things which cannot both be true.

When I try this myself, I feel like I cannot actually experience two things simultaneously. There seems to be at least half a second or so between trying to hold the target schema in consciousness and focusing my attention on disconfirming knowledge or experiences. (Generally, I'd guess it's not actually possible to hold two distinct things in consciousness simultaneously; at least that's what I heard various meditation teachers (and perhaps also neuroscientists) claim; you might have even mentioned this in this sequence yourself, if I remember correctly. Relatedly, I heard the claim that multitasking actually involves rapid cycling of one's attention between various tasks, even though it feels from the inside like one is doing several things simultaneously.)

So should I try to minimize the duration between holding the target schema and the disconfirming knowledge in consciousness (potentially aiming to literally feel as though I experience both things at once), or is it enough to just keep cycling back and forth between the two every few seconds? (If yes, what about, say, 30 seconds?)

One issue I suspect I have is that there is a tradeoff between how vividly I can experience the target schema and how rapidly I'm cycling back to the disconfirming knowledge. Or maybe I'm doing something wrong here? Admittedly, I haven't tried this for more than a minute or so before immediately proceeding to spending 5 minutes on formulating this question. :)

Good question, I guess if you look at the transcripts it also looks like at least in some cases two beliefs are actually alternating rather than being literally simultaneous? Though there seem to be some actually simultaneous cases as well.

In general I'd say it probably doesn't matter that much, and that the main thing is to have them both in your general "field of awareness". Even if you are not literally thinking about both at the same time, you still have some sort of awareness of them both being true and their discrepancy "linking up" in some sense. Thi... (read more)

A typical paradigm by which people tend to think of themselves and others is as consequentialist agents: entities who can be usefully modeled as having beliefs and goals, who are then acting according to their beliefs to achieve their goals.

This is often a useful model, but it doesn’t quite capture reality. It’s a bit of a fake framework. Or in computer science terms, you might call it a leaky abstraction.

An abstraction in the computer science sense is a simplification which tries to hide the underlying details of a thing, letting you think in terms of the simplification rather than the details. To the extent that the abstraction actually succeeds in hiding the details, this makes things a lot simpler. But sometimes the abstraction inevitably leaks, as...
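
A standard small example of a leaky abstraction, not taken from the post itself: floating-point numbers abstract over the reals, and the abstraction works until the binary representation underneath leaks through:

```python
# Floats abstract over real numbers; usually you can ignore the binary
# representation underneath, but it leaks in cases like this:
print(0.1 + 0.2 == 0.3)        # False: binary floating-point details leak
print(0.1 + 0.2)               # 0.30000000000000004

# Working around the leak means acknowledging the hidden details:
import math
print(math.isclose(0.1 + 0.2, 0.3))   # True
```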

Hey, I had an experience that was very much influenced by this sequence, I think. Any chance that you (or someone else with more context than me) would take a look?

https://www.lesswrong.com/posts/dmhZsRDBuqrDJF6zB/a-meditative-experience

(cross posted from my personal blog)

Since middle school I've generally thought that I'm pretty good at dealing with my emotions, and a handful of close friends and family have made similar comments. Now I can see that though I was particularly good at never flipping out, I was decidedly not good at "healthy emotional processing". I'll explain later what I think "healthy emotional processing" is; right now I'm using quotes to indicate "the thing that's good to do with emotions". Here it goes...

Relevant context

When I was a kid I adopted a strong, "Fix it or stop complaining about it" mentality. This applied to stress and worry as well. "Either address the problem you're worried about or quit worrying about it!" Also being a kid, I had a limited...

Can anyone relate to having 100% confidence but like 0% social skills?

(The title of this post is a joking homage to one of Gary Marcus’ papers.)

I’ve discussed GPT-2 and BERT and other instances of the Transformer architecture a lot on this blog.  As you can probably tell, I find them very interesting and exciting.  But not everyone has the reaction I do, including some people who I think ought to have that reaction.

Whatever else GPT-2 and friends may or may not be, I think they are clearly a source of fascinating and novel scientific evidence about language and the mind.  That much, I think, should be uncontroversial.  But it isn’t.

(i.)

When I was a teenager, I went through a period where I was very interested in cognitive psychology and psycholinguistics.  I first got interested via Steven Pinker’s popular books – this was

...

Maybe add a disclaimer at the start of the post?

Conor Sullivan · 2mo · 3 points: I think Gary Marcus wanted AI research to uncover lots of interesting rules like "in English, you make verbs past tense by adding -ed, except ..." because he wants to know what the rules are, and because engineering following psycholinguistic research is much more appealing to him than the other way around. Machine learning (without interpretability) doesn't give us any tools to learn what the rules are.

Followup to: What Evidence Filtered Evidence?

In "What Evidence Filtered Evidence?", we are asked to consider a scenario involving a coin that is either biased to land Heads 2/3rds of the time, or Tails 2/3rds of the time. Observing Heads is 1 bit of evidence for the coin being Heads-biased (because the Heads-biased coin lands Heads with probability 2/3, the Tails-biased coin does so with probability 1/3, the likelihood ratio of these is , and ), and analogously and respectively for Tails.

If such a coin is flipped ten times by someone who doesn't make literally false statements, who then reports that the 4th, 6th, and 9th flips came up Heads, then the update to our beliefs about the coin depends on what algorithm the not-lying[1] reporter used to...

Ege Erdil · 2mo · 10 points: This is a rather pedantic remark that doesn't have much relevance to the primary content of the post (EDIT: it's also based on a misunderstanding of what the post is actually doing - I missed that an explicit prior is specified, which invalidates the concern raised here), but this is not how Bayesian updating would work in this setting.

As I've explained in my post [https://www.lesswrong.com/posts/ea7CGqF3pmqpebogK/laplace-s-rule-of-succession] about Laplace's rule of succession, if you start with a uniform prior over [0,1] for the probability of the coin coming up heads and you observe a sequence of N heads in succession, you would update to a posterior of Beta(N+1,1), which has mean (N+1)/(N+2). For N=3 that would be 4/5 rather than 8/9.

I haven't formalized this, but one problem with the entropy approach here is that the distinct bits of information you get about the coin are actually not independent, so they are worth less than one bit each. They aren't independent because if you know some of them came up heads, your prior that the other ones also came up heads will be higher, since you'll infer that the coin is likely to have been biased in the direction of coming up heads.

To not leave this totally up in the air: if you think of the nth heads as having an information content of log2((n+1)/n) bits, then the total information you get from n heads is something like ∑_{k=1}^{n} log2((k+1)/k) = log2(n+1) bits instead of n bits. Neglecting this effect leads you to make much more extreme inferences than would be justified by Bayes' rule.
Zack_M_Davis · 2mo · 5 points: Thanks for this analysis! However, I'm not starting with a uniform prior. The post specifies "a coin that is either biased to land Heads 2/3rds of the time, or Tails 2/3rds of the time"—that is (and maybe I should have been more explicit), I'm saying our prior belief about the coin's bias is just the discrete distribution {"1/3 Heads, 2/3 Tails": 0.5, "2/3 Heads, 1/3 Tails": 0.5}. I agree that a beta prior would be more "realistic" in the sense of applying to a wider range of scenarios (your uncertainty about a parameter is usually continuous, rather than "it's either this, or it's that, with equal probability"), but I wanted to make the math easy on myself and my readers.

Ah, I see. I missed that part of the post for some reason.

In this setup the update you're doing is fine, but I think measuring the evidence for the hypothesis in terms of "bits" can still mislead people here. You've tuned your example so that the likelihood ratio is equal to two and there are only two possible outcomes, while in general there's no reason for those two values to be equal.
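
For readers who want the two calculations side by side, here is a short sketch (my own code) of the numbers under discussion: the post's discrete two-hypothesis prior gives 8/9 after three Heads, while the uniform Beta prior Ege describes gives a posterior mean bias of 4/5.

```python
from fractions import Fraction

# Discrete prior from the post: the coin is either Heads-biased (P(H) = 2/3)
# or Tails-biased (P(H) = 1/3), each with prior probability 1/2.
p_heads_given_Hbias = Fraction(2, 3)
p_heads_given_Tbias = Fraction(1, 3)
prior_Hbias = Fraction(1, 2)

n_heads = 3
likelihood_H = p_heads_given_Hbias ** n_heads
likelihood_T = p_heads_given_Tbias ** n_heads
posterior_Hbias = (prior_Hbias * likelihood_H) / (
    prior_Hbias * likelihood_H + (1 - prior_Hbias) * likelihood_T)
print(posterior_Hbias)           # 8/9: posterior probability of the Heads-biased hypothesis

# Uniform prior over the bias (Laplace's rule of succession): after N heads
# and no tails, the posterior mean bias is (N+1)/(N+2).
N = 3
print(Fraction(N + 1, N + 2))    # 4/5: posterior mean of the bias
```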

This is Part I of the Specificity Sequence

Imagine you've played ordinary chess your whole life, until one day the game becomes 3D. That's what unlocking the power of specificity feels like: a new dimension you suddenly perceive all concepts to have. By learning to navigate the specificity dimension, you'll be training a unique mental superpower. With it, you'll be able to jump outside the ordinary course of arguments and fly through the conceptual landscape. Fly, I say!

"Acme exploits its workers!"

Want to see what a 3D argument looks like? Consider a conversation I had the other day when my friend “Steve” put forward a claim that seemed counter to my own worldview:

Steve: Acme exploits its workers by paying them too little!

We were only one sentence into the conversation...

I think you are on the right track.

The problem is, "specificity" has to be handled in a really specific way, and the intention has to be the desire to get from the realm of unclear arguments to clear insight.

If you see discussions as a chess game, you're already sending your brain in the wrong direction, to the goal of "winning" the conversation, which is something fundamentally different than the goal of clarity.

Just as specificity remains abstract here and is therefore misunderstood, one would have to ask: What exactly is specificity supposed to be?

Linguist... (read more)

The generalized efficient markets (GEM) principle says, roughly, that things which would give you a big windfall of money and/or status, will not be easy. If such an opportunity were available, someone else would have already taken it. You will never find a $100 bill on the floor of Grand Central Station at rush hour, because someone would have picked it up already.

One way to circumvent GEM is to be the best in the world at some relevant skill. A superhuman with hawk-like eyesight and the speed of the Flash might very well be able to snag $100 bills off the floor of Grand Central. More realistically, even though financial markets are the ur-example of efficiency, a handful of firms do make impressive amounts of money by...

Alexander · 2mo · 8 points: Firstly, I wonder how this would apply to the “meta-ness” of skills. The first kind of dimensionality is for the distinct skills, e.g. macroeconomics, tennis, cooking, etc. Another kind of dimensionality is for how meta the skills are, i.e. how foundational and widely applicable they are across a skills “hierarchy”. If you choose to improve the more foundational skills (e.g. computing, probabilistic reasoning, interpersonal communication) then you’ll be able to have really high dimensionality by leveraging those foundational skills efficiently across many other dimensions.

Secondly, I wonder how we might reason about diminishing returns in terms of the number of dimensions we choose to compete on. I can choose to read the Wikipedia overviews of 1,000,000 different fields, which will allow me to reach the Pareto frontier in this 1,000,000-dimensional graph. However, this isn’t practically useful.

PS: this was an excellent post and explained a fascinating concept well. I've been binge-reading a lot of your posts on LessWrong and finding them very insightful.
johnswentworth · 2mo · 11 points: That... actually sounds extremely useful; this is a great idea. The closest analogue I've done is read through a college course catalogue from cover to cover, which was extremely useful. Very good way to find lots of unknown unknowns.
AllAmericanBreakfast · 2mo · 7 points: To both of you, I say “useful relative to what?” Opportunity cost is the baseline for judging that. Are you excited to read N field overviews over your next best option?

Good points by both of you. I like the idea of discovering unknown unknowns.

I should've clarified what I meant by 'useful'. The broader point I was going for is that you can always become Pareto 'better' by arbitrarily choosing to compete along ever more dimensions. As you said, once we define a goal, then we can decide whether competing along one more dimension is better than doing something else or not.
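
A toy sketch (my own code and numbers) of the effect being discussed: as the number of skill dimensions grows, a random person is less and less likely to be Pareto-dominated by anyone else, so merely being "on the frontier" becomes cheap.

```python
import random

def is_dominated(person, others):
    """True if some other skill vector is >= on every dimension and differs somewhere."""
    return any(all(o >= p for o, p in zip(other, person)) and other != person
               for other in others)

random.seed(0)
population = 1000
for dims in (1, 2, 5, 10):
    people = [tuple(random.random() for _ in range(dims)) for _ in range(population)]
    dominated = sum(is_dominated(p, people) for p in people)
    print(dims, "dimensions:", population - dominated, "people on the Pareto frontier")
```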

The justification for modelling real-world systems as “agents” - i.e. choosing actions to maximize some utility function - usually rests on various coherence theorems. They say things like “either the system’s behavior maximizes some utility function, or it is throwing away resources” or “either the system’s behavior maximizes some utility function, or it can be exploited” or things like that. Different theorems use slightly different assumptions and prove slightly different things, e.g. deterministic vs probabilistic utility function, unique vs non-unique utility function, whether the agent can ignore a possible action, etc.

One theme in these theorems is how they handle “incomplete preferences”: situations where an agent does not prefer one world-state over another. For instance, imagine an agent which prefers pepperoni over mushroom pizza when it has pepperoni,...

The example you give has a pretty simple lattice of preferences, which lends itself to illustrations but which might create some misconceptions about how the subagent model should be formalized. For example, in your example you assume that the agents' preferences are orthogonal (one cares about pepperoni, the other about mushrooms, and each is indifferent to the opposite direction), the agents have equal weighting in the decision-making, the lattice is distributive... Compensating for these factors, there are many ways that a given 'weak utility' can be ex... (read more)
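
Here is a minimal sketch (my own toy encoding, using the post's simplifying assumptions of two orthogonal, equally weighted subagents) of how a committee of subagents produces incomplete preferences: the system prefers one state over another only when no subagent objects and at least one strictly gains.

```python
# Toy model of incomplete preferences from a committee of subagents.
# States are (pepperoni slices, mushroom slices); each subagent cares about one axis.
# (A simplified sketch under the post's assumptions, not a general formalization.)

def committee_prefers(x, y):
    """x is preferred to y iff no subagent is worse off and at least one is better off."""
    better_or_equal = all(xi >= yi for xi, yi in zip(x, y))
    strictly_better = any(xi > yi for xi, yi in zip(x, y))
    return better_or_equal and strictly_better

a = (2, 0)   # two pepperoni slices
b = (0, 2)   # two mushroom slices
c = (2, 1)   # at least as much of everything as a, strictly more mushroom

print(committee_prefers(c, a))   # True: both subagents weakly agree, one strictly gains
print(committee_prefers(a, b))   # False: the preference is incomplete...
print(committee_prefers(b, a))   # False: ...neither trade is approved
```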

1. A group wants to try an activity that really requires a lot of group buy in. The activity will not work as well if there is doubt that everyone really wants to do it. They establish common knowledge of the need for buy in. They then have a group conversation in which several people make comments about how great the activity is and how much they want to do it. Everyone wants to do the activity, but is aware that if they did not want to do the activity, it would be awkward to admit. They do the activity. It goes poorly.

2. Alice strongly wants to believe A. She searches for evidence of A. She implements a biased search, ignoring evidence against A. She finds justifications...

I want to write a longer version of this in the future, but I'm going to take a while to write a comment to anchor it in my mind while it's fresh.

Many social decisions are about forming mutually advantageous teams. Bob and Charlie want to team up because they both predict that this will give them some selfish benefit. If this is a decision at all, it's because there's some cost that must be weighed against some benefit. For example, if Bob is running a small business, and considers hiring Charlie, there'll be various costs and risks for onboarding Charlie,... (read more)

LessWrong is currently doing a major review of 2018 — looking back at old posts and considering which of them have stood the tests of time. It has three phases:

  • Nomination (ends Dec 1st at 11:59pm PST)
  • Review (ends Dec 31st)
  • Voting on the best posts (ends January 7th)

Authors will have a chance to edit posts in response to feedback, and then the moderation team will compile the best posts into a physical book and LessWrong sequence, with $2000 in prizes given out to the top 3-5 posts and up to $2000 given out to people who write the best reviews.

Helpful Links:


This is the first week of the LessWrong 2018 Review – an experiment in...

Yes that is my account, but I no longer have access to that email address, so can't get a standard password reset. I've been out of the LW community for a bit due to baby (single parenthood is hard).
Maybe there's a way to get back my access anyway - I'll look into it, but it's not high priority ;)

[Epistemic Status: Scroll to the bottom for my follow-up thoughts on this from months/years later.]

Early this year, Conor White-Sullivan introduced me to the Zettelkasten method of note-taking. I would say that this significantly increased my research productivity. I’ve been saying “at least 2x”. Naturally, this sort of thing is difficult to quantify. The truth is, I think it may be more like 3x, especially along the dimension of “producing ideas” and also “early-stage development of ideas”. (What I mean by this will become clearer as I describe how I think about research productivity more generally.) However, it is also very possible that the method produces serious biases in the types of ideas produced/developed, which should be considered. (This would be difficult to quantify at the best of...

So do you recommend Workflowy, Dynalist, or Roam?
