
SlateStarCodex, EA, and LW helped me get out of the psychological, spiritual, political nonsense in which I was mired for a decade or more.

I started out feeling a lot smarter. I think it was community validation + the promise of mystical knowledge.

Now I've started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. Worst of all, it's given me ambition to do original research. That's a demanding task, one where you have to accept feeling stupid all the time.

But I still look down that old road and I'm glad I'm not walking down it anymore.

Too smart for your own good. You were supposed to believe it was about rationality. Now we have to ban you and erase your comment before other people can see it. :D

Yeah, same here.

Things I come to LessWrong for:

  • An outlet and audience for my own writing
  • Acquiring tools of good judgment and efficient learning
  • Practice at charitable, informal intellectual argument
  • Distraction
  • A somewhat less mind-killed politics

Cons: I'm frustrated that I so often play Devil's advocate, or else make up justifications for arguments under the principle of charity. Conversations feel profit-oriented and conflict-avoidant. Overthinking to the point of boredom and exhaustion. My default state toward books and people is bored skepticism and political suspicion. I'm less playful than I used to be.

Pros: My own ability to navigate life has grown. My imagination feels almost telepathic, in that I have ideas nobody I know has ever considered, and discover that there is cutting edge engineering work going on in that field that I can be a part of, or real demand for the project I'm developing. I am more decisive and confident than I used to be. Others see me as a leader.

Some people optimize for drama. It is better to put your life in order, which often means getting the boring things done. And then, when you need some drama, you can watch a good movie. Well, it is not completely a dichotomy. There is also some fun to be found e.g. in serious books. Not the same intensity as when you optimize for drama, but still. It's like when you stop eating refined sugar, and suddenly you notice that the fruit tastes sweet.

Chemistry trick

Once you've learned to visualize, you can employ my chemistry trick to learn molecular structures. Here's the structure of Proline (from Sigma Aldrich's reference).

Before I learned how to visualize, I would try to remember this structure by "flashing" the whole 2D representation in my head, essentially trying to see a duplicate of the image above in my head.

Now, I can do something much more engaging and complex.

I visualize the molecule as a landscape, and myself as standing on one of the atoms. For example, perhaps I start by standing on the oxygen at the end of the double bond.

I then take a walk around the molecule. Different bonds feel different - a single bond is a path, a double bond a ladder, and a triple bond is like climbing a chain-link fence. From each new atomic position, I can see where the other atoms are in relation to me. As I walk around, I get practice in recalling which atom comes next in my path.

As you can imagine, this is a far richer and more engaging form of mental practice than just trying to reproduce static 2D images in my head.

A few years ago, I felt myself to have almost no ability to visualize. Now, I am able to do this with relative ease. So...
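The walk can be sketched as a graph traversal. This is a minimal sketch: the atom labels (N, CA, CB, ...) and the bond list are my own encoding of proline's standard heavy-atom structure (hydrogens omitted), and the "terrain" names follow the mapping in the text.

```python
# The "molecular walk" as a graph traversal. Bonds follow the standard
# structure of proline (C5H9NO2, hydrogens omitted). Terrain mapping
# mirrors the text: single bond = path, double bond = ladder,
# triple bond = chain-link fence.

PROLINE_BONDS = {
    ("N",  "CA"): 1,   # ring: N - alpha carbon
    ("CA", "CB"): 1,
    ("CB", "CG"): 1,
    ("CG", "CD"): 1,
    ("CD", "N"):  1,   # closes the five-membered ring
    ("CA", "C"):  1,   # alpha carbon - carboxyl carbon
    ("C",  "O"):  2,   # double-bonded carboxyl oxygen
    ("C",  "OH"): 1,   # hydroxyl oxygen
}

TERRAIN = {1: "path", 2: "ladder", 3: "chain-link fence"}

def neighbors(atom):
    """Atoms one bond away, with the 'terrain' of each bond."""
    out = {}
    for (a, b), order in PROLINE_BONDS.items():
        if atom == a:
            out[b] = TERRAIN[order]
        elif atom == b:
            out[a] = TERRAIN[order]
    return out

def walk(route):
    """Narrate a walk along consecutive atoms in the route."""
    steps = []
    for here, there in zip(route, route[1:]):
        terrain = neighbors(here)[there]  # KeyError if atoms aren't bonded
        steps.append(f"{here} -> {there} via {terrain}")
    return steps

# Start on the double-bonded oxygen, as in the example above,
# and circle the ring back to the alpha carbon.
for step in walk(["O", "C", "CA", "N", "CD", "CG", "CB", "CA"]):
    print(step)
```

The first step climbs the "ladder" of the C=O double bond; every later step is an ordinary single-bond "path."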

I was able to memorize the structures of all 20 amino acids pretty easily and pleasantly in a few hours' practice over the course of a day using this technique.
I imagine a computer game where different types of atoms are spheres of different color (maybe also size; at least H should be tiny), connected the way you described and having the correct 3D structure, so you walk on them like an astronaut. Now there just needs to be something to do in that game; I'm not sure what. I guess if you can walk the atoms, so can some critters you need to kill, or perhaps there are some items to collect. Play the game a few times, and you will remember the molecules (because people usually remember useless data from computer games they played). Advanced version: chemical reactions, where you need to literally cut the atomic bonds and bind new atoms.
I haven't seen games using the precise mechanic you describe. However, there are games/simulations to teach chemistry. They ask you to label parts of atoms, or to act out the steps of a chemical reaction. I'm open to these game ideas, but skeptical, for reasons I'll articulate in a later shortform.

Intellectual Platforms

My most popular LW post wasn't a post at all. It was a comment on John Wentworth's post asking "what's up with Monkeypox?"

Years before, in the first few months of COVID, I took a considerable amount of time to build a scorecard of risk factors for a pandemic, and backtested it against historical pandemics. At the time, the first post received a lukewarm reception, and all my historical backtesting quickly fell off the frontpage.

But when I was able to bust it out, it paid off (in karma). People were able to see the relevance to an issue they cared about, and it was probably a better answer in this time and place than they could have obtained almost anywhere else.

Devising the scorecard and doing the backtesting built an "intellectual platform" that I can now use going forward whenever there's a new potential pandemic threat. I liken it to engineering platforms, which don't have an immediate payoff, but are a long-term investment.

People won't necessarily appreciate the hard work of building an intellectual platform when you're assembling it. And this can make it feel like the platform isn't worthwhile: if people can't see the obvious importance of what I'm doing,...
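The platform idea can be sketched as a reusable scoring function: define it once, backtest it on history, then re-apply it to each new threat. The factor names and weights below are invented for illustration; the original post's actual scorecard is not reproduced here.

```python
# Hypothetical pandemic-risk scorecard. Factor names and weights are
# assumptions for illustration, not the author's actual scorecard.
# The point is the reusable "platform": one function, applied to
# historical backtests and to each new outbreak alike.

WEIGHTS = {
    "r0_above_2": 3,
    "asymptomatic_spread": 3,
    "novel_pathogen": 2,
    "respiratory_transmission": 2,
    "case_fatality_above_1pct": 1,
}

def risk_score(factors):
    """Sum the weights of the risk factors present (max 11 here)."""
    return sum(WEIGHTS[f] for f in factors if factors[f])

covid_like = {
    "r0_above_2": True,
    "asymptomatic_spread": True,
    "novel_pathogen": True,
    "respiratory_transmission": True,
    "case_fatality_above_1pct": True,
}
mild_outbreak = {
    "r0_above_2": True,
    "asymptomatic_spread": False,
    "novel_pathogen": False,
    "respiratory_transmission": True,
    "case_fatality_above_1pct": False,
}
print(risk_score(covid_like))     # → 11
print(risk_score(mild_outbreak))  # → 5
```

The up-front cost is in choosing and weighting the factors; each later application is nearly free, which is what makes it a platform rather than a one-off post.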

Math is training for the mind, but not like you think

Just a hypothesis:

People have long thought that math is training for clear thinking. Just one version of this meme that I scooped out of the water:

“Mathematics is food for the brain,” says math professor Dr. Arthur Benjamin. “It helps you think precisely, decisively, and creatively and helps you look at the world from multiple perspectives . . . . [It’s] a new way to experience beauty—in the form of a surprising pattern or an elegant logical argument.”

But math doesn't obviously seem to be the only way to practice precision, decision, creativity, beauty, or broad perspective-taking. What about logic, programming, rhetoric, poetry, anthropology? This sounds like marketing.

As I've studied calculus, coming from a humanities background, I'd argue it differently.

Mathematics shares with a small fraction of other related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it. Miss any link in the chain and the whole thing falls apart.

It can therefore serve as a more reliable signal, to self and others, of one's own learning capacity.

Experiencing a subject like that can be training for the mind, because becoming successful at it requires cultivating good habits of study and expectations for coherence.

Math is interesting in this regard because it is both very precise and there's no clear-cut way of checking your solution except running it by another person (or becoming so good at math that you can tell whether your proof is bullshit). Programming, OTOH, gives you clear feedback loops.
In programming, that's true at first. But as projects increase in scope, there's a risk of using an architecture that works when you’re testing, or for your initial feature set, but will become problematic in the long run. For example, I just read an interesting article on how a project used a document store database (MongoDB), which worked great until their client wanted the software to start building relationships between data that had formerly been “leaves on the tree.” They ultimately had to convert to a traditional relational database. Of course there are parallels in math, as when you try a technique for integrating or parameterizing that seems reasonable but won’t actually work.
Gordon Seidoh Worley:
Yep. Having worked both as a mathematician and a programmer, the idea of objectivity and clear feedback loops starts to disappear as the complexity amps up and you move away from the learning environment. It's not unusual to discover incorrect proofs out on the fringes of mathematical research that have not yet become part of the canon, nor is it uncommon (in fact, it's very common) to find running production systems where the code works by accident due to some strange unexpected confluence of events.
Feedback, yes. Clarity... well, sometimes it's "yes, it works" today, and "actually, it doesn't if the parameter is zero and you called the procedure on the last day of the month" when you put it in production.
Proof verification is meant to minimize this gap between proving and programming.
The thing I like about math is that it gives the feeling that the answers are in the territory. (Kinda ironic, when you think about what the "territory" of math is.) Like, either you are right or you are wrong, it doesn't matter how many people disagree with you and what status they have. But it also doesn't reward the wrong kind of contrarianism. Math allows you to make abstractions without losing precision. "A sum of two integers is always an integer." Always; literally. Now with abstractions like this, you can build long chains out of them, and it still works. You don't create bullshit accidentally, by constructing a theory from approximations that are mostly harmless individually, but don't resemble anything in the real world when chained together. Whether these are good things, I suppose different people would have different opinions, but it definitely appeals to my aspie aesthetics. More seriously, I think that even when in real world most abstractions are just approximations, having an experience with precise abstractions might make you notice the imperfection of the imprecise ones, so when you formulate a general rule, you also make a note "except for cases such as this or this". (On the other hand, for the people who only become familiar with math as a literary genre, it might have an opposite effect: they may learn that pronouncing abstractions with absolute certainty is considered high-status.)
Eli Tyre:
Isn't programming even more like this? I could get squidgy about whether a proof is "compelling", but when I write a program, it either runs and does what I expect, or it doesn't, with 0 wiggle room.
Sometimes programming is like that, but then I get all anxious that I just haven't checked everything thoroughly! My guess is this has more to do with whether or not you're doing something basic or advanced, in any discipline. It's just that you run into ambiguity a lot sooner in the humanities.
It helps you to look at the world from multiple perspectives: it gets you into a position to make a claim like that solely based on anecdotal evidence and wishful thinking.

I ~completely rewrote the Wikipedia article for the focus of my MS thesis, aptamers.

Please tell me what you liked, and feel free to give constructive feedback!

While I do think aptamers have relevance to rationality, and will post about that at some point, I'm mainly posting this here because I'm proud of the result and wanted to share one of the curiosities of the universe for your reading pleasure.

You know what "chunking" means in memorization? It's also something you can do to understand material before you memorize it. It's high-leverage in learning math.

Take the equation for a two-sample t score:

t = [(x̄₁ − x̄₂) − (μ₁ − μ₂)] / √(s₁²/n₁ + s₂²/n₂)

That's more symbolic relationships than you can fit into your working memory when you're learning it for the first time. You need to chunk it. Here's how I'd break it into chunks:

Chunk 1: x̄₁ − x̄₂

Chunk 2: μ₁ − μ₂

Chunk 3: s₁²/n₁ + s₂²/n₂

Chunk 4: [(Chunk 1) - (Chunk 2)]/sqrt(Chunk 3)

The most useful insight here is learning to see a "composite" as a "unitary." If we inspect Chunk 1 and see it as two variables and a minus sign, it feels like an arbitrary collection of three things. In the back of the mind, we're asking "why not a plus sign? why not swap out x1 for... something else?" There's a good mathematical answer, of course, but that doesn't necessarily stop the brain from firing off those questions during the learning process, when we're still trying to wrap our heads around these concepts.

But if we can see x̄₁ − x̄₂ as a chunk, a thing with a unitary identity, it lets us think with it in a more powerful way. Imagine if you were running a cafe, and you didn't perceive your dishes as "unitary." A pie wasn't a pie, it was a pan f...
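The chunks can be written out as named functions, so the final formula reads as three unitary pieces. This assumes the pictured formula is the standard two-sample (Welch) t score, which the chunk structure above suggests; the original figure itself is missing here.

```python
import math

# Chunked two-sample t score (Welch form). Each chunk gets its own
# name, so the final formula is a composition of unitary pieces
# rather than a pile of symbols.

def chunk1(xbar1, xbar2):      # observed difference in sample means
    return xbar1 - xbar2

def chunk2(mu1, mu2):          # hypothesized difference in means
    return mu1 - mu2

def chunk3(s1, n1, s2, n2):    # variance of the difference in means
    return s1**2 / n1 + s2**2 / n2

def t_score(xbar1, xbar2, mu1, mu2, s1, n1, s2, n2):
    # Chunk 4: [(Chunk 1) - (Chunk 2)] / sqrt(Chunk 3)
    return (chunk1(xbar1, xbar2) - chunk2(mu1, mu2)) / math.sqrt(
        chunk3(s1, n1, s2, n2)
    )

# Null hypothesis of equal means (mu1 - mu2 = 0):
print(t_score(5.0, 4.0, 0.0, 0.0, 2.0, 100, 2.0, 100))
```

Reading the last function, you never see "two variables and a minus sign"; you see three named chunks assembled in one line.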

A Nonexistent Free Lunch

  1. More Wrong

On an individual PredictIt market, sometimes you can find a set of "No" contracts whose prices (1 share of each) add up to less than the guaranteed gross take.

Toy example:

  • Will A get elected? No = $0.30
  • Will B get elected? No = $0.70
  • Will C get elected? No = $0.90
  • Minimum guaranteed pre-fee winnings = $2.00
  • Total price of 1 share of each No contract = $1.90
  • Minimum guaranteed pre-fee profits = $0.10

There's always a risk of black swans. PredictIt could get hacked. You might execute the trade improperly. Unexpected personal expenses might force you to sell your shares and exit the market prematurely.

But excluding black swans, I thought that as long as three conditions held, you could make free money on markets like these. The three conditions were:

  1. You take PredictIt's profit fee (10%) into account
  2. You can find enough such "free money" opportunities that your profits compensate for PredictIt's withdrawal fee (5% of the total withdrawal)
  3. You take into account the opportunity cost of investing in the stock market (average of 10% per year)

In the toy example above, I calculated that you'd lose $0.10 x 10% = $0.01 to PredictIt's profit fee if you bought 1 of each "...
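The toy calculation can be sketched in a few lines. One assumption to flag: following the calculation above, the 10% fee is applied to net profit; the withdrawal fee is left out, since condition 2 handles it across many such trades.

```python
# Sketch of the toy "free money" check above. Applies the 10% profit
# fee to net profit, as the toy calculation does, and ignores the 5%
# withdrawal fee (condition 2 amortizes that across many trades).

PROFIT_FEE = 0.10

def free_money(no_prices):
    """One 'No' share of each contract; exactly one candidate can win."""
    cost = sum(no_prices)                # $1.90 in the toy example
    gross = float(len(no_prices) - 1)    # winner's No pays $0, every other No pays $1
    profit = gross - cost                # $0.10
    fee = PROFIT_FEE * max(profit, 0.0)  # $0.01, as in the toy calculation
    return profit - fee                  # pre-withdrawal-fee profit

print(round(free_money([0.30, 0.70, 0.90]), 4))  # → 0.09
```

If the prices sum to more than the guaranteed gross, the function simply returns the (negative) loss, with no profit fee charged.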

Simulated weight gain experiment, day 2

Background: I'm wearing a weighted vest to simulate the feeling of 50 pounds (23 kg) of weight gain and loss. The plan is to wear this vest for about 20 days, for as much of the day as is practical. I started with zero weight, and will increase it in 5 pound (~2 kg) increments daily to 50 pounds, then decrease it by 5 pounds daily until I'm back to zero weight.

So far, the main challenge of this experiment has been social. The weighted vest looks like a bulletproof vest, and I'm a 6' tall white guy with a buzzcut. My girlfriend laughed just imagining what I must look like (we have a long-distance relationship, so she hasn't seen me wearing it). My housemate's girlfriend gasped when I walked in through the door.

As much as I'd like to wear this continuously as planned, I just don't know if it will work to wear this to the lab or to classes in my graduate school. If the only problem was scaring people, I could mitigate that by emailing my fellow students and the lab and telling them what I'm doing and why. However, I'm also in the early days of setting up my MS thesis research in a big, professional lab that has invested a lot of time and money ...

The Rationalist Move Club

Imagine that the Bay Area rationalist community did all want to move. But no individual was sure enough that others wanted to move to invest energy in making plans for a move. Nobody acts like they want to move, and the move never happens.

Individuals are often willing to take some level of risk and make some sacrifice up-front for a collective goal with big payoffs. But not too much, and not forever. It's hard to gauge true levels of interest based off attendance at a few planning meetings.

Maybe one way to solve this is to ask for escalating credible commitments.

A trusted individual sets up a Rationalist Move Fund. Everybody who's open to the idea of moving puts $500 in a short-term escrow. This makes them part of the Rationalist Move Club.

If the Move Club grows to a certain number of members within a defined period of time (say 20 members by March 2020), then they're invited to planning meetings for a defined period of time, perhaps one year. This is the first checkpoint. If the Move Club has not grown to that size by then, the money is returned and the project is cancelled.

By the end of the pre-defined planning period, there could be one of three majority...
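The escrow scheme is essentially an assurance contract, and the first checkpoint can be sketched in code. The class and method names are hypothetical; the $500 deposit, 20-member threshold, and March 2020 deadline come from the example above.

```python
# Minimal sketch of the Move Fund's first checkpoint: $500 deposits
# held in escrow, refunded unless 20 members join by the deadline.
# Class and method names are hypothetical illustrations.

from datetime import date

class MoveFund:
    DEPOSIT = 500
    THRESHOLD = 20

    def __init__(self, deadline):
        self.deadline = deadline
        self.members = []

    def join(self, name, today):
        """Deposits are only accepted before the deadline."""
        if today <= self.deadline:
            self.members.append(name)

    def resolve(self, today):
        """After the deadline: planning begins, or everyone is refunded."""
        if today < self.deadline:
            return "pending"
        if len(self.members) >= self.THRESHOLD:
            return "planning phase"  # escrow held through the planning year
        return f"refund ${self.DEPOSIT} to each of {len(self.members)} members"

fund = MoveFund(deadline=date(2020, 3, 1))
for i in range(5):
    fund.join(f"member{i}", today=date(2020, 1, 15))
print(fund.resolve(today=date(2020, 3, 1)))  # only 5 of 20 joined: refund
```

The refund branch is what makes the commitment credible but bounded: nobody risks more than the deposit, and only if enough others also commit.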

What gives LessWrong staying power?

On the surface, it looks like this community should dissolve. Why are we attracting bread bakers, programmers, stock market investors, epidemiologists, historians, activists, and parents?

Each of these interests has a community associated with it, so why are people choosing to write about their interests in this forum? And why do we read other people's posts on this forum when we don't have a prior interest in the topic?

Rationality should be the art of general intelligence. It's what makes you better at everything. If practice is the wood and nails, then rationality is the blueprint. 

To determine whether or not we're actually studying rationality, we need to check whether or not it applies to everything. So when I read posts applying the same technique to a wide variety of superficially unrelated subjects, it confirms that the technique is general, and helps me see how to apply it productively.

This points at a hypothesis, which is that general intelligence is a set of defined, generally applicable techniques. They apply across disciplines. And they apply across problems within disciplines. So why aren't they generally known and appreciated? Sh...

What gives LessWrong staying power?

For me, it's the relatively high epistemic standards combined with relative variety of topics. I can imagine a narrowly specialized website with no bullshit, but I haven't yet seen a website that is not narrowly specialized and does not contain lots of bullshit. Even most smart people usually become quite stupid outside the lab. Less Wrong is a place outside the lab that doesn't feel painfully stupid. (For example, the average intelligence at Hacker News seems quite high, but I still regularly find upvoted comments that make me cry.)

Yeah, Less Wrong seems to be a combination of project and aesthetic. Insofar as it's a project, we're looking for techniques of general intelligence, partly by stress-testing them on a variety of topics. As an aesthetic, it's a unique combination of tone, length, and variety + familiarity of topics that scratches a particular literary itch.

School teaches terrible reading habits.

When you're assigned 30 pages of a textbook, the diligent students read them, then move on to other things. A truly inquisitive person would struggle to finish those 30 pages, because there are almost certainly going to be many more interesting threads they want to follow within those pages.

As a really straightforward example, let's say you commit to reading a review article on cell senescence. Just forcing your way through the paper, you probably won't learn much. What will make you learn is looking at the citations as you go.

I love going 4 layers deep. I try to understand the mechanisms that underpin the experiments that generated the data that informed the facts that inform the theories that the review article is covering. When I do this, it suddenly transforms the review article from dry theory to something that's grounded in memories of data and visualizations of experiments. I have a "simulated lived experience" to map onto the theory. It becomes real. 

I think that for anything except scholarship, those aren't terrible. I'd attack them from the other side: they aren't shallow enough. In industry, most often you just want to find some specific piece of information, so reading the whole 30 pages is a waste of time, as is following your deep curiosity down into rabbit holes.
I agree with you. It's a good point that I should have clarified this is for a specific use case: rapidly scouting out a field that you're unfamiliar with. When I take this approach, I also do not read entire papers. I just read enough to get the gist and find the next most interesting link.

So for example, I am preparing for a PhD, where I'll probably focus on aging research. I need to understand what's going on broadly in the field. Obviously I can't read everything, and as I have no specific project, there are no particular known-in-advance bits of information I need to extract. I don't yet have a perfect account of what exactly you "learn" from this; at the speed I read, I don't remember more than a tiny fraction of the details. My best explanation is that each paper you skim gives you context for understanding the next one.

As you go through this process, you come away with some takeaway highlights and things to look at next. For example, the last time I went through the literature on senescence, I got into the antagonistic pleiotropy literature. Most of it is way too deep for me at this point, but I took away the basic insights and epistemics: models consistently show that aging is the only stable equilibrium outcome of evolution, that it's fueled by genes that confer a reproductive advantage early in life but a disadvantage later in life, and that the late-life disadvantages should not be presumed to be intrinsically beneficial; they are the downside of a tradeoff, and evolution often mitigates them but generally cannot completely eliminate them. I also came to understand that this is 70 years of development of mathematical and data-backed models, which consistently show the same thing. Relevant for my research is that anti-aging therapeutics aren't necessarily going to be "fighting against evolution." They are complementing what nature is already trying to do: mitigating the genetic downsides in old age of adaptations for youthful vigor.
That sounds more like a problem of the teaching style than school in particular. Instead of assigning textbook pages to be read, a better way is to give the students problems to solve and tell them that those textbook pages are relevant to solving the problem. That's how my biology and biochemistry classes went. We were never assigned to read particular pages of the book.
That does sound like a better way. Personally, I'm halfway through my biomedical engineering MS and have never experienced a STEM class like this. If you don't mind my asking, where did you take your bio/biochem classes (or what type of school was it)?
I studied bioinformatics at the Free University of Berlin. Just as we had weekly problem sheets in math classes, we also had them in biology and biochemistry. It was more than a decade ago. There was certainly a sense of not simply copying what biology majors might do, but of focusing more on problem-solving skills that would presumably be more relevant.

There are many software tools for study, learning, attention programming, and memory prosthetics.

  • Flashcard apps (Anki)
  • Iterated reading apps (Supermemo)
  • Notetaking and annotation apps (Roam Research)
  • Motivational apps (Beeminder)
  • Time management (Pomodoro)
  • Search (Google Scholar)
  • Mnemonic books (Matuschak and Nielsen's "Quantum Country")
  • Collaborative document editing (Google Docs)
  • Internet-based conversation (Internet forums)
  • Tutoring (Wyzant)
  • Calculators, simulators, and programming tools (MATLAB)

These complement analog study tools, such as pen and paper, textbooks, worksheets, and classes.

These tools tend to keep the user's attention directed outward. They offer useful proxy metrics for learning: getting through 20 flashcards per day, completing N Pomodoros, getting through the assigned reading pages, turning in the homework.

However, these proxy metrics, like any others, are vulnerable to streetlamp effects and Goodharting.

Before we had this abundance of analog and digital knowledge tools, scholars relied on other ways to tackle problems. They built memory palaces, visualized, looked for examples in the world around them, invented approximations, and talked to themselves. They relied on t...

Please post this as a regular post.

Thirding this. Would love more detail or threads to pull on. Going into the constructivism rabbit hole now.
I'll continue fleshing it out over time! Mostly using the shortform as a place to get my thoughts together in legible form prior to making a main post (or several). By the way, contrast "constructivism" with "transmissionism," the latter being the (wrong) idea that students are basically just sponges that passively absorb the information their teacher spews at them. I got both terms from Andy Matuschak.
I second this, and expansions of these ideas.

Thoughts on cheap criticism

It's OK for criticism to be imperfect. But the worst sort of criticism has all five of these flaws:

  1. Prickly: A tone that signals a lack of appreciation for the effort that's gone into presenting the original idea, or that shames the presenter for bringing it up.
  2. Opaque: Making assertions or predictions without any attempt at specifying a contradictory gears-level model or evidence base, even at the level of anecdote or fiction.
  3. Nitpicky: Attacking the one part of the argument that seems flawed, without arguing for how the full original argument should be reinterpreted in light of the local disagreement.
  4. Disengaged: Not signaling any commitment to continue the debate to mutual satisfaction, or even to listen to/read and respond to a reply.
  5. Shallow: An obvious lack of engagement with the details of the argument or evidence originally offered.

I am absolutely guilty of having delivered Category 5 criticism, the worst sort of cheap shots.

There is an important tradeoff here. If standards are too high for critical commentary, it can chill debate and leave an impression that either nobody cares, everybody's on board, or the argument's simply correct. Sometimes, an idea ca...

Matt Goldenberg:
This seems like a fairly valuable framework.  It occurs to me that all 5 of these flaws are present in the "Snark" genre present in places like Gawker and Jezebel.
I am going to experiment with a karma/reply policy reflecting what I think would be a better incentive structure if broadly implemented. Loosely, it looks like this:

  1. Strong downvote plus a meaningful explanatory comment for infractions worse than cheap criticism; summary deletions for the worst offenders.
  2. Strong downvote for cheap criticism, no matter whether or not I agree with it.
  3. Weak downvote for lazy or distracting comments.
  4. Weak upvote for non-cheap criticism or warm feedback of any kind.
  5. Strong upvote for thoughtful responses, perhaps including an appreciative note.
  6. Strong upvote plus a thoughtful response of my own to comments that advance the discussion.
  7. Strong upvote, a response of my own, and an appreciative note in my original post referring to the comment, for comments that changed or broadened my point of view.
Luke Allen:
I'm trying a live experiment: I'm going to see if I can match your erisology one-to-one as antagonists to the Elements of Harmony from My Little Pony:

  1. Prickly: Kindness
  2. Opaque: Honesty
  3. Nitpicky: Generosity
  4. Disengaged: Loyalty
  5. Shallow: Laughter

Interesting! They match up surprisingly well, and you've somehow also matched the order of 3 out of 5 of the corresponding "seeds of discord" from 1 Peter 2:1, CSB: "Therefore, rid yourselves of all malice, all deceit, hypocrisy, envy, and all slander." If my pronouncement of success seems self-serving and opaque, I'll elaborate soon:

  1. Malice: Kindness
  2. Deceit: Honesty
  3. Hypocrisy: Loyalty
  4. Envy: Generosity
  5. Slander: Laughter

And now the reveal. I'm a generalist; I collect disparate lists of qualities (in the sense of "quality vs quantity"), and try to integrate all my knowledge into a comprehensive worldview. My world changed the day I first saw My Little Pony; it changed in a way I never expected, in a way many people claim to have been affected by HPMOR. I believed I'd seen a deep truth, and I've been subtly sharing it wherever I can. The Elements of Harmony are the character qualities that, when present, result in a spark of something that brings people together. My hypothesis is that they point to a deep-seated human bond-testing instinct.

The first time I noticed a match-up was when I heard a sermon on The Five Love Languages, which are presented in an entirely different order:

  1. Words of affirmation: Honesty
  2. Quality time: Laughter
  3. Receiving gifts: Generosity
  4. Acts of service: Loyalty
  5. Physical touch: Kindness

Well! In just doing the basic research to write this reply, it turns out I'm re-inventing the wheel! Someone else has already written a psychometric analysis of the Five Love Languages...

Weight Loss Simulation

I've gained 50 pounds over the last 15 years. I'd like to get a sense of what it would be like to lose that weight. One way to do that is to wear a weighted vest all day long for a while, then gradually take off the weight in increments.

The simplest version of this experiment is to do a farmer's carry with two 25 lb free weights. It makes a huge difference in the way it feels to move around, especially walking up and down the stairs.

However, I assume this feeling is due to a combination of factors:

  • The sense of self-consciousness that comes with doing something unusual
  • The physical bulk and encumbrance (i.e. the change in volume and inertia, having my hands occupied, pressure on my diaphragm if I were wearing a weighted vest, etc)
  • The ratio of how much muscle I have to how much weight I'm carrying

If I lost 50 pounds, that would likely come with strength training as well as dieting, so I might keep my current strength level while simultaneously being 50 pounds lighter. That's an argument in favor of this "simulated weight loss" giving me an accurate impression of how it would feel to really lose that much weight.

On the other hand, there would be no sudden tr...

Overtones of Philip Tetlock:

"After that I studied morning and evening searching for the principle, and came to realize the Way of Strategy when I was fifty. Since then I have lived without following any particular Way. Thus with the virtue of strategy I practice many arts and abilities - all things with no teacher. To write this book I did not use the law of Buddha or the teachings of Confucius, neither old war chronicles nor books on martial tactics. I take up my brush to explain the true spirit of this Ichi school as it is mirrored in the Way of heaven and Kwannon." - Miyamoto Musashi, The Book of Five Rings

Personal evidence for the impact of stress on cognition. This is my Lichess Blitz rating since January. The two craters are, respectively, the first 4 weeks of the term and the last 2 weeks. It begins trending back up immediately after I took my last final.

How much did you play during the start / end of term compared to normal?
I don’t know exactly, Lichess doesn’t have a convenient way to plot that day by day. But probably roughly equal amounts. It’s my main distraction.
Too bad. My suspects for confounders for that sort of thing would be 'you played less at the start/end of term' or 'you were more distracted at the start/end of term'.
Playing less wouldn’t decrease my score, and being distracted is one of the effects of stress.
Interesting. Is this typically the case with chess? Humans tend to do better with tasks when they are repeated more frequently, albeit with strongly diminishing returns.

Absolutely, which makes it very difficult to tease apart 'being distracted as a result of stress caused by X causing a drop' and 'being distracted due to X causing a drop'.
I see what you mean, and yes, that is a plausible hypothesis. It's hard to get a solid number, but glancing over the individual records of my games, it looks like I was playing about as much as usual. Subjectively, it doesn't feel like lack of practice was responsible. I think the right way to interpret my use of "stress" in this context is "the bundle of psychological pressures associated with exam season," rather than a psychological construct that we can neatly distinguish from, say, distractability or sleep loss. It's kind of like saying "being on an ocean voyage with no access to fresh fruits and vegetables caused me to get scurvy."

Does rationality serve to prevent political backsliding?

It seems as if politics moves far too fast for rational methods to keep up. If so, does that mean rationality is irrelevant to politics?

One function of rationality might be to prevent ethical/political backsliding. For example, let's say that during time A, institution X is considered moral. A political revolution ensues, and during time B, X is deemed a great evil and is banned.

A change of policy makes X permissible during time C, banned again during time D, and absolutely required for all upstanding folk during time E.

Rational deliberation about X seems to play little role in the political legitimacy of X.

However, rational deliberation about X continues in the background. Eventually, a truly convincing argument about the ethics of X emerges. Once it does, it is so compelling that it has a permanent anchoring effect on X.

Although at some times, society's policy on X contradicts the rational argument, the pull of X is such that it tends to make these periods of backsliding shorter and less frequent.

The natural process of developing the rational argument about X also leads to an accretion of arguments that are not only correct... (read more)

Thinking, Fast and Slow was the catalyst that turned my rumbling dissatisfaction into the pursuit of a more rational approach to life. I wound up here. After a few years, what do I think causes human irrationality? Here's a listicle.

  1. Cognitive biases, whatever these are
  2. Not understanding statistics
  3. Akrasia
  4. Little skill in accessing and processing theory and data
  5. Not speaking science-ese
  6. Lack of interest or passion for rationality
  7. Not seeing rationality as a virtue, or even seeing it as a vice.
  8. A sense of futility, the idea that epistemic rationality is not very useful, while instrumental rationality is often repugnant
  9. A focus on associative thinking
  10. Resentment
  11. Not putting thought into action
  12. Lack of incentives for rational thought and action itself
  13. Mortality
  14. Shame
  15. Lack of time, energy, ability
  16. An accurate awareness that it's impossible to distinguish tribal affiliation and culture from a community
  17. Everyone is already rational, given their context
  18. Everyone thinks they're already rational, and that other people are dumb
  19. It's a good heuristic to assume that other people are dumb
  20. Rationality is disruptive, and even very "progressive" people have a conservative bias to stay the same, conform with their pee
... (read more)
A few other (even less pleasant) options: 51) God is inscrutable and rationality is no better than any other religion. 52) Different biology and experience across humans leads to very different models of action. 53) Everyone lies, all the time.  

Are rationalist ideas always going to be offensive to just about everybody who doesn’t self-select in?

One loved one was quite receptive to Chesterton’s Fence the other day. Like, it stopped their rant in the middle of its tracks and got them on board with a different way of looking at things immediately.

On the other hand, I routinely feel this weird tension. Like to explain why I think as I do, I‘d need to go through some basic rational concepts. But I expect most people I know would hate it.

I wish we could figure out ways of getting this stuff across that were fun and made it seem agreeable, sensible, and non-threatening.

Less negativity - we do sooo much critique. I was originally attracted to LW partly as a place where I didn’t  feel obligated to participate in the culture war. Now, I do, just on a set of topics that I didn’t associate with the CW before LessWrong.

My guess? This is totally possible. But it needs a champion. Somebody willing to dedicate themselves to it. Somebody friendly, funny, empathic, a good performer, neat and practiced. And it needs a space for the educative process - a YouTube channel, a book, etc. And it needs the courage of its convictions. The sign of that? Not taking itself too seriously, being known by the fruits of its labors.

Traditionally, things like this are socially achieved by using some form of "good cop, bad cop" strategy. You have someone who explains the concepts clearly and bluntly, regardless of whom it may offend (e.g. Eliezer Yudkowsky), and you have someone who presents the concepts nicely and inoffensively, reaching a wider audience (e.g. Scott Alexander), but ultimately they both use the same framework.

The inoffensiveness of Scott is of course relative, but I would say that people who get offended by him are really not the target audience for rationalist thought. Because, ultimately, saying "2+2=4" means offending people who believe that 2+2=5 and are really sensitive about it; so the only way to be non-offensive is to never say anything specific.

If a movement only has the "bad cops" and no "good cops", it will be perceived as a group of assholes. Which is not necessarily bad if the members are powerful; people want to join the winning side. But without actual power, it will not gain wide acceptance. Most people don't want to go into unnecessary conflicts.

On the other hand, a movement with "good cops" without "bad cops" wil... (read more)

You're right. I need to try a lot harder to remember that this is just a community full of individuals airing their strongly held personal opinions on a variety of topics.
Those opinions often have something in common -- respect for the scientific method, effort to improve one's rationality, concern about artificial intelligence -- and I like to believe it is not just a random idiosyncratic mix (a bunch of random things Eliezer likes), but different manifestations of the same underlying principle (use your intelligence to win, not to defeat yourself). However, not everyone is interested in all of this. And I would definitely like to see "somebody friendly, funny, empathic, a good performer, neat and practiced" promoting these values in a YouTube channel or in books. But that requires a talent I don't have, so I can only wait until someone else with the necessary skills does it. This reminded me of the YouTube channel of Julia Galef, but the latest videos there are 3 years old.
Her podcast is really good IMHO. She does a singularly good job of challenging guests in a friendly manner, dutifully tracking nuance, steelmanning, etc. It just picked back up after about a yearlong hiatus (presumably due to her book writing). Unfortunately, I see the lack of notoriety for her podcast to be some evidence against the prospects of the "skilled & likeable performer" strategy. I assume that potential subscribers are more interested in lower-quality podcasts and YouTubers that indulge in bias rather than confronting it. Dunno what to do about that, but I'm glad she's back to podcasting.
That's wonderful news, thank you for telling me! For those who have clicked on the YouTube link in my previous comment, there is no new content as of now, go to the Rationally Speaking podcast.
You're both assuming that you have a set of correct ideas coupled with bad PR... but how well are Bayes, Aumann, and MWI (e.g.) actually doing?
Look, I'm neurotypical and I don't find anything Eliezer writes offensive, will you please stop ostracizing us.
Ben Pace, 3y:
Did either of them say neurotypical? I just heard them say normies.
Oh, sorry, I've only heard the word used in that context before, I thought that's what it meant. Turns out it has a broader meaning. 

Like to explain why I think as I do, I‘d need to go through some basic rational concepts.

I believe that if the rational concepts are pulling their weight, it should be possible to explain the way the concept is showing up concretely in your thinking, rather than justifying it in the general case first.

As an example, perhaps your friend is protesting your use of anecdotes as data, but you wish to defend it as Bayesian, if not scientific, evidence. Rather than explaining the difference in general, I think you can say "I think that it's more likely that we hear this many people complaining about an axe murderer downtown if that's in fact what's going on, and that it's appropriate for us to avoid that area today. I agree it's not the only explanation and you should be able to get a more reliable sort of data for building a scientific theory, but I do think the existence of an axe murderer is a likely enough explanation for these stories that we should act on it"

If I'm right that this is generally possible, then I think this is a route around the feeling of being trapped on the other side of an inferential gap (which is how I interpreted the 'weird tension')

I think you're right, when the issue at hand is agreed on by both parties to be purely a "matter of fact." As soon as social or political implications creep in, that's no longer a guarantee. But we often pretend like our social/political values are matters of fact. The offense arises when we use rational concepts in a way that gives the lie to that pretense.

Finding an indirect and inoffensive way to present the materials and let them deconstruct their pretenses is what I'm wishing for here. LW has a strong culture surrounding how these general-purpose tools get applied, so I'd like to see a presentation of the "pure theory" that's done in an engaging way not obviously entangled with this blog.

The alternative is to use rationality to try and become savvier social operators. This can be "instrumental rationality" or it can be "dark arts," depending on how we carry it out. I'm all for instrumental rationality, but I suspect that spreading rational thought further will require that other cultural groups appropriate the tools to refine their own viewpoints rather than us going out and doing the convincing ourselves. 


I work in a biomedical engineering lab. With the method I'm establishing, there are hundreds of little steps, repeated 15 times over the course of weeks. For many of these steps, there are no dire consequences for screwing them up. For others, some or all of your work could be ruined if you don't do them right. There's nothing intrinsic about the critical steps that scream "PAY ATTENTION RIGHT NOW."

If your chance of doing any step right is X%, then for some X, you are virtually guaranteed to fail. If in a day, there are 30 critical steps, then y... (read more)
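To put rough numbers on the compounding, here's a quick Python sketch (the per-step success rates are made up for illustration, and the steps are assumed independent):

```python
# Chance a day's work survives, assuming each of 30 critical steps
# succeeds independently with the same per-step probability.
def p_all_ok(per_step_success, n_steps=30):
    return per_step_success ** n_steps

for x in (0.99, 0.95, 0.90):
    print(f"per-step success {x:.0%} -> error-free day with p = {p_all_ok(x):.2f}")
```

Even 99% per-step reliability leaves only about a 74% chance of an error-free day, which is why checklists beat raw vigilance here.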

Aging research is the wild west

In Modern Biological Theories of Aging (2010), Jin dumps a bunch of hypotheses and theories willy-nilly. Wear-and-tear theory is included because "it sounds perfectly reasonable to many people even today, because this is what happens to most familiar things around them." Yet Jin entirely excludes antagonistic pleiotropy, the mainstream, 70-year-old evolutionary account of why aging is an inevitable side effect of selection for reproductive fitness.

This review has 617 citations. It's by a prominent researcher with a ... (read more)

Markets are the worst form of economy except for all those other forms that have been tried from time to time.

Matt Goldenberg, 3y:
I used this line when having a conversation at a party with a bunch of people who turned out to be communists, and the room went totally silent except for one dude who was laughing.
It was the silence of sullen agreement.

I'm annoyed that I think so hard about small daily decisions.

Is there a simple and ideally general pattern to not spend 10 minutes doing arithmetic on the cost of making burritos at home vs. buying the equivalent at a restaurant? Or am I actually being smart somehow by spending the time to cost out that sort of thing?


"Spend no more than 1 minute per $25 spent and 2% of the price to find a better product."

This heuristic cashes out to:

  • Over a year of weekly $35 restaurant meals, spend about $35 and an hour and a half finding better restaurants or meal
... (read more)
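The heuristic is easy to mechanize. A sketch in Python (the restaurant figures mirror the bullet above):

```python
def optimization_budget(total_price):
    """Per the '1 minute per $25 spent and 2% of the price' heuristic,
    return (minutes, dollars) worth spending to find a better option."""
    return total_price / 25, 0.02 * total_price

# A year of weekly $35 restaurant meals:
annual_spend = 52 * 35  # $1,820
minutes, dollars = optimization_budget(annual_spend)
print(f"budget: {minutes:.0f} minutes and ${dollars:.0f}")  # about 73 minutes and $36
```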
For some (including younger-me), the opposite advice was helpful - I'd agonize over "big" decisions, without realizing that the oft-repeated small decisions actually had a much larger impact on my life. To account for that, I might recommend you notice cache-ability and repetition, and budget on longer timeframes. For monthly spending, there's some portion that's really $120X decade spending (you can optimize once, then continue to buy monthly for the next 10 years), a bunch that's probably $12Y of annual spending, and some that's really $Z that you have to re-consider every month. Also, avoid the mistake of inflexible permissions. Notice when you're spending much more (or less!) time optimizing a decision than your average, but there are lots of them that actually benefit from the extra time. And lots that additional time/money doesn't change the marginal outcome by much, so you should spend less time on.
I wonder if your problem as a youth was in agonizing over big decisions, rather than learning a productive way to methodically think them through. I have lots of evidence that I underthink big decisions and overthink small ones. I also tend to be slow yet ultimately impulsive in making big changes, and fast yet hyper-analytical in making small changes.

Daily choices have low switching and sunk costs. Everybody's always comparing, so one brand at a given price point tends to be about as good as another. But big decisions aren't just big spends. They're typically choices that you're likely stuck with for a long time to come. They serve as "anchors" to your life. There are often major switching and sunk costs involved. So it's really worthwhile anchoring in the right place. Everything else will be influenced or determined by where you're anchored.

The 1 minute/$25 + 2% of purchase price rule takes only a moment's thought. It's a simple but useful rule, and that's why I like it.

There are a few items or services that are relatively inexpensive, but have high switching costs and are used enough or consequential enough to need extra thought. Examples include pets, tutors, toys for children, wedding rings, mattresses, acoustic pianos, couches, safety gear, and textbooks. A heuristic and acronym for these exceptions might be CHEAPS: "Is it a Curriculum? Is it Heavy? Is it Ergonomic? Is it Alive? Is it Precious? Is it Safety-related?"

Don't get confused - to attain charisma and influence, you need power first.

If you, like most people, would like to fit in, make friends easily, and project a magnetic personality, a natural place to turn is books like The Charisma Myth and How to Win Friends and Influence People.

If you read them, you'll get confused unless you notice that there's a pattern to their anecdotes. In all the success stories, the struggling main character has plenty of power and resources to achieve their goals. Their problem is that, somehow, they're not able to use that powe... (read more)

tl;dr - it doesn't matter how friendly you are, if there is nothing to gain by being friends with you
Ulisse Mini, 6mo:
Both are important, but I disagree that power is always needed. In examples 3, 7, and 9 it isn't clear that the compromise is actually better for the convinced party: the insurance is likely -EV, the peas aren't actually a crux to defeating the bully, and the child would likely be happier outside kindergarten.
I see what you mean! If you look closely, I think you'll find that power is involved in even these cases.

The examples of the father and child depend on the father having the power of his child's trust. He can exploit this to trick his child and misrepresent the benefits of school or of eating peas.

The case of the insurance salesman is even more important to consider. You are right that insurance policies always have negative expected value in terms of money. But they may have positive expected value to the right buyer. An insurance policy can confer status on the buyer, who can put his wife's mind at ease that he's protected her and their children from the worst. It's also protection against loss aversion and a commitment device to put money away for a worst-case scenario, without having to think too hard about it.

But in order to use his enthusiasm to persuade the customer of this benefit, the salesman has to get a job with an insurance company and have a policy worth selling. That's the power he has to have first, in order to make his persuasion successful.
Ulisse Mini, 6mo:
I disagree that the policy must be worth selling (see e.g. Jordan Belfort). Many salespeople can sell things that aren't worth buying. See also: Never Split the Difference for an example of negotiation when you have little or worse leverage. (Also, I don't think HTWFAIP boils down to satisfying an eager want; the other advice is super important too. E.g. don't criticize, be genuinely interested in the person, ...)

How I boosted my chess score by a shift of focus

For about a year, I've noticed that when I'm relaxed, I play chess better. But I wasn't ever able to quite figure out why, or how to get myself in that relaxed state. Now, I think I've done it, and it's stabilized my score on Lichess at around 1675 rather than 1575. That means I'm now evenly matched with opponents who'd previously have beaten me 64% of the time.

The trick is that I changed my visual relationship with the chessboard. Previously, I focused hard on the piece I was considering moving, almost as if... (read more)
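For the curious, the 64% figure falls out of the standard Elo expected-score formula (a sketch; Lichess actually uses Glicko-2, but the numbers are similar for a 100-point gap):

```python
def expected_score(rating, opponent_rating):
    """Standard Elo expected score against an opponent (ignoring draws)."""
    return 1 / (1 + 10 ** ((opponent_rating - rating) / 400))

# A 1575 player facing a 1675 player:
print(round(expected_score(1575, 1675), 2))  # 0.36, i.e. the 1675 player wins ~64%
```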

Kasparov was asked how he is able to calculate all possible outcomes of the game. He said: I don't. I just have a very good understanding of the current situation.
I think there's a broader lesson to this ability to zoom out, soft focus, take in the whole situation as it is now, and just let good ideas come to you. Chess is an easy illustration because all information is contained on the board and the clock, and the rules and objective are clear. Vaguely, it seems like successful people are able to construct a model of the whole situation, while less successful people get caught up in hyperfocusing on the particularities.
I think that they are also able to identify the most important problem out of all of them.
It's probably helpful to be able to take in everything in order to do that - I think these two ideas go together.
I also had an n-back boost using visualisation, see my shortform.

Summaries can speed your reading along by

  • Avoiding common misunderstandings
  • Making it easy to see why the technical details matter
  • Helping you see where it's OK to skim

Some summaries are just BAD

  • They sometimes do a terrible job of getting the main point across
  • They can be boring, insulting, or confusing
  • They give you a false impression of what's in the article, making you skip it when you'd actually have gotten a lot out of reading it
  • They can trick you into misinterpreting the article

The author is not the best person to write the summary. They don't have a clea... (read more)

Task Switching And Mentitation

A rule of thumb is that there's no such thing as multitasking - only rapid task switching. This is true in my experience. And if it's true, it means that we can be more effective by improving our ability both to switch and to not switch tasks.

Physical and social tasks consume a lot of energy, and can be overstimulating. They also put me in a headspace of "external focus," moving, looking at my surroundings, listening to noises, monitoring for people. Even when it's OK to stop paying attention to my surroundings, I find it v... (read more)

There's a fairly simple statistical trick that I've gotten a ton of leverage out of. This is probably only interesting to people who aren't statistics experts.

The trick is how to calculate the chance that an event won't occur in N trials. For example, in N dice rolls, what's the chance of never rolling a 6?

The chance of a 6 is 1/6, and there's a 5/6 chance of not getting a 6. Your chance of never rolling a 6 in N rolls is therefore (5/6)^N.

More generally, the chance of an event X with per-trial probability p never occurring in N trials is (1 − p)^N. The chance of the event occurring at least once is 1 − (1 − p)^N... (read more)
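To make the trick concrete, here's a quick check in Python (just the dice example above):

```python
# P(at least one 6 in N fair rolls) = 1 - (5/6)**N
for n in (1, 4, 10, 25):
    p_never = (5 / 6) ** n
    print(f"N={n:2d}: never={p_never:.3f}, at least once={1 - p_never:.3f}")
```

Note how fast "at least once" climbs: four rolls already give you better-than-even odds of seeing a 6.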

If you are a waiter carrying a platter full of food at a fancy restaurant, the small action of releasing your grip can cause a huge mess, a lot of wasted food, and some angry customers. Small error -> large consequences.

Likewise, if you are thinking about a complex problem, a small error in your chain of reasoning can lead to massively mistaken conclusions. Many math students have experienced how a sign error in a lengthy calculation can lead to a clearly wrong answer. Small error -> large consequences.

Real-world problems often arise when we neglect,... (read more)

"Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics." - States of Matter, by David L. Goodstein

The structure of knowledge is an undirected cyclic graph between concepts. To make it easier to present to the novice, experts convert that graph into a tree structure by removing some edges. Then they convert that tree into natural language. This is called a textbook.

Scholarship is the act of converting the textbook language back into nodes and edges of a tree, and then filling in the missing edges to convert it into the original graph.

The mind cannot hold the entire graph in working memory at once. It's as important to practice navigating between concept... (read more)
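The graph-vs-tree picture can be made concrete with a toy example (the concept graph below is invented for illustration):

```python
from collections import deque

# A tiny undirected "knowledge graph": concepts and their connections.
graph = {
    "limits": {"continuity", "derivatives"},
    "continuity": {"limits", "derivatives"},
    "derivatives": {"limits", "continuity", "integrals"},
    "integrals": {"derivatives"},
}

def textbook_tree(graph, root):
    """BFS spanning tree: the 'textbook' presentation, with some edges dropped."""
    tree, seen, queue = {}, {root}, deque([root])
    while queue:
        node = queue.popleft()
        tree[node] = []
        for neighbor in sorted(graph[node]):
            if neighbor not in seen:
                seen.add(neighbor)
                tree[node].append(neighbor)
                queue.append(neighbor)
    return tree

tree = textbook_tree(graph, "limits")
# The tree keeps limits-continuity, limits-derivatives, and derivatives-integrals;
# the dropped continuity-derivatives edge is what scholarship has to restore.
```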

I want to put forth a concept of "topic literacy."

Topic literacy roughly means that you have both the concepts and the individual facts memorized for a certain subject at a certain skill level. That subject can be small or large. The threshold is that you don't have to refer to a reference text to accurately answer within-subject questions at the skill level specified.

This matters, because when studying a topic, you always have to decide whether you've learned it well enough to progress to new subject matter. This offers a clean "yes/no" answer to that ess... (read more)

We do things so that we can talk about it later.

I was having a bad day today. Unlikely to have time this weekend for something I'd wanted to do. Crappy teaching in a class I'm taking. Ever increasing and complicating responsibilities piling up.

So what did I do? I went out and bought half a cherry pie.

Will that cherry pie make me happy? No. I knew this in advance. Consciously and unconsciously: I had the thought, and no emotion compelled me to do it.

In fact, it seemed like the least-efficacious action: spending some of my limited money, to buy a pie I don't... (read more)

So the "stupid solutions to problems of life" are not really about improving the life, but about signaling to yourself that... you still have some things under control? (My life may suck, but I can have a cherry pie whenever I want to!) This would be even more important if the cherry pie would somehow actively make your life worse. For example, if you are trying to lose weight, but at the same time keep eating cherry pie every day in order to improve the story of your day. Or if instead of cherry pie it would be cherry liqueur. Just guessing, but it would probably help to choose the story in advance. "If I am doing X, my life is great, and nothing else matters" -- and then make X something useful that doesn't take much time. Even better, have multiple alternatives X, Y, Z, such that doing any of them is a "proof" of life being great.
I do chalk a lot of dysfunction up to this story-centric approach to life. I just suspect it's something we need to learn to work with, rather than against (or to deny/ignore it entirely). My sense is that storytelling - to yourself or others - is an art. To get the reaction you want - from self or others - takes some aesthetic sensitivity.

My guess is there's some low hanging fruit here. People often talk about doing things "for the story," which they resort to when they're trying to justify doing something dumb/wasteful/dangerous/futile. Perversely, it often seems that when people talk in detail about their good decisions, it comes off as arrogant. Pointless, tidy philosophical paradoxes seem to get people's puzzle-solving brains going better than confronting the complexity of the real world.

But maybe we can simply start building habits of expressing gratitude. Finding ways to present good ideas and decisions in ways that are delightful in conversation. Spinning interesting stories out of the best parts of our lives.

Make sentences easier to follow with the XYZ pattern

I hate the Z of Y of X pattern. This is a sentence style that presents information in the wrong order for easy visualization. XYZ is the opposite; it presents information in the order that's easiest to track.

Here are some examples:

Z of Y of X: The increased length of the axon of the mouse

XYZ: The mouse's increased axon length

Z of Y of X: The effect of boiling of extract of ginger is conversion to zingerol of gingerol

XYZ: Ginger extract, when boiled, converts gingerol to zingerol.

Z of Y of X: The rise of the price of st... (read more)

This depends a lot on the medium of communication. A lot of style guides recommend that the details go in order of importance or relevance. I suspect different readers will have different levels of difficulty in keeping the details in mind at once, so it's not obvious which order is actually easier or "more chunkable" for them. For instance, I find "Adidas's stock price is rising due to the decline of Kanye West's stylistic influence" easy to follow. Is that ZXY? The decline is the main point, and what is declining is one chunk, "Kanye's influence".
You’re right. Sentences should start with the idea that is most important or foundational, and end with a transition to the next sentence’s topic. It’s best to do these two things with the XYZ structure whenever possible. My examples ignored these other design principles to focus on the XYZ structure. Writers must make tradeoffs. Ideally, their writing occupies the sentence design Pareto frontier.
Is there a name for the discipline or practice of symbolically representing the claims and content in language (this may be part of Mathematical Logic, but I am not familiar enough with it to know)? Practice: The people of this region (Z) typically prefer hiking in the mountains of the rainforest to walking in the busy streets (Y), given their love of the mountaintop scenery (X). XYZ Output: Given their mountaintop scenery love (X), rainforest mountain hiking is preferred over walking in the busy streets (Y) by this region's people (Z).
I don’t know if there’s a name for the practice. I notice the XYZ form makes some phrases sound like music album titles (“mountaintop scenery love”). The XYZ form is mainly meant to structure sentences for easy tracking, not just to eliminate the word “of.” “Their love of mountaintop scenery” seems easier to track than “mountaintop scenery love.” In your XYZ version, “this region’s people” ends the sentence. Since the whole sentence is about them, it seems like it’s easier to track if they’re introduced at the beginning. Maybe: “This region’s people’s love of mountaintop scenery typically makes them prefer hiking in the mountainous rainforests to walking in the busy streets.” I don’t love “this region’s people’s” but I’m not sure what to do about that. Maybe: “In this region, the people’s love of mountaintop scenery typically makes them prefer hiking in the mountain rainforests to walking in the busy streets.”

From 2000-2015, we can see that life expectancy has been growing faster the higher your income bracket (source is Vox citing JAMA).

There's an angle to be considered in which this is disturbingly inequitable. That problem is even worse when considering the international inequities in life expectancy. So let's fund malaria bednets and vaccine research to help bring down malaria deaths from 600,000/year to zero  - or maybe support a gene drive to eliminate it once and for all.

At the same time, this seems like hopeful news for longevity research. If we we... (read more)

I'm surprised by the difference. I'm curious whether the United States is special in that regard or whether the patterns also exist in European countries to the same extent.

Operator fluency

When learning a new mathematical operator, such as Σ, a student typically goes through a series of steps:

  1. Understand what it's called and what the different parts mean.
  2. See how the operator is used in a bunch of example problems.
  3. Learn some theorems relevant to or using the operator.
  4. Do a bunch of example problems.
  5. Understand what the operator is doing when they encounter it "in the wild" in future math reading.

I've only taken a little bit of proof-based math, and I'm sure that the way one relates with operators depends a lot on the type of clas... (read more)

Mentitation: the cost/reward proposition

Mentitation techniques are only useful if they help users with practical learning tasks. Unfortunately, learning how to crystallize certain mental activities as "techniques," and how to synthesize them into an approach to learning that really does have practical relevance, took me years of blundering around. Other people do not, and should not, have that sort of patience and trust that there's a reward at the end of all that effort.

So I need a strategy for articulating, teaching, and getting feedback on these methods... (read more)

A lot of my akrasia is solved by just "monkey see, monkey do." Physically put what I should be doing in front of my eyeballs, and pretty quickly I'll do it. Similarly, any visible distractions, or portals to distraction, will also suck me in.

But there also seems to be a component that's more like burnout. "Monkey see, monkey don't WANNA."

On one level, the cure is to just do something else and let some time pass. But that's not explicit enough for my taste. For one thing, something is happening that recovers my motivation. For another, "letting time pass" i... (read more)

Functional Agency

I think "agent" is probably analogous to a river: structurally and functionally real, but also ultimately an aggregate of smaller structures that are not themselves aligned with the agent. It's convenient for us to be able to point at a flowing body of water much longer than it is wide and call it a river. Likewise, it is convenient for us to point to an entity that senses its environment and steers events adaptively toward outcomes for legible reasons and refer to it as exhibiting agency.

In that sense, AutoGPT is already an agent - it is ... (read more)

Telling people what they want to hear

When I adopt a protocol for use in one of my own experiments, I feel reassured that it will work in proportion to how many others have used it before. Likewise, I feel reassured that I'll enjoy a certain type of food depending on how popular it is.

By contrast, I don't feel particularly reassured by the popularity of an argument that it is true (or, at least, that I'll agree with it). I tend to think books and essays become popular in proportion to whether they're telling their audience what they want to hear.

One problem ... (read more)

That's not how it is for me, at least not consciously. I have trouble anticipating what will be controversial and what won't. I guess it shows in the high fraction of my posts here that were controversial. At best, I can imagine potential questions. But your account matches what I have heard elsewhere: having a reliable audience leads to wanting to please that audience, and to lock-in.
Learn to value and notice interaction and commentary, far more than upvotes. A reply or follow-up comment is an indication that you've posted something worth engaging with. An upvote could mean anything (I mean, it's still nice, and is some evidence in your favor, just not the most important signal). I got a zero score yesterday, +2 +1 -1 and -2 on 4 different comments. But I got two responses, so a good day (I didn't need to further interact on those threads, so not perfect). Overall, I shoot for 90% upvotes (which is probably 75% positive response, given people's biases toward positivity), and I actively try to be a little more controversial if I start to think I'm mostly saying things that everyone already knows and believes.

Hard numbers

I'm managing a project to install signage for a college campus's botanical collection.

Our contractor, who installed the sign posts in the ground, did a poor job. A lot of them pulled right out of the ground.

Nobody could agree on how many posts were installed: the groundskeeper, contractor, and two core team members, each had their own numbers from "rough counts" and "lists" and "estimates" and "what they'd heard."

The best decision I've made on this project was to do a precise inventory of exactly which sign posts are installed correctly, comple... (read more)

Paying your dues

I'm in school at the undergraduate level, taking 3 difficult classes while working part-time.

For this path to be useful at all, I have to be able to tick the boxes: get good grades, get admitted to grad school, etc. For now, my strategy is to optimize to complete these tasks as efficiently as possible (what Zvi calls "playing on easy mode"), in order to preserve as much time and energy for what I really want: living and learning.

Are there dangers in getting really good at paying your dues?

1) Maybe it distracts you/diminishes the incen... (read more)

If you haven't seen Half-assing it with everything you've got [], I'd definitely recommend it as an alternative perspective on this issue.
I see my post as less about goal-setting ("succeed, with no wasted motion") and more about strategy-implementing ("Check the unavoidable boxes first and quickly, to save as much time as possible for meaningful achievement"). 
I suspect "dues" are less relevant in today's world than a few decades ago.  It used to be a (partial) defense against being judged harshly for your success, by showing that you'd earned it without special advantage.  Nowadays, you'll be judged regardless, as the assumption is that "the system" is so rigged that anyone who succeeds had a headstart. To the extent that the dues do no actual good (unlike literal dues, which the recipient can use to buy things, presumably for the good of the group), skipping them seems very reasonable to me.  The trick, of course, is that it's very hard to distinguish unnecessary hurdles ("dues") from socially-valuable lessons in conformity and behavior ("training").   Relevant advice when asked if you've paid your dues:

I've been thinking about honesty over the last 10 years. It can play into at least three dynamics.

One is authority and resistance. The revelation or extraction of information, and the norms, rules, laws, and incentives surrounding this, including moral concepts, are for the primary purpose of shaping the power dynamic.

The second is practical communication. Honesty is the idea that specific people have a "right to know" certain pieces of information from you, and that you meet this obligation. There is wide latitude for "white lies," exaggeration, storytell... (read more)

Better rationality should lead you to think less, not more. It should make you better able to

  • Set a question aside
  • Fuss less over your decisions
  • Accept accepted wisdom
  • Be brief

while still having good outcomes. What's your rationality doing to you?

I like this line of reasoning, but I'm not sure it's actually true. "better" rationality should lead your thinking to be more effective - better able to take actions that lead to outcomes you prefer. This could express as less thinking, or it could express as MORE thinking, for cases where return-to-thinking is much higher due to your increase in thinking power. Whether you're thinking less for "still having good outcomes", or thinking the same amount for "having better outcomes" is a topic for introspection and rationality as well.
That's true, of course. My post is really a counter to a few straw-Vulcan tendencies: intelligence signalling, overthinking everything, and being super argumentative all the time. Just wanted to practice what I'm preaching!

How should we weight and relate the training of our mind, body, emotions, and skills?

I think we are like other mammals. Imitation and instinct lead us to cooperate, compete, produce, and take a nap. It's a stochastic process that seems to work OK, both individually and as a species.

We made most of our initial progress in chemistry and biology through very close observation of small-scale patterns. Maybe a similar obsessiveness toward one semi-arbitrarily chosen aspect of our own individual behavior would lead to breakthroughs in self-understanding?

I'm experimenting with a format for applying LW tools to personal social-life problems. The goal is to boil down situations so that similar ones will be easy to diagnose and deal with in the future.

To do that, I want to arrive at an acronym that's memorable, defines an action plan and implies when you'd want to use it. Examples:

OSSEE Activity - "One Short Simple Easy-to-Exit Activity." A way to plan dates and hangouts that aren't exhausting or recipes for confusion.

DAHLIA - "Discuss, Assess, Help/Ask, Leave, Intervene, Accept." An action plan for how to de... (read more)

Thoughts on Apple Vision Pro:

  • The price point is inaccessibly high.
  • I'm generally bullish on new interfaces to computing technology. The benefits aren't always easy to perceive until you've had a chance to start using it.
  • If this can sit on my head and allow me to type or do calculations while I'm working in the lab, that would be very convenient. Currently, I have to put gloves on and off to use my phone, and office space with my laptop is a 6-minute round trip from the lab.
  • I can see an application that combines voice-to-text and AI in a way that makes it fe
... (read more)
Sure, but an audio-only interface can be done with an iPhone and some Airpods; no need for a new interface.
That's true! However, I would feel weird and disruptive trying to ask ChatGPT questions when working alongside coworkers in the lab.

Calling all mentitators

Are you working hard on learning STEM?

Are you interested in mentitation - visualization, memory palaces, developing a practical craft of "learning how to learn?"

What I think would take this to the next level would be developing an exchange of practices.

I sit around studying, come up with mentitation ideas, test them on myself, and post them here if they work.

But right now, I don't get feedback from other people who try them out. I also don't get suggestions from other people with things to try.

Suggestions are out there, but the devil... (read more)

Memory palace foundations

What makes the memory palace work? Four key principles:

  • Sensory integration: Journeying through the memory palace activates your kinesthetic and visual imagination
  • Pacing: The journey happens at your natural pace for recollection
  • Decomposition: Instead of trying to remember all pieces of information at once, you can focus on the single item that's in your field of view
  • Interconnections: You don't just remember the information items, but the "mental path" between them.

We can extract these principles and apply them to other forms of memoriza... (read more)

Can mentitation be taught?

Mentitation[1] can be informed by the psychological literature, as well as introspection. Because people's inner experiences are diverse and not directly observable, I expect it to be difficult to explain or teach this subject. However, mentitation has allowed me to reap large gains in my ability to understand and remember new information. Reading STEM textbooks has become vastly more interesting and has led to better test results.

Figuring out a useful way to do mentitation has taken me years, with lots of false starts along ... (read more)

Why do patients neglect free lifestyle interventions, while overspending on unhelpful healthcare?

The theory that patients are buying "conspicuous care" must compete with the explanation that patients have limited or asymmetric information about true medical benefits. Patient tendencies to discount later medical benefits, while avoiding immediate effort and cost, can also explain some of the variation in lifestyle intervention neglect.

We could potentially separate these out by studying medical overspending by doctors on their own healthcare, particularly in... (read more)

My first thought is that lifestyle interventions are in fact almost never free, from either a quality of life point of view or a monetary point of view.

My second thought is a question: Is it clear that patients do actually overspend on unhelpful healthcare? All of the studies I've read that claimed this made one or more of the following errors or limitations:

  • Narrowly defining "helpful" to mean just reduction in mortality or severe lasting disability;
  • Conflating costs imposed after the fact by the medical system with those a patient chooses to spend;
  • Failing to consider common causal factors in both amount of spending and medical problems;
  • Studying very atypical sub-populations.

It's entirely possible that patients from the general population do in fact voluntarily overspend on healthcare that on average has negligible benefit even after allowing for prior causes, and I would like to see a study that made a credible attempt at testing this.
One of the examples given was a RAND RCT in which subjects had their healthcare subsidized to varying degrees. The study examined whether the more heavily subsidized groups consumed more healthcare (they did) and whether health outcomes differed among the groups (they did not). Another was an Oregon RCT in which subjects were randomly assigned to receive or not receive Medicaid. The only health effects of receiving subsidized healthcare here were "feeling healthier" and improved mental health. Other studies show that regional variations in healthcare consumption (e.g. surgery rates for enlarged prostate) do not correlate with different health outcomes. One shows that death rates across the 50 US states are correlated with education and income, but not with the amount of medical spending. The overall conclusion seems to be that whatever people are buying at the hospital when they spend more than average, it does not appear to be health, and particularly not physical health.
Do you have links? The descriptions you give match a number of studies I've read and already evaluated. E.g. dozens of papers investigating various aspects of the Oregon randomized Medicaid trial, with substantially varying conclusions in this area.
This is just the summary given in The Elephant In the Brain, I haven't read the original papers and I'm sure that you know more about this than me. Here's what TEITB says about the Oregon Medicaid trial (screenshotted from my Kindle version): If you think this misrepresents what we should take away from this study, I'm keen to hear it!
It's mixed. As far as it goes for the original study, it's mostly accurate but I do think that the use of the phrase "akin to a placebo effect" is misleading and the study itself did not conclude anything of the kind. There may be later re-analyses that do draw such a conclusion, though. Most objective health outcomes of medical treatment were not measured, and many of those that were measured were diagnostic of chronic conditions that medical treatment cannot modify, but only provide treatment that reduces their impact on daily life. There are objective measures of outcomes of such treatment, but they require more effort to measure and are more specific to the medical conditions being treated. This is relevant in that a large fraction of medical expenditure is in exactly this sort of management of conditions to improve functionality and quality of life without curing or substantially modifying the underlying disease. It should also be borne in mind that the groups in this study were largely healthy, relatively young adults. The vast majority of health service expenditure goes to people who are very sick and mostly older than 65. It seems unwise to generalize conclusions about overall effectiveness of health expenditure from samples of much healthier younger adults.
That's helpful information, thanks. Would you characterize the Oregon Medicaid study as poorly designed, or perhaps set up to make Medicaid look bad? From your description, it sounds like they chose a population and set of health metrics that were predictably going to show no effect, even though there was probably an effect to be found.
Doesn't necessarily mean they "neglected free lifestyle interventions". Maybe they were already doing everything they were aware of. If you are not an expert, when you ask people about what to do, you get lots of contradictory advice. Whatever one person recommends, another person will tell you it's actively harmful. "You should exercise more." "Like this?" makes a squat. "No, definitely not like that, you will fuck up your spine and joints." "So, how exactly?" "I don't know actually; I am just warning you that you can hurt yourself." "You should only eat raw vegetables." Starts eating raw vegetables. Another person: "If you keep doing that, the lack of proteins will destroy your muscles and organs, and that will kill you." The only unambiguous advice is to give up all your bodily pleasures. Later: "Hey, why are you so depressed?" (For the record, I don't feel epistemically helpless about this stuff now. I discussed it with some people I trust, and sorted out the advice. But it took me a few years to get there, and not everyone has this opportunity. Even now, almost everything I ever do, someone tells me it's harmful; I just don't listen to them anymore.)
People's willingness to spend on healthcare changes with the amount they are currently suffering. Immediate suffering is a much stronger motivator for behavior than plausible future suffering and even likely future suffering. 
I'm sure there's a lot of variance in how it feels to be someone willing to spend on healthcare but less willing to change their daily habits and activities.  For me, "free" is misleading.  It's a whole lot more effort and reduced joy for some interventions.  That's the opposite of free, it's prohibitively costly, or seems like it.   There's also a bit of inverse-locus-of-control.  If my choices cause it, it's my fault.  If a doctor or medication helps, that means it was externally imposed, not my fault.   And finally, it hits up against human learning mechanisms - we notice contrasts and rapid changes, such as when a chiropractor does an adjustment or when a medication is prescribed.  We don't notice gradual changes (positive or negative), and our minds don't make the correlation to our behaviors.

Mistake theory on plagiarism:

How is it that capable thinkers and writers destroy their careers by publishing plagiarized paragraphs, sometimes with telling edits that show they didn't just "forget to put quotes around it?"

Here is my mistake-theory hypothesis:

  1. Authors know the outlines of their argument, but want to connect it with the literature. At this stage, they're still checking their ideas against the data and theory, not trying to produce a polished document. So in their lit review, they quickly copy/paste relevant quotes into a file. They don't both
... (read more)
In an arena where plagiarism is harmful, I'd call this "negligence theory" rather than "mistake theory".  This isn't just a misunderstanding or incorrect belief, it's a sloppiness in research that (again, in domains where it matters) should cost the perpetrator a fair bit of standing and trust. It matters a lot what they do AFTERWARD, too.  Admitting it, apologizing, and publishing an updated version is evidence that it WAS a simple unintentional mistake.  Hiding it, repeating the problem, etc. are either malice or negligence. Edit: there's yet another possibility, which is "intentional use of ideas without attribution".  In some kinds of writing, the author can endorse a very slight variant of someone else's phrasing, and just use it as their own.  It's certainly NICER to acknowledge the contribution from the original source, but not REQUIRED except in formal settings.
Negligence vs. mistake

It's sloppy, but my question is whether it's unusually sloppy. That's the difference between a mistake and negligence.

Compare this to car accidents. We expect that there's an elevated proportion of "consistently unusually sloppy driving" among people at fault for causing car accidents relative to the general driving population. For example, if we look at the population of people who've been at fault for a car accident, we will find a higher-than-average level of drunk driving, texting while driving, tired driving, speeding, dangerous maneuvers, and so on.

However, we might also want to know the absolute proportion of at-fault drivers who are consistently unusually sloppy drivers, relative to those who are average or better-than-average drivers who had a "moment of sloppy driving" that happened to result in an accident. As a toy example, imagine the population is:

  • 1/4 consistently good drivers. As a population, they're responsible for 5% of accidents.
  • 1/2 average drivers. As a population, they're responsible for 20% of accidents.
  • 1/4 consistently bad drivers. As a population, they're responsible for 75% of accidents.

In this toy example, good and average drivers are together at fault for 25% of all accidents. When we see somebody cause an accident, this should, as you say, make us see them as substantially more likely to be a bad driver. It is also good incentives to punish this mistake in proportion to the damage done, the evidence about underlying factors (e.g. drunk driving), the remorse they display, and their previous driving record. However, we should also bear in mind that there's a meaningful chance that they're not a bad driver; they just got unlucky.

Plagiarism interventions

Shifting back to plagiarism, the reason it can be useful to bear in mind that chance of a plagiarism "good faith error" is that it suggests interventions to lower the rate of that happening. For example, I do all my
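The toy arithmetic above is easy to check mechanically. This is purely an illustrative sketch; the dictionaries just restate the comment's assumed numbers, and the variable names are my own:

```python
# Toy check of the driver example: priors are fractions of the driving
# population; accident_share is the fraction of all accidents each type causes.
priors = {"good": 0.25, "average": 0.50, "bad": 0.25}
accident_share = {"good": 0.05, "average": 0.20, "bad": 0.75}

# Because the shares are fractions of all accidents, the share is already the
# posterior P(driver type | an accident occurred).
p_bad = accident_share["bad"]                                    # 0.75
p_not_bad = accident_share["good"] + accident_share["average"]   # 0.25

# Relative per-capita accident rates: how much likelier is one bad driver
# to cause an accident than one good driver?
rate = {t: accident_share[t] / priors[t] for t in priors}
print(rate["bad"] / rate["good"])  # 15.0
```

So under these numbers, a randomly observed at-fault driver is bad with probability 0.75, even though an individual bad driver is 15 times more accident-prone than a good one.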
True - "harm reduction" is a tactic that helps with negligence or mistake, and less so with true adversarial situations.  It's worth remembering that improvements are improvements, even if only for some subset of infractions. I don't particularly worry about plagiarism very often - I'm not writing formal papers, but most of my internal documents benefit from a references appendix (or inline) for where data came from.  I'd enjoy a plugin that does "referenceable copy/paste", which includes the URL (or document title, or, for some things, a biblio-formatted source). 

I'm interested in the relationship between consumption and motivation to work. I have a theory that there are two demotivating extremes: an austerity mindset, in which the drive to work is not coupled to a drive to consume (or to donate); and a profligacy mindset, in which the drive to consume is decoupled from a drive to work.

I don't know what to do about profligacy mindset, except to put constraints on that person's ability to obtain more credit.

But I see Putanumonit's recent post advocating self-interested generosity over Responsible Adult (tm) savin... (read more)

One particular category to spend luxury money on is "things you were constrained about as a child but aren't actually that expensive". What color clay do I want? ALL OF THEM. ESPECIALLY THE SHINY ONES. TODAY I WILL USE THE EXACT COLORS OF CLAY I WANT AND MAKE NO COMPROMISES.

Some caveats:

  • I imagine you have to be judicious about this; appeasing your inner child probably hits diminishing returns. But I did experience a particular feeling of "I used to have to prioritize to satisfy others' constraints, and now I can just do the thing."
  • It's probably better if you actually want the thing and will enjoy it for its own sake, rather than as merely a fuck-you to childhood deprivation. I have actually been using the clay, and having an abundance of colors really is increasing my joy.
I like the idea of a monthly "luxury budget", because then you only need to convince the person once; then they can keep experimenting with different things, keeping the luxury budget size the same. (Assuming that if something proves super useful, it moves to the normal budget.) This could be further improved by adding a constraint that each month the luxury budget needs to be spent on a different type of expense (food, music, travel, books, toys...). Make the person read The Luck Factor [] to motivate experimenting.

It may be simultaneously true that many people underestimate how much better their life could be if they took on some debt and bought things that improve their life and productivity... and that many other people underestimate how much better their life could be if they had more financial slack and greater resilience against occasional clusters of bad luck.

A problem with spending the right amount of money is determining how much exactly the right amount is. For example, living paycheck to paycheck is dangerous -- if you get fired from your job and your car breaks at the same time, you could be in big trouble; while someone who has 3 months' worth of salary saved would just shrug, find a new job, and use a cab in the meanwhile. On the other hand, another person living paycheck to paycheck, who didn't get fired and whose car didn't break at the inconvenient moment, might insist that it is perfectly ok. So when people tell you what worked best for them, there may be survivorship bias involved. Statistically, the very best outcomes will not happen to the people who used the best financial strategy (with the best expected outcome), but to those who took risk and got lucky. Such as those who took on a lot of debt, started a company, and succeeded.
Speaking as someone on the austerity side, if you want to convince me to buy something specific, tell me exactly how much it costs (and preferably add a link to an online shop as evidence). Sometimes I make the mistake of assuming that something is too expensive... so I don't even bother checking the actual cost, because I have already decided that I am not going to buy it... so in the absence of data I continue believing that it is too expensive. Sometimes I did check the cost, but it was like 10 years ago, and maybe it got significantly cheaper since then. Or maybe my financial situation has improved during those 10 years, but I don't remember the specific cost of the thing, only my cached conclusion that it was "too expensive", which was perhaps true back then, but not now.

Another way to convince me to use some product is to lend it to me, so I get a feel for how good it actually is.

Pet peeve: the phrase "nearly infinite."

Would you prefer "for nearly all purposes, any bounds there might be are irrelevant"?

I’d prefer WAY BIG

In most cases I think the correct phrase would be "nearly unlimited". It unpacks to: the set of circumstances in which a limit would be reached is nearly empty.
mako yass
I don't like that one either; it usually reflects a lack of imagination. They're talking about the purposes we can think of now; they usually know nothing about the purposes we will find once we have it, purposes which haven't been invented yet.

A celebrity is someone famous for being famous.

Is a rationalist someone famous for being rational? Someone who’s leveraged their reputation to gain privileged access to opportunity, other people’s money, credit, credence, prestige?

Are there any arenas of life where reputation-building is not a heavy determinant of success?

Ben Pace
A physicist is someone who is interested in and studies physics. A rationalist is someone who is interested in and studies rationality.
A rationalist is someone who can talk rationally about rationality, I guess. :P

One difference between rationality and fame is that you need some rationality in order to recognize and appreciate rationality, while fame can be recognized and admired also (especially?) by people who are not famous. Therefore, rationality has a limited audience.

Suppose you have a rationalist who "wins at life". How would a non-rational audience perceive them? Probably as someone "successful", which is a broad category that also includes e.g. lottery winners. Even people famous for being smart, such as Einstein, are probably perceived as "being right" rather than as being good at updating, research, or designing experiments.

A rationalist can admire another rationalist's ability to change their mind. And also their "winning at life", to the degree we can control for their circumstances (privilege and luck), so that we can be confident it is not mere "success" we admire, but rather "success disproportionate to resources and luck". This would require either that the rationalist celebrity regularly publishes their thought processes, or that you know them personally. Either way, you need lots of data about how they actually succeeded.

You could become a millionaire by buying Bitcoin anonymously, so that would be one example. It depends on what precisely you mean by "success": is it something like "doing/getting X", or rather "being recognized as X"? The latter is inherently social; the former you can often achieve without anyone knowing about it. Sometimes it is easier to achieve things if you don't want to take credit; for example, if you need the cooperation of a powerful person, it can be useful to convince them that X was actually their idea. Or you can have the power, but live in the shadows, while other people are in the spotlight, and only they know that they actually take commands from you. To be more specific, I think you could make a lot of money by learning something like programming, getting
Certainly it is possible to find success in some areas anonymously. No argument with you there!

I view LW-style rationality as a community of practice, a culture of people aggregating, transmitting, and extending knowledge about how to think rationally. As in "The Secret of Our Success," we don't accomplish this by independently inventing the techniques we need to do our work. We accomplish this primarily by sharing knowledge that already exists.

Another insight from TSOOS is that people use prestige as a guide for who they should imitate. So rationalists tend to respect people with a reputation for rationality. But what if a reputation for rationality can be cultivated separately from tangible accomplishments? In fact, prestige is already one step removed from the tangible accomplishments. But how do we know if somebody is prestigious? Perhaps a reputation can be built not by gaining the respect of others through a track record of tangible accomplishments, but by persuading others that:

a) You are widely respected by other people whom they haven't met, or by anonymous people they cannot identify, making them feel behind the times, out of the loop.

b) The basis on which people conventionally allocate prestige is flawed, and they should do it differently, in a way that is favorable to you, making them feel conformist or conservative.

c) Other people's track records of tangible accomplishments are in fact worthless, because they are not of the incredible value of the project that the reputation-builder is "working on," or are suspect in terms of their actual utility. This makes people insecure.

d) They can participate in the incredible value you are generating by evangelizing your concept, and thereby evangelizing you. Or, of course, by just donating money. This makes people feel a sense of meaning and purpose.

I could think of other strategies for building hype. One is to participate in cooperative games, whereb
Ah, so you mean within the rationalist (and adjacent) community; how can we make sure that we instinctively copy our most rational members, as opposed to random or even least rational ones?

When I reflect on what I do by default... well, long ago I perceived "works at MIRI/CFAR" as the source of prestige, but recently it became "writes articles I find interesting". Both heuristics have their advantages and disadvantages. The "MIRI/CFAR" heuristic allows me to outsource judgment to people who are smarter than me and have more data about their colleagues; but it ignores people outside the Bay Area and those who already have another job. The "blogging" heuristic allows me to judge the thinking of authors; but it ignores people who are too busy doing something important, or who don't wish to write publicly.

Here is how to exploit my heuristics:

  • Be charming, and convince people at MIRI/CFAR/GiveWell/etc. to give you some role in their organization; it could be a completely unimportant one. Make your association known.
  • Have good verbal skills, and deep knowledge of some topic. Write a blog about that topic and the rationalist community.

Looking at your list:

Option a) if someone doesn't live in the Bay Area, it could be quite simple to add a few rationalist celebrities as friends on Facebook, and then pretend that you have some deeper interaction with them. People usually don't verify this information, so if no one at your local meetup is in regular contact with them, the risk of exposure is low. Your prestige is then limited to the local meetup.

Options b) and c) would probably lead to a big debate. Arguably, "metarationality" is an example of an "actually, all popular rationalists are doing it wrong, this is the true rationality" claim.

Option d) was tried by Intentional Insights, Logic Nation, and I have heard about people who try to extract free work from programmers at LW meetups. Your prestige is limited to the few people you manage to recruit.

Rationalist com

Idea for online dating platform:

Each person chooses a charity and an amount of money that others must donate to swipe right on them. This leads to higher-fidelity match information while also giving you a meaningful topic to kick the conversation off.

Goodhart's Epistemology

If a gears-level understanding becomes the metric of expertise, what will people do?

  • Go out and learn until they have a gears-level understanding?
  • Pretend they have a gears-level understanding by exaggerating their superficial knowledge?
  • Feel humiliated because they can't explain their intuition?
  • Attack the concept of gears-level understanding on a political or philosophical level?

Use the concept of gears-level understanding to debug your own knowledge. Learn for your own sake, and allow your learning to naturally attract the credibility

... (read more)

Let's say I'm right, and a key barrier to changing minds is the perception that listening and carefully considering the other person's point of view amounts to an identity threat.

  • An interest in evolution might threaten a Christian's identity.
  • Listening to pro-vaccine arguments might threaten a conservative farmer's identity.
  • Worrying about speculative AI x-risks might threaten an AI capability researcher's identity.

I would go further and claim that open-minded consideration of suggestions that rationalists ought to get more comfortable with symmetric weapons... (read more)

I disagree with Eliezer's comments on inclusive genetic fitness (~25:30) on Dwarkesh Patel's podcast - particularly his thought experiment of replacing DNA with some other substrate to make you healthier, smarter, and happier.

Eliezer claims that evolution is a process optimizing for inclusive genetic fitness (IGF). He explains that human agents, evolved with impulses and values that correlate with but are not identical to IGF, tend to escape evolution's constraints and satisfy those impulses directly: they adopt kids, they use contraception, they fail to ... (read more)

Certain texts are characterized by precision, such as mathematical proofs, standard operating procedures, code, protocols, and laws. Their authority, power, and usefulness stem from this quality. Criticizing them for being imprecise is justified.

Other texts require readers to use their common sense to fill in the gaps. The logic from A to B to C may not always be clearly expressed, and statements that appear inconsistent on their own can make sense in context. If readers demand precision, they will not derive value from such texts and may criticize the aut... (read more)

Nope; precision has nothing to do with intrinsic value. If Ashley asks Blaine to get her an apple from the fridge, many would agree that 'apple' is a rather specific thing, but if Blaine were insistent on being dense he could still say "Really? An apple? How vague! There are so many possible subatomic configurations that could correspond to an apple, and if you don't have an exact preference ordering of sub-atomically specified apple configurations, then you're an incoherent agent without a proper utility function!" And Blaine, by the way, is speaking the truth here; Ashley could in fact be more specific. Ashley is not being completely vague, however; 'apple' is specific enough to specify a range of things, and within that range it may be ambiguous as to what she wants from the perspective of someone who is strangely obsessed with specificity, but Ashley can in fact simply and directly want every single apple that matches her moderately-specified criteria. So it is with words like 'Good', 'Relevant', 'Considerate', 'Justice', and 'Intrinsic Value Strategicism'.

Why I think ChatGPT struggles with novel coding tasks

The internet is full of code, which ChatGPT can riff on incredibly well.

However, the internet doesn't contain as many explicit, detailed and accurate records of the thought process of the programmers who wrote it. ChatGPT isn't as able to "riff on" the human thought process directly.

When I engineer prompts to help ChatGPT imitate my coding thought process, it does better. But it's difficult to get it to put it all together fluently. When I code, I'm breaking tasks down, summarizing, chunking, simulating ... (read more)

Learning a new STEM subject is unlike learning a new language. When you learn a new language, you learn new words for familiar concepts. When you learn a new STEM subject, you learn new words for unfamiliar concepts.

I frequently find that a big part of the learning curve is trying to “reason from the jargon.” You haven’t yet tied a word firmly enough to the underlying concept that there’s an instant correspondence, and it’s easy to completely lose track of the concept.

One thing that can help is to focus early on building up a strong sense of the fundamenta... (read more)

Upvotes more informative than downvotes

If you upvote me, then I learn that you like or agree with the specific ideas I've articulated in my writing. If I write "blue is the best color," and you agreevote, then I learn you also agree that the best color is blue.

But if you disagree, I only learn that you think blue is not the best color. Maybe you think red, orange, green or black is the best color. Maybe you don't think there is a best color. Maybe you think blue is only the second-best color, or maybe you think it's the worst color.

I usually don't upvote or downvote mainly based on agreement, so there may be even less information about agreement than you might think! I have upvoted quite a few posts where I disagree with the main conclusion or other statements within it, when those posts are generally informative or entertaining or otherwise worth reading. I have downvoted a lot of posts with conclusions I generally agreed with but were poorly written, repetitive, trivial, boring, overbearing, used flawed arguments, or other qualities that I don't like to see in posts on this site. A post that said nothing but "blue is the best color" would definitely get a downvote from me for being both trivial and lacking any support for the position, even if I personally agree. I would at the very least want to know by what criteria it was considered "best", along with some supporting evidence for why those criteria were generally relevant and that it actually does meet those criteria better than anything else.
Interesting - I never downvote based on being poorly written, repetitive, trivial, or boring. I do downvote for a hostile-seeming tone accompanied by a wrong or poorly-thought-through argument. I'll disagreevote if I confidently disagree.   "Blue is the best color" was meant as a trivial example of a statement where there's a lot of alternative "things that could be true" if the statement were false, not as an example of a good comment.
Rafael Harth (2mo):
This doesn't seem quite right. The information content of agree vs. disagree depends on your prior, i.e., on P(people agree). If that's <0.5, then an agree vote is more informative; if it's >0.5, then a disagree vote is more informative. But it's not obvious that it's <.5 in general.
Fair point! The scenario I’m imagining is one in which our prior is low because we’re dealing with a specific, complex statement like “BLUE is the BEST color.” There are a lot of ways that could be considered wrong, but only one way for it to be considered right, so by default we’d have a low prior and therefore learn a lot more from an agreevote than a disagreevote. I think this is why it makes sense for a truth seeker to be happier with upvotes than downvotes, pleasure aside. If I get agreevotes, I am getting a lot of information in situations like these. If I get disagreevotes, especially when nobody’s taking the time to express why, then I’m learning very little while perceiving a hint that there is some gap in my knowledge.
Come to think of it, I feel like I tend to downvote most when I perceive that the statement has a lot of support (even if I’m the first voter). Somebody who makes a statement that I think will be widely received as wrong, I will typically either ignore or respond to explicitly. Intuitively, that behavior seems appropriate: I use downvotes where they convey more information and use comments where downvotes would convey less.
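The information-content point above can be made concrete with surprisal: the information in an observation is −log₂ of its prior probability, so whichever vote was less expected a priori carries more bits. A minimal sketch (the priors are made up for illustration):

```python
import math

def surprisal_bits(p: float) -> float:
    """Information content, in bits, of observing an outcome that had prior probability p."""
    return -math.log2(p)

# Made-up priors for "a reader agrees with this specific claim".
# At p = 0.5 both votes carry exactly 1 bit; below 0.5 the agree vote carries more.
for p_agree in (0.1, 0.5, 0.9):
    agree = surprisal_bits(p_agree)
    disagree = surprisal_bits(1 - p_agree)
    print(f"P(agree) = {p_agree}: agree vote carries {agree:.2f} bits, disagree vote {disagree:.2f} bits")
```

For a specific, complex claim (low prior of agreement), this matches the intuition that an agreevote is the more informative signal.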

Hunger makes me stop working, but figuring out food feels like work. The reason hunger eventually makes me eat is that it makes me less choosy and health-conscious, and blocks other activities besides eating.

More efficient food motivation would probably involve enjoying the process of figuring out what to eat, and anticipated enjoyment of the meal itself. Dieting successfully seems to demand more tolerance for mild hunger, making it easier to choose healthy options than unhealthy options, and avoiding extreme hunger.

If your hunger levels follow a normal distrib... (read more)

Old Me: Write more in order to be unambiguous, nuanced, and thorough.

Future Me: Write for the highest marginal value per word.

Mental architecture

Let's put it another way: the memory palace is a powerful way to build a memory of ideas, and you can build the memory palace out of the ideas directly.

My memory palace for the 20 amino acids is just a protein built from all 20 in a certain order.

My memory palace for introductory mathematical series has a few boring-looking 2D "paths" and "platforms", sure, but it's mainly just the equations and a few key words in a specific location in space, so that I can walk by and view them. They're dynamic, though. For example, I imagine a pillar o... (read more)

Mentitation[1] means releasing control in order to gain control

As I've practiced my ability to construct mental imagery in my own head, I've learned that the harder I try to control that image, the more unstable it becomes.

For example, let's say I want to visualize a white triangle.

I close my eyes, and "stare off" into the black void behind my eyelids, with the idea of visualizing a white triangle floating around in my conscious mind.

Vaguely, I can see something geometric, maybe triangular, sort of rotating and shadowy and shifty, coming into focus.

No... (read more)

In some sense it is similar to Jungian active imagination.

I was watching Michael Pollan talk with Joe Rogan about his relationship with caffeine. Pollan brought up the claim that, prior to Prohibition, people were "drunk all the time...", "even kids," because beer and cider "was safer than water."

I myself had uncritically absorbed and repeated this claim, but it occurred to me listening to Pollan that this ought to imply that medieval Muslims had high cholera rates. When I tried Googling this, I came across a couple of Reddit threads (1, 2) that seem sensible, but are unsourced, saying that the "water wasn't safe... (read more)

A book on Henry VIII said that his future in-laws were encouraged to feed his future wife (then a child) alcohol, because she'd need to drink a lot of it in England for safety reasons. Another book said England had a higher disease load because the relative protection of being an island let its cities grow larger (it was talking about industrialized England, but the reasoning should have held earlier). It seems plausible this was a thing in England in particular, and our English-language sources conflated it with the whole world, or at least all of Europe.

I am super curious to hear the disease rate of pre-Mongol-destruction Baghdad.

Managed to find a source [] that gets into the topic. I don't love that their citations are a personal communication and a source from 1899. There are no other sources in this article for claims that the ancients rejected water as a beverage for safety reasons. In History and epidemiology of alcohol use and abuse [], we again have similar uncited claims. In Cholera: A Worldwide History []: the book also talks about a description of Mary, Queen of Scots having a disease reminiscent of cholera.

However, prior to the germ theory of disease, would people have really realized that a) drinking water was causing illness, and b) the alcohol in fermented drinks could decontaminate the water and was therefore safe to drink?

As I scan books covering the history of drinking water and historical debates over the virtues of drinking water vs. alcohol, I am seeing that people had a variety of arguments for or against water. Alcoholic drinks were argued by some to promote normal health and development in children. A captive Frenchman said that in his country, only "invalids and chickens" drink water. In these papers and books, I don't see reports of a clear consensus among booze advocates that alcoholic beverages were to be preferred for their antiseptic properties. That seems to me to be reading a modern germ theory into the beliefs of people from centuries past.
I wouldn't expect them to have full germ theory, but metis around getting sick after drinking bad water and that happening less often with dilute alcohol seems very plausible. I wonder if there's a nobility/prole distinction, in addition to the fact that we're talking about a wide range of time periods and places.

Alcohol concentrations below 50% have sharply diminished disinfecting utility, and wine and beer have alcohol concentrations in the neighborhood of 5%. However, the water in wine comes from grapes, while the water in beer may have been boiled prior to brewing. If the beer or wine was a commercial product, the brewer might have taken extra care in sourcing ingredients in order to protect their reputation.

Fungal contamination is a problem for the beer industry. Many fungi are adapted to the presence of small amounts of alcohol (indeed, that's why fermentation works at all), and these beverages are full of sugars that bacteria and fungi can metabolize for their growth.

People might have noticed that certain water sources could make you sick, but if so, they could also have noticed which sources were safe to drink. On the other hand, consider also that people continued to use and get cholera from the Broad Street Pump. If John Snow's efforts were what was required to identify such a contaminated water source with the benefit of germ theory, then it would be surprising if people would have been very successful in identifying delayed sickness from a contaminated water source unle... (read more)

huh, interesting. I wonder where the hell the common story came from. 
One factor to consider is that drinking alcohol causes pleasure, and pleasure is the motivation in motivated cognition. Most comments on the internet are against any laws or technical measures that would prevent internet users from downloading for free copies of music and video files. I think the same thing is going on there: listening to music -- music new to the listener particularly -- causes pleasure, and that pleasure acts as motivation to reject plans and ways of framing things that would lead to less listening to novel music in the future.
OK, finally found the conduit [] to people who actually know what they're talking about and have investigated this issue.  
Many other quotes from historical figures showing that water was a widely accepted and consumed beverage. I'm sold.

Simulated weight gain experiment, day 3

I'm up to 15 pounds of extra weight today. There's a lot to juggle, and I have decided not to wear the weighted vest to school or the lab for the time being. I do a lot of my studying from home, so that still gives me plenty of time in the vest.

I have to take off the vest when I drive, as the weights on the back are very uncomfortable to lean on. However, I can wear it sitting at my desk, since I have a habit of sitting ramrod-straight in my chair, thanks to decades of practice sitting upright on the piano bench.... (read more)

Problem Memorization

Problem Memorization is a mentitation[1] technique I use often.

If you are studying for an exam, you can memorize problems from your homework, and then practice working through the key solution steps in your head, away from pencil and paper.

Since calculation is too cognitively burdensome in most cases, and is usually not the most important bottleneck for learning, you can focus instead on identifying the key conceptual steps.

The point of Problem Memorization is to create a structure in your mind (in this case, the memorized problem)... (read more)

In my experience with doing something similar, this practice also helps memorize adjacent concepts. For example, I was recently explaining to myself Hubbard's [] technique that uses Student's t-stat to figure out the 90% CI of a range of possible values of a population using a small sample. Having little memory of statistics from school, I had to refresh my understanding of variance and the standard deviation, which are required to use the t-stat table. So now, whenever I need to "access" variance or stdev, I mentally "reach" for the t-stat table and pick up variance and stdev.
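The t-based interval mentioned above can be sketched in a few lines. This is a minimal illustration with made-up sample data; the t critical value is hard-coded from a standard t-table to keep the sketch stdlib-only:

```python
import math
import statistics

# Hypothetical small sample of n = 10 measurements (made-up numbers)
sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9, 12.4, 12.0]

n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)      # sample standard deviation (n - 1 in the denominator)
se = sd / math.sqrt(n)             # standard error of the mean

# A two-sided 90% CI uses the t critical value for df = n - 1.
# For df = 9, the 95th-percentile t value is about 1.833 (from a t-table).
t_crit = 1.833
lo, hi = mean - t_crit * se, mean + t_crit * se
print(f"90% CI for the population mean: ({lo:.2f}, {hi:.2f})")
```

The key moving parts to "reach for" mentally are exactly the ones mentioned above: the sample standard deviation, the standard error, and the t-table lookup for small n.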

Here's some of what I'm doing in my head as I read textbooks:

  • Simply monitoring whether the phrase or sentence I just read makes immediate sense to me, or whether it felt like a "word salad."
  • Letting my attention linger on key words or phrases that are obviously important, yet not easily interpretable. This often happens with abstract sentences. For example, in the first sentence of the pigeonholing article linked above, we have: "Pigeonholing is a process that attempts to classify disparate entities into a limited number of categories (usually, mutually exc
... (read more)

Psychology has a complex relationship with introspection. To advance the science of psychology via the study of introspection, you need a way to trigger, measure, and control it. You always face the problem that paying attention to your own mental processes tends to alter them.

Building mechanistic tools for learning and knowledge production faces a similar difficulty. Even the latest brain/computer interfaces mostly reinterpret a brain signal as a form of computer input. The user's interaction with the computer modifies their brain state.

However, the compu... (read more)

Apeing The Experts

Humans are apes. We imitate. We do this to learn, to try and become the person we want to be.

Watch an expert work, and they often seem fast, confident, and even casual in their approach. They break rules. They joke with the people around them. They move faster, yet with more precision, than you can manage even with total focus.

This can lead to big problems when we're trying to learn from them. Because we're not experts in their subject, we'll mostly notice the most obvious, impressive aspects of the expert's demeanor. For many people, that wil... (read more)

My goal here is to start learning about the biotech industry by considering individual stock purchases.

BTK inhibitors are a class of drugs targeting B-cell malignancies. Most are covalent, meaning that they permanently disable the enzyme they target, which is not ideal for a drug. Non-covalent BTK inhibitors are in clinical trials; some have been prematurely terminated, others are proceeding. In addition, there are covalent reversible inhibitors, but I don't know anything about that class of drugs.

One is CG-806, from Aptose, a $200M company. This is one of its ... (read more)

Status and Being a "Rationalist"

The reticence many LWers feel about the term "rationalist" stems from a paradox: it feels like a status-grab and low-status at the same time.

It's a status grab because LW can feel like an exclusive club. Plenty of people say they feel like they can hardly understand the writings here, and that they'd feel intimidated to comment, let alone post. Since I think most of us who participate in this community wish that everybody would be more into being rational and that it wasn't an exclusive club, this feels unfortunate.

It's low ... (read more)

Everybody in our community knows what it means, but people outside of our community frequently think that it's about what philosophers call rationality.

I use LessWrong as a place not just to post rambly thoughts and finished essays, but something in between.

The in between parts are draft essays that I want feedback on, and want to get out while the ideas are still hot. Partly it's so that I can have a record of my thoughts that I can build off of and update in the future. Partly it's that the act of getting my words together in a way I can communicate to others is an important part of shaping my own views.

I wish there was a way to tag frontpage posts with something like "Draft - seeking feedback" vs. "Fin... (read more)

Yeah, I've been thinking about this for a while. Like, maybe we just want to have a "Draft - seeking feedback" tag, or something. Not sure. 

Yeah, just a tag like that would be ideal as far as I'm concerned. You could also allow people to filter those in or out of their feed.

Eliezer's post on motivated stopping contains this line:

"Who can argue against gathering more evidence? I can. Evidence is often costly, and worse, slow, and there is certainly nothing virtuous about refusing to integrate the evidence you already have. You can always change your mind later."

This is often not true, though, for example with regard to whether or not it's ethical to have kids. So how to make these sorts of decisions?

I don't have a good answer for this. I sort of think that there are certain superhuman forces or drives that "win out." The drive ... (read more)

If you will get more evidence, whether you want it or not, is there a way you can do something with that? Ba zbgvingrq fgbccvat vgfrys - jul fgngvp cebprffrf, engure guna qlanzvp barf?

Reading and re-reading

The first time you read a textbook on a new subject, you're taking in new knowledge. Re-read the same passage a day later, a week later, or a year later, and it will qualitatively feel different.

You'll recognize the sentences. In some parts, you'll skim, because you know it already. Or because it looks familiar -- are you sure which?

And in that skimming mode, you might zoom into and through a patch that you didn't know so well.

When you're reading a textbook for the first time, in short, there are more inherent safeguards to keep you f... (read more)

I just started using [] in anti-kibitzer mode. Highly recommended. I notice that, unfortunately, I've glommed on to karma and status more than is comfortable. It's a big relief to open the front page and just see... ideas!

I just went to try this, and something I noticed immediately was that while the anti-kibbitzer applies itself to the post list and to the post page, it doesn't seem to apply to the post-hover-preview.

There's a pretty simple reason why the stock market didn't tank long-term due to COVID. Even if we get 3 million total deaths due to the pandemic, that's "only" around a 5% increase in total deaths over the year where deaths are at their peak. 80% of those deaths are among people of retirement age. Though their spending is around 34% of all spending, the money of those who die from COVID will flow to others who will also spend it.

My explanation for the original stock market crash back in Feb/March is that investors were nervous that we'd impose truly strict lockdown measures, or perhaps that the pandemic would more seriously harm working-age people than it does. That would have had a major effect on the economy.
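A quick sanity check on the 5% figure: it only works out if it refers to worldwide deaths, since pre-pandemic estimates put those at roughly 60 million per year (that baseline is my assumption, not stated in the comment):

```python
# Rough worldwide deaths per year (assumption: pre-pandemic estimates are ~55-60M)
baseline_annual_deaths = 60_000_000
hypothetical_covid_deaths = 3_000_000

increase = hypothetical_covid_deaths / baseline_annual_deaths
print(f"Relative increase in deaths: {increase:.0%}")  # → 5%
```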


At any given time, many doors stand wide open before you. They are slowly closing, but you have plenty of time to walk through them. The paths are winding.

Striving is when you recognize that there are also many shortcuts. Their paths are straighter, but the doors leading to them are almost shut. You have to run to duck through.

And if you do that, you'll see that through the almost-shut doors, there are yet straighter roads even further ahead, but you can only make it through if you make a mad dash. There's no guarantee.

To run is exhilarating at fir... (read more)

The direction I'd like to see LW moving in as a community

Criticism has a perverse characteristic:

  1. Fresh ideas are easier to criticize than established ideas, because the language, supporting evidence, and theoretical mechanics have received less attention.
  2. Criticism has more of a chilling effect on new thinkers with fresh ideas than on established thinkers with popular ideas.

Ideas that survive into adulthood will therefore tend to be championed by thinkers who are less receptive to criticism.

Maybe we need some sort of "baby criticism" for new ideas. A "devel... (read more)

This reminds me of the "babble and prune" concept. We should allow... maybe not literally the "babble" stage, but something in between, when the idea is already half-shaped but not completed. I think the obvious concern is that all kinds of crackpottery may try to enter this open door, so what would be the balance mechanism? Should authors specify their level of certainty and be treated accordingly? (Maybe choose one of predefined levels from "just thinking aloud" to "nitpicking welcome".) In a perfect world, certainty could be deduced from the tone of the article, but this does not work reliably. Something else...?
While this sounds nice on the abstract level I'm not sure what concrete behavior you are pointing to. Could you link to examples of comments that you think do this well?
I don't want to take the time to do what you've requested. Some hypothetical concrete behaviors, however:

  • Asking questions with a tone that conveys a tentative willingness to play with the author's framework or argument, and an interest in hearing more of the author's thoughts.
  • Compliments: "this made me think of...", "my favorite part of your post was..."
  • Noting connections between a post and the author's previous writings.
  • Offers to collaborate or edit.

Cost/benefit anxiety is not fear of the unknown

When I consider doing a difficult/time-consuming/expensive but potentially rewarding activity, it often provokes anxiety. Examples include running ten miles, doing an extensive blog post series on regenerative medicine, and going to grad school. Let's call this cost/benefit anxiety.

Other times, the immediate actions I'm considering are equally "costly," but one provokes more fear than the others even though it is not obviously stupid. One example is whether or not to start blogging under my real name. Call it ... (read more)

A machine learning algorithm is advertising courses in machine learning to me. Maybe the AI is already out of the box.

An end run around slow government

The US recommended dietary allowance (RDA) of vitamin D is about 600 IU per day. This was established in 2011, and hasn't been updated since. The Food and Nutrition Board of the Institute of Medicine at the National Academy of Sciences sets US RDAs.

According to a 2017 paper, "The Big Vitamin D Mistake," the right level is actually around 8,000 IUs/day, and the erroneously low level is due to a statistical mistake. I haven't been able to find out yet whether there is any transparency about when the RDA will be reconsidered.

But 3... (read more)

So, you can't trust the government. Why do you trust that study? I talked to my MD about it, and he didn't actually know any more than I did about the reasoning, but did know that there is some toxicity at higher levels, and strongly recommended I stay below 2500 IU/day. I haven't fully followed that, as I still have a large bottle of 5000 IU pills, which I'm now taking every third day (with 2000 IU on the intervening days).

The European Food Safety Authority in 2006 (2002 for vitamin D; see page 167 of [], page 180 for the recommendation) found that 50 µg (2000 IU) per day is the safe upper limit.

I'm not convinced it's JUST bureaucratic inefficiency - there may very well be difficulties in finding a balanced "one-size-fits-all" recommendation as well, and the judgment of "supplement a bit lightly is safer than over-supplementing" is well in scope for these general guidelines.
You raise two issues here. One is about vitamin D, and the other is about trust.

Regarding vitamin D, there is an optimal dose for general population health that lies somewhere in between "toxically deficient" and "toxically high." The range from the high hundreds to around 10,000 IU appears to be well within that safe zone. The open question is not whether 10,000 IU is potentially toxic - it clearly is not - but whether, among doses in the safe range, a lower dose can be taken to achieve the same health benefits. One thing to understand is that in the outdoor lifestyle we evolved for, we'd be getting 80% of our vitamin D from sunlight and 20% through food. In our modern indoor lifestyles, we are starving ourselves for vitamin D.

"Supplement a bit lightly is safer than over-supplementing" is only a meaningful statement if you can define the dose that constitutes "a bit lightly" and the dose that is "over-supplementing." Beyond these points, we'd have "dangerously low" and "dangerously high" levels. To assume that 600 IU is "a bit lightly" rather than "dangerously low" is a perfect example of begging the question. []

On the issue of trust, you could just as easily say "so you don't trust these papers, why do you trust your doctor or the government?" The key issue at hand is that in the absence of expert consensus, non-experts have to come up with their own way of deciding whom to trust. In my opinion, there are three key reasons to prefer a study of the evidence to the RDA in this particular case:

  1. The RDA hasn't been revisited in almost a decade, even simply to reaffirm it. This is despite ongoing research in an important area of study that may have links to our current global pandemic. That's strong evidence to me that the current guidance is as it is for reasons other than active engagement by policy-makers with the current state of vitamin D research.
  2. The statistical error identified in
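On the "statistical mistake" point: as I read "The Big Vitamin D Mistake," the claim is that the RDA, which is meant to cover 97.5% of individuals, was derived from the spread of study-average responses rather than individual responses. Averages vary far less than individuals (the SD shrinks by √n), so a threshold set from study means badly understates person-to-person variation. A toy illustration with entirely made-up numbers:

```python
import random
import statistics

random.seed(0)

# Made-up model: individual responses to a fixed dose, SD = 20 around a mean of 50.
individuals = [random.gauss(50, 20) for _ in range(10_000)]

# Group into 100 "studies" of 100 people each and keep only each study's average.
study_means = [
    statistics.mean(individuals[i:i + 100])
    for i in range(0, len(individuals), 100)
]

# Variability across study means is ~sqrt(100) = 10x smaller than across people,
# so a requirement checked against study means looks satisfied at a much lower dose.
print(f"SD across individuals:  {statistics.stdev(individuals):.1f}")
print(f"SD across study means:  {statistics.stdev(study_means):.1f}")
```

This is only a sketch of the general averaging pitfall, not a reconstruction of the paper's actual calculation.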

Explanation for why displeasure would be associated with meaningfulness, even though in fact meaning comes from pleasure:

Meaningful experiences involve great pleasure. They also may come with small pains. Part of how you quantify your great pleasure is the size of the small pain that it superseded.

Pain does not cause meaning. It is a test for the magnitude of the pleasure. But only pleasure is a causal factor for meaning.

In a perfect situation, it would be possible to achieve meaningful experiences without pain, but usually it is not possible. A person who optimizes for short-term pain avoidance will not reach the meaningful experience. Because optimizing for short-term pain avoidance is natural, we have to remind ourselves to overcome this instinct.
This fits with the idea that meaning comes from pleasure, and that great pleasure can be worth a fair amount of pain to achieve. The pain drains meaning away, but the redeeming factor is that it can serve as a test of the magnitude of pleasure, and generate pleasurable stories in the future.

An important counter-argument to my hypothesis is how we may find a privileged "high road" to success and pleasure to be less meaningful. This at first might seem to suggest that we do inherently value pain. In fact, though, what frustrates people about people born with a silver spoon in their mouths is that society seems set up to ensure their pleasure at another's expense. It's not their success or pleasure we dislike. It's the barriers and pain that we think it's contextualized in. If pleasure for one means pain for another, then of course we find the pleasure to be less meaningful.

So this isn't about short-term pain avoidance. It's about long-term, overall, wise and systemic pursuit of pleasure. And that pleasure must be not only in the physical experiences we have, but in the stories we tell about it - the way we interpret life. We should look at it, and see that it is good.

If people are wireheading, and we look at that tendency and it causes us great displeasure, that is indeed an argument against wireheading. We need to understand that there's no single bucket where pleasure can accumulate. There is a psychological reward system where pleasure is evaluated according to the sensory input and brain state. Utilitarian hedonism isn't just about nerve endings. It's about how we interpret them. If we have a major aesthetic objection to wireheading, that counts from where we're standing, no matter how much you ratchet up the presumed pleasure of wireheading.

The same goes recursively for any "hack" that could justify wireheading. For example, say you posited that wireheading would be seen as morally good, if only we could find a catchy moral justification for it. So w
Matt Goldenberg (3y):
I looked through that post but didn't see any support for the claim that meaning comes from pleasure. My own theory is that meaning comes from values, and both pain and pleasure are a way to connect to the things we value, so both are associated with meaning.
I'm a classically trained pianist. Music practice involves at least four kinds of pain:

  • Loneliness
  • Frustration
  • Physical pain
  • Monotony

I perceive none of these to add meaning to music practice. In fact, it was loneliness, frustration, and monotony that caused my music practice to be slowly drained of its meaning and led me ultimately to stop playing, even though I highly valued my achievements as a classical pianist and music teacher. If there'd been an issue with physical pain, that would have been even worse.

I think what pain can do is add flavor to a story. And we use stories as a way to convey meaning. But in that context, the pain is usually illustrating the pleasures of the experience or of the positive achievement. In the context of my piano career, I was never able to use these forms of pain as a contrast to the pleasures of practice and performance. My performance anxiety was too intense, and so it also was not a source of pleasure.

By contrast, I herded sheep on the Navajo reservation for a month in the middle of winter. That experience generated many stories. Most of them revolve around a source of pain, or a mistake. But that pain or mistake serves to highlight an achievement. That achievement could be the simple fact of making it through that month while providing a useful service to my host. Or moments of success within it: getting the sheep to drink from the hole I cut in the icy lake, busting a tunnel through the drifts with my body so they could get home, finding a mother sheep that had gotten lost when she was giving birth, not getting cannibalized by a Skinwalker.

Those make for good stories, but there is pleasure in telling those stories. I also have many stories from my life that are painful to tell. Telling them makes me feel drained of meaning. So I believe that storytelling has the ability to create pleasure out of painful or difficult memories. That is why it feels meaningful: it is pleasurable to tell stories. And being a
2Matt Goldenberg3y
It really depends on what you mean by "pleasure". If pleasure is just "things you want", then almost tautologically meaning comes from pleasure, since you want meaning. If instead, pleasure is a particular phenomenological feeling similar to feeling happy or content, I think that many of us actually WANT the meaning that comes from living our values, and it also happens to give us pleasure. I think that there are also people that just WANT the pleasure, and if they could get it while ignoring their values, they would.

I call this the "Heaven/Enlightenment" dichotomy, and I think it's a frequent misunderstanding. I've seen some people say "all we care about is feeling good, and people who think they care about the outside world are confused." I've also seen people say "All we care about is meeting our values, and people who think it's about feeling good are confused."

Personally, I think that people are more towards one side of the spectrum or the other along different dimensions, and I'm inclined to believe both sides about their own experience.
I think we can consider pleasure, along with altruism, consistency, rationality, fitting the categorical imperative, and so forth, as moral goods. People have different preferences for how they trade off one against the other when they're in conflict. But they of course prefer them not to be in conflict.

What I'm interested in is not what weights people assign to these values - I agree with you that they are diverse - but what causes people to adopt any set of preferences at all. My hypothesis is that it's pleasure. Or more specifically, whatever moral argument most effectively hijacks an individual person's psychological reward system. So if you wanted to understand why another person considers some strange action or belief to be moral, you'd need to understand why the belief system that they hold gives them pleasure.

Some predictions from that hypothesis:

  • People who find a complex moral argument unpleasant to think about won't adopt it.
  • People who find a moral community pleasant to be in will adopt its values.
  • A moral argument might be very pleasant to understand, rehearse, and think about, and unpleasant to abandon. It might also be unpleasant in the actions it motivates its subscriber to undertake. It will continue to exist in their mind if the balance of pleasure in belief to displeasure in action is favorable.
  • Deprogramming somebody from a belief system you find abhorrent is best done by giving them alternative sources of "moral pleasure." Examples of this include the ways people have deprogrammed people from cults and the KKK, by including them in their social gatherings, including Jewish religious dinners, and making them feel welcome. Eventually, the pleasure of adopting the moral system of that shared community displaces whatever pleasure they were deriving from their former belief system.
  • Paying somebody in money and status to uphold a given belief system is a great way to keep them doing it, no matt
2Matt Goldenberg3y
This just kicks the can down the road on you defining pleasure; all of my points still apply. That is, I think it's possible to say that pleasure kicks in around values that we really want, rather than vice versa.

Sci-hub has moved to

Do you treat “the dark arts” as a set of generally forbidden behaviors, or as problematic only in specific contexts?

As a war of good and evil or as the result of trade-offs between epistemic rationality and other values?

Do you shun deception and manipulation, seek to identify contexts where they’re ok or wrong, or embrace them as a key to succeeding in life?

Do you find the dark arts dull, interesting, or key to understanding the world, regardless of whether or not you employ them?

Asymmetric weapons may be the only source of edge for the truth itself. But s... (read more)

There are things like "lying for a good cause", which is a textbook example of what will go horribly wrong because you almost certainly underestimate the second-order effects. Like the "do not wear face masks, they are useless" expert advice for COVID-19, which was a "clever" dark-arts move aimed to prevent people from buying up necessary medical supplies. A few months later, hundreds of thousands have died (also) thanks to this advice. (It would probably be useful to compile a list of lying for a good cause gone wrong, just to drive home this point.)

Thinking about the historical record of people promoting the use of dark arts within the rationalist community, consider Intentional Insights. It turned out the organization was also using the dark arts against the rationalist community itself. (There is a more general lesson here: whenever a fan of dark arts tries to make you see the wisdom of their ways, you should assume that at this very moment they are probably already using the same techniques on you. Why wouldn't they, given their expressed belief that this is the right thing to do?)

The general problem with lying is that people are bad at keeping multiple independent models of the world in their brains. The easiest, instinctive way to convince others about something is to start believing it yourself. Today you decide that X is a strategic lie necessary for achieving goal Y, and tomorrow you realize that actually X is more correct than you originally assumed (this is how self-deception feels from inside). This is in conflict with our goal to understand the world better. Also, how would you strategically lie as a group? Post it openly online: "Hey, we are going to spread the lie X for instrumental reasons, don't tell anyone!" :)

Then there are things like "using techniques-orthogonal-to-truth to promote true things". Here I am quite guilty myself, because I have long ago ad
We already had words for lies, exaggerations, incoherence, and advertising. Along with a rich discourse of nuanced critiques and defenses of each one. The term “dark arts” seems to lump all these together, then uses cherry picked examples of the worst ones to write them all off. It lacks the virtue of precision. We explicitly discourage this way of thinking in other areas. Why do we allow it here?

How to reach simplicity?

You can start with complexity, then simplify. But that's style.

What would it mean to think simple?

I don't know. But maybe...

  • Accept accepted wisdom.
  • Limit your words.
  • Rehearse your core truths, think new thoughts less.
  • Start with inner knowledge. Intuition. Genius. Vision. Only then, check yourself.
  • Argue if you need to, but don't ever debate. Other people can think through any problem you can. Don't let them stand in your way just because they haven't yet.
  • If you know, let others find their own proofs. Move on with the plan.
  • Be slow. Rest
... (read more)

Question re: "Why Most Published Research Findings are False":

Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field... The pre-study probability of a relationship being true is R/(R + 1).

What is the difference between "the ratio of the number of 'true relationships' to 'no relationships' among those tested in the field" and "the pre-study probability of a relationship being true"?

From Reddit: You could think of it this way: If R is the ratio of (combinations that total N on two dice) to (combinations that don't total N on two dice), then the chance of (rolling N on two dice) is R/(R+1). For example, there are 2 ways to roll a 3 (1 and 2, and 2 and 1) and 34 ways to not roll a 3. The probability of rolling a 3 is thus (2/34)/(1+2/34)=2/36.
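That dice arithmetic can be checked by brute force; a quick sketch (variable names are mine, for illustration):

```python
from itertools import product

# Enumerate all two-dice rolls and count those that total 3
rolls = list(product(range(1, 7), repeat=2))
hits = sum(1 for a, b in rolls if a + b == 3)   # 2 combinations: (1,2) and (2,1)
misses = len(rolls) - hits                      # 34 combinations

R = hits / misses           # ratio of "true" to "no" relationships
p = R / (R + 1)             # pre-study probability of a true relationship
# p works out to 2/36, matching the direct count hits / len(rolls)
```

The algebra behind the agreement: if R = T/F, then R/(R+1) = (T/F)/((T+F)/F) = T/(T+F), which is just the fraction of tested relationships that are true.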

NEWSFLASH: Expressing Important But Unpopular Beliefs Considered Virtuous, Remains Unpopular

I'm exploring the idea of agency roughly as a certain tendency to adaptively force a range of prioritized outcomes.

In this conception, having a "terminal goal" is just a special and unusual subcase in which there is one single specific outcome at which the agent is driving with full intensity. To maintain that state, one of its subgoals must be to maintain the integrity of its current goal-prioritization state.

More commonly, however, even an AI with superhuman capabilities will prioritize multiple outcomes, with varied degrees of intensity, exhibiting only... (read more)

I'm going to be exiting the online rationalist community for an indefinite period of time. If anyone wants to keep in touch, feel free to PM me and I will provide you with personal contact info (I'll check my messages occasionally). Best wishes.

Memorization timescales and feedback loops

Often, people focus on memorizing information on timescales of hours, days, or months. I think this is hard, and not necessarily useful - it smacks of premature optimization. The feedback loops are long, and the decision to force yourself through a process of memorizing THIS set of facts instead of THAT set of facts is always one you may wish to revise later. At the very least, you'd want to frequently prune your Anki cards to eliminate facts that no longer seem as pressing.

By contrast, I think it's very useful and... (read more)

Read by rewording as you go

Many people find their attention drifts when they read. They get to the end of a sentence/paragraph/page and find they've lost the plot, or don't see the connection between the words and sentences, sentences and paragraphs, paragraphs and pages.

One way to try and correct this is to hyperfocus on making sure you read the words accurately, without letting your mind drift. But I have found this is a pretty inefficient strategy.

Instead, I do better by "rewording as I go." By this, I mean a sort of skimming where I take in the sentenc... (read more)

3Nanda Ale8mo
Interestingly, reading your internal monologue seems to help me stay focused. I kind of want actual textbooks in this format.
That’s good to know. I’m doing a lot of tinkering with summarization. Mostly I’ve done this sort of rewording on the paragraph level. It would be interesting to do an experiment with combining an original text with summaries of different granularities. Then seeing which version boosted reading comprehension the most.

The Handbook of the Biology of Aging has serious flaws if its purpose is to communicate with non-experts

What makes an effective sentence for transmitting academic information to a newcomer to the field?

  • Information has to be intelligible to the reader
  • Wordcount, word complexity, and sentence complexity should be minimal
  • Information has to be important

Let's ignore the issue of trustworthiness, as well as the other purposes academic writing can serve.

With these three criteria in mind, how does the first sentence of Ch. 3 of the Handbook of the Biology of Aging far... (read more)

Requesting beta readers for "Unit Test Everything."

I have a new post on how to be more reliable and conscientious in executing real-world tasks. It's been through several rounds of editing. I feel like it would now benefit from some fresh eyes. If you'd be willing to give it a read and provide feedback, please let me know!

Mentitation Technique: Organize, Cut and Elaborate

Organize, Cut and Elaborate is a technique for tackling information overload.

People cope with overload by straining to fit as much as they can into their working memory. Some even do exercises to try and expand their memory.

With Organize, Cut and Elaborate, we instead try to link small amounts of new information with what we already have stored in long-term memory. After you've scanned through this post, I invite you to try it out on the post itself.

I don't have an exact algorithm, but a general framework ... (read more)

I think I do something like this, or parts of it, when I add things to my Anki deck. It is less about individual words. I don't try to remember these. I leave that up to Wikipedia or an appendix of my Anki note. A note for your shortform may look like this:

Tissues in the body have correspondence to... ...four proteins (GAGs/glycosaminoglycans) that code for these types of tissues:

  • Chondro = cartilage
  • derma = skin
  • hepa = liver
  • kera = fingernails

Thus cut means less splitting up and more cutting away. Elaboration mostly means googling the topic and adding relevant links, often when reviewing the note.

Toward an epistemic stance for mentitation

Brett Devereaux writes insightfully about epistemologies.

On the other end, some things are impractical to test empirically; empirical tests rely on repeated experiments under changing conditions (the scientific method) to determine how something behaves. This is well enough when you are studying something relatively simple, but a difficult task when you are studying, say, a society.

In our actual lives and also in the course of nearly every kind of scholarship (humanities, social sciences or STEM) we rely on a range

... (read more)


"Mental training" is not the ideal concept handle for the activity I have in mind. "Metacognition" is relevant, but is more an aspect or basis of mental training, in the same way that strategy and tactics are an aspect and basis for a sports team's play.

We have numerous rationality "drills," such as forecasting practice, babble and prune prompts, making bets. To my understanding, many were pioneered by CFAR.

The practice of rationality involves studying object-level topics, drawing conclusions, and executing on decisions. In the sports metaphor,... (read more)

Explaining the Memory Palace

Memory palaces are an ancient technique, rumored to be a powerful way to improve memory. Yet what a strange concept. It's hard enough to remember the facts you want to recall. Why would placing the added cognitive burden of constructing and remembering a memory palace assist with memory?

I think the memory palace is an attempt to deal with the familiar sensation of knowing you know a fact, but not being able to summon that fact to your conscious mind. Later in the day, by chance, it pops into your mind spontaneously. Just as you... (read more)

How do we learn from experience without a control group?

I can discern cause and effect when there's a clear mechanism and immediate feedback. Brushing my teeth makes them feel clean. Eating pizza makes me full. Yet I've met plenty of people who claim to have had direct, powerful experiences confirming that forms of pseudoscience are real - astrology, crystal healing, reiki, etc. 

While I don't believe in astrology or crystal healing based on their reports, I think that in at least some cases, they've experienced something similar to what it's like to e... (read more)

"The subjective component in causal information does not necessarily diminish over time, even as the amount of data increases. Two people who believe in two different causal diagrams can analyze the same data and may never come to the same conclusion, regardless of how "big" the data are. This is a terrifying prospect for advocates of scientific objectivity, which explains their refusal to accept the inevitability of relying on subjective causal information." - Judea Pearl, The Book of Why

There are lots of reasons to measure a person's ability level in some skill. One such reason is to test your understanding in the early stages of learning a new set of concepts.

You want a system that's:

  • Fast, intuitive
  • Suggests what you need to spend more time on
  • Relies on the textbook and your notes to guide your next study activity, rather than needing to "compute" it separately.

Flashcards/reciting concepts from notes is a nice example. It's fast and intuitive, tells you what concepts you're still struggling with. Knowing that, you can look over the materia... (read more)

How much of rationality is specialized?

Cultural transmission of knowledge is the secret of our success.

Children comprise a culture. They transmit knowledge of how to insult and play games, complain and get attention. They transmit knowledge on how to survive and thrive with a child's priorities, in a child's body, in a culture that tries to guarantee that the material needs of children are taken care of.

General national cultures teach people very broad, basic skills. Literacy, the ability to read and discuss the newspaper. How to purchase consumer goods. H... (read more)

What is the #1 change that LW has instilled in me?

Participating in LW has instilled the virtue of goal orientation. All other virtues, including epistemic rationality, flow from that.

Learning how to set goals, investigate them, take action to achieve them, pivot when necessary, and alter your original goals in light of new evidence is a dynamic practice, one that I expect to retain for a long time.

Many memes circulate around this broad theme. But only here have I been able to develop an explicit, robust, ever-expanding framework for making and thinking abo... (read more)

Conservatism says "don't be first, keep everything the same." This is a fine, self-consistent stance.

A responsible moderate conservative says "Someone has to be first, and someone will be last. I personally want to be somewhere in the middle, but I applaud the early adopters for helping me understand new things." This is also a fine, self-consistent stance.

Irresponsible moderate conservatism endorses "don't be first, and don't be last," as a general rule, and denigrates those who don't obey it. It has no answers for who ought to be first and last. But for ... (read more)

I would pay about $5/month for a version of Twitter that was read-only. I want a window, not a door.

I could imagine this functionality implemented as a simple browser plugin or script. Just hide the input box. (No idea whether it already exists.) Would be useful for many social networks.
1Pat Myron11d

Steelman as the inverse of the Intellectual Turing Test

The Intellectual Turing Test (ITT) checks if you can speak in such a way that you convincingly come across as if you believe what you're saying. Can you successfully pose as a libertarian? As a communist?

Lately, the ITT has been getting boosted over another idea, "steelmanning," which I think of as arguing against the strongest version of an idea, the opposite of weakmanning or strawmanning.

I don't think one is better than the other. I think that they're tools for different purposes.

If I'm doi... (read more)

I think ITT is most useful for practicing privately, as a method for systematically developing intellectual understanding of arguments. Practicing it publicly is somewhat useless (though a good sanity check) and requires a setup where claims so channeled are not taken as your own beliefs. Unlike ITT, steelmanning is not aiming for accurate understanding, so it's much less useful for intellectual understanding of the actual points. It's instead a mode of taking inspiration from something you don't consider good or useful, and running away with whatever gears survive the analogy to what you do see as good or useful. Steelmanning is great at opposing aversion to associating with a thing that appears bad or useless, and making some intellectual use of it, even if it's not for the intended purpose and lossy on intended nuance.

My 3-line FizzBuzz in Python:

for i in range(1, 101):
    x = ["", "Fizz"][i%3==0] + ["", "Buzz"][i%5==0]  # the boolean indexes the pair
    print([x, i][len(x)==0])  # fall back to i when x is empty

ChatGPT does it in two:
And in one:
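The two- and one-line versions themselves aren't reproduced above. For illustration, a plausible one-liner (not necessarily what ChatGPT produced) uses string repetition by a boolean and `or` for the fallback:

```python
for i in range(1, 101): print("Fizz" * (i % 3 == 0) + "Buzz" * (i % 5 == 0) or i)
```

Multiplying a string by `False` (i.e. 0) yields `""`, and the empty string is falsy, so `or i` falls back to the number.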

Making Beliefs Identity-Compatible

When we view our minds through the lens of large language models (LLMs), with their static memory prompts and mutable context window, we find a fascinating model of belief and identity formation. Picture this in the context of a debate between an atheist and a creationist: how can this LLM-like model explain the hurdles in finding common ground?

Firstly, we must acknowledge our belief systems, much like an LLM, are slow to change. Guided by a lifetime of self-reinforcing experiences, our convictions, whether atheistic or cr... (read more)

I think the important part is training a good simulation of a new worldview, not shifting weight to it or modifying an old worldview. To change your mind, you first need availability of something to change your mind to. This is mostly motivation to engage and pushing aside protocols/norms/habits that interfere with continual efficient learning. The alternative is never understanding a non-caricature version of the target worldview/skillset/frame, which can persist for decades despite regular superficial engagement.
Do you mean that preserving your openness to new ideas is about being able to first try on new perspectives without necessarily adopting them as the truth? If so, I agree, and I think that captures another oft-neglected aspect of debate. We tend to lump together an explanation of what our worldview is with a claim that our worldview is true. When all participants in the debate view opportunities to debate the topic in question as rare and consequential, all the focus goes into fighting over some sort of perception of victory, rather than into trying to patiently understand the other person's point of view. Usually, that requires allowing the other person to adopt, at least for a while, the perceived role of the expert or leader, and there's always a good chance they'll refuse to switch places with you and try to learn from you as well.

That said, I do think that there are often real asymmetries in the level of expertise that go unrecognized in debate, perhaps for Dunning-Kruger reasons. Experts shouldn't demand deference to their authority, and I don't think that strategy works very well. Nevertheless, it's important for experts to be able to use their expertise effectively in order to spread knowledge and the better decision-making that rests on it.

My take is that this requires experts to understand the identities and formative memories that underpin the incorrect beliefs of the other person, and to conduct their discussion in such a way as to help the other person see how they can accept the expert's knowledge while preserving their identity intact. Sometimes, that will not be possible. An atheist probably can't convince a Christian that there's a way to keep their Christian identity intact while disbelieving in God. Other times, it might be. Maybe an anti-vax person sees themselves as a defender of personal freedom, a skeptic, a person who questions authority, in harmony with nature, or protective of their children's wellbeing. We might guess that being prote
The idea that a worldview needs to be in any way in tune with your own to be learned is one of those protocols/norms/habits that interfere with efficient learning. Until something is learned, it's much less convenient to assess it, or to extract features/gears to reassemble as you see fit. This is mostly about complicated distant collections of related ideas/skills. Changing your mind or believing is more central for smaller or decomposable ideas that can be adopted piecemeal, but not every idea can be straightforwardly bootstrapped. Thus utility of working on understanding perplexing things while suspending judgement. Adopting debate for this purpose is about figuring out misunderstandings about the content of a single worldview, even if its claims are actively disbelieved, not cruxes that connect different worldviews.
To me, you seem to be describing a pretty ideal version of consciously practiced rationality - it's a good way to be or debate among those in scout mindset. That's useful indeed! I am interested here mainly in how to better interface with people who participate in debate, and who may hold a lot of formal or informal power, but who do not subscribe to rationalist culture. People who don't believe, for whatever reason, in the idea that you can and should learn ideas thoroughly before judging them. Those who keep their identities large and opt to stay in soldier mindset, even if they wouldn't agree with Paul Graham or Julia Galef's framings of those terms or wouldn't agree such descriptors apply to them.
The point is that there is a problem that can be mostly solved this way, bootstrapping understanding of a strange frame. (It's the wrong tool if we are judging credence or details in a frame that's already mostly understood, the more usual goal for meaningful debate.) It's not needed if there is a way of getting there step-by-step, with each argument accepted individually on its own strength. But sometimes there is no such straightforward way, like when learning a new language or a technical topic with its collection of assumed prerequisites. Then, it's necessary to learn things without yet seeing how they could be relevant, occasionally absurd things or things believed to be false, in the hope that it will make sense eventually, after enough pieces are available to your own mind to assemble into a competence that allows correctly understanding individual claims. So it's not a solution when stipulated as not applicable, but my guess is that when it's useful, getting around it is even harder than changing habits in a way that allows adopting it. Which is not something that a single conversation can achieve. Hence difficulty of breaking out of falsehood-ridden ideologies, even without an oppressive community that would enforce compliance.
I'm not quite following you - I'm struggling to see the connection between what you're saying and what I'm saying. Like, I get the following points: 1. Sometimes, you need to learn a bunch of prerequisites without experiencing them as useful, as when you learn your initial vocabulary for a language or the rudimentary concepts of statistics. 2. Sometimes, you can just get to a place of understanding an argument and evaluating it via patient, step-by-step evaluation of its claims. 3. Sometimes, you have to separate understanding the argument from evaluating it. The part that confuses me is the third paragraph, first sentence, where you use the word "it" a lot and I can't quite tell what "it" is referring to.
Learning prerequisites is an example that's a bit off-center (sorry!): strangeness of a frame is not just unfamiliar facts and terms, but unexpected emphasis and contentious premises. This makes it harder to accept its elements than to build them up on their own island. Hanson's recent podcast is a more central example for me. By step-by-step learning I meant a process similar to reading a textbook, with chapters making sense in order, as you read them. As opposed to learning a language by reading a barely-understandable text, where nothing quite makes sense and won't for some time. The "it" is the procedure of letting strange frames grow in your own mind without yet having a handle on how/whether they make sense. The sentence is a response to your suggesting that debate with a person not practicing this process is not a place for it. The point is I'm not sure what the better alternative would be. Turning a strange frame into a step-by-step argument often makes it even harder to grasp.
Ah, that makes sense. Yes, I agree that carefully breaking down an argument into steps isn't necessarily better than just letting it grow by bits and pieces. What I'm trying to emphasize is that if you can transmit an attitude of interest and openness in the topic, the classic idea of instilling passion in another person, then that solves a lot of the problem. Underneath that, I think a big barrier to passion, interest and openness for some topic is a feeling that the topic conflicts with an identity. A Christian might perceive evolution as in conflict with their Christian identity, and it will be difficult or impossible for even the most inspiring evolutionist to instill interest in that topic without first overcoming the identity conflict. That's what interests me. I don't think that identity conflict explains all failures to connect, not by a long shot. But when all the pieces are there - two smart people, talking at length, both with a lot of energy, and yet there's a lot of rancor and no progress is made - I suspect that identity conflict perceptions are to blame.
Your last shortform made it clearer that what you discuss could also be framed as seeking ways of getting the process started, and exploring obstructions. A lot of this depends on the assumption of ability to direct skepticism internally, otherwise you risk stumbling into the derogatory senses of "having an open mind" ("so open your brains fall out").

Traditional skepticism puts the boundary around your whole person or even community. With a good starting point, this keeps a person relatively sane and lets in incremental improvements. With a bad starting point, it makes them irredeemable. This is a boundary of a memeplex that infests one's mind, a convergently useful thing for most memeplexes to maintain. Any energy for engagement specific people would have is then spent on defending the boundary, only letting through what's already permitted by the reigning memeplex. Thus debates between people from different camps are largely farcical, mostly recruitment drives for the audience.

A shorter path to self-improvement naturally turns skepticism inward, debugs your own thoughts that are well past that barrier. Unlike the outer barriers, this is an asymmetric weapon that reflects on the truth or falsity of ideas that are already accepted. But once it's in place, it becomes much safer to lower the outer barriers, to let other memeplexes open embassies in your own mind. Then the job of skepticism is defending your own island in an archipelago of ideas hosted in your own mind that are all intuitively available to various degrees and allowed to grow in clarity, but often hopelessly contradict each

The best way I've come up with to explain how to use ChatGPT effectively is to think of it as a horde of undergraduate researchers you can employ for pennies. They're somewhat unreliable, but you can work them to the bone for next to nothing, and if you can give them a task that's within their reach, or cross-check their answers against each other, you can do a lot with that resource.

A workflow for forecasting

  1. Identify a topic with pressing need for informative forecasting
  2. Break topic down into rough qualitative questions
  3. Prioritize questions by informativeness
  4. Refine high priority questions to articulate importance, provide background information, make falsifiable, define resolution criteria
  5. Establish base rate
  6. Identify factors to adjust base rate up and down
  7. Create schedule for updating forecast over time
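Steps 5 and 6 can be made concrete with odds-ratio adjustments; a minimal sketch (the function name and factor values are hypothetical, purely for illustration):

```python
def adjust_base_rate(base_rate, odds_factors):
    """Adjust a probability by multiplying its odds by each factor.

    Factors > 1 push the forecast up; factors < 1 push it down.
    """
    odds = base_rate / (1 - base_rate)
    for f in odds_factors:
        odds *= f
    return odds / (1 + odds)

# Hypothetical: 20% base rate, one consideration that doubles the odds
forecast = adjust_base_rate(0.20, [2.0])   # roughly 0.33
```

Working in odds rather than raw probabilities keeps sequential adjustments order-independent and guarantees the result stays between 0 and 1.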

Very interesting that after decades of GUI-ification, we're going back to text-based workflows, with LLMs at the nexus. I anticipate we'll see GUIs encapsulating many of these workflows but I'm honestly not sure - maybe text-based descriptions of desired behaviors are the future.

I have learned to expect to receive mostly downvotes when I write about AI. 

I can easily imagine general reasons why people might downvote me. They might disagree, dislike, or fear negative consequences of my posts. They might be bored of the topic and want only high-caliber expert writings. It might be that the concentrated AI expertise on LessWrong collectively can let their hair down on other topics, but demand professionalism on the specific topic of their expertise.

Because I don't know who's downvoting my AI posts, or their specific reasons why, ... (read more)

I don't post much, but I comment frequently, and somewhat target 80% positive reactions.  If I'm not getting downvoted, I'm probably not saying anything very interesting.  Looking at your post history, I don't see anything with negative totals, though some have low-ish scores.  I also note that you have some voters a little trigger-happy with strong votes (18 karma in 5 votes), which is going to skew the perception. I recommend you not try to learn very much from votes.  It's a lightweight indicator of popularity, not much more.  If something is overwhelmingly negative, that's a signal you've crossed some line, but mixed and slightly-positive probably means you're outside the echo chamber.   Instead of karma, focus on comments and discussion/feedback value.  If someone is interested enough to interact, that's worth dozens of upvotes.  AND you get to refine your topical beliefs based on the actual discussion, rather than (well, in addition to) updating your less-valuable beliefs about what LW wants to read.
A) your observation about negative feedback being undirected is correct, i.e., a well-known phenomenon. B) can you give some examples?

Fixing the ticker-tape problem, or the disconnect between how we write and how we read

Between the tedious wash steps of the experiment I'm running, I've been tinkering with Python. The result is aiRead.

aiRead integrates the ideas about active reading I've accumulated over the last four years. Although its ChatGPT integration is its most powerful feature, this comment is about an insight I've gleaned by using its ticker-tape display feature.

Mostly, I sit down at my desk to read articles on my computer screen. I click a link, and what appears is a column of ... (read more)

A proposal to better adapt nonfiction writing to human memory and attention

Let's explain why it's hard to learn from typical nonfiction writing, so that we can figure out how to potentially solve the problem.

  • Nonfiction writing is compressed. It tries to put across each piece of new information once. But you need repeated exposure, in new ways, to digest information.
  • Nonfiction writing is concentrated. It omits needless words, getting across its ideas in the minimum possible space.
  • Nonfiction writing is continuous. It goes on without a break. You can only hol
... (read more)

The succession of new OpenAI products has proven to me that I'm bad at articulating benchmarks for AI success.

For example, ChatGPT can generate working Python code for a game of mancala, except that it ignores captures and second turns completely, and the UI is terrible. But I'm pretty good at Python, and it would be easier for me to debug and improve ChatGPT's code than to write a complete mancala game.

But I wouldn't have thought to set out "writing code that can be fixed faster than a working program can be written from scratch" as a benchmark. In hindsi... (read more)

The idea of having ChatGPT invent benchmarks can't be tested by just asking it to, but I tried asking it to come up with a slightly more difficult intellectual challenge than writing easily debugged code. Its only two ideas seem to be: * Designing and implementing a new programming language that is easier to read and understand than existing languages, and has built-in features for debugging and error-checking. * Writing efficient and optimized algorithms for complex problems. I don't think either of these seem merely "slightly more difficult" than inventing easily debuggable code.

Sometimes, our bad gut feelings lead us astray. Political actors use this to their advantage, programming people with bad gut feelings and exploiting the resulting political division. Rationality encourages us in these cases to set aside the bad-gut-feeling-generator ("rhetoric"), subject the bad gut feeling to higher scrutiny, and then decide how we ought to feel. There's so much rhetoric, so much negativity bias, and so much bad gut feeling that we might even start to adopt a rule of thumb that "most bad gut feelings are wrong."

Do this to yourself too long, tho... (read more)

Stress-promoting genes?

How can The Selfish Gene help us do better evolutionary thinking? Dawkins presents two universal qualities of a "good gene:"

  • "Selfishness," the ability to outcompete rival alleles for the same spot on the chromosome.
  • "Fitness," the ability to promote reproductive success in the vehicle organism.

One implication is that genes do best by not only making their vehicle organisms able to survive and reproduce in the niche for which the gene adapts them. The gene, or other genes with which it coordinates, ought to make that organism prefer th... (read more)

How I make system II thinking more reliable 

When Kahneman popularized System 1 and System 2 thinking (terms coined by Stanovich and West), he used the distinction to categorize and explain modes of thought. Over the years, I've learned that it's not always easy to determine which one I'm doing. In particular, I often find myself believing that I'm giving a consciously calculated answer, when in fact I am inappropriately relying on haphazard intuition.

I don't know exactly how to solve this problem. The strategy that seems best is to create a "System II Safeguard" that ensures I am employi... (read more)

A distillation might need to fill in background knowledge

A good summary/distillation has two functions. First, it extracts the most important information. Second, it helps the reader understand that information more efficiently.

Most people think a distillation cuts a source text down to a smaller size. It can do more than that. Sometimes, the most important information in a text requires background knowledge. If the reader doesn't have that background knowledge, a distillation designed for that reader must start by communicating it to them, even if the ori... (read more)

1Jay Bailey8mo
I consider distillation to have two main possibilities - teach people something faster, or teach people something better. (You can sometimes do both simultaneously, but I suspect that usually requires you to be really good and/or the original text to be really bad) So, I would separate summarisation (teaching faster) from pedagogy (teaching better) and would say that your idea of providing background knowledge falls under the latter. The difference in our opinions, to me, is that I think it's best to separate the goal of this from the goal of summarising, and to generally pick one and stick to it for any given distillation - I wouldn't say pedagogy is part of summarising at all, I'd say if your goal is to teach the reader background knowledge they need for Subject X, you're no longer "summarising" Subject X. Which is fine. Teaching something better is also a useful skill, even if the post ends up longer than the thing it was distilling.
How do you define teaching “better” in a way that’s cleanly distinguished from teaching “faster?” Or on a deeper level, how would you operationalize “learning” so that we could talk about better and worse learning with some precision? For example, the equation for beam bending energy in a circular arc is Energy = 0.5EIL (1/R)^2. “Shallow” learning is just being able to plug and chug if given some values to put in. Slightly deeper is to memorize this equation. Deeper still is to give a physical explanation for why this equation includes the variables that it does. Yet we can’t just cut straight to that deepest layer of understanding. We have to pass through the shallower understandings first. That requires time and speed. So teaching faster is what enables teaching better, at least to me. Do you view things differently?
3Jay Bailey8mo
This isn't a generalised theory of learning that I've formalised or anything. This is just my way of asking "What's my goal with this distillation?" The way I see it is - you have an article to distill. What's your intended audience? If the intended audience is people who could read and understand the article given, say, 45 minutes - you want to summarise the main points in less time, maybe 5-10 minutes. You're summarising, aka teaching faster. This usually means less, not more, depth. If the intended audience is people who lack the ability to read and understand the article in its current state, people who bounced off of it for some reason, then summarising won't help. So you need to explain the background information and/or explain the ideas of the article better. (which often means more, not less, depth) Thus, your goal is to teach "better". Maybe "better" is the wrong word - maybe the original article was well-suited for an audience who already knows X, and your article is suited for ones that don't. Or maybe the original article just wasn't as well explained as it could have been, so you're rewriting it to resolve people's confusions, which often means adding more detail like concrete examples. What it means to "teach better" is outside the scope of this particular explanation, and I don't have a formal idea, just some general heuristics, like "Identify where people might get confused" and "Start concrete, then go abstract" and "Ask what the existing understanding of your target audience is", but you don't need to have a definition of what it means to "teach better" in order to know that this is your goal with a distillation - not to speed up a process people can do on their own, but to resolve a confusion people might have when trying to read an article independently.
I think that it's good to keep those general heuristics in mind, and I agree with all of them. My goal is to describe the structure of pedagogical text in a way that makes it easier to engineer.

I have a way of thinking about shallowness and depth with a little more formality. Starting with a given reader's background knowledge, explaining idea "C" might require explaining ideas "A" and "B" first, because they are prerequisites for the reader to understand "C." A longer text that you're going to summarize might present three such "chains" of ideas:

A1 -> B1 -> C1
A2 -> B2 -> C2 -> D2
A3 -> B3

It might take 45 minutes to convey all three chains of ideas to their endpoints. Perhaps a 5-10 minute summary can only convey 3 of these ideas. If the most important ideas are A1, A2, and A3, then it will present them. If the most important idea is C1, then it will present A1 -> B1 -> C1. If D2 is the most important idea, then the summary will have to leave out this idea, be longer, or find a more efficient way to present its ideas.

This is why I see speed and depth as being intrinsically intertwined in a summary. Being able to help the reader construct an understanding of ideas more quickly allows it to go into more depth in a given timeframe.

All the heuristics you mention are important for executing this successfully. For example, "Ask what the existing understanding of your audience is" comes into play if the summary-writer accidentally assumes knowledge of A2, leaves out that idea, and leads off with B2 in order to get to D2 in the given timeframe. "Start concrete, then go abstract" might mean that the writer must spend more time on each point, to give a concrete example, and therefore they can't get through as many ideas in a given timeframe as they'd hoped. "Identify where people might get confused" has a lot to do with how the sentences are written; if people are getting confused, this cuts down on the number of ideas you can effectively present in a given timefr
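The prerequisite-chain model here can be made concrete with a short sketch. The chain names and structure below are taken from the example in the comment; the code itself is just one illustrative way to compute which ideas a summary must include to reach a target idea:

```python
# Illustrative sketch of the prerequisite-chain model described above.
# Each entry maps an idea to the ideas that must be explained first.

prereqs = {
    "B1": ["A1"], "C1": ["B1"],
    "B2": ["A2"], "C2": ["B2"], "D2": ["C2"],
    "B3": ["A3"],
}

def ideas_to_convey(target, prereqs):
    """Return the full chain needed to explain `target`, prerequisites first."""
    chain = []
    def visit(idea):
        for p in prereqs.get(idea, []):
            visit(p)
        if idea not in chain:
            chain.append(idea)
    visit(target)
    return chain

# A 3-idea budget just fits C1's chain, but cannot reach D2.
print(ideas_to_convey("C1", prereqs))  # ['A1', 'B1', 'C1']
print(ideas_to_convey("D2", prereqs))  # ['A2', 'B2', 'C2', 'D2']
```

A summary-writer who skips A2 (assuming the reader already knows it) is effectively shortening D2's chain to fit the budget, which is exactly where the "existing understanding of your audience" heuristic bites.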

Study Chatter

Lately, I've hit on another new technique for learning and scholarship. I call it "study chatter." It's simple to describe, but worth explaining why it works, and how it compares to other techniques.

Study chatter means that you write, think, or talk to yourself, in an open-ended way, about the topic you're trying to study. Whatever comes to mind is fine. Here's how this might work in practice (just to give you a flavor):

In Quantitative Cell Biology, we're learning a lot about ways to analyze proteins and cells - lots of methods. This is import

... (read more)

Distillation sketch - rapid control of gene expression

A distillation sketch is like an artistic sketch or musical jam session. Which version below do you prefer, and why?

My new version:

When you’re starving or exercising intensely, your body needs to make more energy, but there's not much glucose or glycogen left to do it with. So it sends a signal to your liver cells to start breaking down other molecules, like amino acids and small molecules, and turning them into glucose, which can in turn be broken down for energy. Cortisol is the hormone that carries t

... (read more)

Studying for retention is a virtuous cycle

If you know how to visualize and build memory palaces, it makes it much easier to understand new concepts, practice them, and therefore to retain them for the long term.

Once you gain this ability, it can transform your relationship with learning.

Before, I felt akrasia. I did my work - it was just a grind. Learning shouldn't feel that way.

Now, I feel voracious. I learn, and the knowledge sticks. My process improves at the same time. My time suddenly has become precious to myself. The books aren't full of tedium that... (read more)

Memory palace inputs

Once you have the mental skills to build memory palaces, you become bottlenecked by how well-organized your external resources are. Often, you'll have to create your own form of external organization.

I've found that trying to create a memory palace based on disorganized external information is asking for trouble. You'll be trying to manage two cognitive burdens: deciding how to organize the material, and remembering the material itself.

This is unnecessary. Start by organizing the material you want to memorize on the page, where you don't have to remember it. Once that's accomplished, then try to organize it in your head by building it into a memory palace.

Memory palace maintenance

A problem with the "memory palace" concept is that it implies that the palace, once constructed, will be stable. If you've built a building well enough to take a walk through it, it will typically stand up on its own for a long time.

By contrast, a "memory palace" crumbles without frequent maintenance, at least for the first few days/weeks/months - I don't really know how long it might take to hypothetically have the memory palace be self-supporting. I suspect that depends on how often your work or further schooling makes use of the... (read more)

Back when I really didn't know what I was doing, I tried to memorize a textbook verbatim. Fortunately, that didn't last long. Even with good technique, memorization is effortful and time-consuming. To get the most benefit, we need to do it efficiently.

What does that mean?

Efficient memorization is building the minimal memory that lets you construct the result.

Let's expand on that with an example.

Taylor's Inequality is a key theorem related to Taylor Series, a powerful tool used widely across science and engineering. Taylor's Inequality gives us a way of sho... (read more)
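For reference, the standard statement of Taylor's Inequality (the theorem the truncated passage introduces) bounds the remainder of the degree-$n$ Taylor polynomial centered at $a$:

```latex
% Taylor's Inequality: if |f^{(n+1)}(x)| \le M for all x with |x - a| \le d,
% then the remainder R_n(x) of the degree-n Taylor polynomial at a satisfies
|R_n(x)| \;\le\; \frac{M}{(n+1)!}\,|x - a|^{n+1}
\qquad \text{for } |x - a| \le d.
```

This is a good example of the "minimal memory" point: it is enough to remember that the bound looks like the next Taylor term with $M$ in place of the derivative; the rest can be reconstructed.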

Mental practice for formal charge

I'm reviewing the chemistry concept of formal charge. There are three simple rules:

  1. Have as few formal charges on the molecule as possible.
  2. Separate like charges.
  3. Put negative formal charges on electronegative atoms.

Lewis structures are unnecessary for practicing these rules. We can picture pairs of graphs: red vertices represent electronegative atoms, and vertex labels represent formal charges.

I decide which has a more favorable formal charge distribution, according to the rules.

This game took me longer to describe than to invent and... (read more)
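A toy version of this game can even be scored mechanically. The sketch below is not real chemistry software, just an invented penalty function for rules 1 and 3 above (rule 2, separating like charges, would need geometry, so it is omitted). An atom is a pair `(is_electronegative, formal_charge)`:

```python
# Toy scoring of formal-charge rules 1 and 3 above. Entirely illustrative:
# lower score = more favorable charge distribution.

def score(structure):
    """Penalize total charge magnitude (rule 1) and negative
    charges sitting on non-electronegative atoms (rule 3)."""
    penalty = sum(abs(q) for _, q in structure)
    penalty += sum(1 for en, q in structure if q < 0 and not en)
    return penalty

# Two candidate distributions for the same two-atom skeleton:
a = [(False, 0), (True, -1)]   # negative charge on the electronegative atom
b = [(False, -1), (True, 0)]   # negative charge on the other atom
print(score(a) < score(b))     # True: structure a is preferred
```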

Efficiency is achieving complex ends through simpler means.

The genius is replaced by the professional. The professional is replaced by the laborer. The laborer is replaced by the animal, or by the robot.

The animal is replaced by the cell. The cell is replaced by the protein. The protein is replaced by the small molecule. The small molecule is replaced by the element.

The robot is replaced by the machine. The machine is replaced by the tool. The tool is replaced by a redesigned part. The part is obviated by a redesigned system.

The advantages of mental practice

Deliberate practice is usually legible practice, meaning that it can be observed, socially coordinated, and evaluated. Legible forms of deliberate practice are easy to do in a classroom, sports team, or computer program. They are also easy to enforce and incentivize. Almost every form of deliberate practice you have ever been required to do has been some form of legible practice.

Mental practice is illegible. Almost certainly, you have rarely, if ever, been asked to do it, and with even more confidence, I can say that you h... (read more)

Why the chemistry trick works

Yesterday, I wrote about my chemistry trick. To learn the structure of proline, we don't try to "flash" the whole image into our head.  

Instead, we imagine ourselves standing on a single atom. We imagine taking a "walk" around proline. We might try to visualize the whole molecule as it would appear from our position. Or we might just see the next atom in front of us as we walk.

Why does this work? It exploits the cut principle. We don't try to recall every bit of information all at once. Instead, we recall only a small piec... (read more)

The relationship between metabolism, physical fatigue, and the energy cost of physical motion is relatively well-understood.

  • We know the body's efficiency in harvesting energy from stored sugars and fats.
  • We know how energy is transported throughout the body.
  • We know what fuel sources different tissues use.
  • We have detailed knowledge on scales ranging from the molecular to the gross anatomical to explain how organisms move.
  • We have Newtonian mechanics to explain how a certain motion of the body can generate a force on a moving object and what its trajectory wil
... (read more)

What is a Mind Mirror?

The brain does a lot of unconscious processing, but most people feel that they can monitor and control their own brain's behavior, at least to some extent. We are depending on their ability to do this anytime we expect a student to do any amount of self-directed learning.

As in any other domain, a common language to talk about how we monitor and control our own brains for the tasks of learning would be extremely helpful.

"Mind mirroring" is my tentative term for provoking the brain to reliably and automatically create a certain mental s... (read more)

Over the last several years of studying how to study, one of the central focuses has been on managing the cognitive burden. Three main approaches recur again and again:

  1. Using visualization and the phonological loop to examine key information and maintain it in working memory
  2. Elaborative rehearsal to build and retrieve long-term memories
  3. Note-taking, information storage, and other external tools to supplement memory

I've noticed significant progress in my abilities in each of these individual areas, but synthesizing them is even more important.

When I lean on on... (read more)

Fascinatingly, "does getting a second opinion..." autocompletes on Google to "offend doctors," not "lead to better health outcomes."

Score one for The Elephant In The Brain?

Also, it sounds like it does in fact help: 20% of patients get a distinctly different second diagnosis, and 66% of diagnoses are at least altered.
2Matt Goldenberg10mo
Does it in fact improve outcomes?  Naively I would bet that people tend to go with the diagnosis that fits their proclivities, positive or negative, and therefore getting one diagnosis is actually LESS biased than two.  Of course, if they instead go for the diagnosis that better fits their subjective evidence, then it would likely help.
That is an interesting thought. I tried looking it up on Google Scholar, but I do not see any studies on the subject. I also expect that a norm of seeking two diagnoses would incentivize doctors to prioritize accurate diagnosis. In the long run, I am more confident that it would benefit patient care.

What happens when innovation in treatment for a disease stagnates?

Huntington's has only two FDA-approved drugs to treat symptoms of chorea: tetrabenazine (old) and deutetrabenazine (approved in 2017).

However, in the USA, physicians can prescribe whatever they feel is best for their patient off-label (and about 1 in 5 prescriptions are off-label). Doctors prescribe Risperdal (risperidone), Haldol (haloperidol) and Thorazine (chlorpromazine) off-label to treat HD chorea symptoms.

So off-label prescribing is a major way that the medical system can innovate, by adapting old drugs to new purposes.

A reframe of biomedical industry analysis:

"How is US healthcare going to change between now and the year 202_?"

This requires an understanding of the size of various diseases, the scope of treatment options currently available, natural fluctuations from year to year, and new medicines coming down the pipeline.

It would be interesting to study treatment options for a stagnant disease, one that hasn't had any new drugs come out in at least a few years.

A biomedical engineer might go a step further, and ask:

"How could healthcare change between now and the year 20__?"

Accurate arguments still need to be delightful and convincing

You can't just post two paragraphs of actionable advice and expect people to read it. You can't just explain things in intuitive language and expect to convert the skeptical. Barring exceptional circumstances, you can't get straight to the point. System 2 takes time to initialize.

Blogging is a special format. You're sort of writing for a self-selected niche. But you're also competing for people's attention when they're randomly browsing. Having a strong brand as a writer (Scott, Eliezer, Zvi, and... (read more)

Pascal's Mugging has always confused me. It relies on the assumption that the likelihood of the payoff diminishes more slowly than the size of the payoff.

I can imagine regions of payoff vs. likelihood graphs where that's true. But in general, I expect likelihood to fall off faster and faster as the payoff grows. So eventually, likelihood diminishes faster than payoff increases, even if that was not the case at first. This lets me avoid Pascal's Muggings.
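A quick numerical illustration of this point, with made-up growth rates: suppose the offered payoff grows as 2^n while (by the assumption above) the likelihood shrinks as 3^-n. Then the expected-value terms vanish and their sum stays bounded, so no escalating offer can mug you:

```python
# Illustrative only: payoff grows as 2**n, likelihood shrinks as 3**-n,
# so each expected-value term is (2/3)**n and the series converges.

terms = [(2 ** n) * (3 ** -n) for n in range(1, 30)]
print(terms[0])    # ≈ 0.67: modest payoff, modest probability
print(terms[-1])   # ≈ 8e-6: huge payoff, but vanishingly unlikely
print(sum(terms))  # ≈ 2: bounded total, so no mugging
```

If instead the likelihood only shrank as 2^-n (exactly matching the payoff's growth), every term would equal 1 and the sum would grow without bound, which is the regime where the mugging works.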

I can imagine an intelligence that somehow gets confused and misses this point. ... (read more)

It's a mistake to think that likelihood or payoff changes - each instance is independent.  And the reason it's effective is the less-explored alternate universe where you get PAID just for being the kind of person who'd accept it.  In the setup, this is stated to be believed by you (meaning: you can't just say "but it's low probability!"  that's rejecting the premise).  That said I agree with your description, with a fairly major modification.  Instead of  I'd say Pascal's Mugging is just a description of one of the enormous number of ways that a superior predictor/manipulator can set up human-level decision processes to fail.  Adversarial situations against a better modeler are pretty much hopeless.
3Gerald Monroe2y
There are a number of ways to deal with this, however, the succinct realization is that Pascal's mugging isn't something you do to yourself.  Another player is telling you the expected reward, and has made it arbitrarily large. Therefore this assessment of future reward is potentially hostile misinformation.  It's been manipulated.  For better or for worse, the way contemporary institutions typically deal with this problem is to simply assume any untrustworthy information is exactly zero probability.  None at all.  The issue with this comes up that contemporary institutions use "has a degree in the field/peer acclaim" as a way to identify who might have something trustworthy to say, and weight "has analyzed the raw data with correct math but is some random joe" as falling in that zero case.   This is where we end up with all kinds of failures and one of the many problems we need a form of AI to solve.   But yes you have hit on a way to filter potentially untrustworthy information without just throwing it out.  In essence, you currently have a belief and confidence.  Someone has some potentially untrustworthy information that differs from your belief.  Your confidence in that data should decrease faster than the difference between the information and your present belief.  
I agree with your analysis. For a Pascal's mugging to work, you need to underestimate how small Pr(your interlocutor is willing and able to change your utility by x) gets when x gets large. Human beings are bad at estimating that, which is why when we explicitly consider PM-type situations in an expected-utility framework we may get the wrong answer; it's possible that there is a connection between this and the fact (which generally saves us from PM-like problems in real life) that we tend to round very small probabilities to zero, whether explicitly or implicitly by simply dismissing someone who comes to us with PM-type claims.

Learning feedback loops

Putting a serious effort into learning Italian in the classroom can make it possible to immerse yourself in the language when you visit Italy. Studying hard for an engineering interview lets you get a job where you'll be able to practice a set of related skills all the time. Reading a scientist's research papers makes you seem like an attractive candidate to work in their lab, where you'll gain a much more profound knowledge of the field.

This isn't just signaling. It's much more about acquiring the minimal competency to participate i... (read more)

That's the crux of most of the education debates.  In reality, almost nothing is just signaling - it's a mix of value and signaling, because that value is actually what's being signaled.    The problem is that it's hard to identify the ratio of real and signaled value without investing a whole lot, and that leads to competitive advantage (in some aspects) to those who can signal without the expense of producing the real value.
Absolutely. There are plenty, plenty of parasites out there. And I hope we can improve the incentives. Thing is, it also takes smart people with integrity just showing up and insisting on doing the right thing, treating the system with savvy, yes, but also acting as if the system works the way it’s supposed to work. I’m going into a scientific career. I immediately saw the kind of lies and exploitations that are going hand in hand with science. At the same time, there are a lot of wonderful people earnestly doing the best research they can. One thing I’ve seen. Honest people aren’t cynical enough, and they’re often naive. I have met people who’ve thrown years away on crap PIs, or decades on opaque projects with no foundation. I know that for me, if I’m going to wade into it, I have to keep a vision of how things are supposed to be, as well as the defects and parasitism.

Business idea: Celebrity witness protection.

There are probably lots of wealthy celebrities who’d like to lose their fame and resume a normal life. Imagine a service akin to witness protection that helped them disappear and start a new life.

I imagine this would lead to journalists and extortionists trying to track them down, so maybe it’s not tractable in the end.

Just a notepad/stub as I review writings on filtered evidence:

One possible solution to the problem of the motivated arguer is to incentivize in favor of all arguments being motivated. Eliezer covered this in "What Evidence Filtered Evidence?" So a rationalist response to the problem of filtered evidence might be to set up a similar structure and protect it against tampering.

What would a rationalist do if they suspected a motivated arguer was calling a decision to their attention and trying to persuade them of option A? It might be to become a motivated arg... (read more)

Aspects of learning that are important but I haven't formally synthesized yet:

  • Visual/spatial approaches to memorization
  • Calibrating reading speeds/looking up definitions/thinking up examples: filtering and organizing to distinguish medium, "future details," and the "learning edge"
  • Mental practice/review and stabilizing an inner monologue/thoughts
  • Organization and disambiguation of review questions/procedures
  • Establishing procedures and stabilizing them so you can know if they're working
  • When to carefully tailor your approach to a particular learning challenge,
... (read more)

Cognitive vs. behaviorist approaches to the study of learning

I. Cognitivist approaches

To study how people study on an internal, mental level, you could do a careful examination of what they report doing with their minds as they scan a sentence of a text that they're trying to learn from.

For example, what does your mind do if you read the following sentence, with the intent to understand and remember the information it contains?

"The cerebral cortex is the site where the highest level of neural processing takes place, including language, memory and cognitive... (read more)

Practice sessions in spaced-repetition literature

Spaced repetition helps, but how do spaced-repetition researchers have their subjects practice within a single practice session? I'd expect optimized practice to involve not only spacing and number of repetitions, but also an optimal way of practicing within sessions.

So far, I've seen a couple formats:

  1. Subjects get an explanation, examples, and a short, timed set of practice problems.
  2. Subjects practice with flash cards. Each "round" of flash cards involves looking at only the cards they haven't seen or got wro
... (read more)
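Format 2 above is easy to state precisely. This sketch is a minimal model, not taken from any particular study: each round re-tests only the cards not yet answered correctly, and `recall` is a stand-in for the subject's memory.

```python
# Sketch of practice format 2 above: repeat rounds over only the
# cards the subject hasn't yet answered correctly.

def practice_rounds(cards, recall):
    """Count rounds until every card has been answered correctly once."""
    rounds = 0
    remaining = list(cards)
    while remaining:
        rounds += 1
        remaining = [card for card in remaining if not recall(card)]
    return rounds

# A subject who misses every card once needs exactly two rounds.
seen = set()
def recall_on_second_try(card):
    if card in seen:
        return True
    seen.add(card)
    return False

print(practice_rounds(["ion", "mole", "acid"], recall_on_second_try))  # 2
```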

Are democracies doomed to endless intense, intractable partisanship?

Model for Yes: In a democracy, there will be a set of issues. Each has a certain level of popular or special-interest support, as well as constitutionality.

Issues with the highest levels of popular support and constitutionality will get enacted first, if they weren't already in place before the democracy was founded.

Over time, issues with more marginal support and constitutionality will get enacted, until all that's left are the most marginal issues. The issues that remain live issues will... (read more)

1Gerald Monroe2y
I think the current era is a novel phenomenon. Consider that 234 years ago, long-dead individuals wrote into a founding document a prohibition on "abridging the freedom of speech, or of the press".   Emotionally this sounds good, but consider: in our real universe, information is not always a net gain.  It can be hostile propaganda or a virus designed to spread rapidly, causing harm to its hosts.   Yet in a case of 'bug is a feature', until recently most individuals didn't really have freedom of speech.  They could say whatever they wanted, but had no practical way for extreme ideas to reach large audiences.  There was a finite network of newspapers and TV news networks - fewer than about 10 per city, and in many cases far less than that.   Newspapers and television could be held liable for making certain classes of false statements, and did routinely have to pay fines.  Many of the current QAnon conspiracy theories are straight libel, and if the authors and publishers of the statements were not anonymous they would be facing civil lawsuits.   The practical reason to allow freedom of speech today is that current technology has no working method to objectively decide if a piece of information is true, partially true, false, or hostile information intended to cause harm.  (We rely on easily biased humans to make such judgements, and this is error-prone and subject to the bias of whoever pays the humans - see Russia Today.) I don't know what to do about this problem.  Just that it's part of the reason for the current extremism.

I've noticed that when I write posts or questions, much of the text functions as "planning" for what's to come. Often, I'm organizing my thoughts as I write, so that's natural.

But does that "planning" text help organize the post and make it easier to read? Or is it flab that I should cut?

Thinking, Too Fast and Too Slow

I've noticed that there are two important failure modes in studying for my classes.

Too Fast: This is when learning breaks down because I'm trying to read, write, compute, or connect concepts too quickly.

Too Slow: This is when learning fails, or just proceeds too inefficiently, because I'm being too cautious, obsessing over words, trying to remember too many details, etc.

One hypothesis is that there's some speed of activity that's ideal for any given person, depending on the subject matter and their current level of comfort wi... (read more)

Different approaches to learning seem to be called for in fields with varying levels of paradigm consensus. The best approach to learning undergraduate math/CS/physics/chemistry seems different from the best one to take for learning biology, which again differs from the best approach to studying the economics/humanities*.

High-consensus disciplines have a natural sequential order, and the empirical data is very closely tied to an a priori predictive structure. You develop understanding by doing calculations and making theory-based arguments, along with empi... (read more)

What rationalists are trying to do is something like this:

  1. Describe the paragon of virtue: a society of perfectly rational human beings.
  2. Explain both why people fall short of that ideal, and how they can come closer to it.
  3. Explore the tensions in that account, put that plan into practice on an individual and communal level, and hold a meta-conversation about the best ways to do that.

This looks exactly like virtue ethics.

Now, we have heard that the meek shall inherit the earth. So we eschew the dark arts; embrace the virtues of accuracy, precision, and charity... (read more)

You can justify all sorts of spiritual ideas by a few arguments:

  1. They're instrumentally useful in producing good feelings between people.
  2. They help you escape the typical mind fallacy.
  3. They're memetically refined, which means they'll fit better with your intuition than, say, trying to guess where the people you know fit on the OCEAN scale.
  • They're provocative and generative of conversation in a way that scientific studies aren't. Partly that's because the language they're wrapped in is more intriguing, and partly it's because everybody's on a level playing fi
... (read more)
I would be interested in arguments about why we should eschew them that don't resort to activist ideas of making the world a "better place" by purging the world of irrationality and getting everybody on board with a more scientific framework for understanding social reality or psychology. I'm more interested in why individual people should anticipate that exploring these spiritual frameworks will make their lives worse, either hedonistically or by some reasonable moral framework. Is there a deontological or utilitarian argument against them?

A checklist for the strength of ideas:

Think "D-SHARP"

  • Is it worth discussing?
  • Is it worth studying?
  • Is it worth using as a heuristic?
  • Is it worth advertising?
  • Is it worth regulating or policing?

Worthwhile research should help the idea move either forward or backward through this sequence.

Why isn’t California investing heavily in desalination? Has anybody thought through the economics? Is this a live idea?

There's plenty of research going on, but AFAIK no particular large-scale push for implementation. I haven't studied the topic, but my impression is that California can mostly get by with current sources and conservation for a few decades yet. Desalination is expensive, not just in terms of money but in terms of energy, and scaling it up before it's absolutely needed is a net environmental harm.
This article seems to be about the case. The economics seem unclear. The politics seem bad because it means taking on the environmentalists.

My modified Pomodoro has been working for me. I set a timer for 5 minutes and start working. Every 5 minutes, I just reset the timer and continue.

For some reason it gets my brain into "racking up points" mode. How many 5-minute sessions can I do without stopping or getting distracted? Aware as I am of my distractibility, this has been an unquestionably powerful technique for me to expand my attention span.
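A minimal sketch of this modified Pomodoro loop, for anyone who wants to script it rather than use a kitchen timer. The function name, parameters, and stopping behavior are my assumptions, not a prescribed implementation:

```python
import time

def run_micro_pomodoros(interval_seconds=300, max_sessions=None):
    """Repeat fixed-length work intervals, counting completed "points".

    interval_seconds: length of one session (300s = the 5 minutes above).
    max_sessions: stop after this many sessions; None runs until Ctrl-C.
    """
    completed = 0
    try:
        while max_sessions is None or completed < max_sessions:
            time.sleep(interval_seconds)  # "work" until the timer fires
            completed += 1                # rack up a point, reset the timer
            print(f"Sessions completed: {completed}")
    except KeyboardInterrupt:
        pass  # getting interrupted ends the streak; return the score so far
    return completed
```

The score-keeping is the point: the returned count is the "how many in a row" number that makes the technique feel like a game.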

All actions have an exogenous component and an endogenous component. The weights we perceive differ from action to action, context to context.

The endogenous component has causes and consequences that come down to the laws of physics.

The exogenous component has causes and consequences from its social implications. The consequences, interpretation, and even the boundaries of where the action begins and ends are up for grabs.

Failure modes in important relationships

  • Being quick and curt when they want to connect and share positive emotions
  • Meeting negative emotions with blithe positive emotions (i.e., pretending they're not angry, anxious, etc.)
  • Mirroring negative emotions: meeting anxiety with anxiety, anger with anger
  • Being uncompromising, overly "logical"/assertive to get your way in the moment
  • Not trying to express what you want, even to yourself
  • Compromising/giving in, hoping next time will be "your turn"

Practice this:

  • Focusing to identify your own elusive feelings
  • Empathy to id
... (read more)

Good reading habit #1: Turn absolute numbers into proportions and proportions into absolute numbers.

For example, in reading "With almost 1,000 genes discovered to be differentially expressed between low and high passage cells [in mouse insulinoma cells]," look up the number of mouse genes (25,000) and turn it into a percentage so that you can see that 1,000 genes is 4% of the mouse genome.
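The habit runs in both directions, and the arithmetic is trivial enough to sanity-check in your head or in a REPL. A sketch, using the gene counts quoted above:

```python
def to_proportion(part, whole):
    """Absolute number -> percentage of the whole."""
    return 100 * part / whole

def to_absolute(percent, whole):
    """Percentage of the whole -> absolute number."""
    return percent / 100 * whole

diff_expressed = 1_000   # differentially expressed genes, from the quote
mouse_genes = 25_000     # approximate size of the mouse genome (genes)

print(to_proportion(diff_expressed, mouse_genes))  # 4.0 (% of the genome)
print(to_absolute(4.0, mouse_genes))               # 1000.0 genes
```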

Eigen's paradox is one of the most intractable puzzles in the study of the origins of life. It is thought that the error threshold concept described above limits the size of self replicating molecules to perhaps a few hundred digits, yet almost all life on earth requires much longer molecules to encode their genetic information. This problem is handled in living cells by enzymes that repair mutations, allowing the encoding molecules to reach sizes on the order of millions of base pairs. These large molecules must, of course, encode the very enzymes that re

... (read more)

What is the difference between playing devil's advocate and steelmanning an argument? I'm interested in any and all attempts to draw a useful distinction, even if they're only partial.


  • Devil's advocate comes across as being deliberately disagreeable, while steelmanning comes across as being inclusive.
  • Devil's advocate involves advancing a clearly-defined argument. Steelmanning is about clarifying an idea that gets a negative reaction due to factors like word choice or some other superficial factor.
  • Devil's advocate is a political act and is only rele
... (read more)

Empathy is inexpensive and brings surprising benefits. It takes a little bit of practice and intent. Mainly, it involves stating the obvious assumption about the other person's experience and desires. Offer things you think they'd want and that you'd be willing to give. Let them agree or correct you. This creates a good context in which high-value trades can occur, without needing a conscious, overriding, selfish goal to guide you from the start.

Matt Goldenberg (3y):
FWIW, I like to be careful about my terms here. Empathy is feeling what the other person is feeling. Understanding is understanding what the other person is feeling. Active Listening is stating your understanding and letting the other person correct you. Empathic listening is expressing how you feel what the other person is feeling. In this case, you stated Empathy, but you're really talking about Active Listening.  I agree it's inexpensive and brings surprising benefits.
I think whether it's inexpensive isn't that obvious. I think it's a skill/habit, and it depends a lot on whether you've cultivated the habit, and on your mental architecture.
Matt Goldenberg (3y):
Active listening at a low level is fairly mechanical, and can still accrue quite a few benefits. It's not as dependent on mental architecture as something like empathic listening. It does require some mindfulness to create the habit, but for most people I'd put it on only a slightly higher level of difficulty to acquire than e.g. brushing your teeth.
Fair, but I think gaining a new habit like brushing your teeth is actually pretty expensive.
Empathy isn't like brushing your teeth. It's more like berry picking. Evolution built you to do it, you get better with practice, and it gives immediate positive feedback. Nevertheless, due to a variety of factors, it is a sorely neglected practice, even when the bushes are growing in the alley behind your house.
I don't think what I'm calling empathy, either in common parlance or in actual practice, decomposes neatly. For me, these terms comprise a model of intuition that obscures with too much artificial light.
Matt Goldenberg (3y):
In that case, I don't agree that the thing you're claiming has low costs. As Raemon says in another comment, this type of intuition only comes easily to certain people. If you're trying to lump together the many skills I just pointed to, some come easily to some people and some don't. If, however, the thing you're talking about is the skill of checking in to see if you understand another person, then I would refer to that as active listening.
Of course, you're right. This is more a reminder to myself and others who experience empathy as inexpensive. Though empathy is cheap, there is a small barrier, a trivial inconvenience, a non-zero cost to activating it. I too often neglect it out of sheer laziness or forgetfulness. It's so cheap and makes things so much better that I'd prefer to remember and use it in all conversations, if possible.

Chris Voss thinks empathy is key to successful negotiation.

Is there a line between negotiating and not, or only varying degrees of explicitness?

Should we be openly negotiating more often?

How do you define success, when at least one of his own examples of a “successful negotiation” is entirely giving over to the other side?

I think the point is that the relationship comes first, greed second. Negotiation for Voss is exchange of empathy, seeking information, being aware of your leverage. Those factors are operating all the time - that’s the relationship.

The d

... (read more)

Hot take: "sushi-grade" and "sashimi-grade" are marketing terms that mean nothing in terms of food safety. Freezing inactivates pretty much any parasites that might have been in the fish.

I'm going to leave these claims unsourced, because I think you should look it up and judge the credibility of the research for yourself.

[This comment is no longer endorsed by its author]
Matt Goldenberg (2y):
It's partially about taste, isn't it? Sushi grade and sashimi grade will theoretically smell less fishy.
Fishy smell in saltwater fish is caused by the breakdown of TMAO to TMA. You can rinse off TMA on the surface to reduce the smell. Fresher fish should also have less smell. So if people are saying "sushi grade" when what they mean is "fresh," why not just say "fresh"? It's a marketing term.
Matt Goldenberg (2y):
I always thought sushi grade was just the term for "really really fresh  :)"