This is a special post for quick takes by Elizabeth. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

EA organizations frequently ask people to run criticism by them ahead of time. I've been wary of the push for this norm. My big concerns were that orgs wouldn't comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data.

I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic.  This doesn’t quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.

Of those 14: 10 had replied by the start of the next day. More than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it's still impressive.

It's hard to say how sending an early draft changed things. Austin Chen joked about being anxious because their paragraph was full of TODOs (because it was positive and I hadn't worked as hard fleshing out the positive mentions ahead of time). Turns out they were fine but then I was w... (read more)

Austin Chen
I think you're talking about me? I may have miscommunicated; I was ~zero anxious, instead trying to signal that I'd looked over the doc as requested, and poking some fun at the TODOs. FWIW I appreciated your process for running criticism ahead of time (and especially enjoyed the back-and-forth comments on the doc; I'm noticing that those kinds of conversations on a private GDoc seem somehow more vibrant/nicer than the ones on LW or on a blog's comments.)
Well in that case I was the one who was unnecessarily anxious so still feels like a cost, although one well worth paying to get the information faster.
While writing the email to give mentioned people and orgs a chance to comment, I wasn't sure whether to BCC (more risk of going to spam) or CC (shares their email). I took a FB poll, which got responses from the class of people who might receive emails like this, though not the specific people I emailed. Of the responses, 6 said CC and one said either. I also didn't receive any objections from the people I actually emailed. So it seems like CCing is fine.

reposting comment from another post, with edits:

re: accumulating status in hope of future counterfactual impact.

I model status-qua-status (as opposed to status as a side effect of something real) as something like a score for "how good are you at cooperating with this particular machine?". The more you demonstrate cooperation, the more the machine will trust and reward you. But you can't leverage that into getting the machine to do something different- that would immediately zero out your status/cooperation score. 

There are exceptions. If you're exceptionally strategic you might make good use of that status by e.g. changing what the machine thinks it wants, or co-opting the resources and splintering. It is also pretty useful to accumulate evidence that you're a generally responsible adult before you go off and do something weird. But this isn't the vibe I get from people I talk to with the 'status then impact' plan, or from any of 80k's advice. Their plans only make sense if either that status is a fungible resource like money, or if you plan on cooperating with the machine indefinitely.

So I don't think people should pursue status as a goal in and of itself, especially if there isn't a clear sign for when they'd stop and prioritize something else.

Eli Tyre
Thank you for this. As you note, this seems like a very important insight/clarification for power-accrual / status-accrual based plans. In general, I observe people thinking only very vaguely about these kinds of plans, and this post gives me a sense of the kind of crisp modeling that is possible here.
Stephen Fowler
I agree with your overall point re: 80k hours, but I think my model of how this works differs somewhat from yours.

"But you can't leverage that into getting the machine to do something different- that would immediately zero out your status/cooperation score."

The machines are groups of humans, so the degree to which you can change the overall behaviour depends on a few things.

1) The type of status (which, as you hint, is not always fungible). If you're widely considered to be someone who is great at predicting future trends and risks, other humans in the organisation will be more willing to follow when you suggest a new course of action. If you've acquired status by being very good at one particular niche task, people won't necessarily value your bold suggestion for changing the organisation's direction.

2) Strategic congruence. Some companies in history have successfully pivoted their business model (the example that comes to mind is Nokia). This transition is possible because while the machine is operating in a new way, the end goal of the machine remains the same (make money). If your suggested course of action conflicts with the overall goals of the machine, you will have more trouble changing the machine.

3) Structure of the machine. Some decision-making structures give specific individuals a high degree of autonomy over the direction of the machine. In those instances, having a lot of status among a small group may be enough for you to exercise a high degree of control (or get yourself placed in a decision-making role).

Of course, these variables all interact with each other in complex ways. Sam Altman's high personal status as an excellent leader and decision maker, combined with his strategic alignment to making lots of money, meant that he was able to out-manoeuvre a more safety-focused board when he came into apparent conflict with the machine.

ooooooh actual Hamming spent 10s of minutes asking people about the most important questions in their field and helping them clarify their own judgment, before asking why they weren't working on this thing they clearly valued and spent time thinking about. That is pretty different from demanding strangers at parties justify why they're not working on your pet cause. 

He also didn't ask them both questions on the same day.

Eli Tyre
Somehow this seems like a very big diff.
What are typical answers to the question you get?
I don't get answers to that question because I don't accost strangers at parties demanding they justify their life choices to me

Brandon Sanderson is a bestselling fantasy author. Despite mostly working with traditional publishers, there is a 50-60 person company formed around his writing[1]. This podcast talks about how the company was formed.

Things I liked about this podcast:

  1. He and his wife both refer to it as "our" company and describe critical contributions she made.
  2. The number of times he was dissatisfied with the way his publisher did something and so hired someone in his own company to do it (e.g. PR and organizing book tours), despite that being part of the publisher's job.
  3. He believed in his back catalog enough to buy remainder copies of his books (at $1/piece) and sell them via his own website at sticker price (with autographs). This was a major source of income for a while. 
  4. Long term grand strategic vision that appears to be well aimed and competently executed.
  1. ^

    The only non-Sanderson content I found was a picture book from his staff artist. 

Abstract issues raised by the Nonlinear accusations and counter-accusations

  1. How do you handle the risk of witness tampering, in ways that still let innocent people prove themselves innocent? Letting people cut corners if they claim a risk of tampering sure sets up bad incentives, but it is a real problem the system needs to be able to deal with
  2. How do you handle the fact that the process of providing counter-evidence can be hacked, in ways that still let innocent people prove themselves? People can string it out, or bury you in irrelevant data, or provide misleading data that then requires more time to drill into. 
  3. How do you handle the risk of true, negative things coming out about the alleged victim? My take is that the first and strongest complaints will come from people who are extra sensitive, fragile, or bad at boundaries regardless of the situation, because duh. If you put two people in the same situation, the more sensitive person will complain more regardless of the situation. That's what sensitive means. 
  4. Probably the best thing for the community as a whole is for complete, accurate information to come out about both the victims and the org, but this has h
... (read more)
I think you are trying to reinvent law. I think all or at least most of these points have decent answers in a society with working rule of law. Granted, social media makes things more complicated, but the general dynamics are not new.
In ideal case we would like to have something better than law, because currently the law mostly works for people who have approximately the same amount of resources they can spend on law. If you have lots of money for lawyers, you can threaten people so they will be silent even if you hurt them. If you have lots of money for lawyers, you can say anything you want about anyone with less money than you, and then let the lawyers solve the problem. The easiest strategy is to drag out the lawsuit indefinitely until the other side burns all their resources, then offer them a settlement they cannot refuse (a part of the settlement is them publicly admitting that they were wrong, even if factually they were not). Law optimizes for a stable society, not for truth. Siding with the rich is a part of that goal.
That doesn't sound like proper rule of law, and indeed, the US is abysmal in that area specifically. Not that the US would be a paragon of rule of law overall.   Source:  Maybe that is why people resort to alternate ways of dispute resolution...
Can one of the disagreers explain their reasoning?

EA/rationality has this tension between valuing independent thought and the fact that most original ideas are stupid. But the point of independent thinking isn't necessarily coming up with original conclusions. It's that no one else can convey their models fully, so if you want a model with fully fleshed-out gears you have to develop it yourself.

Well-known in tech circles.  Ideas are cheap.  Selection of promising ideas is somewhat valuable.  Good execution of ideas is the major bottleneck.

1 day later, my retraction has more karma than the original humming post

Retractions should get more votes than other types of posts. It is good to incentivize retractions IMHO.

There's a thing in EA where encouraging someone to apply for a job or grant gets coded as "supportive", maybe even a very tiny gift. But that's only true when [chance of getting job/grant] x [value of job/grant over next best alternative] > [cost of applying].
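The inequality above, sketched with hypothetical numbers (none of these figures are from the post):

```python
def worth_applying(p_success: float, value_over_alternative: float, cost: float) -> bool:
    """Encouragement is only a gift when expected value exceeds the cost of applying."""
    return p_success * value_over_alternative > cost

# Hypothetical: a 3% shot at a grant worth 100 hours of counterfactual value,
# against 5 hours spent applying -> expected value 3 hours, so not worth it.
worth_applying(0.03, 100, 5)   # False
worth_applying(0.20, 100, 5)   # True
```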

One really clear case was when I was encouraged to apply for a grant my project wasn't a natural fit for, because "it's quick and there are few applicants". This seemed safe, since the deadline was in a few hours. But in those few hours the number of applications skyrocketed- I want to say 5x, but my memory is shaky- presumably because I wasn't the only person the grantmaker encouraged. I ended up wasting several hours of my and my co-founder's time before dropping out, because the project really was not a good fit for the grant.

[if the grantmaker is reading this and recognizes themselves: I'm not mad at you personally]. 

I've been guilty of this too, defaulting to encouraging people to try for something without considering the costs of making the attempt, or the chance of success. It feels so much nicer than telling someone "yeah you're probably not good enough".

A lot of EA job postings encourage people t... (read more)

I'm not sure supportive/helpful vs. mean is a useful framing. It's not reasonable for a grant-maker or recruiter to have much knowledge about your costs, let alone to weigh them against the large value (though small probability) of a successful application. I think the responsibility is always going to fall on the applicant to make these choices.

Grantmakers and recruiters SHOULD be as clear as possible about the criteria for acceptance, in order to make the value side (chance of success) easier to predict, but the cost side isn't something they are going to understand well. Note that there is an adversarial/competitive aspect to such matches, so the application-evaluator can't be as transparent as they might like, in order to reduce Goodhart or fraud in the applications they get.
Thoth Hermes
This behavior from orgs is close enough to something I've been talking about for a while as being potentially maladaptive that I think I agree that we should keep a close eye on this. (In general, we should try and avoid situations where there are far more applicants for something than the number accepted.)

I have a friend who spent years working on existential risk. Over time his perception of the risks increased, while his perception of what he could do about them decreased (and the latter was more important). Eventually he dropped out of work in a normal sense to play video games, because the enjoyment was worth more to him than what he could hope to accomplish with regular work. He still does occasional short term projects, when they seem especially useful or enjoyable, but his focus is on generating hedons in the time he has left. 

I love this friend as a counter-example to most of the loudest voices on AI risk. You can think p(doom) is very high and have that be all the more reason to play video games.

I don't want to valorize this too much because I don't want retiring to play video games becoming the cool new thing. The admirable part is that he did his own math and came to his own conclusions in the face of a lot of social pressure to do otherwise. 

Johannes C. Mayer
I know people like this. I really don't understand people like this. Why not just take the challenge to play real life as if it's a videogame with crushing difficulty? Oh wait, that's maybe just me, who played games on very hard difficulty most of the time (back when I did play video games). I guess there is probably not one reason people do this. But I don't get the reason to let yourself be crushed by doom. At least for me, using the heuristic of just not giving up, ever (at least not consciously; I probably can't muster a lot of will as I am being disassembled by nanobots, because of all the pain, you know), seemed to work really well. I just ended up reasoning myself into a stable state, by enduring long enough. I wonder if the same would have happened for your friend had he endured longer.
Because gamification is for things with a known correct answer. Solving genuine unknowns requires a stronger connection with truth. 
Johannes C. Mayer
I am not quite sure what the correct answer is for playing Minecraft (let's ignore the Ender Dragon, which did not exist when I played it). I think there is a correct answer for what to do to prevent AI doom. Namely, to take actions that achieve high expected value in your world model. If you care a lot about the universe, then this translates to "take actions that achieve high expected value on the goal of preventing doom." So this only works if you really care about the universe. Maybe I care an unusual amount about the universe. If there was a button I could press that would kill me but save the universe, I would press it. At least in the current world we are in. Sadly it isn't that easy. If you don't care about the universe sufficiently compared to your own well-being, the expected value from playing video games would actually be higher, and playing video games would be the right answer.
I think this perspective of "if I can't affect p(doom) enough, let me generate hedons instead" makes a lot of sense. But as someone who has spent way way way more time than his fair share on video games (and who still spends a lot of time on them), I want to make the somewhat nitpicky point that video games are not necessarily the hedon-optimizing option.

Here's an alternative frame, and one into which I also fall from time to time: Suppose that, for whatever reason (be it due to x-risk; notoriously poor feedback loops in AI alignment research; or, in my case, past bouts of depression or illness), the fate of the world / your future / your health / your project / your day seems hard to affect and thus outside of your control (external locus of control). Then video games counteract that by giving you control (internal locus of control). Maybe I can't affect <project>, but I can complete quests or puzzles in games. Games are designed to allow for continuous progress, after all.

Or as Dr. K of HealthyGamer puts it, video games "short-circuit the reward circuit" (paraphrased). Roughly, the brain rewards us for doing stuff by generating feelings of accomplishment or triumph. But doing stuff in the real world is hard, and in video games it's easy. So why do the former? In this sense, video games are a low-level form of wireheading. Also, excessive gaming can result in anhedonia, which seems like a problem for the goal of maximizing hedons.

To tie this back to the start: if the goal is to maximize hedons, activities other than gaming may be much better for this purpose (<-> goal factoring). If the goal is instead to (re)gain a sense of control, then video games seem more optimized for that.
For a lot of people, especially people who aren't psychologically stable, this is very, very good advice around existential risk. To be clear, I think he has an overly pessimistic worldview on existential risk, but I genuinely respect your friend for realizing that his capabilities weren't enough to tackle it productively, and for backing away from the field in recognition of his own limitations.
man these seem like really unnecessarily judgemental ways to make this point
While I definitely should have been more polite in expressing those ideas, I do think they're important to convey, especially the first one, as I really, really don't want people to burn themselves out or get anxiety/depression from doing something that they don't want to do, or even like doing. I will definitely be nicer about expressing those ideas, but they're important enough that I think something like these insights needs to be told to a lot of people, especially those in the alignment community.


A few months ago, Twitter's big argument was about this AITA, in which a woman left a restaurant to buy ranch dressing. Like most viral AITAs this is probably fake, but the discourse around it is still revealing. The arguments were split between "such agency! good for her for going after what she wants" and "what is she, 3?". I am strongly on the side of people doing what they want with their own food, but in this case I think the people praising her have missed the point, and the people criticizing her have focused on the wrong thing.

I think it's weird but harmless to drown all your food in ranch dressing. But it is, at best, terribly rude to leave a date for 20 minutes to run an errand. If it is so important to you to have ranch on all your food, either check with the restaurant ahead of time or just bring a small bottle by default. 

So this woman is agentic in the sense of "refusing to accept the environment as it is, working to bring it more in line with her preferences". But it's a highly reactive form of agency that creates a lot of negative externalities.

I see this a lot in the way rationalists talk about agency.  What... (read more)

Example of reactionary agency: someone who filled their house with air purifiers in 2020 but hasn't changed the filters since. Their reaction was correct, and in this case they're probably net better off for it. But it would probably have been worth dropping some other expensive reaction in favor of regularly swapping air filters, or putting the purifiers aside, since they're useless at this point. [Full disclosure: I change my air purifier filters regularly but haven't cleaned my portable AC filter in 3.5 years because I can't figure out how]


Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.

Faced with big dreams but unclear ability to enact them, people have a few options. 

  • try anyway and fail badly, probably too badly for it to even be an educational failure.
  • fake it, probably without knowing they're doing so.
  • learned helplessness, possibly systemic depression.
  • head towards failure, but too many people are counting on you, so someone steps in and rescues you. They consider this net negative and prefer the world where you'd never started to the one where they had to rescue you.
  • discover more skills than they knew. Feel great, accomplish great things, learn a lot.

The first three are all very costly, especially if you repeat the cycle a few times.

My preferred version is ambition snowball or "get ambitious slowly". Pick something b... (read more)

None of my principled arguments against "only care about big projects" have convinced anyone, but in practice Google reorganized around that exact policy ("don't start a project unless it could conceivably have 1b+ users; kill it if it's ever not on track to reach that"), and they haven't home-grown an interesting thing since.

My guess is the benefits of immediately aiming high are overwhelmed by the costs of less contact with reality.

As of October 2022, I don't think I could have known FTX was defrauding customers.

If I'd thought about it I could probably have figured out that FTX was at best a casino, and I should probably think seriously before taking their money or encouraging other people to do so.  I think I failed in an important way here, but I also don't think my failure really hurt anyone, because I am such a small fish.

But I think in a better world I should have had the information that would lead me to conclude that Sam Bankman-Fried was an asshole who didn't keep his promises, and that this made it risky to make plans that depended on him keeping even explicit promises, much less vague implicit commitments.  I have enough friends of friends that have spoken out since the implosion that I'm quite sure that in a more open, information-sharing environment I would have gotten that information. And if I'd gotten that information, I could have shared it with other small fish who were considering uprooting their lives based on implicit commitments from SBF. Instead, I participated in the irrational exuberance that probably made people take more risks on the margin, and left them more vulnerable to... (read more)

Check my math: how does Enovid compare to humming?

Nitric Oxide is an antimicrobial and immune booster. Normal nasal nitric oxide is 0.14ppm for women and 0.18ppm for men (sinus levels are 100x higher).…

Enovid is a nasal spray that produces NO. I had the damndest time quantifying Enovid, but this trial registration says 0.11ppm NO/hour. They deliver every 8h and I think that dose is amortized, so the true dose is 0.88ppm. But maybe it's more complicated. I've got an email out to the PI but am not hopeful about a response…


so Enovid increases nasal NO levels somewhere between 75% and 600% compared to baseline- not shabby. Except humming increases nasal NO levels by 1500-2000%.….
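A quick sanity check of that arithmetic, assuming (as guessed above) the 0.11ppm/hour figure may or may not be amortized over the 8h dosing interval:

```python
baseline = {"women": 0.14, "men": 0.18}  # normal nasal NO, ppm

# Two readings of the trial registration's "0.11ppm NO/hour":
dose_per_delivery = 0.11      # if 0.11ppm is the whole dose
dose_amortized = 0.11 * 8     # if it's amortized over 8h -> 0.88ppm

for group, base in baseline.items():
    low = dose_per_delivery / base * 100
    high = dose_amortized / base * 100
    # roughly the 75%-600% range quoted above
    print(f"{group}: +{low:.0f}% to +{high:.0f}% over baseline")
```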

Enovid stings and humming doesn't, so it seems like Enovid should have the larger dose. But the spray doesn't contain NO itself, just compounds that react to form NO. Maybe that's where the sting comes from? Cystic fibrosis and burn patients are sometimes given stratospheric levels of NO for hours or days; if the burn from Enovid came from the NO itself then those patients would be in agony.

I'm not fi... (read more)

I found the gotcha: Enovid has two other mechanisms of action. Someone pointed this out to me on my previous nitric oxide post, but it didn't quite sink in till I did more reading.
What are the two other mechanisms of action?
citric acid and a polymer
Enovid is also adding NO to the body, whereas humming is pulling it from the sinuses, right? (based on a quick skim of the paper). I found a consumer FeNO-measuring device for €550. I might be interested in contributing to a replication
I think that's their guess but they don't directly check here. I also suspect that it doesn't matter very much.

  • The sinuses have so much NO compared to the nose that this probably doesn't materially lower sinus concentrations.
  • The power of humming goes down with each breath but is fully restored in 3 minutes, suggesting that whatever change happens in the sinuses is restored quickly.
  • From my limited understanding of virology and immunology, alternating intensity of NO between sinuses and nose every three minutes is probably better than keeping sinus concentrations high[1]. The first second of NO does the most damage to microbes[2], so alternation isn't that bad.

I'd love to test this. The device you linked works via the mouth, and we'd need something that works via the nose. From a quick google it does look like it's the same test, so we'd just need a nasal adaptor. Other options:

  • Nnoxx. Consumer skin device, meant for muscle measurements.
  • There are lots of devices for measuring concentration in the air; maybe they could be repurposed. Just breathing on one might be enough for useful relative metrics, even if they're low-precision.

I'm also going to try to talk my asthma specialist into letting me use their oral machine to test my nose under multiple circumstances, but it seems unlikely she'll go for it.

  1. ^

    Obvious question: so why didn't evolution do that? The ancestral environment didn't have nearly this disease (or pollution) load. This doesn't mean I'm right, but it means I'm discounting that specific evolutionary argument.

  2. ^

    Although NO is also an immune system signal molecule, so the average does matter.

Much has been written about how groups tend to get more extreme over time. This is often based on evaporative cooling, but I think there's another factor: it's the only way to avoid the geeks->mops->sociopaths death spiral.

An EA group of 10 people would really benefit from one of those people being deeply committed to helping people but hostile to the EA approach, and another person who loves spreadsheets but is indifferent to what they're applied to. But you can only maintain the ratio that finely when you're very small. Eventually you need to decide if you're going to ban scope-insensitive people or allow infinitely many, and lose what makes your group different.

"Decide" may mean consciously choose an explicit policy, but it might also mean gradually cohere around some norms. The latter is more fine-tuned in some ways but less in others. 

Having AI voices read my drafts back to me feels like it's seriously leveled up my writing. I think the biggest, least replaceable feature is that I'm less likely to leave gaps in my writing- things that are obvious to me but that I need to spell out. It also catches bad transitions, and I suspect it's making my copy editor's job easier.

Toy model:

A person's skill level has a floor (what they can do with minimal effort) and a ceiling (what they can do with a lot of thought and effort). Ceiling raises come from things we commonly recognize as learning: studying the problem, studying common solutions. Floor raises come from practicing the skills you already have, to build fluency in them.

There's a rubber band effect where the farther your ceiling is from your floor, the more work you have to put in to raise it further. At a certain point the efficient thing to do is to grind until you have raised your floor, so that further ceiling raises are cheaper, even if you only care about peak performance. 

My guess for why that happens is your brain has some hard constraints on effort, and raising the floor reduces the effort needed at all levels. E.g. it's easier to do 5-digit multiplication if you've memorized 1-digit times tables. 

My guess is the pots theory of art works best when a person's skill ceiling is well above their floor. This is true because it means effort is likely the limiting reagent, the artist will have things to try rather than flailing at random, and they will be able to assess how good a given pot is.

Sounds plausible. If this is true, then the best way to learn is to alternate ceiling-increasing learning with floor-increasing learning (because too much of one without the other gives diminishing returns).

Are impact certificates/retroactive grants the solution to grantmaking corrupting epistemics? They're not viable for everyone, but for people like me who:

  1. do a lot of small projects (which barely make sense to apply for grants for individually)
  2. benefit from doing what draws their curiosity at the moment (so the delay between grant application and decision is costly)
  3. take commitments extremely seriously (so listing a plan on a grant application is very constraining)
  4. have enough runway that payment delays and uncertainty for any one project aren't a big deal

They seem pretty ideal.

So why haven't I put more effort into getting retroactive funding? The retroactive sources tend to be crowdsourced. Crowdfunding is miserable in general, and leaves you open to getting very small amounts of money, which feels worse than none at all. Right now I can always preserve the illusion I would get more money, which seems stupid. In particular even if I could get more money for a past project by selling it better and doing some follow up, that time is almost certainly better spent elsewhere. 

Here is some random NFT (?) company (?) that's doing retroactive grants to support its community builders. I am in no way endorsing this specific example as I know nothing about it, just noticing that some are trying it out.

It's weird how hard it is to identify what is actually fun or restorative, vs. supposed to be fun or restorative, or used to be fun or restorative but no longer is. And "am I enjoying this?" should be one of the easiest questions to answer, so imagine how badly we're fucking up the others.

A very rough draft of a plan to test prophylactics for airborne illnesses.

Start with a potential superspreader event. My ideal is a large conference, many of whose attendees travelled to get there, held in enclosed spaces with poor ventilation and air purification, in winter. Ideally >=4 days, so that people infected on day one are infectious while the conference is still running.

Call for sign-ups for testing ahead of time (disclosing all possible substances and side effects). Split volunteers into control and test group. I think you need ~500 sign ups in the winter to make this work. 

Splitting controls is probably the hardest part. You'd like the control and treatment groups to be identical, but a lot of things affect susceptibility: age, local vs. air travel, small children vs. not, sleep habits... it's hard to draw the line.
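As a rough plausibility check on the ~500 sign-ups figure, here's a standard two-proportion power calculation; the attack rates are hypothetical assumptions, not numbers from the plan:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_control: float, p_treatment: float,
              alpha: float = 0.05, power: float = 0.8) -> int:
    """Normal-approximation sample size for comparing two infection rates."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return ceil((z_a + z_b) ** 2 * variance / (p_control - p_treatment) ** 2)

# Hypothetical: 20% attack rate untreated, halved to 10% by the prophylactic.
n_per_arm(0.20, 0.10)  # -> 197 per arm
```

Under those assumptions, ~200 per arm (~400 total) gives 80% power, so ~500 sign-ups would leave some headroom for attrition; a smaller true effect would require many more.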

Make it logistically trivial to use the treatment. If it's lozenges or liquids, put individually packed dosages in every bathroom, with a sign reminding people to use them (color code to direct people to the right basket). If it's a nasal spray you will need to give everyone their own bottle, but make it trivial to get more if someone l... (read more)

This sounds like a bad plan because it will be a logistics nightmare (undermining randomization) with high attrition, and extremely high variance due to between-subject design (where subjects differ a ton at baseline, in addition to exposure) on a single occasion with uncontrolled exposures and huge measurement error where only the most extreme infections get reported (sometimes). You'll probably get non-answers, if you finish at all. The most likely outcome is something goes wrong and the entire effort is wasted. Since this is a topic which is highly repeatable within-person (and indeed, usually repeats often through a lifetime...), this would make more sense as within-individual and using higher-quality measurements. One good QS approach would be to exploit the fact that infections, even asymptomatic ones, seem to affect heart rate etc as the body is damaged and begins fighting the infection. HR/HRV is now measurable off the shelf with things like the Apple Watch, AFAIK. So you could recruit a few tech-savvy conference-goers for measurements from a device they already own & wear. This avoids any 'big bang' and lets you prototype and tweak on a few people - possibly yourself? - before rolling it out, considerably de-risking it. There are some people who travel constantly for business and going to conferences, and recruiting and managing a few of them would probably be infinitely easier than 500+ randos (if for no reason other than being frequent flyers they may be quite eager for some prophylactics), and you would probably get far more precise data out of them if they agree to cooperate for a year or so and you get eg 10 conferences/trips out of each of them which you can contrast with their year-round baseline & exposome and measure asymptomatic infections or just overall health/stress. (Remember, variance reduction yields exponential gains in precision or sample-size reduction. It wouldn't be too hard for 5 or 10 people to beat a single 250vs250 one-off experi
All of the problems you list seem harder with repeated within-person trials. 
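To make the variance-reduction argument above concrete, here's a toy simulation (all numbers invented) comparing the precision of a between-person comparison with a within-person one when baseline susceptibility differs a lot between people:

```python
# Toy model: each person has a stable baseline "infection severity" level
# with large between-person spread, plus small occasion-to-occasion noise.
import random
import statistics

rng = random.Random(0)
true_effect = -0.5        # treatment shifts the severity score down
between_person_sd = 2.0   # people differ a lot at baseline
noise_sd = 0.5            # occasion-to-occasion noise

def one_trial(n):
    # Between-person: n treated people vs n *different* controls.
    controls = [rng.gauss(0, between_person_sd) + rng.gauss(0, noise_sd)
                for _ in range(n)]
    treated = [rng.gauss(0, between_person_sd) + true_effect + rng.gauss(0, noise_sd)
               for _ in range(n)]
    between_est = statistics.mean(treated) - statistics.mean(controls)
    # Within-person: the same n people observed on and off treatment;
    # the baseline term cancels in the difference.
    baselines = [rng.gauss(0, between_person_sd) for _ in range(n)]
    diffs = [(b + true_effect + rng.gauss(0, noise_sd)) - (b + rng.gauss(0, noise_sd))
             for b in baselines]
    within_est = statistics.mean(diffs)
    return between_est, within_est

between_ests, within_ests = zip(*(one_trial(20) for _ in range(2000)))
print(statistics.stdev(between_ests))  # large: baseline variance dominates
print(statistics.stdev(within_ests))   # much smaller: baselines cancel out
```

Under these made-up numbers the within-person estimator is several times more precise with the same 20 subjects, which is the sense in which variance reduction buys you sample size.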

things I found interesting about this video:

  • Brennan's mix of agency (organizing 100 person LARPs at 15, becoming creative director at LARP camp by 19), and mindless track following (thinking the goal of arts school was grades). 
  • He's so proactively submissive about starting community college at 14. "Oh man I was so annoying. I apologize to anyone who had to be around me back then". You can really see the childhood bullying trauma.
    • This isn't conjecture, he says outright he still expects every new group he meets to put him in a trashcan.
    • I imagine hearing him talk about this would be bad for a 14yo in a similar position, which is a shame because the introspection around choosing his own goals vs. having them picked for him seems really useful.
  • About his recommendation of a social strategy: "Am I lying or telling the truth? I'm telling the truth to myself but you shouldn't do it".
  • Frank discussion of how financial constraints affected his life. 
  • A happy ending where all the weird side threads from his life came together to create the best possible life for him. 

My sink is way emptier when my todo list item is "do a single dish" than "do all the dishes"

The risk I took was calculated, but man, am I bad at math

There's a category of good thing that can only be reached by taking some amount of risk, and that is hard to get out of once you're in. All of romance risks getting your heart broken. You never have enough information to know a job will always and forever be amazing for you. Will anti-depressants give you your life back, or dull your affect in hard-to-detect ways?

This is hard enough when the situation is merely high variance with incomplete information. But often the situations are adversarial: abusive partners and jobs camouflage themselves.  Or the partner/job might start out good and get bad, as their finances change. Or they might be great in general but really bad for you (apparently other people like working for Google? no accounting for taste). 

Or they might be genuinely malicious, telling you the issue is temporary, or that their ex wasn't a good fit but you are.

Or they might not be malicious, it might genuinely be the situation, but the situation isn't going to get better so it's damaging you badly. 

You could opt out of the risk, but at the cost of missing some important human experiences and/or food.

How do you calculate risks when the math is so obfuscated?

When I did my vegan nutrition write-ups, I directed people to a (third-party) Guide to Vegan+Vegetarian Supplements. Unfortunately, it is paywalled. Fortunately, it is now possible to ask your library to buy access, so you can read that guide plus the publisher's normal supplement reviews at no cost to yourself.

Library explainer:

Ven*n guide:

Alternate way to get it: register for free trial -> 'download pdf' button -> cancel trial.

A repost from the discussion on NDAs and Wave (a software company). Wave was recently publicly revealed to have made severance dependent on non-disparagement agreements, cloaked by non-disclosure agreements. I had previously worked at Wave, but negotiated away the non-disclosure agreement (but not the non-disparagement agreement).

But my guess is that most of the people you sent to Wave were capable of understanding what they were signing and thinking through the implications of what they were agreeing to, even if they didn't actually have the conscientious

... (read more)
This is one benefit to paying people well, and a reason having fewer better-paid workers is sometimes better than more people earning less money. If your grants or salary give you just enough to live as long as the grants are immediately renewed/you don't get fired, even a chance of irritating your source of income imperils your ability to feed yourself. Six months' expenses in savings give you the ability to risk an individual job/grant. Skills valued outside EA give you the ability to risk pissing off all of EA and still be fine. I'm emphasizing risk here because I think it's the bigger issue. If you know something is wrong, you'll usually figure out a way to act on it. The bigger problem is when you have some concerns that legitimately could be nothing, but you worry that investigating will imperil your livelihood.
I agree, and it seems important, but could you perhaps give more examples (maybe as a separate article)? "If you never sign an NDA, truth-telling becomes cheaper." (Question is, how much cheaper. I mean, people can still sue you. Not necessarily because you said something false, just because they can, and because the process is the punishment.)

How to generate more examples? Go through a list of virtues and think: "what preparation could I make in advance to make this easier / what to avoid to prevent this becoming harder"? Let's try it:

* prudence - study things, be (epistemically) rational
* fortitude - practice expanding your comfort zone? or rather, practice martial arts and build a safety network?
* temperance - practice self-control? or rather, make sure that your needs are satisfied all the time, so that you are not too strongly tempted? (the latter seems more in spirit of your example)
* justice - don't do things that would allow others to blackmail you, gather power
* chastity - get married to a person who enjoys sex
* faith - observe miracles, avoid nonbelievers
This seems like a great thing to exist and you have my encouragement to write it. 

Problems I am trying to figure out right now:

1. breaking large projects down into small steps. I think this would pay off in a lot of ways: lower context switching costs, work generally easier, greater feelings of traction and satisfaction, instead of "what the hell did I do last week? I guess not much". This is challenging because my projects are, at best ill-defined knowledge work, and sometimes really fuzzy medical or emotional work. I strongly believe the latter have paid off for me on net, but individual actions are often lottery tickets with payouts ... (read more)

Steven Byrnes
It's pretty goofy but for the past year I've had monthly calendar printouts hanging on my wall, and each day I put tally marks for how many hours of focused work I did, and usually scrawl a word or two about what I was doing that day, and when I figure out something important I draw a little star on that day of the calendar and write a word or two reminding myself of what it is (and celebrate that night by eating my favorite kind of ice cream sandwich). This is mostly stolen from the book Deep Work (not the ice cream sandwiches though, that's my own innovation). Having those sheets hanging on my wall is good for “what did I do last week” or “what kinds of stuff was I doing last April” or “oh where has the time gone” type questions to myself. I also have a to-do list using an online kanban tool and I always move tasks into a Done column instead of just archiving them directly. This is entirely pointless, because now and then I'll go through the Done column and archive everything. So I added an extra step that does nothing. But it feels nice to get an extra opportunity to revisit the Done column and feel good about how many things I've done. :)
I feel your pain, but anyway those were things you wanted to do. In some sense, the information "this doesn't work" is also a payout, just not the one you hoped for, but that is hindsight. If your best guess was that this was worth doing, then actually doing it is legitimate work done, even if it ultimately didn't achieve what you hoped for.

There is some kind of "doublethink" necessary. On one hand, we ultimately care about the results. Mere effort that doesn't bring fruit is a waste (or signalling, which detracts from the intended goal). On the other hand, in everyday life we need to motivate ourselves by rewarding the effort, because results come too infrequently and sometimes are too random, and we want to reward following a good strategy rather than getting lucky. (Also: goals vs systems.)

Perhaps we should always add "according to my current knowledge" at the end of these questions, just to remind ourselves that sometimes the right thing to do is stop prioritizing and collect more information instead.
some features I definitely want in an app:

* ~infinitely nested plans similar to workflowy or roam
* when I check off a task on a plan, it gets added to a "shit I did on this date" list. I can go to that page and see what I did on various days
Out of curiosity, did Roam turn out to support the functionality I mentioned in my other comment here?
Many outliner apps can already do that, and from what I can tell this doesn't even require plugins. You mention Roam, but there are also e.g. Logseq (free) and Tana (outliner with extensive AI features; currently lacks smooth onboarding; is in beta with a waitlist, but one can get an instant auto invite by introducing oneself in their Slack channel). I personally don't use outliners anymore after learning from Workflowy that I absolutely need the ability to write non-nested stuff like long-form text, so I unfortunately can't tell if those apps are a good fit for people who do like outliners. Anyway, after clicking around in Logseq, here's how your requested feature looks there: Whenever you open the app, it loads a Journal page of the current day where you'd add the tasks you want to do that day. Then tasks marked as TODO or DONE can be found in the graph view, like so. In Roam, these TODO and DONE pages supposedly also exist (from what I can tell from here, anyway), so the same strategy should work there, too. And in Tana, you can probably also do things just like this; or you would add tasks anywhere (including on a project page), then mark tasks with a #task tag so Tana treats them like items in a database, and then you'd add a Done Date field to tasks.

"Do or Do Not: There is No Try"

Like all short proverbs, each word is doing a lot of work, and you can completely flip the meaning by switching between reasonable definitions.

I think "there is no try" often means "I want to gesture at this but am not going to make a real attempt" in sentences like "I'll try to get to the gym tomorrow" and "I'll try to work on my math homework tonight". 

"there is no try" means "I am going to make an attempt at this but it's not guaranteed to succeed" in sentences like "I'm going to try to bench 400 tomorrow", "I'm t... (read more)

OOOOH it's maybe encapsulated in "I'll try to do action" vs "I'm trying this action"

"have one acceptable path and immediately reject anyone who goes off it" cuts you off from a lot of good things, but also a lot of bad things. If you want to remove that constraint to get at the good weirdness, you need to either tank a lot of harm, or come up with more detailed heuristics to prevent it.

Curiosity killed the cat by exposing it to various "black swan" risks.

repurposed from my comment on a FB post on an article criticizing all antidepressants as basically placebos

epistemic status: kind of dreading comments on this because it's not well phrased, but honing it is too low a priority. Every time you criticize insufficient caveating an angel loses its wings. 

medical studies are ~only concerned with the median person. Any unusual success or failure is written off as noise, instead of replicable variability. As conditions get defined they narrow that to "median person with condition X" rather than "median person... (read more)

People talk about sharpening the axe vs. cutting down the tree, but chopping wood and sharpening axes are things we know how to do and know how to measure. When working with more abstract problems there's often a lot of uncertainty in:

  1. what do you want to accomplish, exactly?
  2. what tool will help you achieve that?
  3. what's the ideal form of that tool? 
  4. how do you move the tool to that ideal form?
  5. when do you hit diminishing returns on improving the tool?
  6. how do you measure the tool's [sharpness]?

Actual axe-sharpening rarely turns into intellectual masturbation be... (read more)

I think alternating periods of cutting and sharpening is useful here, reducing/increasing the amount of sharpening based on the observed marginal benefits of each round of sharpening on the cutting.
I have met people who geeked out over sharpening. They are usually more focused on knives but they can also geek out over sharpening axes.  Is it that you have never met a person who geeked out over sharpening (maybe because those people mostly aren't in your social circles) or do you think that's qualitatively different from intellectual masturbation?
I think doing things for their own sake is fine, it's only masturbation with negative valence if people are confused about the goal. 

Some things are coordination problems. Everyone* prefers X to Y, but there are transition costs and people can't organize to get them paid. 

Some things are similar to coordination problems, plus the issue of defectors. Everyone prefers X (no stealing) to Y (constant stealing), but too many prefer X' (no one but me steals). So even if you achieve X, you need to pay maintenance costs.

Sometimes people want different things. These are not coordination problems.

Sometimes people endorse a thing but don't actually want it. These are not coordination pro... (read more)

I think I would have missed the inference if I didn't know what the specific thing was here (although maybe I am underestimating other people's inferencing)
I asked ChatGPT; its response was essentially what seems reasonable to guess, though not very specific. My first guess as to specifics is "Elizabeth tried to organize a weekly gathering where people would pick a paper, read it, write up their thoughts, and discuss it at the meeting, and couldn't get people to commit the time necessary, and ended up questioning someone along the lines of 'Well, several people said it was good to practice these skills, and that the summaries are valuable public services, so why aren't they ...?', leading to the incident at the end."  Other variations that came to mind included hiring a writing teacher for a group, or some kind of large-scale book buying, though neither of those involves both reading and writing.

That's a pretty reasonable guess, although I wasn't quite that dumb.

I'm temporarily working a full-time gig. The meetings are quite badly run. People seemed very excited when I introduced the concept of memo meetings[1], but it kept not happening, or the organizer would implement it badly. People (including the organizer) said nice things about the concept, so I assumed this was a problem with coordination, or at least "everyone wants the results but is trying to shirk".

But I brought it up again when people were complaining about the length of one part of a meeting, and my boss said[2] "no one likes reading and writing as much as you", and suddenly it made sense that people weren't failing to generate the activation energy for a thing they wanted, they were avoiding a thing they didn't want but endorsed (or I pressured them into expressing more enthusiasm than they actually felt, but it felt like my skip boss genuinely wanted to at least try it and god knows they were fine shooting down other ideas I expressed more enthusiasm over). 

So the problem was I took people's statements that they wanted memo meetings but got distracted by something urgent to be true, when actu... (read more)

I have a new project for which I actively don't want funding for myself: it's too new and unformed to withstand the pressure to produce results for specific questions by specific times*. But if it pans out in ways other people value I wouldn't mind retroactive payment. This seems like a good fit for impact certificates, which is a tech I vaguely want to support anyway.

Someone suggested that if I was going to do that I should mint and register the cert now, because that norm makes IC markets more informative, especially about the risk of very negative proje... (read more)

I have friends who, early in EA or rationality, did things that look a lot like joining Nonlinear. 10+ years later they're still really happy with those decisions. Some of that is selection effects of course, but I think some of it is that the reasons they joined were very different.

People who joined early SingInst or CEA by and large did it because they'd been personally convinced this group of weirdos was promising. The orgs maybe tried to puff themselves up, but they had almost no social proof. Whereas nowadays saying "this org is EA/rationalist" gives you a b... (read more)


Sometimes different people have different reactions to the same organization simply because they want different things. If you want X, you will probably love the organization that pushes you towards X, and hate the organization that pushes you away from X.

If this is clearly communicated at an interview, the X person probably will not join the anti-X organization. So the problem is when they figure it out too late, when changing jobs again would be costly for them.

And of course it is impossible to communicate literally everything, and also sometimes things change. I think that a reasonable rule of thumb would be to communicate the parts where you differ significantly from the industry standard. Which leads to a question what is the industry standard. Is it somewhere documented explicitly? But there seems to be a consensus, if you e.g. go to Workplace Stack Exchange, about what is normal and what is not.

(...getting to the point...)

I think the "original weirdos" communicated their weirdness clearly.

Compared to that, the EA community is quite confusing for me (admittedly, an outsider). On one hand, they handle tons of money, write grant applications, etc. On the other hand, they sometim... (read more)

In the spirit of this comment on lions and simulacra levels, I present: simulacra and Halloween decorations


Level 1: this is actually dangerous. Men running at you with knives, genuinely poisonous animals.

Level 2: this is supposed to invoke genuine fear, which will dissipate quickly when you realize it's fake. Fake poisonous spiders that are supposed to look real, a man who jumps out with a fake knife but doesn't stab you, monsters in media that don't exist but hit primal fear buttons in your brain.

Level 3: reminds people of fear without eve... (read more)

I agree with 1-3, but would change level 4 to something like "people don't even associate it with fear, we just think it is a cute tradition for small kids (see: bat balloons)". I think that level 4 is like: "it might be connected to the territory somehow, but I really don't care how, it just seems to work for some unspecified reason and that is okay for me". Analogical things could be said about Christmas, but on level 1 it is actually two unrelated things (birth of the Messiah; Saint Nicholas). Actually, all holidays have an aspect of this; some people celebrate Independence Day or Labor Day to make a political statement, but most people just do it because it is a tradition.

I know we hate the word content but sometimes I need a single word to refer to history books, long-running horror podcasts, sitcoms, a Construction Physics blog post, and theme-park analysis YouTube essays. And I don't see any other word volunteering.

All of what you've described can be considered texts but that's usually in the context of critique/analysis. I see content as the preferable term when not engaging in critique/analysis though.
David Hornbein
Back in the ancient days we called all this stuff "media".
Oh yeah, that did seem better.
And we knew it is the plural form of "medium," which is isomorphic to the message.

Let's say there's a drug that gives people 20% more energy (or just cognitive energy). My intuition is that if I gave it to 100 people, I would not end up with 120 people's worth of work. Why?


  • the energy gets soaked up by activities other than the ones I am measuring. e.g. you become better at cleaning your house, or socializing, or spend more time on your hobby.
  • The benefits accrue to other people- you have more energy which means you lose chore-chicken with your partner, who now has slightly more energy for their stuff.
  • Energy wasn't the only
... (read more)
The scenarios you described sound plausible, but it could also be the other way round:

* if there is a constant amount of work to do around the house, you can do it 20% faster, so not only do you have more energy for the remaining work but also more time;
* you could spend some of the extra energy on figuring out how to capture the benefits of your work;
* you could spend some of the extra energy on fixing things that were slowing you down;
* the drug might make you better at other things, or at least having more energy could create a halo effect.

So I guess the answer is "it depends", specifically it depends on whether you were bottlenecked by energy.
I don't know what "cognitive energy" or "worth of work" means, in any precise way that would let me understand why you'd expect a 100% linear relationship between them, or why you'd feel the need to point out that you don't expect that. If I did have such measures, I'd START by measuring variance across days for a given person, to determine the relationship, then variance across time for groups, and variance across groups. Only after measuring some natural variances would I hypothesize about the effect of a pill (and generally, pills aren't that "clean" in their effect anyway).

edit (because I can't reply further): Deep apologies. I will stop commenting on your shortforms, and attempt to moderate my presentation on posts as well. Thanks for the feedback.
This is the 5th comment you've left on my shortform, most of which feel uncollaborative and butterfly-squashing. I think your comments are in the harsh-side-of-fine zone for real posts, but are harsher than I want to deal with on shortform, so I ask that you stop. 

I'm convinced people are less likely to update when they've locked themself into a choice they don't really want.

If I am excited to go to Six Flags and get a headache that will ruin the rollercoasters for me, I change my plans. But if I'm going out of FOMO or to make someone else happy and I get a headache, it doesn't trigger an update to my plans. The utilitarian math on this could check out, but my claim is that it doesn't need to: once I lock myself in, I stop paying attention to pain signals and can't tell if I should leave or not.

I think "locked themself into a choice" is unhelpful, and perhaps obfuscatory.  There are lots of different lock-in mechanics, and they are incredibly unequal.  I also don't see this as a failure to update, but just a different weighting of costs and benefits.  though there's ALSO a failure to update, in that I tend to lie to myself if I don't want to question a decision. Depending on the group and the frequency of contact, it's quite likely that the relationship impact will be an order of magnitude larger than the actual hedonic content of the outing.  In this case, you'd absolutely be willing to suffer some in order to maintain plans.   That said, I cannot explain how it is that I forget the existence of analgesics so regularly.  

I've heard that a lot of skill in poker is not when to draw or what to discard, it's knowing how much to bet on a given hand. There isn't that much you can do to improve any given hand, but folding earlier and betting more on good hands are within your control. 

feels like a metaphor for something. 

In Texas Hold ‘Em, the most popular form of poker, there is no drawing or discarding, just betting and folding. This seems like strong evidence that those parts are where the skill lies — somebody came up with a version that removed the other parts, and everyone switched to it. Not sure how that affects the metaphor. For me I think it weakened the punch, since I had to stop and remember that there exist forms of poker with drawing and discarding.

Is there a lesswrong canon post for the quantified impact of different masks? I want to compare a different intervention to masks and it would be nice to use a reference that's gone through battle testing.

AFAICT, for novel independent work:

genuine backchaining > plan-less intuition or curiosity > fake backchaining.

And most attempts to move people from intuition/curiosity to genuine backchaining end up pushing them towards fake backchaining instead. This is bad because curiosity leads you to absorb a lot of information that will either naturally refine your plans without conscious effort, or support future backchaining. Meanwhile fake backchaining makes you resistant to updating, so it's a very hard state to leave. Also curiosity is fun and fake backch... (read more)

Eli Tyre
What's an example of fake backchaining?
Real backchaining is starting from a desired outcome and reverse engineering how to get it, step by step. E.g.:

I want to eat ice cream <- I had ice cream in the house <- I drove to the store and bought ice cream X no wait I don't have a car X I ordered ice cream delivered <- I had money <- I had a job

Fake backchaining is claiming to have done that when you didn't really. In the most obvious version, the person comes up with the action first, forward chains to how it could produce a good outcome, and then presents that as a backchain. I think forward chaining can be fine (I'd probably rank "I forward chained and BOTECed the results" ahead of intuition alone), but presenting it as backchaining means something screwy is going on.

The more insidious version follows the form of backchaining, but attention slides off at key points, generating terrible plans. E.g. (from the same person, who lacks a car):

I want to eat ice cream <- I had ice cream in the house <- I drove to the store and bought ice cream <- I own a car <- I had money <- I had a job

The difference between faking backchaining and being honest but bad at it is that if you point out flaws to the latter kind of person, they are delighted to find an easier way to achieve their goals. The fake backchainer in the same situation will get angry, or be unable to pay attention, or look attentive but change nothing once you walk away (although this can be difficult to distinguish from the advice being terrible).

E.g. I have a planned project (pending funding) to do a lit review on stimulants. I think this project has very high EV, and it would be really easy for me to create a fake backchain for it. But the truth is that someone suggested it to me, and I forward chained as far ahead as "make x-risk workers more effective", and left it at that. If I had created a fake backchain it would imply more thought than I put in to, e.g., the importance of x-risk work relative to others.

I feel like it was a mistake for Hanson to conflate goodharting, cooperative coordination, accurate information transfer, and extractive deception.

[good models + grand vision grounded in that model] > [good models + modest goals] > [mediocre model + grand vision]

There are lots of reasons for this, but the main one is: Good models imply skill at model building, and thus have a measure of self-improvement. Grand vision implies skill at building grand vision unconnected to reality, which induces more error.

[I assume we're all on board that a good, self-improving model combined with a grand vision is great, but in short supply]

Difficulties with nutrition research:

  1. ~Impossible to collect information on a population level. We could dig into the reasons this is true, but it doesn't matter because...
  2. High variance between people means population data is of really limited applicability
  3. Under the best case circumstances, by the time you have results from an individual it means they've already hurt themselves. 


The best way through I see is to use population studies to find responsive markers, so people can run fast experiments on themselves. But it's still pretty iffy. 

Ambition snowballs/Get ambitious slowly works very well for me, but some people seem to hate it. My first reaction is that these people need to learn to trust themselves more, but today I noticed a reason I might be unusually suited for this method.

two things that keep me from aiming at bigger goals are laziness and fear. Primarily fear of failure, but also of doing uncomfortable things. I can overcome this on the margin by pushing myself (or someone else pushing me), but that takes energy, and the amount of energy never goes down the whole time I'm working... (read more)

I think it's weird that saying a sentence with a falsehood that doesn't change its informational content is sometimes considered worse than saying nothing, even if it leaves the person better informed than they were before.

This feels especially weird when the "lie" is creating a blank space in a map that you are capable of filling in ( e.g. changing irrelevant details in an anecdote to anonymize a story with a useful lesson), rather than creating a misrepresentation on the map.

Thoth Hermes
I've always thought it was weird that logic traditionally treats a list of statements joined by "and"s as one false statement whenever at least one item in the list is false. This doesn't seem to completely match intuition, at least the way I'd like it to. If I've been told N things, and N-1 of those things are true, it seems like I've probably gained something, even if I am not entirely sure which one out of the N statements is the false one.
Rafael Harth
I think the consideration makes sense because "lies are bad" is a much simpler norm than "lies are bad if they reduce the informational usefulness of the sentence below 0". The latter is so complex that if it were the accepted norm, it'd probably be so difficult to enforce and so open to debate that it'd lose its usefulness.
Adam Zerner
Do you have any examples in mind? I'm having a hard time thinking about this without something concrete and am having trouble thinking of an example myself.
I'm surprised that you find this weird.  Beliefs are multi-dimensional and extremely complicated - it's almost trivial to construct cases where a loss in accuracy on one dimension paired with a gain on another is a net improvement.