The Lightcone Principles
Put numbers on stuff, all the time, otherwise scope insensitivity will eat you

by habryka
16th Nov 2025
4 min read
Context: Post #6 in my sequence of private Lightcone Infrastructure memos edited for public consumption.


In almost any role at Lightcone you will have to make prioritization decisions: which projects to work on, which directions to take a project in, and how much effort to invest in any aspect of a project. Those decisions are hard, and they often involve many different considerations that are difficult to compare.

To make those decisions you will have to build models of the world. Many models are best expressed as quantitative relationships between different variables. Often, a decision becomes obvious when you put it into quantitative terms and compare it against other options or similar decisions you've recently made. One of the most common errors people make is failing to realize that one consideration is an order of magnitude more important than another, because they never put the considerations into quantitative terms.

It is extremely valuable to be able to put a concrete number on any consideration relevant to your work. Here are some common numbers related to Lightcone work that all generalists working here should be able to derive to within an order of magnitude (or within a factor of 2 if that is more relevant) in less than 30 seconds:

  • Annual unique visitors to LessWrong (according to Google Analytics)
  • Total annual burn rate of the organization
  • Total annual expenses of Lighthaven
  • Total annual revenue of Lighthaven
  • Total annual costs associated with LessWrong
  • Monthly active logged-in users on LW
  • Monthly active logged-in users above 1000 karma on LW
  • Rough total number of full-time people working in "AI Safety" roles worldwide
  • Total amount of funding on X-risk adjacent work worldwide
  • Number of people subscribed to the LessWrong curated mailing list
  • Your hourly salary
  • Your hourly replacement rate for the organization (i.e. at what external compensation is the organization indifferent between getting more money and losing an hour of your time)
  • Total online traffic to content about AI-safety/existential-risk/rationality worldwide/in the US (measured in views and ideally also in engagement-hours)
  • Rough total market valuation of frontier AI companies (this doesn't have a neat ground truth because Google, Microsoft and FB of course have other business aspects, but it shouldn't be too hard to get within an order of magnitude) 
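As an illustration of what "deriving" one of these numbers looks like, here is a minimal sketch of the hourly-replacement-rate calculation from the list above. All inputs are made-up round numbers for illustration, not Lightcone's actual figures:

```python
# Hypothetical Fermi derivation of an hourly replacement rate.
# Both inputs are illustrative assumptions, not real Lightcone numbers.

annual_replacement_value = 400_000   # assumed: $/yr the org would accept to lose this role's time
working_hours_per_year = 50 * 40     # ~50 weeks x 40 hours = 2000 hours

hourly_replacement_rate = annual_replacement_value / working_hours_per_year
print(f"~${hourly_replacement_rate:.0f}/hour")  # ~$200/hour
```

The point is that each list entry bottoms out in one or two divisions like this, which is why 30 seconds is enough.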

To be clear, you will almost never be able to capture a consideration perfectly with a Fermi estimate using the numbers above. Usually the best you can do is find a not-perfectly-robust (but good enough) upper or lower bound on the value of some activity you are working on, and then compare that to other activities using similar bounds (plus unquantified considerations). 
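The bound-comparison logic can be sketched in a few lines. This is my own illustrative framing, not an actual Lightcone tool; the example bounds are invented:

```python
# Compare two activities by rough lower/upper bounds on their value.
# If one activity's lower bound exceeds the other's upper bound, the
# quantitative comparison settles it; otherwise it is inconclusive and
# unquantified considerations have to carry the decision.

def compare_bounds(a, b):
    lo_a, hi_a = a
    lo_b, hi_b = b
    if lo_a > hi_b:
        return "A dominates"
    if lo_b > hi_a:
        return "B dominates"
    return "inconclusive"

# Assumed example bounds, in arbitrary value units:
print(compare_bounds((50, 200), (5, 30)))     # A dominates
print(compare_bounds((50, 200), (100, 400)))  # inconclusive
```

Note how wide, overlapping intervals are still useful: they tell you the estimate alone won't decide, which is itself decision-relevant information.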

As an illustration of what reasoning like this looks like, here is a random prioritization question I haven't thought about in quite a while, and how I am thinking through it right now:

Should I put more effort into designing the R:A-Z page on LessWrong?

  • Well, how many people currently visit it? I mean, it can't be more than 100,000 per month, because like, I don't think >10% of the traffic to LW is to just that page. Probably it's less, my guess is like 10,000/mo?
  • Ok, but that's potentially quite a few people. So seems like it passes a first sanity-check. How good is the current page?
  • Well, not perfect, but also not terrible. IDK, I do think I could probably do better? Like, IDK how to measure it, but I do think I could maybe make it something like 5-10% better in terms of conversion rate/direct-user-value-provided with a few days of effort.
  • How would I trade off a better experience on the R:A-Z page against a better experience on the homepage? Man, seems really hard, but I don't think I would value it at 10x. Maybe 2x? IDK, quite a lot of variance.
  • Would I usually feel good about spending a few days of effort making the LW homepage better by a few percent? Definitely. We've spent much more time than that on relatively minor issues.
  • Ok, but the frontpage does sure also see a lot more engagement hours, I think by something like 10x? So that kind of knocks that down.
  • How does this compare to other projects? Like, AI 2027 got something like 4 million views. Those do seem probably a lot less valuable on average, but I do feel like it's not like 100x worse. And I feel like I have a bunch more traction on making that go better.
  • Ok, sounds like it's at least not a slam dunk good idea to spend more of our current resources on R:A-Z compared to frontpage work and AI 2027, though there are a few uncertainties. Maybe if I update that sequences reads are more than 10x more important, then it becomes more overdetermined.
  • The other big crux would be if we could drive a lot more traffic to LessWrong or the sequences by making a better R:A-Z page. But man, when I imagine something that achieves that, I imagine it being a lot more work. Definitely more like staff-months. Which is maybe worth it, but would require more scoping out.
  • So IDK, maybe worth thinking about some more, but at least doesn't seem like a hair-on-fire situation right now. My guess is other stuff dominates, but it could flip if I update that those things are really a lot worse than marginal R:A-Z reads.
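The arithmetic buried in the bullets above can be made explicit. The numbers below are the guesses from the estimate itself; the weighting scheme is my own illustrative sketch, not a real Lightcone model:

```python
# Encoding the rough R:A-Z vs. frontpage comparison from the bullets above.
# All inputs are the guesses from the text; the scoring is illustrative only.

raz_visits_per_month = 10_000
improvement = 0.075                    # midpoint of the 5-10% guess
raz_value_weight = 2.0                 # an R:A-Z visit guessed at ~2x a homepage visit
frontpage_engagement_multiplier = 10   # frontpage sees ~10x the engagement hours

raz_score = raz_visits_per_month * improvement * raz_value_weight
frontpage_score = raz_visits_per_month * frontpage_engagement_multiplier * improvement

print(f"R:A-Z: {raz_score:.0f}, frontpage: {frontpage_score:.0f}")
# Under these guesses frontpage work scores ~5x higher, matching the
# "not a slam dunk for R:A-Z" conclusion; a ~10x update on the value of
# sequence reads would flip it.
```

None of the individual numbers are reliable, but the ratio between the two scores is robust enough to support the conclusion that follows.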

This isn't a perfect estimate, and it doesn't arrive at an overwhelmingly strong conclusion, but it did roughly establish that putting effort elsewhere is at least not obviously silly.

There is lots more to be said about the art of making quantitatively informed estimates. Culturally, expect to bet pretty regularly with people on specific, concrete numbers you can check, and to guess lots of random numbers on a daily basis that you then use to make further estimates. Take pride in forming a sense of how big all kinds of things are, and in getting fast at estimating new variables you never thought about.