Correlation does imply some sort of causal link.

For guessing its direction, simple models help you think.

Controlled experiments, if they are well beyond the brink

Of .05 significance will make your unknowns shrink.

Replications prove there's something new under the sun.

Did one cause the other? Did the other cause the one?

Are they both controlled by something already begun?

Or was it their coincidence that caused it to be done?

This comment is a local counterargument to this specific paragraph (emphases mine):

Mystery 2: An Abrupt Shift 

Another thing that many people are not aware of is just how abrupt this change was. Between 1890 and 1976, people got a little heavier. The average BMI went from about 23 to about 26. This corresponds with rates of obesity going from about 3% to about 10%. The rate of obesity in most developed countries was steady at around 10% until 1980, when it suddenly began to rise.

This paragraph is very misleading. In the United States, the obesity rate among adults 20-74 years old was already 13.4% in 1960-1962 (a), 18-20 years before 1980. Moreover, the SMTM authors cite no source for the claim that “the rate of obesity in most developed countries was steady at around 10% until 1980,” and in the United States at least that claim seems to be very wrong – we don’t have nationally representative data for the obesity rate in the early 20th or late 19th centuries, but it might have been as low as ~1.5% or as high as 3%, indicating that the obesity rate in the US increased by a factor of >4x from ~1900 to ~1960.

Here is my argument tree for your debate here:

  • SMTM: There was a 3x increase in obesity from 1890-1976
    • Mendonça: The obesity rate increased by a factor of 4x from 1900-1960
  • SMTM: The rate of obesity in most developed countries was steady at around 10% until 1980
    • Mendonça: The obesity rate was 13.4% by 1960-1962
  • SMTM: In 1980, the rate of obesity suddenly began to rise
    • Mendonça: Links to graph that does show an abrupt acceleration in obesity and severe obesity starting in 1971-1974 in a section labeled "there wasn't an abrupt shift in obesity rates in the late 20th century"

It seems to me that your rebuttals to this paragraph exaggerate the differences in your perspectives. SMTM points out a 3x increase in obesity from 1890-1976, you point out it was actually 4x but our data is shaky that far back. SMTM says the rate of obesity was "steady around 10%," you say that it was 13.4%.  And the evidence you supply in the paper you link does show an abrupt increase in obesity in the era SMTM references.


Accepting that obesity rates went up anywhere from 4x to 9x from 1900-1960 (i.e. from 1.5%-3% to 13.4%), I still think we have to explain the "elbow" in the obesity data starting in 1976-80. It really does look "steady around 10%" in the 1960-1976 era, with an abrupt change in 1976. If we'd continued to increase our obesity rate at the 1960-74 pace, we'd have less than 20% obesity today rather than the 43% obesity rate we actually experience. I think that is the phenomenon SMTM is talking about, and I think it's worth emphasizing. However, I do think their language is sufficiently imprecise (is it fair to call a tripling of obesity "people got a little heavier"?) and lacking in citations that it's worth interrogating as you have done.
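That back-of-envelope extrapolation can be sketched out explicitly. The two survey figures below are rough NHANES-era numbers I'm assuming for illustration, not exact values:

```python
# Linear extrapolation of US adult obesity if the 1960-1974 trend had continued.
# Both data points are approximate survey figures, assumed for illustration.
rate_1960 = 13.4  # percent, 1960-1962 survey
rate_1974 = 14.5  # percent, 1971-1974 survey (assumed)

slope = (rate_1974 - rate_1960) / (1974 - 1960)  # percentage points per year

projected_2020 = rate_1974 + slope * (2020 - 1974)
print(f"projected obesity rate in 2020: {projected_2020:.1f}%")
```

Under these assumptions the projection lands well under 20%, versus the ~43% actually observed, which is the gap the "elbow" has to explain.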


Your subsequent data may support your counterargument well enough that I'd be ready to agree there was no abrupt shift in obesity rates in the late 20th century, but looking at this one study, I think there does appear to have been an obvious abrupt shift.

I agree with you that we are probably seeing AP being selectively broken down by the liver and colon. It therefore fails to reach the normal senescent cells in these tissues, and does not trigger their destruction. This causes a higher level of senescent cells to remain in these tissues after AP administration stops. If those liver and colon senescent cells can go on to trigger senescence in neighboring cells, that may explain why a temporary administration of senolytics fails to provide lasting protection against aging, despite the accumulation of senescent cells being a root cause of aging. 

Under this hypothesis, senescent cells are a root cause of aging, as they trigger conversion of other cells to senescence - as suggested in the ODE model paper you linked - but this root cause can only be controlled in a lasting way by ensuring that senolytics eliminate senescent cells in a non-tissue-selective manner. We can't leave any pockets of them hanging out in the liver and colon, for example, or they'll start spreading senescence to nearby organs again as soon as you stop the senolytics. Alternatively, they might simply leave the mouse with an aged liver and colon, which might be enough to kill the mouse consistently, so that there's no real lifespan benefit.
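A toy two-compartment model can illustrate the dynamic I have in mind. To be clear, this is not the ODE model from the linked paper; the compartments, rate constants, and initial burdens are all invented for illustration:

```python
# Toy model: senescent-cell burden in two tissue pools. Pool B (e.g. liver/colon)
# is assumed to be unreachable by the senolytic because the drug is broken down
# there. All rate constants and initial values are made up for illustration.

def simulate(treat_start=0.0, treat_end=10.0, t_max=40.0, dt=0.01):
    s_a, s_b = 0.2, 0.2                      # senescent fraction in each pool
    birth, spread, clear = 0.01, 0.05, 0.5   # assumed rate constants
    t = 0.0
    while t < t_max:
        total = s_a + s_b
        # Background senescence plus bystander induction from both pools:
        d_a = (birth + spread * total) * (1 - s_a)
        d_b = (birth + spread * total) * (1 - s_b)
        if treat_start <= t < treat_end:
            d_a -= clear * s_a               # senolytic clears pool A only
        s_a += d_a * dt                      # forward-Euler step
        s_b += d_b * dt
        t += dt
    return s_a, s_b

treated = simulate()                  # temporary course of senolytics
untreated = simulate(treat_end=0.0)   # no treatment window at all
```

With these made-up parameters, pool A is driven down during treatment, but the untouched pool B keeps seeding it afterward, so by the end of the run the treated animal's burden is nearly as high as the untreated one's - the qualitative "no lasting protection" pattern the hypothesis predicts.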

Edit: Sorry if I'm responding to a rebuttal you changed your mind about :)

Ah, I see what you're driving at. I'm not sure I see it becoming what it is now via a nonprofit model, though it's plausible to me that could have worked for Facebook. But I'm not sure they could support the staff and hardware it takes to keep it operational on a donation-based model. Mainly, I'd rather get these products into existence than capture additional public value by open-sourcing them, and that seems like a tradeoff to me. What do you think?

Now that I've synthesized the advice, we can work backwards to figure out how to find good ideas:

Get deeply involved with a weird industrial/technological niche that nobody understands very well and get to know all the pain points in that sector from personal experience. Be trying to do stuff that nobody's done before in some domain, where fundamental ideas have barely been reduced to practice. Make friends with the most capable people you meet along the way. Come up with ideas as often as you can, then find the cheapest, fastest ways to rule them out until you find something you just can't get away from. Then take a month to build it and try to sell it immediately.

  • Right now, this might be something like services related to prompt engineering.
    • Maybe... a prompt engineering analytics company that compiles an enormous dataset of metrics on how well engineered prompts perform on specific tasks for a particular type of model, then sells that data to companies.
    • I recently came across a startup building advanced ChatGPT-cheating detection software that takes the text-entry behavior of the user into a Google Doc into account along with analysis of the text itself.

I'd also note that the whole class of "good tech startup ideas" has been under optimization pressure for decades now, which means that these ideas, which may have accurately reflected where to find alpha when Paul Graham was making his first billion, may not apply any longer. By the time this much consensus advice is readily available from proven experts, it might be the worst sort of advice to take.

How familiar are you with the reasoning behind these ideas?

The tending-towards-monopoly idea is most clearly articulated (AFAIK) by Peter Thiel in his book, Zero to One. A marketplace or social network would be an example, where the bigger you are, the more value you provide to users and thus the harder it is for a competitor to get established. Contrast this with an artificial monopoly for a product that ought to be a commodity good, but that is a monopoly because of corporate shenanigans. Standard Oil would be a classic example.

You don't have to target businesses, but because they have a focused, more or less rational agenda and deep pockets, it can be easier to conceive of products that meet their needs, demonstrate their value, and get well-compensated for it.

I don't find either of those ideas objectionable, but if you do, it might be helpful to explain why.

I synthesized the advice:

Find a little-recognized but massive problem with no good solution, that affects you personally, where early adopters will love even the crappy MVP and pay $50-$250/month for it ($600-$3000/year), and where you can bust out a marketable early version in weeks. Be specific about the problem, solution, and customer (ideally a business). Find a business that can become enormous and that intrinsically tends toward monopoly.

Make sure the reason the idea is neglected is something like "it's intimidating" or "it's uncool" or "it's a weird super-specific niche almost nobody's ever heard of" or "the timing wasn't right until just recently." Be on the bleeding edge of some industrial frontier to find these ideas, and be capable, willing to sacrifice, flexible and adaptable and a great fit for the work so that you can execute.

Don't compromise on any of this.

I don't know about Yair's example, but it's possible they just missed the rebuttal. They'd see the criticism, not happen to log onto the EA Forum on the days when GiveWell's response was at the top of the forum, and update only on the criticism - because a week or two later, many people probably have just a couple of main points and a background negative feeling left in their brains.

Remember: if an authority is doing something you don't like, make sure to ask them before you criticize them. By being an org, they are more important than you, and should be respected. Make sure to respect your betters.

I'm not sure if you changed your mind or kinda-sorta still mean this. But I also think that it would be best to have a norm of giving individual people a week to read and respond to a critical post, unless you have reason to think they'd use the time to behave in a tactical/adversarial manner. Same for orgs. If you think an organization would just use the week to write something dishonest or undermine your reputation, then go right ahead and post immediately. But if you're criticizing somebody or an org who you genuinely think will respond in good faith, then a week of response time seems like a great norm to me - it's what I would want if I was on the receiving end.

Agreed. And even after you've plucked all the low-hanging fruit, the high-hanging fruit may still offer the greatest marginal gains, justifying putting effort into small-improvement optimizations. This is particularly true if there are high switching costs to transition between top priorities/values in a large organization. Even if OpenAI is sincere in its "capabilities and alignment go hand in hand" thesis, they may find that their association with Microsoft imposes huge or insurmountable switching costs, even when they think the time is right to stop prioritizing capabilities and start directly prioritizing alignment.

And of course, the fact that they've associated with a business that cares for nothing but profit is another sign that OpenAI's priority was capabilities, pure and simple, all along. It would have been relatively easy to preserve their option to switch away from a capabilities priority if they'd remained independent; I predict they will not be able to do so, that they could foresee this, and that they didn't care as much as they cared about impressive technology and making money.

After thinking about this more, I think that the 80/20 rule is a good heuristic before optimization. The whole point of optimizing is to pluck the low hanging fruit, exploit the 80/20 rule, eliminate the alpha, and end up with a system where remaining variation is the result of small contributing factors that aren’t worth optimizing anymore.

When we find systems in the wild where the 80/20 rule doesn’t seem to apply, we are often considering a system that’s been optimized for the result. Most phenotypes are polygenic, and this is because evolution has been optimizing for advantageous phenotypes. The premise of Atomic Habits is that the accumulation of small habit wins compounds over time, and again, this is because we already do a lot of optimizing of our habits and routines.

It is in domains where there’s less pressure or ability to optimize for a specific outcome that the 80/20 rule will be most in force.
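A quick simulation shows the baseline intuition: heavy-tailed contributions produce an 80/20-style concentration before any optimization has evened things out. The sample size and shape parameter below are arbitrary choices:

```python
import random

random.seed(0)

# Sample "contributing factor" sizes from a heavy-tailed Pareto distribution.
# Shape alpha ≈ 1.16 is the textbook value that yields roughly an 80/20 split.
alpha = 1.16
factors = sorted((random.paretovariate(alpha) for _ in range(100_000)),
                 reverse=True)

top_fifth = factors[: len(factors) // 5]
share = sum(top_fifth) / sum(factors)
print(f"Top 20% of factors account for {share:.0%} of the total effect")
```

An optimized system, by contrast, would have had its biggest factors already exploited, leaving a distribution of residuals with no such dominant head.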

It’s interesting to consider how this jibes with “you can only have one top priority.” OpenAI clearly has capabilities enhancement as its top priority. How do we know? Because there are clearly huge wins still available if it were optimizing for safety, and no obvious huge wins left to improve capabilities. That means they’re optimizing for capabilities.
