Pandemic Prediction Checklist: H5N1
Pandemic Prediction Checklist: Monkeypox
Correlation may imply some sort of causal link.
For guessing its direction, simple models help you think.
Controlled experiments, if they are well beyond the brink
Of .05 significance, will make your unknowns shrink.
Replications show there's something new under the sun.
Did one cause the other? Did the other cause the one?
Are they both controlled by what has already begun?
Or was it their coincidence that caused it to be done?
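A minimal sketch of the common-cause case in the verse above, with purely hypothetical numbers: two variables that never influence each other still come out strongly correlated when both are driven by something that has already begun.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden common cause drives both observed variables.
z = rng.normal(size=n)             # the thing "that has already begun"
x = 2.0 * z + rng.normal(size=n)   # x does not cause y
y = -1.5 * z + rng.normal(size=n)  # y does not cause x

# x and y are strongly (negatively) correlated despite no direct causal link.
print(np.corrcoef(x, y)[0, 1])     # roughly -0.75 with these coefficients
```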
In fact, OpenAI’s CFO has already floated the idea of a government “backstop” (bailout).
https://www.wsj.com/video/openai-cfo-would-support-federal-backstop-for-chip-investments/4F6C864C-7332-448B-A9B4-66C321E60FE7
Off the top of my head: he's hoping OpenAI, and the AI boom in general, is too big to fail. In 18 months, Trump will still be in office. The only aspect of the economy on which he's rated a "success" is rising stock prices, and those have been driven by AI. He's openly trying to force the Fed to reinstate ZIRP. He could force a bailout of OpenAI (and others) on national security grounds. These are considerations I'd fully expect Altman and others to have consciously weighed, planned for, and even discussed with Trump and his advisors.
Motivated reasoning is a misfire of a generally helpful heuristic: try to understand why what other people are telling you makes sense.
In a high trust setting, people are usually well-served by assuming that there’s a good reason for what they’re told, what they believe, and what they’re doing. Saying, “figure out an explanation for why your current plans make sense” is motivated reasoning, but it’s also a way to just remember what the heck you’re doing and to coordinate effectively with others by anticipating how they’ll behave.
The thing to explain, I think, is why we apply this heuristic in less-than-full-trust settings. My explanation is that this sense-making is still adaptive even in pretty low-trust settings. The best results you can get in a low-trust (or parasitic) setting are worse than what you'd get in a higher-trust setting, but sense-making typically leads to better outcomes than not doing it.
In particular, while it’s easy in retrospect to pick a specific action (playing Civ all night) and say “I shouldn’t have sense-made that,” it’s hard to figure out in a forward-looking way which settings or activities do or don’t deserve sense-making. We just do it across the board, unless life has made us into experts on how to calibrate our sense-making. This might look like having enough experience with a liar to disregard everything they’re saying, and perhaps even to sense-make “ah, they’re lying to me like THIS for THAT reason.”
In summary, motivated reasoning is just sense-making, which is almost always net adaptive. Specific products, people and organizations take advantage of this to exploit people’s sense-making in limited ways. If we focus on the individual misfires in retrospect, it looks maladaptive. But if you had to predict in advance whether or not to sense-make any given thing, you’d be hard-pressed to do better than you’re already doing, which probably involves sense-making quite a bit of stuff most of the time.
Aside from a handful of incidents, the woke left and MAGA right are living together without murdering each other. It is not clear that the level of political violence has increased under Trump 2.
The problem we have is that both parties have been serving up weak candidates, and this is occurring against a backdrop of dysfunction in the federal government.
Replacing a weak candidate (Trump) with a better one would be direct, meaningful progress toward solving that problem.
It's a poor argument top to bottom. I tried writing up a critique, but it's hard to know what to focus on because there are so many flaws. Are there any particular points that you found compelling that you would like input on?
The leader, platform, and constituency of the Trump opposition all need to take shape together. It's a complex problem, and we shouldn't expect a simple solution.
One of the hard parts, I think, is what seems to be a decline in single-issue voters. In the past, women, blacks, gays, Jews, trade unionists, environmentalists, and so on seem to have been more focused on their particular issues. That meant you could promise the benefits each group desired without as many tradeoffs. Now, both MAGA and the progressive left seem to be "multi-issue Blobs," where you either support all their ideas or you're their opponent. Along with this, willingness to tear down the established order seems to be something these Blobs endorse, either as a negotiating strategy or as an end in itself.
So it seems to me that the real opponent in the next election isn't Trump. It's the Blob.
And that means finding a compatible set of single-issue swing voter/independent/inconsistent voter constituencies that aren't already diehard Republican MAGA. Off the top of my head, the planks might look like this:
The candidate needs to position themselves as a "goes-without-saying" Democrat, but what they talk about during the general election needs to be 95% this kind of stuff, targeted at independent, swing, and inconsistent voters, with the remaining 5% spent reassuring the liberal base. During the primaries, you build support with the "groups," which happily are mostly single-issue. But you don't treat those groups as your ticket to the Presidency, just as a stepping stone to the nomination. Whichever reasonable candidate most clearly makes that distinction seems the most promising to me.
Gavin Newsom is pretty good here. So is Ruben Gallego. I think a Newsom/Gallego ticket would be promising.
[Edit: reworded for clarity]
It's crucial to the argument that the risk is priced in. You own a risky asset: your job. The market values that risk at $0, because you can diversify it away. Investing in assets that profit when you lose your job is how you do that.
The way wealth accumulates forces you to be overexposed to the market near retirement and underexposed now. That's more $0-valued risk. It's a practical problem you can fix with leverage.
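A toy numerical sketch of that leverage point, with all figures hypothetical and chosen only for illustration: if savings accumulate mostly late in a career, unlevered stock exposure is concentrated in the final working years, and modest leverage early on spreads it out.

```python
# Toy lifecycle illustration (all numbers hypothetical).
# Wealth grows over a career, so unlevered market exposure piles up near
# retirement; 2x leverage in the first half spreads exposure across time.

years = list(range(1, 41))                  # a 40-year career
wealth = [1_000 * y for y in years]         # cumulative savings grow linearly

unlevered = [w * 1.0 for w in wealth]       # 100% stocks throughout
levered   = [w * (2.0 if y <= 20 else 1.0)  # 2x leverage in years 1-20
             for y, w in zip(years, wealth)]

def late_share(exposure):
    """Share of total stock exposure falling in the last 10 working years."""
    return sum(exposure[-10:]) / sum(exposure)

print(f"unlevered: {late_share(unlevered):.0%} of exposure in the final decade")
print(f"levered:   {late_share(levered):.0%} of exposure in the final decade")
```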
So anyone who doesn't do this can expect to become irrelevant?
No. If your job becomes irrelevant, you'll be glad you bought shares in the company that automated it away.
[Edit: reworded for clarity]
How important is it that cell and nucleus remain intact for your application? Can other chromosomes be genetically engineered? What will happen to the chromosome once identified? Do you need to be able to identify chromosomes during M phase, or is interphase OK? How many chromosomes do you need to identify and extract?
Whereof one cannot speak, thereof one must be silent.
- Ludwig Wittgenstein
LLMs hallucinate because they are not trained on silence. Almost all text ever produced is humans saying what they know. The normal human reaction to noticing one's own confusion is to shut up and study, or to distance oneself from the topic entirely. We either learn math or go through life saying "I hate math." We aren't forced to answer other people's math questions, come what may.
We casually accept catastrophic forgetting, and we apply the massive parameter count of the human brain to only a tiny fraction of the total knowledge base we expect LLMs to master.
The record of how humans react to their own uncertainty is not in the training data.
Reasoning models attempt to monitor their own epistemic state. But because they have to return answers quickly and cannot modify their own weights, they face serious barriers to achieving human reliability. They don't know what it feels like to not know something, or how to design a productive program of research and study.
Despite all this, what they can do now is extremely impressive, and I'm glad to have access to them at their current level of capability.
They probably do not make me more productive. They can be misleading, and they enable me to dig into topics and projects where I'm less familiar and thus more vulnerable to being misled. They let me explore interesting side projects and topics, which takes time away from my main work.
They make me less tolerant of inadequacy, both because they point out flaws in my code or reasoning and because they incline me toward perfectionism rather than constraining project scope to be within my capabilities and time budget. They will gold-plate a bad idea. But I've never had an LLM suggest I take a step back and look for an easier solution than the one I'm considering.
They'll casually suggest complex, custom-built solutions to problems, but they never propose cutting a feature because it would be too complicated to execute, unless I specifically ask, and even then I never feel like I'm getting independent judgment, just sycophancy.
They mostly rely on my descriptions of my work and ideas. They can observe my level of progress, but not my rate of progress. They see only the context I give them, and I'm too lazy to always give them all the context updates. They are also far more available than my human advisors. As such, the LLM's role in my life is to encourage and enable scope creep, while advisors are the brakes.