There’s a simple pattern I keep running into when I look at biology, energy, space, and the progress of humanity as a whole:
We know how to do more than we are doing.
The binding constraint is fear routed through institutions, not capital or knowledge.
“Fear” here is not a moral failing. It’s a rational response that got amplified by wealth, age, and politics until it became the new default. Somewhere around the 1970s the West quietly swapped systems and changed its view of progress: we moved from “permitted to try unless forbidden” to “forbidden until proven safe.” That swap saved us from some genuine harms, but it also bred stagnation. We got stuck in comfort, focused on the possible backlash instead of the potential improvement.
The question is not whether to keep ethics, safety, and guardrails, but how to keep them without sacrificing truth‑seeking, progress, and the right to run experiments in the physical world.
How We Got Here
The twenty‑year boom. From the late 1940s to the late 1960s, we compressed a century of invention into a single generation: the transistor (1947), the structure of DNA (1953), commercial nuclear power (1954), the integrated circuit (1958), the pill (1960), the laser (1960), satellites, container shipping, ARPANET, microprocessors, the Green Revolution. Measured another way, total factor productivity in the U.S. and the West as a whole grew unusually fast in the 1950s–60s.
The moral whiplash. New power surfaced new fears. Thalidomide. Tuskegee. Atmospheric testing. Three Mile Island and Chernobyl. If you lived through that arc, “slow down” felt like wisdom and the right thing to do:
Drug approval shifted to ex‑ante proof of efficacy.
IRBs became mandatory for human subjects work.
Environmental review became a prerequisite for building.
Nuclear and biotech layered multiple clearance gates.
Each fix was sensible in isolation. Together they changed the default. Instead of “show us why you can’t,” the new question became “prove to us you won’t.” Prove to us this is safe before we continue.
Comfort and demographics did the rest. As societies get richer and older, they select for risk aversion. Voters gain more to protect and less time to enjoy long‑term upside. Media and now social media add asymmetric penalties: a visible failure creates scandal while an invisible counterfactual success creates nothing. Regulators get the message. Their careers depend on preventing the next headline, not enabling the next breakthrough.
The Economics of Friction
A useful empirical result from growth economics: ideas are getting harder to find. To keep Moore’s Law alive, research effort had to multiply roughly eighteen‑fold versus the early 1970s (Bloom, Jones, Van Reenen, and Webb, 2020). That’s partly because the physics gets harder. But a big chunk is also administrative: more layers to clear, more paperwork, more meetings, more lawyers, more PR landmines.
When failure carries concentrated blame while success spreads delayed, thinly distributed benefits across society, rational actors seek cover. “Let’s wait for more evidence” becomes a way of life.
The result is a pile of micro‑decisions that are individually safe and collectively paralyzing.
Stagnation in 2025
Biology: We can already screen embryos for disease risk and some cognitive and psychiatric traits. We know how to make multiple precise edits, with good verification, in animal models. We know in principle how to improve longevity. What blocks clinical iteration is less wet‑lab feasibility and more a totalizing fear of being the person or institution who “went too far.” So studies that should take three years take a decade, or never happen. Potential discovery < potential backlash, especially in biology, the most controversial domain because it involves humans directly and is seen as playing God.
Energy: It’s normal for a clean power project to spend longer on process than on construction. Environmental review, meant to weigh tradeoffs, often becomes a veto in slow motion. The climate clock ticks while we produce PDFs. Chernobyl scared generations.
Biotech: We can now design proteins, drugs, and even genetic edits in silico faster than any wet‑lab team could validate them. Yet regulatory and ethical bottlenecks prevent us from running them. Models capable of accelerating medicine by orders of magnitude sit idle because the experiment that proves safety is itself deemed unsafe. The irony is that over‑protective guardrails preserve today’s risk at the cost of tomorrow’s cure. We will never solve cancer if every meaningful trial is prohibited.
Space: We swapped the lunar landings and a near‑term Mars plan for fifty years of caution, committees, and low‑earth‑orbit launches.
None of these domains is suffering from a lack of cleverness or talent. They are suffering from an abundance of veto points and the career risk that attends them. Nobody wants to try anymore. There is too much to lose and not enough to gain.
Calculating the Cost of Doing Nothing
We’re good at counting harms from action. We’re bad at counting harms from delay.
Take a simple model. Suppose we could, over time, shift the population mean IQ by +3 to +5 points with safe, audited interventions such as gene edits. Under a normal model (μ=100, σ=15), that roughly:
increases the ≥130 tail from 2.28% → 3.59% → 4.78%,
the ≥145 tail from 0.135% → 0.256% → 0.383%,
and the ≥160 tail from 0.0032% → 0.0072% → 0.0123%.
Per million births:
~22,750 → 35,930 → 47,790 at ≥130 (a 2.1× increase),
~1,350 → 2,555 → 3,830 at ≥145 (2.8×),
~32 → 72 → 123 at ≥160 (3.8×).
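If you want to check these numbers, here is a minimal sketch, assuming scipy is available; the thresholds and mean shifts are the ones from the model above.

```python
# Minimal sketch: reproduce the tail numbers above under a normal
# model (mu=100, sigma=15) for mean shifts of +0, +3, +5.
from scipy.stats import norm

MU, SIGMA = 100, 15
THRESHOLDS = (130, 145, 160)
SHIFTS = (0, 3, 5)

for t in THRESHOLDS:
    # Survival function gives the tail probability P(X >= t)
    tails = [norm.sf(t, loc=MU + s, scale=SIGMA) for s in SHIFTS]
    per_million = [round(p * 1_000_000) for p in tails]
    print(f">={t}: {per_million} per million ({tails[-1] / tails[0]:.1f}x at +5)")
```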
Small average shifts produce large right‑tail multipliers, and the right tail disproportionately drives discovery and institutional competence. A year of delay means a cohort that never gets that upside. We rarely put these counterfactual losses on the same page as the risks we fear.
The same math applies to cancer mortality if you slow an effective therapy. “Do nothing” is not neutral. It is a choice with a cost, often the higher one in the long run, but we don’t see it because we never calculate the cost of doing nothing.
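As a toy version of that missing calculation, here is a sketch in which every input is a hypothetical placeholder, not an estimate:

```python
# Toy delay-cost model. All inputs are hypothetical placeholders;
# the point is only that the cost of delay is computable.
annual_deaths = 600_000      # hypothetical: yearly deaths from a disease
mortality_reduction = 0.10   # hypothetical: fraction of deaths the therapy averts
years_of_delay = 3           # hypothetical: extra years spent in review

lives_lost_to_delay = annual_deaths * mortality_reduction * years_of_delay
print(f"Counterfactual cost of delay: {lives_lost_to_delay:,.0f} lives")
# -> Counterfactual cost of delay: 180,000 lives
```

Whatever the real inputs, the output is rarely zero. The cost is computable; we just don’t compute it.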
A Pendulum in Motion
At this point it’s hard to talk about “fixes.” What we’re watching is a civilizational rhythm. A pendulum in motion. Every surplus-producing society seems to swing between expansion and preservation. Scarcity builds coordination while abundance dissolves it. Risk tolerance rises when survival feels uncertain, and falls once comfort is the default. Then comfort breeds complexity, complexity breeds inertia, and the pendulum moves back toward stagnation until a new pressure forces motion again.
It’s hard to see how that could be avoided without something like an external forcing function. Historically that’s been war, collapse, or a rival civilization moving faster. None are pleasant ways to restart momentum.
There’s a deterministic feel to all of this. The system optimizes for local stability until external stress breaks it loose again. Progress and caution are both adaptive at different phases. Maybe no society can hold a permanent equilibrium between them?
I am wondering:
Can we design institutions that self-correct before collapse forces them to? How?
Will "safetyism" burn out on its own when the opportunity cost becomes too visible? How can we make it visible?
What event or discovery might flip collective psychology back toward risk and building? What could shift it permanently?
I don’t have the answers, and I’m curious what you think. If this really is a pendulum law of civilizations, are we doomed to wait for the next crisis, or have we evolved enough to break the cycle and stop repeating the pattern?
Maybe you see this differently and think more safety and more rules are required, since we are approaching a time when progress and innovation affect the world at a scale they never did before. The potential harm of a modern technology like nuclear energy is far greater than that of the comparatively “harmless” innovations of the past.
Closing
We became careful for good reasons. The mistake was letting caution take over and harden into veto by default. Real safety does not mean zero risk. It means known risk, measured honestly, with the ability to stop when the world pushes back.
Truth has always required exposure to error. If we want progress again, we need institutions that let us take bounded risks, in public.
Appendix: the tail math uses a normal model with μ=100, σ=15. Tail probabilities P(X ≥ t) = 1 − Φ((t − (μ+s))/σ) were computed for thresholds t = 130, 145, 160 under mean shifts s = +0, +3, +5.