The Coward’s Epistemology

The future on offer is extraordinary. That should make us angry—genuinely, productively angry—that a coward's epistemology is positioned to foreclose it.

by Chip J | Apr 6, 2026

The children started going blind first.

In the late 1990s, researchers at the Swiss Federal Institute of Technology in Zurich completed nearly a decade of work on a modified strain of rice. They called it Golden Rice. By inserting two genes—one from maize, one from a soil bacterium—they engineered a grain that produces beta-carotene in its edible portion. The modification was elegant and the need was urgent: Vitamin A deficiency affects roughly 250 million children worldwide, causing between 250,000 and 500,000 to go blind every year, half of whom die within twelve months of losing their sight.

Golden Rice worked. It was safe. Independent testing confirmed both facts repeatedly over two decades.

It still isn’t widely deployed.

Regulatory bodies across Asia, bowing to organized opposition from environmental groups invoking the precautionary principle, delayed, restricted, and in many cases blocked the very technology that could have ended a preventable catastrophe. The Philippines, one of the countries where Golden Rice was closest to approval, saw its field trials destroyed by activists in 2013. A 2019 analysis in Nature Biotechnology calculated that the delay in Golden Rice deployment in India alone cost 1.4 million life-years annually.

That is what the precautionary principle does when applied with sufficient conviction.

Not saving lives. Taking them.

A Principle That Looks Like Caution

The precautionary principle (PP) has a reasonable-sounding version and an operational version, and the gap between them is where the damage happens.

The reasonable-sounding version says something like: when an activity might cause serious or irreversible harm, uncertainty alone is not an excuse for inaction. That sounds like good sense. It is also almost trivially true—nobody seriously argues that regulators should ignore potential harms until proven catastrophic. This version gets consensus at international conferences, makes it into treaties, and lends the PP its air of obvious wisdom.

Then the operational version takes over.

The operational PP inverts the burden of proof. It holds that a proposed change to the status quo must demonstrate safety before being permitted—that the innovator bears the burden of proving no harm, rather than the regulator bearing the burden of proving harm. The status quo, by implication, is the safe condition. Deviation from it is the risk. This is not a refinement of the reasonable version. It is a different principle with opposite implications, wearing the reasonable version’s borrowed credibility.

That implicit assumption—that the existing condition is safe and change is dangerous—is false, and the falseness is what kills people. The status quo is never safe. It is merely the accumulated result of previous changes, each of which was once an unproven deviation from what came before. Vitamin A deficiency was the status quo for millions of children. The children going blind were not an abstraction—they were the cost of preserving the existing condition against an unproven speculative risk. The PP looked at documented, quantified, ongoing harm on one side, and hypothetical ecological risk on the other, and told regulators to preserve the existing harm.

This is not caution. It is a bias dressed as caution—one that assigns present harm a weight of zero and speculative future harms infinite weight.

The deeper problem: the PP’s paralysis runs in one direction only. It is never applied to the status quo. Coal plants kill more people every year than nuclear power has killed in its entire operational history—but no precautionary framework blocked coal while nuclear power faced decades of regulatory obstruction. The principle does not weigh risks. It weighs novelty. Anything new carries the burden. Anything existing carries none. That is not epistemology. That is conservatism dressed as safety, enforcing the existing order against challenge.

The Question the PP Cannot Ask

The actual alternative to precautionary logic is not recklessness. It is honest tradeoff accounting, applied symmetrically.

The relevant question is never whether a technology introduces risk. Everything introduces risk. The question is whether the upside outweighs the downside, and whether the downside can be reduced over time as knowledge compounds. Every technology that advanced human welfare involved exactly this calculus. The steam engine was spectacularly dangerous. Early aviation killed pilots routinely. The first surgical anesthetics had non-trivial fatality rates. Early MRI technology involved powerful magnetic fields whose biological effects were imperfectly understood. The question was never “is this safe?”—it was “is this worth it, and how do we make it safer?”

That question is what the PP forecloses before it can be asked. It substitutes “can we prove no harm?” for “does the tradeoff favor proceeding?”—and since no one can ever prove no harm, the machinery stalls.

Kahneman and Tversky’s research identified loss aversion as a cognitive bias, not a rational orientation: humans systematically overweight potential losses relative to equivalent potential gains, a hardwired asymmetry that produces predictable errors in judgment. The precautionary principle is loss aversion encoded into law. It takes a known cognitive error and makes it mandatory regulatory procedure, ensuring that every innovation must fight uphill against a bias baked into the approval process itself. Calling this prudence is like calling a phobia good risk management.

When We Didn’t Ask Permission

Between roughly 1870 and 1960, Western civilization ran a ninety-year experiment in what happens when builders build without requiring proof that nothing will go wrong.

The results were not subtle.

Electricity went from laboratory curiosity to universal infrastructure in forty years. Internal combustion remade transportation in a generation. Antibiotics collapsed mortality curves for bacterial infection so sharply that the graph looks like a cliff. Aviation progressed from Kitty Hawk to the sound barrier in forty-five years. Nuclear fission moved from theoretical prediction to operational power plant in less than two decades. The transistor, invented in 1947, made modern computing possible within fifteen years.

None of these technologies were safe in any meaningful precautionary sense when deployed. Electrical fires were common. Early cars killed their drivers without seatbelts, airbags, or crash standards. Antibiotic dosing required enormous experimentation on human patients. Aviation accidents were frequent enough to be unremarkable. Nuclear technology was developed in explicit awareness that it could produce civilization-ending weapons.

The risks were real. Builders built anyway—because the calculation was obvious. The upside was transformative and the downside, while genuine, was manageable through the actual mechanism of progress: deploy, observe, correct, improve. Not predict-all-harms-first. Not prove-zero-risk-before-proceeding. Build it. Watch what happens. Fix what breaks.

That ninety-year period produced more improvement in human lifespan, material welfare, and physical capability than the preceding ten thousand years combined. Not because risk was absent—because risk was treated as a problem to solve rather than a signal to stop.

The World the PP Froze

Then something changed.

J. Storrs Hall’s Where Is My Flying Car? documents the resulting stagnation with a precision that should be embarrassing to everyone who has ever administered a regulatory agency. His central observation: the physical world and the digital world have diverged so completely since roughly 1970 that they seem governed by different physics.

In the world of electrons—software, computation, communication, information—progress has been exponential and relentless. Processing power, storage capacity, bandwidth: all have followed curves that make the advances seem almost supernatural in retrospect. The smartphone in your pocket is more computationally powerful than the entire Apollo program.

In the world of atoms—energy, transportation, construction, manufacturing, medicine delivered by physical means—progress has stalled or regressed. Commercial aviation flies at the same speed it did in 1960, and the Concorde, which broke that ceiling, was retired in 2003 and never replaced. Construction costs per square foot have risen in real terms for decades. Drug development timelines have lengthened dramatically; the average new drug takes over a decade and more than two billion dollars to reach approval. Nuclear power, the most energy-dense technology ever developed, has been effectively frozen at 1970s-era deployment in the West—despite an extraordinary safety record that should have made it the obvious answer to every serious energy concern of the last fifty years.

The divergence is not explained by technical difficulty. Bits are not inherently easier to improve than atoms. The divergence is explained by regulatory and liability architecture—by the systematic application of precautionary logic to physical-world innovation while digital innovation, which operated largely outside that architecture during its critical growth period, ran free.

We got the future, but only in one dimension. Supercomputers and same-speed jets. AI that writes code and buildings that cost more to construct than they did fifty years ago. The PP didn’t stop all progress. It stopped the progress you can touch.

The Psychological Capture

How did this happen? Not through conspiracy—through a cultural shift in which one psychological register captured institutional decision-making entirely.

Every person, and every healthy civilization, holds two orientations in productive tension. One asks: what can we build? It runs toward potential, reward, and the asymmetric upside of successful creation. The other asks: what could go wrong? It runs toward protection and the asymmetric downside of catastrophic failure. Both are legitimate. Both are necessary. The tension between them is how civilizations navigate between recklessness and paralysis.

What happened across regulatory agencies, international bodies, university research culture, and the NGO ecosystem in the late twentieth century is that one register achieved near-total institutional dominance. The harm-avoidance orientation didn’t merely gain influence—it captured the vocabulary of public decision-making so completely that articulating the builder orientation required justification, while articulating harm-avoidance required none. The asymmetry became invisible because it became the ambient assumption.

The men who built this architecture—and they were overwhelmingly men, running the EPA and the international bodies and the activist organizations—were not passive inheritors of a cultural drift. They were its architects. They built the intellectual frameworks, staffed the agencies, wrote the treaties, organized the campaigns. The builder’s impulse didn’t get crowded out. It got abandoned by people who found that preventing things conferred more status, funding, and moral authority in their particular cultural moment than making them did. A generation of technically capable people chose the regulator’s desk over the engineer’s bench, and dressed that choice as virtue.

The result is an institutional class whose entire professional identity rests on the question “what could go wrong?”—and whose career interests are served every time the answer is “more than we thought.” This is not cynicism about individuals. It is a structural observation about incentives. The precautionary principle doesn’t just reflect this orientation. It codifies it, gives it legal force, and makes it the mandatory starting position for every significant physical-world innovation.

A Handicap Only We Wear

One paragraph suffices, because the point is that simple.

The precautionary principle, as practiced, is a Western institution. China has built or is building roughly twenty Generation III nuclear reactors. Germany shut down its last three operating nuclear plants in 2023 while importing French nuclear power across the border. The EU’s REACH chemical regulations add years and hundreds of millions of dollars to product approvals that face no equivalent friction in competing markets. Golden Rice was blocked in the Philippines while China approved its own transgenic crop varieties without decade-long regulatory theater. A principle that hobbles only those who follow it is not a safety mechanism. It is a competitive handicap, worn voluntarily, by institutions that have mistaken the wearing of it for moral seriousness.

The Stakes Are Not Abstract

This is where the historical argument stops being interesting and starts being urgent.

We are at the beginning of a convergence that the atoms-electrons divergence made impossible for fifty years. The two tracks are rejoining—and the potential on the other side is not incremental. It is civilizational.

Consider what is actually within reach. AI systems are collapsing drug discovery timelines that previously ran a decade or more, identifying protein structures and candidate compounds in months rather than years. Autonomous vehicles are approaching the reliability threshold that makes personal air mobility—actual flying cars, not a fantasy—a tractable engineering problem. Advanced fission reactor designs, modular and manufacturable at scale, are finally moving toward commercial deployment after decades of regulatory stasis. Fusion research has crossed the ignition threshold; the engineering questions are now the binding constraint, not the physics. Space-based resource extraction is transitioning from science fiction to business plan: the asteroid belt contains mineral concentrations that would dwarf every terrestrial reserve ever discovered. Life-extension therapies targeting the cellular mechanisms of aging are in clinical trials—not longevity supplements, but interventions aimed at the aging process itself. Bionics have reached the point where prosthetic limbs interface directly with the nervous system; the next decade will see that technology extend into cognitive augmentation.

The energy production required to power all of this—the computation, the manufacturing, the transportation, the sheer material ambition of an expanding civilization—is enormous, and the only technology capable of providing it at the required scale and density is nuclear.

Every item on that list faces precautionary obstruction. Every single one.

Drug approval timelines that add a decade between discovery and deployment are not protecting people—they are killing people who would have lived. Autonomous vehicle certification frameworks designed by regulators who treat the existing accident rate as acceptable while requiring perfection from the new are not protecting people—they are preserving a status quo that kills 40,000 Americans annually. Nuclear permitting processes that add twenty years and billions of dollars to reactor construction are not protecting people—they are ensuring that the only scalable clean energy technology remains unscaled.

The pattern is the same everywhere: the existing harm is invisible, the speculative harm is vivid, and the PP ensures that vividness determines outcome.

What Replaces It

Not recklessness. The alternative is honest tradeoff accounting applied symmetrically, combined with the actual mechanism of human progress: deploy, observe, correct.

Every knowledge advance in history came through that process. You cannot know in advance whether a new technology will work, what its side effects will be, or how to mitigate them—you discover that by deploying it carefully, watching what happens, and fixing what breaks. The PP short-circuits this process by requiring that the discovery happen before the deployment. It demands that we know what we can only learn by doing.

The mechanisms for navigating risk under uncertainty are not complicated, and they don’t require a regulatory priesthood to administer them. Liability law creates powerful incentives for safety without requiring pre-certified perfection—a company that maims its customers faces ruin, which concentrates the mind considerably. Insurance markets price risk with skin in the game that no regulatory agency possesses; an insurer who misprices a novel technology’s danger absorbs the loss. Investment capital performs its own due diligence, and investors who fund genuinely reckless ventures lose their money. Criminal law handles fraud and willful negligence. These mechanisms are imperfect, but they are self-correcting in ways that regulatory bodies—which face no penalty for being wrong, and whose institutional survival is served by finding more to regulate—structurally cannot be.

To the extent regulatory bodies exist, the least damaging version asks “how do we learn from this quickly?” rather than “how do we prevent this entirely?”

As I argued in the civilizations series: dynamic societies treat the unknown as an invitation. Static societies treat it as a threat. The precautionary principle is not neutral on that distinction—it is the policy expression of stasis, the institutionalization of “what could go wrong?” as the terminal inquiry, beyond which no further questions are permitted.

The builders who are assembling the next wave of civilizational advance are doing it mostly in the spaces the PP hasn’t fully colonized yet. That won’t last. The regulatory apparatus moves toward every frontier. The atoms-and-electrons convergence will attract exactly the precautionary infrastructure that froze the atom-world in the first place, unless the epistemological argument is made loudly, made now, and made without apology.

The future on offer is extraordinary. That should make us angry—genuinely, productively angry—that a coward’s epistemology is positioned to foreclose it.

Chip J is a contributing writer to Capitalism Magazine. You can follow him on X at @ChipActual.


The views represent those of the author and do not necessarily represent those of the editors and publishers of Capitalism Magazine.
