Friday, December 5, 2025

(Draft 2) No Golden Age of Facts: Will to Truth, Managed Consensus, and AI Pseudo-Reasons

Jill Lepore has popularized a compelling story about the "era of the fact." On her telling, modern democracies—especially mid-twentieth-century America—briefly achieved a workable evidentiary order: courts, research universities, and professional journalism together underwrote a shared factual reality. That order, she argues, is now eroding under the pressures of datafication, digital media, and "post-truth" politics. The present, in this narrative, is a dangerous after-times: a moment when the institutions that once curated facts have been hollowed out, and truth itself seems up for grabs.

The argument here begins from a different starting point. There was no golden age in which facts, once properly gathered, yielded a stable, shared picture of the world. What Lepore calls an "era of facts" is better understood as a historically specific constellation of practices in which the modern will to truth managed to install its own rules of evidence as if they were simply how reality is. Under pressure from that same will to truth—from within religion, science, law, journalism, and social movements—those regimes have repeatedly been forced to confront their own limits. The crisis we inhabit now is not the sudden "death of facts" but the predictable backfire of treating fallible, theory-laden, power-saturated practices as if they guaranteed certainty.

Fallibilism, Frames, and Humility

This essay is written from an explicitly fallibilist stance. All strong metaphysical claims—whether realist or anti-realist, naturalist or supernaturalist—currently outrun what inquiry warrants. There may or may not be a Creator God; there may or may not be a "final vocabulary" or a future physics that makes today's ontology look parochial. Nothing in our present stock of knowledge justifies ruling such possibilities in or out a priori. The only honest posture is to evaluate live claims with the best conceptual and evidential resources available, to state one's current beliefs, and to acknowledge that they may need to change.

Crucially, this fallibilism operates on two levels. At the frame-boundary level—questions about what counts as "natural," whether there can be non-physical causation, whether consciousness is substrate-independent—our criteria are not yet settled. These are questions about which domains are even open to systematic investigation. Here, the only responsible stance is agnosticism: neither credulous acceptance nor dogmatic exclusion, but an admission that, for now, we lack adequate methods to settle them.

Within particular frames of practice, however—physics, forensics, epidemiology, cognitive neuroscience, journalism—we do have standards and methods that make inquiry both possible and productive. These standards are themselves provisional and revisable, but they function. When controlled studies of intercessory prayer repeatedly show null results, when parapsychological phenomena fail to reproduce under scrutiny, when certain forensic techniques collapse under validity testing, those are not mere "perspectives." They are our best-supported judgments within those frames. The fact that future science might reconceive the boundaries of its own object domain does not make present discriminations arbitrary.

There is, on this view, no coherent "present working ontology" that unifies all domains into a synoptic model of "the world as a whole." What we actually have are multiple, partially connected frames of reference—forms of life, if one prefers—with family resemblances and criss-crossing connections but no single, worked-out picture into which everything fits. We can still make claims that draw on several frames at once—linking evolutionary, biological, and social phenomena—without pretending that these local integrations amount to a final, unified story about What There Really Is. That absence of a One Big Ontology is not a problem to be solved by speculative metaphysics; it is a condition of our current epistemic situation.

This accepts what might be called trivially true anthropological relativism: different cultures, traditions, and domains develop non-identical concepts and standards, as any ethnographer or traveler observes. But it rejects the philosophical worry that this entails global incommensurability—the notion that different frames cannot communicate or find common purchase. Quine's gavagai/rabbit thought experiment dramatizes semantic underdetermination, but actual practice shows that partial overlap and navigable ambiguity suffice for translation, cultural diffusion, and syncretism. Even within established fact-regimes—medicine, law, science—there is no a priori principle determining when precision is possible or necessary. Cystic fibrosis admits genetic markers; Crohn's disease involves multiple, shifting, somewhat fuzzy criteria. Water is definitionally H2O; legal concepts like "general welfare" or "reasonable doubt" remain usefully ambiguous. As Edward Levi argued about legal reasoning, ambiguity is not always a defect: it provides "a forum for the discussion of policy in the gap of ambiguity," making possible incremental movement on contested questions. The interpretive space where definitions remain flexible—in psychiatry's evolving nosology, in hybrid biological concepts, in constitutional interpretation—is often where productive work gets done.

What this refuses is the quest for certainty: the demand that we either achieve perfect semantic unity and operational precision everywhere, or collapse into nihilism because we cannot. Neither captures how inquiry actually proceeds. We work with the precision available in specific contexts, we navigate ambiguity where definitions remain contested or fluid, and we distinguish both from gibberish without needing algorithmic rules for doing so. Conflict is constitutive of this process, not an aberration. Sometimes interpretive negotiation succeeds and yields productive hybrids; sometimes it fails and conflicts persist. There is no a priori principle determining which outcome will obtain, and no method that guarantees reconciliation.

Will to Truth Inside Religion

The modern will to truth does not originate outside religion. It arises from deep within religious traditions that demand sincerity, doctrinal precision, and honest self-scrutiny. Medieval theology already distinguished between truths of reason and truths of revelation; natural theology and natural law claimed that certain matters—about the created world and about justice—could be known by human reason, even as core mysteries of faith exceeded it. Late-medieval dissent and the Reformation intensified this dynamic. To "protest" in Protestantism is to refuse clerical monopoly on mystery: Sola Scriptura, vernacular translations, lay Bible study, and the practice of sending theses to public disputation all assume that individuals and congregations may contest what counts as true, using argument and exegesis, not merely deference.

At the institutional level, this will to truth takes forms that later secularize their own results. In Protestant theological faculties in early modern Germany, scholars apply the best available philological and historical methods—originally honed on classical texts—to scripture itself. The aim is not to debunk the Bible but to purify faith by anchoring it in solidly established history. The effect, however, is to reveal composite authorship, late redaction, and textual layering where tradition had seen Mosaic books. Once these techniques exist, they are not owned by skeptics; they are cultivated within seminaries, and they legitimate treating sacred texts as human artifacts.

The Jesuit "rites" controversies show the same pattern in a Catholic missionary context. Jesuits in China and India pursued a rigorous understanding of local practices, trying to distinguish between what was genuinely idolatrous and what might be retained as culturally specific expressions compatible with Christian monotheism. Their internal reports forced Rome to confront a basic question: is Christian truth a fixed doctrinal core that must be expressed identically everywhere, or can it legitimately take on diverse ritual and linguistic forms? The condemnations of the Chinese and Malabar rites did not end the matter. They made visible that "pure doctrine" had always depended on negotiated accommodations with local realities.

The Inquisition offers a darker but equally revealing case. Its tribunals were designed as massive truth-producing machines: standardized procedures for accusation, interrogation, confession, and sentencing, all in the name of orthodoxy. Yet the archives they created, centuries later, became prime sources for historians reconstructing popular beliefs, alternative spiritualities, and the everyday life of dissent. The apparatus built to suppress error unwittingly preserved an enormous body of evidence that relativizes and pluralizes "the faith" it was protecting.

Pietist diary-keeping presents the same logic in an intimate key. Encouraged to record their spiritual state in meticulous detail, believers were to track true conversion and sanctification. Over time, the practice of radical self-examination shifted the locus of authority: loyalty to one's "intellectual conscience" could come into tension with loyalty to external dogma. The imperative to be honest with oneself, originally framed as a religious duty, became a seed of the secular, self-authorizing subject who can subject even faith itself to critique.

Nietzsche's claim that Christianity's own ethic of truthfulness turned against Christian belief gives a name to this pattern. The will to truth is not an Enlightenment bolt from the blue; it is a long religious project that, when radicalized, eats into its own foundations.

Science and the Fractured Mirror

Early modern science inherits this will to truth and gives it new, explicitly empirical institutions. Bacon's fictional Salomon's House, then the Royal Society and the Académie des Sciences, cast inquiry as a collective, disciplined, methodical enterprise: division of labor, standardized experiments, careful observation and record-keeping, public reporting and contestation. These bodies often saw themselves as reading the Book of Nature with a reverence akin to scriptural study, but with methods designed to minimize individual bias and error.

It was tempting to hope that such practices would eventually yield a "mirror of nature": a set of theories whose facts correspond transparently to how the world is in itself. But serious reflection on scientific practice has undercut that mirror image from within. Revived ancient skepticism and Descartes's methodological doubt dramatized, at the birth of modern science, how difficult it is to secure even one's own existence or the reality of an external world against skeptical scenarios. Later, Thomas Kuhn's account of scientific revolutions portrayed the development of science not as a smooth convergence on truth but as a sequence of paradigm-bound frameworks. On this view, "facts" are not raw givens but items already interpreted within a conceptual scheme, and different paradigms may sort and weigh them in ways that are not straightforwardly comparable.

Quantum theory adds a different dimension to the problem. As a piece of empiricism, it is a triumph: its formalism predicts experimental results with extraordinary precision and supports technologies on which contemporary life depends. Yet the conceptual picture it yields is not a unified, transparent image of reality. At the most basic level of fundamental physics, we have a deeply successful but theoretically uneasy pair: general relativity for gravitating, large-scale structures and quantum field theory for the micro-world. Attempts at unification remain speculative. Within quantum mechanics itself, multiple interpretations—Copenhagen, many-worlds, Bohmian mechanics, objective collapse models—fit the same experimental data. The theory does not, by itself, force a single realist metaphysics; it licenses several, and many working physicists adopt a de facto pragmatism, focusing on calculations and predictions while bracketing metaphysical debates as optional.

None of this refutes empiricism. Rather, it is empiricism pursued with uncompromising rigor. But it does undermine the stronger claim that science has already, or inevitably will, furnish a final, God's-eye catalogue of how things are. Our best scientific theories are astonishingly effective tools that structure perception and intervention, but they do not currently entitle us to the kind of metaphysical certainties that mid-century "age of facts" rhetoric often took for granted. Strong scientific realism may yet be vindicated by future developments; it may not. At present, neither it nor its denial is warranted by the state of inquiry.

What follows from this is not that science tells us nothing, but that its authority is frame-bound. Within physics, certain claims are extremely well supported and others are rejected; within biology, evolutionary theory outperforms creationism by any reasonable metric. Attempts to leap from those within-frame successes to a single ontological story in which "everything is really just particles" are speculative unifications riding on the prestige of some frames over others. They are not themselves products of the empirical methods they invoke.

Law, Journalism, and the Managed Consensus

Law and journalism took their own paths to factual authority. Modern courts refined rules of evidence—excluding hearsay, defining admissibility, distinguishing between lay and expert testimony—in the name of letting an impartial jury discover "what really happened." The incorporation of forensic science into criminal justice, from fingerprints and ballistics to blood typing and later DNA, seemed to anchor legal fact-finding in increasingly objective procedures.

Yet here, too, the will to truth has turned on its own practices. Scientific scrutiny of forensic fields has shown that several widely used techniques—bite-mark analysis, comparative bullet-lead analysis, some forms of microscopic hair comparison—rest on weak or nonexistent empirical foundations. At the very moment when the legal system leaned hardest on science to bolster its claims to factual accuracy, more rigorous science exposed key parts of that reliance as misplaced. Truth-seeking methods, applied to themselves, revealed institutional conventions masquerading as neutral facts.

Journalism's twentieth-century ideal of objectivity—verification, separation of news from opinion, balanced sourcing—was a parallel attempt to organize public life around facts. Walter Lippmann, worried about the cognitive limits of mass publics in a complex industrial society, concluded that citizens must inevitably depend on pictures of the world curated by experts and professionals. In that framework, manufacturing consent is not necessarily nefarious; it is how a democratic mass public can function at all.

John Dewey rejected this relegation of the public to spectatorship. For him, social problems are not abstract puzzles handed to remote experts; they are concrete troubles experienced in local contexts. Those who live with the consequences are best placed to define what counts as a problem and to participate in inquiry about how to address it. Dewey did not deny the need for expertise, but he refused to treat experts as oracles. Inquiry, for him, is fallibilist, experimental, and inherently social: it must integrate technical knowledge with the intelligence of those affected. The role of the press, on this view, is not merely to transmit facts downward but to help constitute a public capable of deliberate action.

We do have real-world examples of this kind of Deweyan inquiry. Community-based environmental justice campaigns, for instance, have combined local testimony about health effects and everyday conditions with air-quality measurements, epidemiological studies, and legal advocacy to challenge "official" accounts of pollution. In some participatory budgeting processes, residents have worked with city staff to analyze fiscal data, propose projects, and deliberate about trade-offs, rather than merely voting on pre-packaged options. These are not utopias, but they show that fact-finding and problem-solving can be organized around those affected rather than being reserved for distant elites.

Yet it is important not to overstate what fallibilist, collaborative inquiry can achieve. Dewey sometimes wrote as if experimental method could eventually resolve all conflicts, restoring experiential continuity through shared problem-solving. That quasi-Hegelian optimism—conflicts sublating into higher unities—is empirically unwarranted. As argued above, interpretive negotiation sometimes yields productive hybrids and sometimes fails, and no principle settles in advance which outcome will obtain. Conflict is constitutive of human life, not an aberration to be overcome through better method. The task is not to guarantee resolution but to build institutions that can work with persistent conflict—through constitutional constraints, agonistic respect, and the humility to recognize when further inquiry will not bridge the gap.

The mid-twentieth-century United States—the period Lepore highlights—looked, to many, like the realization of a Lippmann-style fact-regime. Major newspapers and broadcasters set the agenda; research universities and government labs produced expert knowledge; the courts claimed to administer justice based on evidence. There were, to be sure, deep conflicts and exclusions: Jim Crow, redlining, McCarthyism, the national-security state. But for a time, a relatively narrow elite could treat its own worldview as "the facts," and much of the country acquiesced.

That apparent solidity masked ongoing fractures. Courts upheld "separate but equal" segregation and compulsory sterilization; immigration law enforced racially coded quotas; scientific establishments lent their authority to eugenic and racist theories; foreign policy elites justified firebombing cities and building thermonuclear arsenals as rational strategic necessities. These practices were not deviations from the factual order; they were features of it. What changed in the 1960s was not that facts suddenly became contested; it was that the truth-telling virtues the regime claimed for itself were turned against it. Civil rights activists, anti-war protesters, feminists, and others marshaled witness testimony, documents, photographs, leaked reports, and statistical analyses to expose what had been excluded or normalized. They did not reject facts; they insisted on different ones, or on giving weight to facts that the consensus had treated as marginal.

Crucially, many of these movements also altered the processes by which facts became actionable. Freedom Schools, consciousness-raising groups, welfare rights organizations, and other experiments in democratic education and organizing were not just sites of protest; they were sites of inquiry. They redefined whose experiences counted as evidence and who had standing to interpret it. This exemplified a Deweyan alternative in practice: collaborative investigation by those affected, integrating expertise without surrendering judgment to it.

Fantasy, Civil Religion, and Non-Epistemic Practices

It might be tempting to oppose this managed factual order to a surrounding swamp of fantasy and credulity. American culture is indeed full of revivals, Great Awakenings, faith healings, spiritualist séances, and conspiratorial subcultures. But the line between "rational fact" and "irrational fantasy" is not so easily drawn. The same Protestant biblical culture that underwrote pro-slavery theologies in the American South also generated abolitionist movements grounded in readings of scripture and natural law. The rhetoric of the Declaration of Independence and later human-rights discourse—natural rights, equality before God, inalienable dignity—owes as much to religious and deist imaginaries as to empirical science. Those ideals could not have been derived from a neutral inventory of facts; they are value-laden constructions that were, and remain, contested.

The European Enlightenment itself exhibits this ambiguity. French revolutionaries converted churches into Temples of Reason, staged festivals to the Goddess of Reason, and attempted to build a secular religion of Liberty and Nature. When that proved unstable, Robespierre promoted a Cult of the Supreme Being, echoing Voltaire's conviction that, if God did not exist, it would be necessary to invent him in order to secure moral order. Kant's "postulates of practical reason" (freedom, immortality, God), Paine's deism, and American civil religion around a providential nation all suggest that modern egalitarian projects and democratic legitimacy have long leaned on quasi-religious commitments. The will to truth destabilized certain metaphysical claims; it did not remove the need for orienting values and narratives that go beyond, or beneath, empirical evidence.

It is also important to say that not all valuable human activities are organized around factual claims at all. Fiction, music, visual art, dance, contemplative disciplines, and liturgy are not defective sciences; their point is not to predict or explain but to disclose, express, and reshape experience. They involve skills and can be better or worse, but not primarily by the standards of empirical correctness. The critique of "fact-regimes" in this essay is aimed at domains that do claim to tell us how things are—science, law, journalism, policy—and at the temptation to extend their standards everywhere, or to treat their current frames as if they already gave us a unified ontology of the whole.

A consistent fallibilism refuses both Dewey's naturalizing sublation and Rorty's anti-realist closure. Dewey sometimes undercut his own fallibilism by drawing a hard line between "natural" and "supernatural," treating the latter as a priori out of bounds. But what counts as the "natural" object domain of inquiry is itself historically expanding and revisable; no one now can say what forms future science might take. Rorty, more subtly, forecloses frame-boundary questions by declaring metaphysical inquiry itself passé—"there is no final vocabulary" functioning as an inverted dogmatism, a negative metaphysics that somehow knows there is nothing to know. Even late Wittgenstein's therapeutic approach—treating religious and metaphysical questions as confusions about grammar rather than genuine inquiries—evades rather than confronts the uncertainty. Weak agnosticism refuses these closures. Frame-boundary questions—about God, consciousness, ultimate reality—may or may not be answerable. We do not currently have methods to settle them. That is not a reason to dismiss them (Rorty), naturalize them away (Dewey), or dissolve them therapeutically (Wittgenstein at his most reductive). It is a reason for humility.

Here the levels distinction matters. Frame-boundary questions—"Could there be non-physical causation?" "Might consciousness be substrate-independent?" "Could prayer work through mechanisms we do not yet understand?"—do not currently have established investigative criteria. We cannot yet set up observations that would definitively settle them, precisely because they are questions about what kinds of things are subject to investigation in the first place. Within established frames, however—forensic science, quantum physics, genetics—we can and must make sharp discriminations. When forensic methods fail replication studies, when quantum mechanics yields stable predictions across contexts, when genetic markers reliably identify specific conditions, these are not mere "perspectives"; they are our best-warranted claims under operative standards that have been tested and refined.

The danger Dewey wanted to avoid—treating every supernatural claim as a live scientific hypothesis despite a long record of failed tests—is real. But the corrective is not to rule such claims out by definition. It is to demand that any claim, whatever its content, submit to the same discipline: clear methods, testable consequences, openness to refutation, and integration with the rest of what we have reason to accept. When controlled studies of intercessory prayer show null results, when séances cannot survive controlled conditions, when dualist theories cannot explain systematic mind-brain correlations, we do not conclude that the "supernatural" is metaphysically impossible. We conclude that these particular claims are not presently warranted within any successful frame of inquiry. That is fallibilism doing its work.

AI, Pseudo-Reasons, and Entangled Agency

The current wave of AI does not make facts disappear. It does, however, introduce a new and particularly insidious form of trouble: the mass production of what might be called pseudo-reasons. If Lepore's concern is that we have lost a shared factual reality, the deeper problem is that we are losing the practices of accountable reason-giving that once aspired, however imperfectly, to produce facts worth sharing. Contemporary systems, especially large language models, generate outputs in ordinary language. They recommend, explain, summarize, justify, and converse. To users, these outputs are almost indistinguishable, in form, from the things people say when they have thought about something: "You'll love this movie," "This candidate is not a good fit," "This patient is low risk," "This teacher is underperforming."

The resemblance is deceptive. Humans give reasons by operating in the space of meaning and intention. Even when motives are mixed, we act and speak about something; we can, at least in principle, say what considerations we took into account, what we were trying to do, and why we thought our judgment was apt. Current AI systems operate exclusively in the space of causes. They manipulate tokens according to statistical patterns extracted from training data. They have no awareness of what their words mean, no sense of purposes, no capacity for genuine deliberation. When an HR system flags a teacher as a dismissal risk, or a medical triage assistant ranks a patient as low priority, there is no agent there who can say, in human terms, "Here is how I weighed the evidence."

The danger is not only that such systems can be wrong. It is that humans, confronted with fluent language and impressive performance on benchmark tasks, slide into what might be called the user's illusion: treating the system as if it were a reason-giving subject and acting accordingly. In hiring, a score produced by an opaque model may trump a supervisor's experience; in policing, a "risk" label learned from biased arrest data may guide patrol patterns and sentencing; in medicine, a generated summary may quietly shape diagnostic choices. In each case, the AI output plays the social role of a reason: it is cited, relied upon, and allowed to settle questions. Yet it is, in fact, only a complex artifact of pattern-matching over earlier behavior and recorded correlations.

The problem is compounded by entanglement. We do not stand outside these systems, choosing whether to "use tools" or not. Workflows in finance, healthcare, logistics, education, and media are already organized around networks of algorithms and platforms. Decisions about who gets credit, who receives a test, which story is promoted, or whose complaint is escalated routinely pass through layers of automation that no individual can survey. In many institutions, the default is effectively "read and accept": if no human intervenes, the system's output stands.

From a fallibilist, Deweyan angle, the central question is where, in this mangle, genuinely accountable reason-giving can still occur. The answer cannot be to abolish complex computational systems and return to a pre-digital world. Nor can it be to reassure ourselves that "humans remain in the loop" when, in practice, many loops run too fast, too opaquely, and with too little institutional support for meaningful review. A more realistic program is to identify the critical junctions—those points where error or bias inflicts serious, often irreversible harm—and to rebuild workflows so that human judgment with real authority is structurally required there. That may mean, among other things, slowing some processes down, protecting certain domains from full automation, and treating unread algorithmic outputs as unacted-upon, rather than as tacitly approved.

The deeper ethical point is that we must stop pretending that entities which cannot understand, intend, or answer can bear the responsibility for reasons. Trust, in such systems, is misplaced not because machines are metaphysically incapable of truth, but because they are not, and cannot be, appropriate objects of our demand for justification. The only loci that can be held to account are humans and the institutions we build. Designing AI infrastructures so that they augment, rather than displace, our capacity to give and ask for reasons is a political and moral task, not a technical afterthought. If, at some future point, systems emerge that supply convincing evidence of understanding and agency by criteria we would be willing to apply to one another, we will have to rethink these judgments. At present, nothing in the behavior or design of large-scale AI warrants such a leap.

No Lost "Fact Age," Only the Ongoing Work of Inquiry

Against this backdrop, nostalgia for a lost era of facts looks misplaced. There was never a time when facts, once properly certified, yielded a transparent, shared view of reality that could anchor politics without remainder. There were, rather, successive attempts to tame the will to truth by institutionalizing particular evidentiary regimes: within churches and seminaries, in scientific academies and laboratories, in courts and newsrooms, in schools and bureaucracies. Those regimes did real cognitive and moral work. They also generated, from within themselves, the doubts and resistances that now unsettle them.

The point of recognizing this is not to give up on truth or to collapse into cynicism. It is to shift expectations. Facts will always be provisional, theory-laden, and contested. They will always be entangled with values and power. That does not make them useless; it means they must be handled as tools within fallible, revisable practices of inquiry and judgment, not as immutable foundations. A democratic politics worth having will not be one that pines for a restored "age of facts," but one that builds institutions capable of working with disagreement, exposing and correcting their own blind spots, and resisting the temptation to outsource responsibility to opaque systems.

This essay itself is part of that work. Its genealogy of the will to truth and its critiques of fact-regimes are not the last word; they are hypotheses offered for contestation and refinement. The hope is not to step outside the problem but to model a way of engaging it: by assembling reasons, acknowledging limits, and inviting challenge. If there is any way forward from our present discontents, it will not come from reviving the comforts of certainty, but from cultivating the harder virtues of humility, patience, and shared inquiry in a world where facts have never been simple and where pseudo-reasons now abound.
