Jill Lepore has popularized a compelling story about the "era of the fact." On her telling, modern democracies—especially mid-twentieth-century America—briefly achieved a workable evidentiary order: courts, research universities, and professional journalism together underwrote a shared factual reality. That order, she argues, is now eroding under the pressures of datafication, digital media, and "post-truth" politics. The present, in this narrative, is a dangerous after-times: a moment when the institutions that once curated facts have been hollowed out, and truth itself seems up for grabs.
The argument here begins from a different starting point. There was no golden age in which facts, once properly gathered, yielded a stable, shared picture of the world. What Lepore calls an "era of facts" is better understood as a historically specific constellation of practices in which the modern will to truth managed to install its own rules of evidence as if they were simply how reality is. Under pressure from that same will to truth—from within religion, science, law, journalism, and social movements—those regimes have repeatedly been forced to confront their own limits. The crisis we inhabit now is not the sudden "death of facts" but the predictable backfire of treating fallible, theory-laden, power-saturated practices as if they guaranteed certainty.
Fallibilism, Frames, and Humility
This essay is written from an explicitly fallibilist stance. All strong metaphysical claims—whether realist or anti-realist, naturalist or supernaturalist—currently outrun what philosophical inquiry warrants. There may or may not be a Creator God; there may or may not be a "final vocabulary"—Richard Rorty's term for an authoritative, capital-T True metaphysical description of "the world as it really is" (Rorty, *Contingency, Irony, and Solidarity*, Cambridge University Press, 1989, Part I, "The Contingency of Language")—or a future physics that makes today's ontology look parochial. Nothing in our present stock of knowledge justifies ruling such possibilities in or out *a priori*. The only honest posture is to evaluate live claims with the best conceptual and evidential resources available, to state one's current beliefs, and to acknowledge that they may need to change.
Crucially, this fallibilism operates on two levels. At what I will call the *frame-boundary* level—questions about what counts as "natural," whether there can be non-physical causation, whether consciousness is substrate-independent—our criteria are not yet settled. These are questions about which domains are even open to systematic investigation. Here, the only responsible stance is agnosticism: neither credulous acceptance nor dogmatic exclusion, but an admission that, for now, we lack adequate methods to settle them.
At what I will call the *in-frame* level, however—within particular frames of practice such as physics, forensics, epidemiology, cognitive neuroscience, and journalism—we do have standards and methods that make inquiry both possible and productive. These standards are themselves provisional and revisable, but they function. When controlled studies of intercessory prayer repeatedly show null results, when parapsychological phenomena fail to reproduce under scrutiny, when certain forensic techniques collapse under validity testing, those are not mere "perspectives." They are our best-supported judgments *within* those frames. The fact that future science might reconceive the boundaries of its own object domain does not make present discriminations arbitrary.
There is, on this view, no coherent, publicly shared, demonstrably true "present working ontology" that unifies all domains into a synoptic model of "the world as a whole." (I do not rule out the possibility of such a working ontology even in the ongoing present; I claim only that it is currently a frame-boundary rather than an in-frame issue, and so cannot be adjudicated in the court of public knowledge.) What we actually have are multiple, partially connected frames of reference—what Wittgenstein calls *forms of life*, if one prefers—with *"family resemblances"* and criss-crossing connections but no single, worked-out picture into which everything fits (Wittgenstein, *Philosophical Investigations*). We can still make claims that draw on several frames at once—linking evolutionary, biological, and social phenomena—without pretending that these local integrations amount to a final, unified story about What There Really Is. That absence of a One Big Ontology is not a problem to be solved by speculative metaphysics; it is a condition of our current epistemic situation.
This accepts what might be called *trivially true anthropological relativism*: different cultures, traditions, and domains develop non-identical concepts and standards, as any ethnographer or traveler observes. But it rejects the philosophical worry that this entails global incommensurability—the notion that different frames cannot communicate or find common purchase. Quine's gavagai/rabbit thought experiment dramatizes semantic underdetermination: when a field linguist hears a native speaker say "gavagai" while pointing at a rabbit, the linguist cannot determine from evidence alone whether "gavagai" means "rabbit," "undetached rabbit part," "temporal rabbit-stage," or something else entirely (Quine, "Ontological Relativity"). Perfect translation is impossible in principle. Yet actual practice shows that partial overlap and navigable ambiguity suffice for translation, cultural diffusion, and syncretism. When subsequent usage coheres enough for practical coordination—when we can successfully hunt rabbits together, trade them, avoid stepping on them, or discover that the natives forbid hunting them altogether, pointing toward a "holy cow" scenario of possibly religious import—semantic indeterminacy becomes a philosophical curiosity rather than an impediment.
Even within established *fact-regimes*—e.g., medicine, law, science—there is no *a priori* principle determining when precision is possible or necessary. Cystic fibrosis admits genetic markers; Crohn's disease involves multiple, shifting, somewhat fuzzy criteria. Water is definitionally H2O; legal concepts like "general welfare" or "reasonable doubt" remain usefully ambiguous. As Edward Levi argued about legal reasoning (*An Introduction to Legal Reasoning*, University of Chicago Press, 1949, p. 1), ambiguity is not always a defect: it provides "a forum for the discussion of policy in the gap of ambiguity," making possible incremental movement on contested questions. The interpretive space where definitions remain flexible—in psychiatry's evolving nosology, in hybrid biological concepts, in constitutional interpretation—is often where productive work gets done.
What this refuses is what Dewey aptly called the "quest for certainty": the demand that we either achieve perfect semantic unity and operational precision everywhere, or collapse into nihilism because we cannot (Dewey, *The Quest for Certainty*). Neither captures how inquiry actually proceeds. We work with the precision available in specific contexts, we navigate ambiguity where definitions remain contested or fluid, and we distinguish both from gibberish—not through algorithmic rules but through immersion in practice and attention to consequences. When legal reasoning about "general welfare" becomes so detached from recognizable harms or benefits that it no longer constrains judgment, practitioners notice the breakdown not by applying a formula but through situated recognition that deliberation has become unmoored. This is a kind of cognitive contingency—the fallible capacity to distinguish sense from nonsense that emerges from being embedded in *forms of life*. Conflict is constitutive of this process, not an aberration. Sometimes interpretive negotiation succeeds and yields productive hybrids; sometimes it fails and conflicts persist. There is no *a priori* principle determining which outcome will obtain, and no method that guarantees reconciliation.
Excursus: Two Tiers of Inquiry—Virtues Across Incommensurable Frameworks
A common assumption, rarely examined but widely operative, holds that the intellectual virtues enabling serious inquiry—fallibilism, consequence-tracking, imaginative consideration of alternatives, honest self-examination—are achievements of post-Enlightenment Western modernity, perhaps prepared by the Reformation's questioning of authority. On this view, other traditions—Islamic, Hindu, Confucian, Buddhist—remain fundamentally "traditional," which is to say closed, dogmatic, unsuited to the kind of open-ended investigation that modern problems require. This assumption underwrites much contemporary discourse: the notion that "we" (secular, democratic, scientific) can reason and compromise, while "they" (religious, traditional, non-Western) are locked in pre-modern worldviews and dogmas that block inquiry.
This is empirically false and philosophically provincial. The historical record shows persistent, successful cross-cultural exchange, synthesis, and collaborative problem-solving across civilizations with radically incompatible ultimate commitments. The intellectual flowering of Tang Dynasty China along the Silk Road, the Hellenistic synthesis in Alexandria, the Abbasid translation movement in Baghdad, the cross-pollination of Buddhist, Hindu, and Islamic thought in medieval South Asia—these cannot be explained if inquiry-enabling virtues were the monopoly of any single tradition. What made such syncretism possible was not that everyone secretly shared the same metaphysics, but that diverse traditions cultivated, through their own practices and for their own purposes, second-order intellectual virtues that enabled working across deep difference.
To see how this works, distinguish two levels of commitment. *Tier 2* consists of ultimate value orientations, comprehensive metaphysical frameworks, and final ends: liberal democracy as a way of life, Islamic theocracy grounded in divine law, Hindu dharma with its cosmological and social order, Confucian harmony under Heaven's mandate, Buddhist liberation from samsara. These remain genuinely incommensurable—no amount of inquiry will make a democrat accept divine sovereignty as ultimate, or a Muslim embrace secular autonomy, or a Confucian adopt egalitarian individualism as foundational.
*Tier 1* consists of second-order intellectual virtues and practices: the capacity to track consequences honestly, to consider how situations look from other perspectives, to distinguish essential from contingent features of one's own position, to revise judgments when experience warrants, to recognize when one's certainty is unearned. These are not innate human capacities but cultivated dispositions, formed through training within specific traditions. Crucially, they can be developed within traditions whose Tier 2 commitments are incompatible with each other and with liberal modernity.
The key insight is that *Tier 1 virtues enable inquiry across Tier 2 differences*. People formed in radically different comprehensive frameworks can engage shared problems productively not because they abandon their ultimate commitments, but because their traditions have equipped them with practices for navigating uncertainty, examining consequences, and working provisionally with those who see the world differently.
Consider Islamic intellectual history. Classical usul al-fiqh (principles of jurisprudence) developed sophisticated methods for deriving rulings from foundational sources. This was not "anything goes" interpretation but disciplined reasoning bound by established methodological constraints—examining textual evidence (dalil), considering social welfare (maslaha), tracking actual consequences (istihsan), consulting community understanding (ijma'), and reasoning by analogy (qiyas). (See Fazlur Rahman, *Islam*, 2nd ed., with a new foreword by John E. Woods, University of Chicago Press, 2002.)
More telling for present purposes is the widespread practice of examining prophetic precedent (sunna) not as rigid command but as model requiring interpretation in light of circumstances. The Treaty of Hudaybiyyah, in which Muhammad accepted terms his followers found humiliating, is taught across Sunni and Shia traditions as exemplifying consequence-awareness and pragmatic judgment—accepting short-term losses for long-term stability, distinguishing essential principles from contingent forms, prioritizing peace over victory when circumstances demand. This is not marginal apologetics; it is mainstream pedagogy in Islamic education from Morocco to Indonesia.
Similarly, the obligation to consult (shura) before major decisions, rooted in Quranic instruction and prophetic practice, institutionalizes listening to affected parties and considering diverse perspectives. Medieval Islamic jurists regularly debated cases by considering multiple plausible rulings, examining likely consequences, and acknowledging uncertainty (shakk) where evidence did not compel a single answer. The Hanbali jurist Ibn Taymiyyah, often cited by contemporary fundamentalists, nonetheless insisted that legal rulings must serve the purposes (maqasid) of the law—justice, welfare, harm prevention—and may be revised when they fail to do so in practice.
These are not isolated examples but systematic features of a tradition training millions of people, over centuries, to examine consequences, consider alternatives, acknowledge limits to certainty, and adjust judgments to circumstances. They do this in service of Tier 2 commitments (divine sovereignty, prophetic guidance, communal welfare under God's law) that are incommensurable with secular liberalism. Yet the Tier 1 practices they cultivate make inquiry across that difference possible.
Hindu philosophical traditions, particularly those emerging from the Upanishads and systematized in the Vedanta and Nyaya schools, developed rigorous methods for examining the relationship between appearance and reality. Viveka (discriminating judgment) is not mere logical skill but a cultivated capacity to distinguish what genuinely warrants belief from what seems plausible but cannot withstand scrutiny. The Bhagavad Gita lists amanitvam (freedom from egotism about one's views) as a virtue, recognizing that attachment to being right blocks honest examination. (On these themes, see Sarvepalli Radhakrishnan, *Indian Philosophy*, Vols. I and II.)
The Mahabharata offers sustained reflection on tragic choice and situational judgment. Yudhishthira, the eldest Pandava brother and king, is portrayed not as a figure of dogmatic certainty but as one who must navigate impossible dilemmas where every option violates some duty. His wrestling with consequence, his acknowledgment that even righteous action produces suffering, and his willingness to question inherited rules when they lead to atrocity model a kind of moral and epistemic humility. This is not exceptional but exemplary: the text is used pedagogically across South Asia to teach that wisdom lies not in rigid rule-following but in honest confrontation with complexity.
The Nyaya school developed a systematic epistemology with a sophisticated theory of inference (anumana), including careful attention to sources of error and to the conditions under which beliefs are warranted (Radhakrishnan, *Indian Philosophy*). Buddhist logicians in the Dignaga-Dharmakirti tradition refined this further, insisting that claims must be supported by evidence accessible to interlocutors who do not share one's metaphysical starting points (Radhakrishnan, Vol. I, on the schools of Buddhism). This is not "scientific method" *avant la lettre*, but it is disciplined reasoning about what warrants belief, developed within traditions whose ultimate commitments (Brahman, atman, karma, rebirth) are metaphysically incompatible with naturalism.
Confucian learning centers on xuexi—learning through study and ritual enactment, followed by reflection on how one's understanding has shifted through that practice. This is iterative and consequence-focused: the Analects repeatedly show Confucius adjusting his teaching to the specific student and situation, recognizing that the same principle may require different applications. The concept of zhengming (rectification of names) embodies a pragmatist insight: when terms no longer fit changing circumstances, the task is not to insist on old definitions but to bring language and practice back into productive alignment. (For these materials, see *Sources of Chinese Tradition*, comp. Wm. Theodore de Bary, Wing-tsit Chan, and Burton Watson, 2 vols., Columbia University Press, 1963, which uses Wade-Giles romanization; the readings from Chu Hsi on the Great Learning appear in Vol. II.)
The Doctrine of the Mean teaches harmonization across differences without forced uniformity—a vision of social order that accommodates plural goods rather than demanding convergence on a single hierarchy of value. Later Neo-Confucians like Zhu Xi developed methods for investigating things and extending knowledge (gewu zhizhi) that emphasized careful attention to particulars, testing generalizations against cases, and revising understanding when patterns do not hold (*Sources of Chinese Tradition*, Vol. I).
Buddhist traditions, particularly the Madhyamaka and Yogacara schools (Radhakrishnan, *Indian Philosophy*, Vol. I), insist on the provisional, conventional character of all conceptual frameworks. The two-truths doctrine distinguishes conventional truth (claims warranted within a frame of reference) from ultimate truth (the recognition that all frames are constructed and empty of inherent existence). This is not relativism—within conventional frames, better and worse claims are rigorously distinguished—but it blocks the reification of any single framework as final. The concept of upaya (skillful means) requires adjusting one's approach to the listener's capacities and circumstances, which demands attentiveness to consequences and flexibility in application.
None of this is an attempt to claim that these traditions are "really liberal" or "really democratic" in disguise. They are not. Their Tier 2 commitments remain incommensurable with secular modernity and with each other. Nor is this a denial that these traditions also produce dogmatism, closure, violence, and exclusion. They do, as does the West.
The claim is narrower and empirically grounded: most major civilizational traditions have developed, through their own histories and for their own purposes, practices that cultivate intellectual virtues enabling inquiry across deep disagreement. These virtues are second-order—they concern how to think, not what to conclude—and they are functional precisely because they do not require abandoning ultimate commitments. A devout Muslim examining consequences of a ruling, a Confucian scholar adjusting principles to circumstances, a Hindu philosopher discriminating appearance from reality, a Buddhist practitioner recognizing the provisionality of frameworks—none of these requires conversion to liberal secularism. They are drawing on their own traditions' resources.
Historical syncretism—the Silk Road exchanges, Hellenistic Alexandria, the Abbasid translation movement, Tang cosmopolitanism, contemporary cross-cultural negotiation—becomes intelligible once we recognize this. People with radically different Tier 2 commitments could work together on mathematics, astronomy, medicine, trade, and governance not because they suppressed their differences but because their traditions had equipped them with Tier 1 practices for provisional cooperation, consequence-tracking, and honest acknowledgment of what they did and did not know.
Recognizing the distributed character of Tier 1 virtues reframes the task of pluralistic democracy. The problem is not how to convert everyone to liberal secular commitments before serious inquiry can begin. It is how to design institutions and practices that engage the inquiry-enabling capacities people already bring from their diverse formative traditions.
This is both more realistic and less chauvinist than standard accounts. It does not require waiting for "modernization" to produce "rational" citizens, nor does it assume that religious or traditional people are intellectually limited. It recognizes that Islamic jurisprudence, Confucian learning, Hindu philosophy, and Buddhist epistemology are sophisticated intellectual traditions with their own resources for reasoning, humility, and pragmatic adjustment. Inquiry across incommensurable frameworks is possible not despite but through these resources.
The practical implication: when a public school faces conflict over hijab, or a hospital confronts competing end-of-life protocols, or a workplace navigates value misalignment, the parties need not share ultimate commitments to engage productively. What they need—and what most bring from their own traditions—are Tier 1 capacities to track consequences honestly, consider alternatives imaginatively, distinguish essential from contingent, and work provisionally with those who see the world differently. The task of institutional design is to create conditions where these existing capacities can be activated and sustained, not to impose a single comprehensive framework from above.
This is meliorism, but not Deweyan progressivism. It does not assume that inquiry will reconcile all conflict into harmonious unity, or that democratic culture uniquely produces the virtues inquiry requires. It assumes only that partial, navigable incommensurability is the human condition, and that most traditions have developed ways of working with it. Whether that proves sufficient for any particular rupture remains radically contingent. But the resources are there, distributed more widely and deeply than Western chauvinism recognizes.
Will to Truth Inside Religion
The modern will to truth does not originate outside religion. It arises from deep within religious traditions that demand sincerity, doctrinal precision, and honest self-scrutiny. Medieval theology already distinguished between truths of reason and truths of revelation; natural theology and natural law claimed that certain matters—about the created world and about justice—could be known by human reason, even as core mysteries of faith exceeded it. Late-medieval dissent and the Reformation intensified this dynamic. To "protest" in Protestantism is to refuse clerical monopoly on mystery: Sola Scriptura, vernacular translations, lay Bible study, and the practice of sending theses to public disputation all assume that individuals and congregations may contest what counts as true, using argument and exegesis, not merely deference.
At the institutional level, this will to truth takes forms that later secularize their own results. In Protestant theological faculties in early modern Germany, scholars apply the best available philological and historical methods—originally honed on classical texts—to scripture itself. The aim is not to debunk the Bible but to purify faith by anchoring it in solidly established history. The effect, however, is to reveal composite authorship, late redaction, and textual layering where tradition had seen Mosaic books. Once these techniques exist, they are not owned by skeptics; they are cultivated within seminaries, and they legitimate treating sacred texts as human artifacts. (See Thomas Albert Howard, *Protestant Theology and the Making of the Modern German University*, Oxford University Press, 2006.)
The Jesuit "rites" controversies show the same pattern in a Catholic missionary context. Jesuits in China and India pursued a rigorous understanding of local practices, trying to distinguish between what was genuinely idolatrous and what might be retained as culturally specific expressions compatible with Christian monotheism. Their internal reports forced Rome to confront a basic question: is Christian truth a fixed doctrinal core that must be expressed identically everywhere, or can it legitimately take on diverse ritual and linguistic forms? The condemnations of the Chinese and Malabar rites did not end the matter. They made visible that "pure doctrine" had always depended on negotiated accommodations with local realities.[ Good source = The Rights Controversies in the Early Modern World, ed. Ines G. Zupanov& Pierre Antoine Fabre, Brill, 2018 link here https://brill.com/display/title/34019 ]
The Inquisition offers a darker but equally revealing case. Its tribunals were designed as massive truth-producing machines: standardized procedures for accusation, interrogation, confession, and sentencing, all in the name of orthodoxy. Yet the archives they created, centuries later, became prime sources for historians reconstructing popular beliefs, alternative spiritualities, and the everyday life of dissent. The apparatus built to suppress error unwittingly preserved an enormous body of evidence that relativizes and pluralizes "the faith" it was protecting. (See *Cultural Encounters: The Impact of the Inquisition in Spain and the New World*, ed. Mary Elizabeth Perry and Anne J. Cruz, University of California Press, 1991, https://www.google.com/books/edition/Cultural_Encounters/IMUZNel0s8kC?hl=en.)
Pietist diary-keeping presents the same logic in an intimate key. Encouraged to record their spiritual state in meticulous detail, believers were to track true conversion and sanctification. Over time, the practice of radical self-examination shifted the locus of authority: loyalty to one's "intellectual conscience" could come into tension with loyalty to external dogma. The imperative to be honest with oneself, originally framed as a religious duty, became a seed of the secular, self-authorizing subject who can subject even faith itself to critique. (See Max Weber, *The Protestant Ethic and the Spirit of Capitalism*, esp. chapters 4 and 5.)
Nietzsche's claim that Christianity's own ethic of truthfulness turned against Christian belief gives a name to this pattern. The will to truth is not an Enlightenment bolt from the blue; it is a long religious project that, when radicalized, eats into its own foundations (Nietzsche, *On the Genealogy of Morals*, Essay III; see also *The Gay Science*).
Science and the Fractured Mirror
Early modern science inherits this will to truth and gives it new, explicitly empirical institutions. Bacon's fictional Salomon's House, then the Royal Society and the Académie des Sciences, cast inquiry as a collective, disciplined, methodical enterprise: division of labor, standardized experiments, careful observation and record-keeping, public reporting and contestation. These bodies often saw themselves as reading the Book of Nature with a reverence akin to scriptural study, but with methods designed to minimize individual bias and error. (Thomas Sprat's *History of the Royal Society*, 1667, is a primary source presenting Salomon's House as a model for the Royal Society; Stephen A. McKnight's work on the *New Atlantis* offers a modern analysis of the religious drive behind Bacon's project.)
It was tempting to hope that such practices would eventually yield a "mirror of nature": a set of theories whose facts correspond transparently to how the world is in itself. But serious reflection on scientific practice has undercut that mirror image from within. Revived ancient skepticism and Descartes's methodological doubt dramatized, at the birth of modern science, how difficult it is to secure even one's own existence or the reality of an external world against skeptical scenarios. Later, Thomas Kuhn's account of scientific revolutions portrayed the development of science not as a smooth convergence on truth but as a sequence of paradigm-bound frameworks. On this view, "facts" are not raw givens but items already interpreted within a conceptual scheme, and different paradigms may sort and weigh them in ways that are not straightforwardly comparable (Kuhn, *The Structure of Scientific Revolutions*).
Quantum theory adds a different dimension to the problem. As a piece of empiricism, it is a triumph: its formalism predicts experimental results with extraordinary precision and supports technologies on which contemporary life depends. Yet the conceptual picture it yields is not a unified, transparent image of reality. At the most basic level of fundamental physics, we have a deeply successful but theoretically uneasy pair: general relativity for gravitating, large-scale structures and quantum field theory for the micro-world. Attempts at unification remain speculative. Within quantum mechanics itself, multiple interpretations—Copenhagen, many-worlds, Bohmian mechanics, objective collapse models—fit the same experimental data. The theory does not, by itself, force a single realist metaphysics; it licenses several, and many working physicists adopt a de facto pragmatism, focusing on calculations and predictions while bracketing metaphysical debates as optional. (See Jim Baggott, *Quantum Reality: The Quest for the Real Meaning of Quantum Mechanics—a Game of Theories*, an accessible survey of the interpretations that argues the choice among them is ultimately philosophical rather than strictly empirical.)
None of this refutes empiricism. Rather, it is empiricism pursued with uncompromising rigor. But it does undermine the stronger claim that science has already, or inevitably will, furnish a final, God's-eye catalogue of how things are. Our best scientific theories are astonishingly effective tools that structure perception and intervention, but they do not currently entitle us to the kind of metaphysical certainties that mid-century "age of facts" rhetoric often took for granted. Strong scientific realism may yet be vindicated by future developments; it may not. At present, neither it nor its denial is warranted by the state of inquiry.
What follows from this is not that science tells us nothing, but that its authority is frame-bound. Within physics, certain claims are extremely well supported and others are rejected; within biology, evolutionary theory outperforms creationism by any reasonable metric. Attempts to leap from those within-frame successes to a single ontological story in which "everything is really just particles" are speculative unifications riding on the prestige of some frames over others. They are not themselves products of the empirical methods they invoke.
Law, Journalism, and the Managed Consensus
Law and journalism took their own paths to factual authority. Modern courts refined rules of evidence—excluding hearsay, defining admissibility, distinguishing between lay and expert testimony—in the name of letting an impartial jury discover "what really happened." The incorporation of forensic science into criminal justice, from fingerprints and ballistics to blood typing and later DNA, seemed to anchor legal fact-finding in increasingly objective procedures.
Yet here, too, the will to truth has turned on its own practices. Scientific scrutiny of forensic fields has shown that several widely used techniques—bite-mark analysis, comparative bullet-lead analysis, some forms of microscopic hair comparison—rest on weak or nonexistent empirical foundations. At the very moment when the legal system leaned hardest on science to bolster its claims to factual accuracy, more rigorous science exposed key parts of that reliance as misplaced. Truth-seeking methods, applied to themselves, revealed institutional conventions masquerading as neutral facts. (See National Research Council, *Strengthening Forensic Science in the United States: A Path Forward*, National Academies Press, 2009, which finds a lack of scientific validation for many traditional pattern-matching disciplines, including microscopic hair analysis, bite marks, and firearms comparison.)
Journalism's twentieth-century ideal of objectivity—verification, separation of news from opinion, balanced sourcing—was a parallel attempt to organize public life around facts. Walter Lippmann, worried about the cognitive limits of mass publics in a complex industrial society, concluded that citizens must inevitably depend on pictures of the world curated by experts and professionals. In that framework, manufacturing consent is not necessarily nefarious; it is how a democratic mass public can function at all.
John Dewey rejected this relegation of the public to spectatorship. For him, social problems are not abstract puzzles handed to remote experts; they are concrete troubles experienced in local contexts. Those who live with the consequences are best placed to define what counts as a problem and to participate in inquiry about how to address it. Dewey did not deny the need for expertise, but he refused to treat experts as oracles. Inquiry, for him, is fallibilist, experimental, and inherently social: it must integrate technical knowledge with the intelligence of those affected. The role of the press, on this view, is not merely to transmit facts downward but to help constitute a public capable of deliberate action. (On the Dewey-Lippmann debate, see James Carey, *Communication as Culture: Essays on Media and Society*, Routledge, 1989; John Dewey, *The Public and Its Problems*, 1927; Walter Lippmann, *Public Opinion*, 1922.)
We do have real-world examples of this kind of Deweyan inquiry. Community-based environmental justice campaigns, for instance, have combined local testimony about health effects and everyday conditions with air-quality measurements, epidemiological studies, and legal advocacy to challenge "official" accounts of pollution. In some participatory budgeting processes, residents have worked with city staff to analyze fiscal data, propose projects, and deliberate about trade-offs, rather than merely voting on pre-packaged options. These are not utopias, but they show that fact-finding and problem-solving can be organized around those affected rather than being reserved for distant elites.
Yet it is important not to overstate what fallibilist, collaborative inquiry can achieve. Dewey sometimes wrote as if experimental method could eventually resolve all conflicts, restoring experiential continuity through shared problem-solving. That quasi-Hegelian optimism—conflicts sublating into higher unities—is empirically unwarranted. Sometimes interpretive negotiation succeeds and yields productive hybrids; sometimes it fails and conflicts persist. There is no *a priori* principle determining which outcome will obtain. Conflict is constitutive of human life, not an aberration to be overcome through better method. The task is not to guarantee resolution but to build institutions that can work with persistent conflict—through constitutional constraints, agonistic respect, and the humility to recognize when further inquiry will not bridge the gap.
The mid-twentieth-century United States—the period Lepore highlights—looked, to many, like the realization of a Lippmann-style fact-regime. Major newspapers and broadcasters set the agenda; research universities and government labs produced expert knowledge; the courts claimed to administer justice based on evidence. There were, to be sure, deep conflicts and exclusions: Jim Crow, redlining, McCarthyism, the national-security state. But for a time, a relatively narrow elite could treat its own worldview as "the facts," and much of the country acquiesced.
That apparent solidity masked ongoing fractures. Courts upheld "separate but equal" segregation and compulsory sterilization; immigration law enforced racially coded quotas; scientific establishments lent their authority to eugenic and racist theories; foreign policy elites justified firebombing cities and building thermonuclear arsenals as rational strategic necessities. These practices were not deviations from the factual order; they were features of it. What changed in the 1960s was not that facts suddenly became contested; it was that the truth-telling virtues the regime claimed for itself were turned against it. Civil rights activists, anti-war protesters, feminists, and others marshaled witness testimony, documents, photographs, leaked reports, and statistical analyses to expose what had been excluded or normalized. They did not reject facts; they insisted on different ones, or on giving weight to facts that the consensus had treated as marginal.
Crucially, many of these movements also altered the processes by which facts became actionable. Freedom Schools, consciousness-raising groups, welfare rights organizations, and other experiments in democratic education and organizing were not just sites of protest; they were sites of inquiry. They redefined whose experiences counted as evidence and who had standing to interpret it. This exemplified a Deweyan alternative in practice: collaborative investigation by those affected, integrating expertise without surrendering judgment to it.
Fantasy, Civil Religion, and Non-Epistemic Practices
It might be tempting to oppose this managed factual order to a surrounding swamp of fantasy and credulity. American culture is indeed full of revivals, Great Awakenings, faith healings, spiritualist séances, and conspiratorial subcultures. But the line between "rational fact" and "irrational fantasy" is not so easily drawn. The same Protestant biblical culture that underwrote pro-slavery theologies in the American South also generated abolitionist movements grounded in readings of scripture and natural law. The rhetoric of the Declaration of Independence and later human-rights discourse—natural rights, equality before God, inalienable dignity—owes as much to religious and deist imaginaries as to empirical science. Those ideals could not have been derived from a neutral inventory of facts; they are value-laden constructions that were, and remain, contested.
The European Enlightenment itself exhibits this ambiguity. French revolutionaries converted churches into Temples of Reason, staged festivals to the Goddess of Reason, and attempted to build a secular religion of Liberty and Nature. When that proved unstable, Robespierre promoted a Cult of the Supreme Being, echoing Voltaire's conviction that, if God did not exist, it would be necessary to invent him in order to secure moral order. Kant's "postulates of practical reason" (freedom, immortality, God), Paine's deism, and American civil religion around a providential nation all suggest that modern egalitarian projects and democratic legitimacy have long leaned on quasi-religious commitments. The will to truth destabilized certain metaphysical claims; it did not remove the need for orienting values and narratives that go beyond, or beneath, empirical evidence. (On the revolutionary festivals, see Mona Ozouf, *Festivals and the French Revolution*, Harvard University Press, 1988; French original 1976.)
It is also important to say that not all valuable human activities are organized around factual claims at all. Fiction, music, visual art, dance, contemplative disciplines, and liturgy are not defective sciences; their point is not to predict or explain but to disclose, express, and reshape experience. They involve skills and can be better or worse, but not primarily by the standards of empirical correctness. The critique of *fact-regimes* in this essay is aimed at domains that do claim to tell us how things are—science, law, journalism, policy—and at the temptation to extend their standards everywhere, or to treat their current frames as if they already gave us a unified ontology of the whole.
A consistent fallibilism refuses both Dewey's naturalizing sublation and Rorty's anti-realist closure. Dewey sometimes undercut his own fallibilism by drawing a hard line between "natural" and "supernatural," treating the latter as *a priori* out of bounds. But what counts as the "natural" object domain of inquiry is itself historically expanding and revisable; no one now can say what forms future science might take. Rorty, more subtly, forecloses frame-boundary questions by declaring metaphysical inquiry itself passé—"there is no final vocabulary" functioning as an inverted dogmatism, a negative metaphysics that somehow knows there is nothing to know. Even late Wittgenstein's therapeutic approach—treating religious and metaphysical questions as confusions about grammar rather than genuine inquiries—evades rather than confronts the uncertainty. Weak agnosticism refuses these closures. Frame-boundary questions—about God, consciousness, ultimate reality—may or may not be answerable. We do not currently have methods to settle them. That is not a reason to dismiss them (Rorty, *Contingency, Irony, and Solidarity*), naturalize them away (Dewey, *Experience and Nature*), or dissolve them therapeutically (Wittgenstein at his most reductive, *Philosophical Investigations*). It is a reason for humility.
Here the levels distinction matters. Frame-boundary questions—"Could there be non-physical causation?" "Might consciousness be substrate-independent?" "Could prayer work through mechanisms we do not yet understand?"—do not currently have established investigative criteria. We cannot yet set up observations that would definitively settle them, precisely because they are questions about what kinds of things are subject to investigation in the first place. Within established frames, however—forensic science, quantum physics, genetics—we can and must make sharp discriminations. When forensic methods fail replication studies, when quantum mechanics yields stable predictions across contexts, when genetic markers reliably identify specific conditions, these are not mere "perspectives"; they are our best-warranted claims under operative standards that have been tested and refined.
The danger Dewey wanted to avoid—treating every supernatural claim as a live scientific hypothesis despite a long record of failed tests—is real. But the corrective is not to rule such claims out by definition. It is to demand that any claim, whatever its content, submit to the same discipline: clear methods, testable consequences, openness to refutation, and integration with the rest of what we have reason to accept. When controlled studies of intercessory prayer show null results, when séances cannot survive controlled conditions, when dualist theories cannot explain systematic mind-brain correlations, we do not conclude that the "supernatural" is metaphysically impossible. We conclude that these particular claims are not presently warranted within any successful frame of inquiry. That is fallibilism doing its work.
AI, Pseudo-Reasons, and Entangled Agency
The current wave of AI does not make facts disappear. It does, however, introduce a new and particularly insidious form of trouble: the mass production of what might be called pseudo-reasons. If Lepore's concern is that we have lost a shared factual reality, the deeper problem is that we are losing the practices of accountable reason-giving that once aspired, however imperfectly, to produce facts worth sharing. Contemporary systems, especially large language models, generate outputs in ordinary language. They recommend, explain, summarize, justify, and converse. To users, these outputs are almost indistinguishable, in form, from the things people say when they have thought about something: "You'll love this movie," "This candidate is not a good fit," "This patient is low risk," "This teacher is underperforming."
The resemblance is deceptive. Humans give reasons by operating in the space of meaning and intention. Even when motives are mixed, we act and speak about something; we can, at least in principle, say what considerations we took into account, what we were trying to do, and why we thought our judgment was apt. Current AI systems operate exclusively in the space of causes. They manipulate tokens according to statistical patterns extracted from training data. They have no awareness of what their words mean, no sense of purposes, no capacity for genuine deliberation. When an HR system flags a teacher as a dismissal risk, or a medical triage assistant ranks a patient as low priority, there is no agent there who can say, in human terms, "Here is how I weighed the evidence."
The danger is not only that such systems can be wrong. It is that humans, confronted with fluent language and impressive performance on benchmark tasks, slide into what might be called the user's illusion: treating the system as if it were a reason-giving subject and acting accordingly. In hiring, a score produced by an opaque model may trump a supervisor's experience; in policing, a "risk" label learned from biased arrest data may guide patrol patterns and sentencing; in medicine, a generated summary may quietly shape diagnostic choices. In each case, the AI output plays the social role of a reason: it is cited, relied upon, and allowed to settle questions. Yet it is, in fact, only a complex artifact of pattern-matching over earlier behavior and recorded correlations.
The problem is compounded by entanglement. We do not stand outside these systems, choosing whether to "use tools" or not. Workflows in finance, healthcare, logistics, education, and media are already organized around networks of algorithms and platforms. Decisions about who gets credit, who receives a test, which story is promoted, or whose complaint is escalated routinely pass through layers of automation that no individual can survey. In many institutions, the default is effectively "read and accept": if no human intervenes, the system's output stands.
From a fallibilist, Deweyan angle, the central question is where, in this mangle, genuinely accountable reason-giving can still occur. The answer cannot be to abolish complex computational systems and return to a pre-digital world. Nor can it be to reassure ourselves that "humans remain in the loop" when, in practice, many loops run too fast, too opaquely, and with too little institutional support for meaningful review. A more realistic program is to identify the critical junctions—those points where error or bias inflicts serious, often irreversible harm—and to rebuild workflows so that human judgment with real authority is structurally required there. That may mean, among other things, slowing some processes down, protecting certain domains from full automation, and treating unread algorithmic outputs as unacted-upon, rather than as tacitly approved.
The deeper ethical point is that we must stop pretending that entities which cannot understand, intend, or answer can bear the responsibility for reasons. Trust, in such systems, is misplaced not because machines are metaphysically incapable of truth, but because they are not, and cannot be, appropriate objects of our demand for justification. The only loci that can be held to account are humans and the institutions we build. Designing AI infrastructures so that they augment, rather than displace, our capacity to give and ask for reasons is a political and moral task, not a technical afterthought. If, at some future point, systems emerge that supply convincing evidence of understanding and agency by criteria we would be willing to apply to one another, we will have to rethink these judgments. At present, nothing in the behavior or design of large-scale AI warrants such a leap.
No Lost "Fact Age," Only the Ongoing Work of Inquiry
Against this backdrop, nostalgia for a lost era of facts looks misplaced. There was never a time when facts, once properly certified, yielded a transparent, shared view of reality that could anchor politics without remainder. There were, rather, successive attempts to tame the will to truth by institutionalizing particular evidentiary regimes: within churches and seminaries, in scientific academies and laboratories, in courts and newsrooms, in schools and bureaucracies. Those regimes did real cognitive and moral work. They also generated, from within themselves, the doubts and resistances that now unsettle them.
The point of recognizing this is not to give up on truth or to collapse into cynicism. It is to shift expectations. Facts will always be provisional, theory-laden, and contested. They will always be entangled with values and power. That does not make them useless; it means they must be handled as tools within fallible, revisable practices of inquiry and judgment, not as immutable foundations. A democratic politics worth having will not be one that pines for a restored "age of facts," but one that builds institutions capable of working with disagreement, exposing and correcting their own blind spots, and resisting the temptation to outsource responsibility to opaque systems.
The virtues this essay advocates—humility, agonistic respect, consequence-tracking, fallibilist inquiry—are not uniquely Western, and the crises it diagnoses are not uniquely modern. The will to truth's self-undermining has parallels across traditions; so do the resources for working with its consequences. Recognizing the distributed character of inquiry-enabling virtues across incommensurable frameworks is both more realistic and less chauvinist than assuming democratic culture must first convert the world before serious problem-solving can begin.
This essay itself is part of that work. Its genealogy of the will to truth and its critiques of fact-regimes are not the last word; they are hypotheses offered for contestation and refinement. The hope is not to step outside the problem but to model a way of engaging it: by assembling reasons, acknowledging limits, and inviting challenge. If there is any way forward from our present discontents, it will not come from reviving the comforts of certainty, but from cultivating the harder virtues of humility, patience, and shared inquiry in a world where facts have never been simple and where pseudo-reasons now abound.