Perhaps the most profound consequence of large-scale AI deployment is "systemic
dehumanization"—the gradual transformation of individuals from moral agents deserving
consideration into data points to be processed efficiently. This operates not through explicit cruelty
but through systematic replacement of human-centered processes with optimization algorithms that
treat people as variables in mathematical functions.
Immigration enforcement provides a stark example. When AI systems identify individuals for
deportation based on algorithmic risk assessment, they reduce complex human stories to
computational variables. The system cannot consider depth of community ties, nuance of family
circumstances, or moral weight of separating children from parents. These human factors become
externalities to be managed rather than central concerns guiding policy implementation.
This erosion of agency contributes to "narrative incoherence"—the inability of individuals and
communities to provide meaningful accounts of their experiences and choices. When major life
decisions are increasingly influenced by algorithmic mediation, people struggle to construct coherent
stories about their agency and responsibility. The space of pseudo-reasons provides apparent
explanations but not the substantive reasoning that supports authentic self-understanding.
Toggle Competence and the Critique of Quantitative Fundamentalism
Defining Toggle Competence as Practical Wisdom
The preservation of human agency in AI‑mediated contexts requires what might be termed toggle competence—a learned capacity to fluidly shift between treating AI outputs as meaningful contributions (adopting what Dennett calls the intentional stance
that makes collaboration possible) and maintaining critical awareness
that these systems operate through pattern‑matching rather than genuine
reasoning. This is fundamentally a balancing act: leaning too far
toward enchantment risks outsourcing deliberative agency to algorithmic
pseudo‑reasons; leaning too far toward demystification makes productive
engagement impossible—like attempting to appreciate cinema by analyzing
projector mechanics rather than absorbing the narrative.
Toggle competence is not a static equilibrium but an ongoing,
context‑sensitive practice requiring constant micro‑adjustments. When
exploring interpretive questions—literary analysis, philosophical
inquiry, creative brainstorming—practitioners can afford deeper
immersion in collaborative meaning‑making with periodic critical
pullbacks. When reviewing AI‑generated medical diagnoses, legal briefs,
or governance recommendations, sustained vigilance becomes necessary
with only tactical acceptance of algorithmic suggestions. The
appropriate balance varies not only by domain but by practitioner and
situation—much as an “excellent diet” means something radically
different for a sumo wrestler than for a competitive sprinter, even
though both exemplify nutritional virtue in their respective contexts.
This toggle competence resists quantification precisely because it exemplifies what Michael Polanyi termed tacit knowledge—the
kind of practical wisdom one recognizes in action but cannot reduce to
explicit rules or metrics. Practitioners know when they are toggling
effectively; they can cultivate this capacity through practice and
reflection; but they cannot specify an algorithm for when to shift modes
or measure their “toggle rate” in any meaningful way across contexts.
The appropriate timing depends on situated judgment about what available
evidence can and cannot decide, what conceptual frameworks can and
cannot capture, and what questions can be meaningfully posed given one’s
epistemic position.
The difficulty of operationalizing toggle competence points to a deeper
problem pervading contemporary discourse: what might be called quantitative fundamentalism—the
assumption that only measurable phenomena merit serious consideration,
that all meaningful questions can ultimately be resolved through metrics
and optimization. This orientation appears not only in AI governance
discussions that demand precise measurements for inherently qualitative
capacities like dramatic rehearsal or narrative coherence, but also in
scientific discourse where physicists dismiss philosophical inquiry
while simultaneously making metaphysical commitments that exceed
empirical evidence.
A crucial clarification follows: the critique of quantitative fundamentalism is not
a critique of mathematics, measurement, or modeling as such. In
ordinary practice, we routinely use quantitative tools without smuggling
in an ontological thesis about what is ultimately real—treating
formalism as instrument rather than revelation. The pathology emerges
when methodological success is silently converted into metaphysical
authority: when “what we can measure” becomes “what there is,” and when
the inability to operationalize a phenomenon is treated as evidence of
its non‑being rather than as a limit of the current investigative frame.
This is also why toggle competence cannot itself be reduced to a metric
without self‑contradiction. It includes the capacity to recognize when
quantification is appropriately sovereign (because the question is
genuinely quantitative) and when the very demand for quantification
constitutes a category mistake—an attempt to force qualitative or
interpretive problems to “confess” in a register that cannot, in
principle, contain them.
Quantifiable vs. Interpretive Is Not Objective vs. Subjective
Here a distinction sharpened in everyday disputes about “data‑driven” wisdom‑of‑crowds claims becomes essential: quantifiable vs. interpretive is not the same as objective vs. subjective.
Many domains that matter most to collective life—criminal sentencing,
psychiatric diagnosis, constitutional law, historical evaluation,
electoral judgment—have no single discrete right answer that can be
scored as one would score a multiple‑choice test, yet they are not
thereby arbitrary or epistemically weightless.
Everyday practice already presupposes this. Before there were any theories of
wavelengths or optics, people could reliably say “turn right at the red
sign two blocks ahead” and be understood; these were factual assertions
embedded in shared practices, not mystical projections. Even in
physics, as Polanyi emphasized, the scientist ultimately must trust that
she has correctly read notch 2 rather than notch 5, or distinguished
the red line from the blue line on a graph; one cannot keep appealing to
equations to vindicate perception, because equations themselves must be
seen and interpreted by someone whose perception we
eventually simply trust. Perception, in this sense, is not an
embarrassing residue of “subjectivity” but the tacit, intersubjective
floor without which our most precise sciences cannot get off the ground
at all.
The core mistake of quantitative fundamentalism is to collapse four
distinctions into one: measurable vs. non‑measurable; public vs.
private; reliable vs. unreliable; objective vs. subjective. Once this
conflation is in place, anything non‑quantifiable is easily dismissed as
“merely subjective,” and “subjective” is silently equated with
“idiosyncratic and error‑prone,” while quantitative outputs are treated
as inherently more objective and real. But there are non‑quantifiable
yet public and checkable domains—consider historians’
comparative assessments of political leaders, legal reasoning about
proportional sentence ranges, or psychiatric debates over evolving
diagnostic criteria—that are neither reducible to metrics nor equivalent
to unanchored personal preference. They involve comparative
reason‑giving under ambiguity, where some positions are more reasonable,
better supported, or more coherent with the evidence and with other
commitments, even though no single scalar score settles the matter.
This has direct implications for contemporary enthusiasm about the “wisdom
of crowds” and “data‑driven” decision‑making. Classic
demonstrations—Galton’s ox‑weight estimates, certain prediction markets,
game‑show multiple‑choice crowds—work under two conditions: (1) there
is a discrete, evaluable right answer; and (2) we have a clear metric of
accuracy. When one moves from those contexts to elections (“who is
best suited to govern?”), judicial sentencing, or “best candidate”
questions more generally, the structure changes: there is no single
correct answer in the ox‑weight sense, no uncontroversial metric for “best,”
and no way to define error the way one defines mis‑guessing an ox’s
weight. To treat these questions as if they were of the same
kind—because votes produce numbers or because large datasets enable
sophisticated aggregation—is precisely to enact quantitative
fundamentalism: where quantification fails, the imported frame fails
with it.
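The structural point can be made concrete with a toy simulation (purely illustrative; the true weight, noise level, and crowd size are hypothetical values, not data from Galton's study). When a discrete right answer exists, averaging many noisy guesses cancels their errors, and "error" itself is well defined; for a question like "who is best suited to govern?" the very quantity the code measures against has no analogue.

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

TRUE_WEIGHT = 1198   # hypothetical "true" ox weight in pounds
N_GUESSERS = 800     # hypothetical crowd size
NOISE = 120          # hypothetical spread of individual guesses

# Condition (1): a discrete, evaluable right answer exists (TRUE_WEIGHT).
# Each guesser is unbiased but noisy.
guesses = [random.gauss(TRUE_WEIGHT, NOISE) for _ in range(N_GUESSERS)]

# Condition (2): a clear metric of accuracy exists (absolute error).
crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
typical_individual_error = statistics.mean(
    abs(g - TRUE_WEIGHT) for g in guesses
)

print(f"crowd error:              {crowd_error:.1f} lb")
print(f"typical individual error: {typical_individual_error:.1f} lb")

# The aggregation "works" only because |estimate - TRUE_WEIGHT| can be
# written down at all. Remove TRUE_WEIGHT (as in "best candidate"
# questions) and neither error term above is definable.
```

The crowd's error comes out far smaller than the typical individual's, which is the whole of the classic result; the sketch makes visible that both conditions enter as literal terms in the computation.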
A parallel point applies to semantics itself. Attempts to insist that a
term is meaningful only if it admits of a fixed, non‑circular, metric
definition (the old verificationist or non‑cognitivist impulse) would,
if applied consistently, declare vast swathes of ordinary language and
institutional vocabulary—justice, harm, common good, love, art—“meaningless.”
That reductio reveals not the emptiness of these concepts but the
overreach of the metric demand. Meaning in these domains consists not in
a single pinpoint on a semantic bullseye but in the ability to navigate
a structured interpretive space: to broaden terms when cooperation and institutional flexibility require it, and to narrow them when action or clarity demands.
We might think of this as a kind of “breathing” in our conceptual life. Sometimes we broaden—using
intentionally open‑textured terms like “general welfare” or “due
process” so that constitutional or legal frameworks can adapt to
unforeseen circumstances. Sometimes we narrow—specifying
“due process” through habeas corpus, notice‑and‑hearing requirements,
exclusionary rules. There is no algorithm that tells us, ex ante, when
to broaden and when to narrow without reinstating quantitative
fundamentalism at a meta‑level; it is precisely here that tacit,
situated judgment must guide the management of interpretive space.
Toggle competence, on this picture, is as much about knowing when to
move between narrow and broad interpretive frames as it is about
toggling between empirical and philosophical modes of inquiry.
With this clarified, we can now examine how quantitative fundamentalism
manifests in a supposedly “hardest” domain—contemporary physics—and what
successful toggling looks like in contrast.
Toggle Failure in Physics: The Case of Quantitative Fundamentalism
Physicist Lawrence Krauss provides an instructive example of this failure to
toggle between empirical and interpretive modes. In his 2012 book A Universe from Nothing: Why There Is Something Rather Than Nothing,
Krauss explicitly dismisses philosophy as having “no contribution to
make” to questions about cosmic origins. He argues that physics can now
explain how universes emerge from “nothing”—by which he means quantum
vacuum states with fluctuating fields governed by physical laws.
When philosopher David Albert reviewed the book in The New York Times, pointing out that quantum vacuums are emphatically not
metaphysical nothingness, Krauss dismissed the critique as mere
semantic quibbling. But Albert’s point was precisely about the toggle
failure: Krauss was working in empirical mode (describing the physics of
vacuum states) while making claims that require interpretive mode
(addressing the metaphysical question of why physical laws exist at
all). The question “Why is there something rather than nothing?” asks
about the ontological status of existence itself, including the
existence of quantum fields and physical laws. Answering this question
by describing processes within an already‑existing physical framework
simply relocates rather than resolves the philosophical puzzle.
Krauss’s toggle failure becomes explicit in his treatment of what counts as
legitimate inquiry. He repeatedly asserts that philosophical questions
lacking empirical answers are meaningless or uninteresting—a
quintessentially philosophical claim about the nature of meaningful
inquiry that cannot itself be empirically tested. His position
exemplifies quantitative fundamentalism: the assumption that because
physics successfully employs mathematical rigor and empirical testing,
all meaningful questions must be answerable through these methods.
Stephen Hawking demonstrated a similar pattern in The Grand Design
(2010), opening with the declaration that “philosophy is dead” because
it “has not kept up with modern developments in science, especially
physics.” Yet the book immediately proceeds to defend model‑dependent realism—a
philosophical position about the nature of scientific knowledge—and
makes claims about the unreality of history before observation that
depend entirely on interpretive choices about how to understand quantum
mechanics. Hawking rejects philosophy while doing philosophy, unable to
recognize when his discourse has shifted from empirical physics (where
his expertise is unquestionable) to metaphysical speculation (where
philosophical analysis becomes essential).
What makes this failure so recurrent is that physicalism often presents
itself as the absence of metaphysics, when in fact it begins with
metaphysical axioms of its own—e.g., that all is “matter and
energy”—whose boundary conditions are rarely made explicit.
Historically, the content of “the physical” has been repeatedly revised:
the graveyard of ontologies is real (ether disappears; the furniture of
the world is re‑described), and even our best frameworks remain
unreconciled in key places (quantum mechanics and relativity). Under
these conditions, “physical” can function less like a stable criterion
and more like a standing authorization to reclassify anomalies as
“physical” whenever the mathematics or the research program demands it.
Dark matter is useful here not as a conclusion but as a diagnostic. One live
possibility is that we are “detecting” something real but not yet
characterizable; another is that the anomaly is a measure of ignorance
or a signal of theory failure (for example, in the gravitational
framework). The toggle failure is to treat the label “matter” as a
metaphysical solvent that dissolves the problem in advance—to subsume
the anomaly under “the physical” before we can even say what it would
mean for the anomaly to count against the operative categories. This is
precisely the sort of unmarked shift—empirical inquiry sliding into
ontological closure—that the concept of quantitative fundamentalism is
designed to expose.
Both examples reveal the structure of quantitative fundamentalism’s toggle
failure. These physicists possess extraordinary competence in
mathematical formalism and empirical investigation. Their failure lies
not in technical understanding but in recognizing when their mode of
inquiry has reached its legitimate boundaries. They cannot toggle from
empirical/quantitative mode (appropriate for physics) to
interpretive/philosophical mode (necessary for questions about the
ontological status of physical theories themselves) because they do not
acknowledge the latter as a legitimate epistemic domain.
The consequences extend beyond individual confusion. When prominent
scientists dismiss philosophical inquiry while making philosophical
claims, they model toggle failure for broader audiences—suggesting that
quantitative rigor alone suffices for all meaningful questions, that
interpretive frameworks are merely subjective preferences rather than
essential tools for navigating domains where empirical evidence
underdetermines conclusions.
Successful Toggling: Feynman’s Epistemic Humility
Richard Feynman provides a striking counter‑example of successful toggle
competence in precisely the domain where Krauss and Hawking falter.
Feynman made foundational contributions to quantum electrodynamics, work
that required extraordinary mathematical sophistication and rigorous
empirical grounding. Yet he maintained consistent epistemic humility
about interpretive questions that exceeded available evidence.
Feynman famously remarked, “I think I can safely say that nobody understands
quantum mechanics,” and advised, “I can live with doubt and uncertainty
and not knowing. I think it’s much more interesting to live not knowing
than to have answers which might be wrong.” This was not
anti‑intellectual defeatism but clear‑eyed recognition of the limits of
current inquiry. Feynman worked rigorously with quantum mechanical
formalism—developing path integrals, contributing to the Standard Model,
calculating predictions with extraordinary precision. He remained
firmly in quantitative/empirical mode for these technical achievements.
Yet when asked about what quantum mechanics means—whether
the wave function represents objective reality, whether measurement
collapses genuinely occur, whether hidden variables might restore
determinism—Feynman toggled to interpretive/agnostic mode. He
acknowledged that these questions, while fascinating, exceeded what the
mathematical formalism and experimental evidence could decide.
Different interpretations (Copenhagen, Many‑Worlds, Pilot Wave) make
identical empirical predictions; choosing among them requires
philosophical commitments about ontological parsimony, the nature of
probability, and what counts as explanation—commitments that cannot be
resolved through further calculation or measurement.
This posture is best described as weak metaphysical agnosticism:
a refusal to treat any currently available ontology as authoritative,
while leaving open—in principle—the possibility that better future
theorizing (conceptual and empirical) could warrant stronger
metaphysical commitments. Weak agnosticism is not the same as strong
anti‑realism; it does not infer from “we lack a mirror of nature” that
“no mirror is possible.” On the contrary, the strong anti‑realist
negation invites a performative paradox: to know that no “mirror” can
exist in any form would seemingly require exactly the kind of
standpoint—comparison to reality “in itself”—that the anti‑realist
declares unavailable.
Feynman’s epistemic humility exemplifies successful toggle competence because it
recognizes the legitimate boundaries of different modes of inquiry. In
quantitative/empirical mode, quantum mechanics is spectacularly
successful—its predictions match experimental results to extraordinary
precision. In interpretive/philosophical mode, questions about what the
theory represents remain genuinely open, requiring suspended judgment
rather than premature closure through philosophical preference
masquerading as scientific conclusion.
This capacity for productive uncertainty—for dwelling in questions without
rushing to answers—represents precisely the toggle competence that
quantitative fundamentalism lacks. Feynman could shift fluidly between
rigorous technical work (demanding mathematical precision and empirical
rigor) and philosophical modesty (acknowledging that some questions
exceed current methods’ reach). He neither dismissed interpretive
questions as meaningless (Krauss’s error) nor treated them as decidable
through technical virtuosity alone (the temptation of mathematical
Platonism).
Implications for AI Governance and Human Agency
These examples from physics illuminate why toggle competence proves essential
for responsible AI interaction and why it resists the quantification
that algorithmic systems privilege. The structure of the challenge
remains consistent across domains: practitioners must learn when to
immerse themselves in productive collaboration (whether with quantum
formalism or AI outputs) and when to step back into critical reflection
about what that collaboration can and cannot achieve.
In AI contexts, toggle competence operates through the dynamic management of what this framework has termed the user’s illusion—the
tendency to treat AI outputs as intentional, reasoned, and meaningful.
As discussed earlier, this illusion is not mere pathology but a
precondition for productive engagement. Users cannot interact
effectively with AI systems while constantly reminding themselves of the
underlying mechanics; doing so would be like watching a film while
obsessing over projector mechanisms. The intentional stance enables
collaborative flow, allowing users to build on AI‑generated suggestions,
explore alternative framings, and develop ideas through iterative
exchange.
Yet uncritical immersion in the intentional stance leads to the
“labor‑saving” mode of interaction where users treat AI outputs as
genuine deliberation rather than sophisticated simulation—outsourcing
judgment to algorithmic optimization while retaining only the subjective
experience of choice. Toggle competence requires recognizing when to
pull back from immersive collaboration into critical awareness that the
“space of pseudo‑reasons” differs fundamentally from genuine
reason‑giving, that pattern‑matching lacks intentionality despite
producing linguistically fluent outputs.
The parallel to Feynman’s approach becomes clear. Just as Feynman worked
productively with quantum formalism while maintaining philosophical
agnosticism about ontological interpretation, AI users must engage
productively with algorithmic outputs while maintaining awareness of
their non‑sentient, optimization‑driven nature. Just as Krauss’s toggle
failure led him to conflate empirical physics with metaphysical
resolution, AI users who lose toggle competence conflate statistically
plausible outputs with genuine understanding, convenience with wisdom,
optimization with deliberation.
The connection to dramatic rehearsal
proves particularly significant. Dewey’s concept captures the
distinctively human capacity for imaginative exploration of possible
actions and consequences before commitment—a process involving the whole
person, not just analytical cognition, and inherently social in its
consideration of others’ responses. AI deployment often undermines
conditions necessary for genuine dramatic rehearsal: algorithmic
solutions’ speed and apparent convenience can short‑circuit deliberative
processes, encouraging acceptance of outputs without fully exploring
their implications. The opacity of AI systems makes it difficult to
imagine meaningfully what delegation involves. As systems become more
sophisticated at predicting preferences, they may reduce the felt need
for dramatic rehearsal by providing solutions that appear obviously
optimal.
Toggle competence becomes the mechanism for preserving dramatic rehearsal in
AI‑mediated contexts. By maintaining capacity to shift from immersive
collaboration to critical reflection, users can catch themselves before
accepting algorithmic outputs that bypass genuine deliberation. The
toggle moment—pulling back to ask “What am I outsourcing here? What
understanding am I losing? What alternatives am I foreclosing?”—creates
space for the imaginative exploration that dramatic rehearsal requires.
This also clarifies why toggle competence resists the metrics‑based
assessment that much AI governance discourse demands. One cannot
quantify “toggle frequency” in meaningful cross‑context ways because
what counts as appropriate toggling varies by domain, practitioner, and
situation. A “redirect” in one interaction might be a
“misunderstanding” in another; coding such moments requires interpretive
judgment that is itself path‑dependent and context‑sensitive. More
fundamentally, toggle competence operates at a phenomenological level
that may be accessible to practitioners themselves but not reliably
detectable through behavioral analysis.
The quantitative fundamentalism critique thus circles back to illuminate
the AI governance challenge. Just as Krauss and Hawking could not
recognize philosophical questions as legitimate because their
epistemology privileged only empirically testable claims, AI governance
frameworks that demand metrics for all meaningful capacities risk
optimizing what can be measured while neglecting what matters most.
Toggle competence, dramatic rehearsal quality, narrative coherence, and
the capacity for genuine reason‑giving are real capacities essential to
human flourishing, even if they resist the quantification that
algorithmic systems privilege.
A mature framework must embrace complementary modes of
knowledge—quantitative where appropriate, interpretive where necessary,
and attentive to tacit dimensions that resist explicit articulation.
This methodological pluralism does not reject measurement but recognizes
its limits, acknowledging that some of the most crucial capacities for
preserving human agency in AI‑mediated contexts cannot be reduced to the
metrics that computational systems can process. Toggle competence
itself exemplifies this insight: it is the learned capacity to recognize
when measurement suffices and when situated judgment must transcend
available quantification—making it simultaneously essential for AI
governance and irreducible to the technical frameworks such governance
often privileges.
--------------------------------------------------------
Summary paragraphs for a separate spinoff paper on interpretive space in light of the problem of quantitative fundamentalism:
Toggle competence names a practical form of wisdom needed whenever we work at
the boundary between quantitative and interpretive domains. It is the
learned capacity to shift fluidly between immersive engagement with
powerful formal or computational systems and critical reflection on what
those systems can and cannot do. In semantics and interpretation, this
means recognizing that many of our most important concepts—justice,
harm, common good, even ordinary color talk—do not admit of a single,
metric “bullseye” definition, yet are neither arbitrary nor merely
private. They function instead as structured interpretive spaces: we
broaden them when we need big‑tent cooperation or institutional
flexibility, and narrow them when action, adjudication, or precise
coordination requires sharper edges. No algorithm can pre‑decide when to
broaden or narrow without reinstating, at a higher level, the very
quantitative fundamentalism the view resists.
The spin‑off for a theory of interpretation is that meaning is better
thought of as navigability within such spaces than as successful aim at a
unique point. Interpretive competence involves managing this
“breathing” of concepts—knowing when to tolerate ambiguity as productive
and when to discipline it as obfuscating, when to demand more precise
criteria and when doing so would be a category mistake. Toggle
competence, in this setting, is the reflective awareness that some
questions genuinely are suited to metric resolution, while others remain
irreducibly interpretive yet still objective in the sense of being
publicly arguable, evidence‑responsive, and better or worse justified.
It is precisely this non‑algorithmic sense of when to stay with
formalism and when to lean on tacit, situated judgment that any serious
semantics or hermeneutics will have to account for, especially under
conditions where computational systems tempt us to treat all meaning as
if it were ultimately quantifiable.