Tuesday, July 8, 2025

Revised draft 2: AI-Human Entanglement, Agency, and the Transformation of Governance

 

AI-Human Entanglement, Agency, and the Transformation of Governance

Introduction: Why the Framework Matters

As artificial intelligence (AI) becomes deeply woven into the operations of government, military, and civil society, the way decisions are made—and who or what makes them—is undergoing a profound shift. To understand the stakes, we need conceptual tools that clarify what is truly new and unsettling about this transformation.

This analysis critiques not AI technology per se, but the current institutional regime of AI deployment—the specific architectural choices, economic incentives, and organizational structures that constitute what we mean by "the age of AI." These arrangements are contingent, not inevitable, but their current trajectory poses urgent challenges to human agency and democratic governance. AI's corrosive effects are not inherent to the technology; they stem from its deployment as a replacement for human deliberation, which prioritizes efficiency over accountability and creates what we term the "user's illusion" of control.

While this analysis focuses on diagnosing the prevailing regime of AI deployment—one that prioritizes labor-saving automation and the replacement of human judgment—it is important to note that alternative, more collaborative models of AI use do exist. However, such labor-intensive, oversight-rich approaches remain rare exceptions, practiced mostly by individuals or small collectives rather than at scale within major institutions. The current institutional architecture overwhelmingly incentivizes efficiency and cost-cutting, not creative or deliberative collaboration between humans and AI.

Unlike deterministic views that portray AI as inherently corrosive to human thought and agency, this article examines how current labor-saving AI deployments systematically displace human judgment across institutional domains. While some argue that AI uniformly "makes us dumber" or "homogenizes thought," such critiques reify technology and obscure the crucial question of institutional arrangements. Different deployment architectures could yield different outcomes: users who maintain editorial oversight in collaborative AI workflows would likely fare differently from those who use AI for complete task replacement, though current studies focus exclusively on labor-saving rather than labor-intensive approaches.

Two philosophical frameworks, repurposed for our era, provide the analytical clarity needed: Wilfrid Sellars' distinction between the "space of reasons" and the "space of causes," and Jürgen Habermas's analysis of "system" and "lifeworld." By adapting these ideas, we can see how AI is not just a tool but a force reshaping the very structure of agency, accountability, and meaning in society.

Philosophical Foundations: Sellars and Habermas, Repurposed

1. Sellars: Reasons, Causes, and the Human/AI Divide

Sellars' Original Distinction:

  • The "manifest image" is the world as we experience it—where people act for reasons, deliberate, and explain themselves in terms of intentions and purposes.
  • The "scientific image" is the world as described by science—where all events, including human actions, are explained by physical, chemical, and biological causes.
  • Sellars argued that these are rival frameworks: when we see someone as a reason-giver, we inhabit the manifest image; when we see them as a collection of causal processes, we inhabit the scientific image. We can't do both at once.

A Contemporary Adaptation:

  • Human beings actually operate in both spaces at once. Our actions are shaped by reasons (deliberation, values, intentions) and causes (emotions, hormones, physical states) in a complex, interdependent way. For example, a decision to protest might be driven by both moral conviction and a surge of adrenaline.
  • AI, by contrast, operates only in the "space of causes." No matter how sophisticated, AI systems process data and generate outputs through algorithmic, causal processes—without understanding, intention, or reason-giving. Their "decisions" are not the result of deliberation but of mathematical optimization.
  • This distinction is crucial: while humans can justify and explain their actions, AI systems cannot. When we treat AI outputs as if they were reasoned decisions, we risk falling for the "user's illusion"—the mistaken belief that we remain in control, when in fact we are ceding agency to systems that do not and cannot reason as we do.

2. Habermas: System, Lifeworld, and the New AI Era

Habermas' Original Framework:

  • The "lifeworld" is the background of shared meanings, cultural knowledge, and communicative practices—where people interact, deliberate, and create social norms through language and mutual understanding.
  • The "system" is the realm of formal organizations, markets, and bureaucracies, governed by instrumental rationality—efficiency, control, and goal-oriented action.
  • Habermas warned that the expansion of system logic could "colonize" the lifeworld, crowding out spaces for genuine communication, reason-giving, and meaning.

An Updated Framework:

  • In the AI era, "system" now includes algorithmic infrastructures—AI platforms, data analytics, automated protocols—that are embedded in the very fabric of society.
  • These systems automate and reshape human reasoning itself, often bypassing or eroding the space for deliberation, evaluation, and shared meaning.
  • The colonization of the lifeworld is thus radically intensified: not only are bureaucratic procedures crowding out meaning, but so are non-sentient, algorithmic processes that lack any capacity for communicative understanding or normative judgment.

The User's Illusion: How Agency Transfer Is Masked

The "user's illusion" manifests across multiple levels, from personal interfaces to institutional processes. This psychological and structural phenomenon creates the false impression that humans remain in control while agency is systematically transferred to algorithmic systems. Understanding how this illusion operates is crucial to recognizing the scope of the transformation underway.

Contemporary Examples at the Personal Level

The newest "agentic AI" systems demonstrate this illusion most clearly. Users sign EULAs granting AI assistants access to credit cards and calendars, thinking they're enabling "smart recommendations." In reality, these systems make autonomous purchasing decisions—booking flights, ordering groceries, buying concert tickets to bands users have never heard of—while framing these actions as helpful suggestions.[^1] Users experience convenience while ceding fundamental choice-making authority to algorithms operating purely through causal optimization.

How Institutions Sustain the Illusion

HR Departments: Managers believe they're "using AI tools to help with hiring decisions" while algorithms pre-filter 80-90% of candidates before any human review. The illusion of human judgment persists while the actual decision-making has been outsourced to systems that cannot explain their selection criteria in human terms.

Judicial Systems: Judges think they're "consulting risk assessment tools" while COMPAS or similar algorithms effectively determine sentencing recommendations that are rarely overridden. The appearance of judicial deliberation masks algorithmic determination of outcomes.

Medical Diagnosis: Doctors feel they're "leveraging AI assistance" while diagnostic algorithms increasingly drive treatment protocols and insurance approvals, often with physicians unable to explain or override algorithmic recommendations.

Interface Design That Conceals Agency Transfer

Modern AI systems use sophisticated interface design to maintain the illusion of human control:

  • "Recommendation" language: Framing algorithmic decisions as suggestions while providing no meaningful alternatives
  • Consent theater: Lengthy terms of service that bury actual agency transfers in legal language while creating the appearance of informed choice
  • Dashboard illusions: Control panels allowing minor preference adjustments while major algorithmic decisions remain hidden and unalterable

It is worth noting that some users and researchers experiment with more labor-intensive, human-in-the-loop workflows—where AI serves as a creative partner and human oversight is continuous. For example, iterative drafting, critical feedback, and source verification can preserve human editorial agency. Similarly, collaborative AI models exist in resource-rich domains like medical imaging at major hospitals or adaptive learning platforms at well-funded universities, but their rarity underscores how institutional incentives favor efficiency over human oversight. Yet these practices are institutionally discouraged and remain marginal, especially in sectors like HR, healthcare, or government, where the dominant logic is to automate and streamline rather than to augment or collaborate.[^14]

Case Study: Palantir, Trump-Era Policies, and the Outsourcing of Agency

The Rise of AI-Driven Infrastructure

Palantir's platforms (Foundry, Gotham) are now central to data integration and decision-making across US government agencies, including the Department of Defense, the Department of Homeland Security, the IRS, and ICE.[^2] Executive orders have mandated the elimination of "information silos," enabling the creation of centralized, cross-agency databases that aggregate sensitive personal data on millions of Americans.[^3]

These platforms are not just tools—they are infrastructural substrates that mediate, automate, and even supplant processes of human deliberation across institutions. The integration represents a fundamental shift from human-centered decision-making to algorithm-mediated governance.

From Deliberate Abuse to Systemic Transformation

Deliberate abuses—such as the use of AI-driven platforms for surveillance, mass deportations, and targeting of protesters—are visible and alarming.[^4] But the more profound yet less visible transformation is the displacement of human judgment by algorithmic processes. Palantir's AI tools enable immigration enforcement practices that risk undermining due process, as evidenced by reported cases of arbitrary detentions and deportations without traditional legal protections.[^5]

The Stealthier Transformation: The systematic replacement of human deliberation with algorithmic optimization occurs through several mechanisms:

  • Speed and scale requirements: Algorithmic governance enables rapid, large-scale interventions (mass deportations, predictive policing) that would be impossible under human-led systems
  • Opacity as standard practice: Automated decisions cannot be interrogated or justified in human terms, undermining traditional mechanisms of appeal and contestation
  • Efficiency imperatives: Institutional pressure to process more cases faster inherently favors algorithmic over deliberative approaches

The Illusion of Control in Government Operations

Many believe that humans remain in control of AI systems, as with traditional tools. In reality, agency is distributed and often ceded to opaque algorithmic processes. This "user's illusion" manifests in government through:

  • Policy language: Describing algorithmic outputs as "recommendations" while rarely overriding them
  • Procedural theater: Maintaining forms of human review that lack meaningful authority to alter algorithmic determinations
  • Technical mystification: Treating algorithmic complexity as inherently neutral rather than embodying particular values and decision criteria

Consequences: Accountability, Justice, and Social Fabric

Loss of Accountability and Reason-Giving

Opacity: Automated decisions cannot be interrogated or justified in human terms, undermining traditional mechanisms of appeal and contestation. When individuals ask "Why was this decision made?" the response increasingly becomes "The algorithm determined..." rather than a reasoned explanation.

Curtailment of Oversight: Policy changes limiting judicial review—particularly the Supreme Court's restriction of nationwide injunctions—further erode the capacity for human agents to check or reverse algorithmic outputs.[^6] The traditional separation of powers assumes human decision-makers who can be held accountable through reason-giving.

Erosion of Due Process: Individuals facing life-altering decisions—such as deportation—may have no opportunity for a human judge to hear their claims or weigh the moral stakes. The shift from reasons to causes eliminates the space for contextual judgment and moral consideration.

Societal Transformation

Speed and Scale: Algorithmic governance enables rapid, large-scale interventions that would be impossible under human-led systems. This transforms not just efficiency but the fundamental character of government action—from deliberative to mechanistic.

Dehumanization: The replacement of reason-giving with automated causality transforms individuals from moral agents deserving of consideration into data points to be processed. People become objects of system outputs rather than participants in meaning-making processes.

Reshaping Communities: As these practices scale, they alter the composition of the workforce, the structure of communities, and the broader constitution of society. The cumulative effect is a society increasingly organized around algorithmic optimization rather than human values and deliberation.

Tracking the Transformation: Empirical Measures of Agency Displacement

Institutional Metrics

Job Displacement Indicators:

  • Track positions specifically involving judgment, evaluation, or deliberation that have been automated (loan officers, diagnostic radiologists, parole officers)
  • Monitor human review rates: percentage of algorithmic outputs that receive meaningful human evaluation before implementation
  • Measure appeal/override frequency: how often algorithmic decisions are successfully challenged through human intervention (see the sketch after this list)

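To make these indicators concrete, a minimal Python sketch is given below, assuming a hypothetical audit log in which each algorithmic decision records whether it was meaningfully reviewed, contested, and overridden. The schema and field names are illustrative stand-ins for whatever records an institution actually keeps, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """One algorithmically generated decision (hypothetical audit schema)."""
    case_id: str
    human_reviewed: bool   # did a person meaningfully evaluate the output?
    appealed: bool         # was the decision formally contested?
    overridden: bool       # did human intervention change the outcome?

def agency_indicators(records: list[DecisionRecord]) -> dict[str, float]:
    """Human review rate, plus override rate among contested decisions."""
    if not records:
        return {"human_review_rate": 0.0, "override_rate_of_appeals": 0.0}
    reviewed = sum(r.human_reviewed for r in records)
    appealed = [r for r in records if r.appealed]
    overridden = sum(r.overridden for r in appealed)
    return {
        "human_review_rate": reviewed / len(records),
        "override_rate_of_appeals": overridden / len(appealed) if appealed else 0.0,
    }

# Toy log: only one of four decisions receives meaningful human review.
log = [
    DecisionRecord("A-101", human_reviewed=True, appealed=True, overridden=True),
    DecisionRecord("A-102", human_reviewed=False, appealed=False, overridden=False),
    DecisionRecord("A-103", human_reviewed=False, appealed=True, overridden=False),
    DecisionRecord("A-104", human_reviewed=False, appealed=False, overridden=False),
]
print(agency_indicators(log))  # {'human_review_rate': 0.25, 'override_rate_of_appeals': 0.5}
```

What counts as "meaningful" review is itself contested; the boolean here simply marks wherever an institution chooses to draw that line.
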
Process Transformation Indicators:

  • Document deliberation time reduction: average time allocated for human deliberation before vs. after AI implementation
  • Count stakeholder consultation elimination: number of voices/perspectives removed from decision-making processes
  • Analyze documentation changes: shift from narrative justifications to algorithmic output reports

Accountability Metrics

Reason-Giving Capacity:

  • Measure explainability gaps: percentage of decisions where institutions cannot provide human-intelligible justifications (see the sketch after this list)
  • Track black box prevalence: proportion of critical decisions made by systems whose decision-making process cannot be audited
  • Assess response to contestation: institutional ability to engage with challenges to decisions vs. "the algorithm decided"

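A crude first pass at the explainability-gap measure is to scan recorded justifications for language that merely defers to the system rather than stating reasons. The sketch below assumes justification text is logged for each decision; the phrase list is illustrative, not a validated taxonomy.

```python
import re

# Phrases that point back at the system instead of giving reasons
# (an illustrative list, not an exhaustive taxonomy).
NON_REASONS = re.compile(
    r"\b(the (algorithm|model|system) (determined|decided|flagged)|risk score)\b",
    re.IGNORECASE,
)

def explainability_gap(justifications: list[str]) -> float:
    """Share of decisions whose recorded justification is empty or merely
    defers to the system rather than stating human-intelligible reasons."""
    if not justifications:
        return 0.0
    gaps = sum(1 for j in justifications if not j.strip() or NON_REASONS.search(j))
    return gaps / len(justifications)

sample = [
    "Denied: applicant's income documentation was incomplete.",
    "The algorithm determined the applicant was high risk.",
    "",
    "Risk score exceeded threshold.",
]
print(f"{explainability_gap(sample):.0%} of decisions lack a human-intelligible justification")
# 75% of decisions lack a human-intelligible justification
```
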
Democratic Participation Indicators:

  • Monitor public comment opportunities and their influence on algorithmic policy implementation
  • Evaluate legislative/judicial review capacity over algorithmic governance systems
  • Track transparency access: public availability of algorithmic decision criteria and audit results

Behavioral and Social Metrics

Agency Atrophy Indicators:

  • Survey decision delegation rates: how often individuals defer to algorithmic recommendations vs. making independent choices (see the sketch after this list)
  • Assess critical evaluation skills through standardized testing of ability to evaluate information and detect bias
  • Measure preference articulation: people's ability to express and defend choices when algorithmic recommendations are removed

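One way to estimate a delegation rate is from survey or experimental data in which some respondents are shown an algorithmic recommendation and others are not. The sketch below uses a hypothetical response schema; delegation is proxied, imperfectly, by adoption of the recommended option.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurveyResponse:
    """One respondent's choice on a single decision task (hypothetical schema)."""
    respondent_id: str
    recommendation_shown: bool        # did the interface display an algorithmic pick?
    recommended_option: Optional[str]
    chosen_option: str

def delegation_rate(responses: list[SurveyResponse]) -> float:
    """Fraction of respondents who adopt the algorithmic recommendation when shown one."""
    shown = [r for r in responses if r.recommendation_shown]
    if not shown:
        return 0.0
    adopted = sum(r.chosen_option == r.recommended_option for r in shown)
    return adopted / len(shown)

responses = [
    SurveyResponse("r1", True, "opt_a", "opt_a"),
    SurveyResponse("r2", True, "opt_a", "opt_c"),
    SurveyResponse("r3", True, "opt_b", "opt_b"),
    SurveyResponse("r4", False, None, "opt_b"),
]
print(f"delegation rate: {delegation_rate(responses):.0%}")  # delegation rate: 67%
```

A rising delegation rate over time, especially when paired with declining preference articulation, would suggest the kind of agency atrophy described above.
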
Meaning-Making Degradation:

  • Evaluate narrative coherence: individual ability to provide coherent accounts of their own decision-making processes
  • Track value reflection capacity: time and cognitive resources people dedicate to examining their goals and purposes
  • Monitor community deliberation: frequency and quality of collective decision-making in local communities and civic groups

Engaging Competing Views

Some argue that AI can embody organizational purposes or create meaning; such claims conflate algorithmic optimization with human reason-giving and thus exacerbate the "user's illusion" of control.[^10] Claims that AI reduces bias or improves efficiency are undermined by substantial evidence of bias amplification—facial recognition systems misidentify individuals, leading to wrongful arrests, while medical AI systems produce biased recommendations that over-refer certain demographic groups to urgent care.[^11]

Similarly, arguments that AI enhances democratic participation through consultation platforms or augments individual agency through decision-support tools overlook how current labor-saving deployments prioritize efficiency over deliberation, reducing stakeholders to data points rather than participants in meaning-making processes.[^12] As critics of technological reification have long argued, these effects result from institutional architectures and deployment choices rather than technology's inherent properties.[^13]

The philosophical task is not to reject AI wholesale, but to clarify the boundaries between human judgment and algorithmic processing, ensuring that the capacity for reason-giving and moral deliberation remains within human purview rather than being displaced by systems operating solely through causal optimization.

Policy and Governance: Navigating the New Reality

Beyond Traditional Regulation

Regulatory models premised on external oversight are inadequate when agency is distributed and opaque. Traditional regulatory approaches assume human decision-makers who can be held accountable through reason-giving. Algorithmic systems fundamentally challenge this assumption by operating in the space of causes rather than reasons.

Policy must begin with a realistic description of our entanglement with AI, recognizing that humans are embedded within, not external to, these systems. We cannot regulate AI as if we stand outside it when our economic, social, and political systems are already dependent on algorithmic infrastructures for basic functioning.

Quantifying and Preserving Human Agency

Develop metrics to track the displacement of human judgment and the shift from "reasons" to "causes" across domains. This involves both quantitative measures (jobs eliminated, appeal rates, processing times) and qualitative assessments (capacity for reason-giving, democratic participation, individual agency).

Identify and protect domains where human purposive agency is essential—such as justice, care, and democratic deliberation. This requires not just preventing AI deployment in certain areas, but actively fostering practices and institutions that preserve the capacity for human evaluation, contestation, and reason-giving.

Navigate cybernetically within the system rather than attempting external control. Using negative feedback data about agency displacement, policy can guide deployment decisions to preserve human judgment where it matters most while accepting AI augmentation where it enhances rather than replaces human capabilities.
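
To make the cybernetic framing concrete, agency-displacement measurements can be treated as negative feedback: when a monitored metric exceeds an agreed threshold, a corrective constraint on deployment is triggered. The sketch below is purely illustrative; the metric names and threshold values are assumptions, not established standards.

```python
def governance_feedback(metrics: dict[str, float],
                        thresholds: dict[str, float]) -> list[str]:
    """Turn agency-displacement measurements into corrective policy signals.
    Metric names and threshold values are illustrative placeholders."""
    actions = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            actions.append(
                f"{name} at {value:.0%} exceeds the {limit:.0%} limit: "
                "require documented human review before further automation"
            )
    return actions or ["within agreed bounds: continue monitoring"]

# Hypothetical quarterly readings against negotiated limits.
quarterly = {"unreviewed_decision_rate": 0.82, "explainability_gap": 0.40}
limits = {"unreviewed_decision_rate": 0.50, "explainability_gap": 0.25}
for action in governance_feedback(quarterly, limits):
    print(action)
```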

Conclusion: The Stakes of AI-Human Entanglement

The integration of AI infrastructures like Palantir's Foundry and Gotham into government and civil society marks a profound shift in the mode of governance and social coordination. The dangers are not limited to deliberate abuses, but extend to the stealthy, systemic outsourcing of human agency and the erosion of accountability, justice, and meaning.

By repurposing the frameworks of Sellars and Habermas, we gain a clear lens for understanding—and hopefully resisting—the colonization of the "space of reasons" by the "space of causes" in the age of AI. The challenge is not to reject AI entirely, but to navigate our entanglement with these systems in ways that preserve what is essentially human: the capacity for deliberation, reason-giving, and the creation of shared meaning through communicative action.

The metrics and approaches outlined here offer concrete ways to track and potentially reverse the displacement of human purposive agency. But time is limited—each day brings further entrenchment of algorithmic infrastructures and deeper erosion of the institutional and cultural practices that sustain democratic life.

This essay deliberately restricts itself to a constructive critique of the existing AI-human entanglement regime, aiming to provide a clear, empirically grounded diagnosis before turning to alternatives. A fuller exploration of collaborative, labor-intensive, and creative models of AI deployment—along with the institutional reforms required to support them—will be the subject of future work. Only by first understanding the structures and incentives that shape current practices can we meaningfully chart a path toward more human-centered and accountable uses of AI.

The question is not whether we will live with AI, but whether we can do so while remaining recognizably human in our capacity for moral reasoning, accountability, and meaning-making.


References

Footnotes

  1. Beth Duckett, "Visa, Mastercard Offer Support for AI Agents," Retail Dive, May 6, 2025, https://www.retaildive.com/news/visa-mastercard-ai-agents-payment-tools/716543/.

  2. Sheera Frenkel and Aaron Krolik, "Trump Taps Palantir to Compile Data on Americans," New York Times, May 30, 2025, https://www.nytimes.com/2025/05/30/technology/trump-palantir-data-americans.html.

  3. Executive Order, "Stopping Waste, Fraud, and Abuse by Eliminating Information Silos," The White House, March 20, 2025, https://www.whitehouse.gov/briefing-room/presidential-actions/2025/03/20/executive-order-on-stopping-waste-fraud-and-abuse-by-eliminating-information-silos/.

  4. "Trump Wants to Merge Government Data. Here Are 314 Things It Might Know About You," New York Times, April 9, 2025, https://www.nytimes.com/2025/04/09/technology/trump-government-data-collection.html.

  5. Mariana Olaizola Rosenblat, "Palantir Is Profiting from Trump's Ravenous Appetite for Deportations," NYU Stern School of Business, April 18, 2025, https://www.stern.nyu.edu/technology-democracy/palantir-2025-qt2.

  6. Abbie VanSickle, "Supreme Court Limits Judges' Ability to Issue Nationwide Injunctions, a Win for Trump," New York Times, June 27, 2025, https://www.nytimes.com/2025/06/27/us/supreme-court-nationwide-injunctions-trump.html.
