AI-Human Entanglement, Agency, and the Transformation of Governance
Introduction: The Stakes of Entanglement—Why This Framework, Why Now?
The advance of artificial intelligence is driving a quiet revolution—one that refashions not only how institutions operate, but also how meaning, authority, and agency are experienced in daily life. Unlike the dramatic disruptions that capture public attention, this transformation proceeds through subtle displacements: the gradual outsourcing of judgment to algorithmic systems, the erosion of spaces for deliberation, and the systematic replacement of human reason-giving with optimized outputs that simulate deliberation without providing its substance.
Artificial intelligence is less a wave of discrete technological tools and more a web of infrastructural conditions that quietly reshapes how we act, think, and value. Not only are institutions reconstituted around algorithmic mediation and optimization, but the very fibers of daily life—decision, meaning, and critique—are renegotiated inside this surrounding web. If we are to navigate this landscape well, we need clear concepts for the different ways agency is now distributed, transferred, and sometimes atrophied.
This essay offers a framework to diagnose the structural and lived consequences of AI deployment, drawing on philosophical traditions that we repurpose for contemporary challenges. Our analysis builds on Wilfrid Sellars' distinction between the "space of reasons" and the "space of causes," Jürgen Habermas's account of system colonization of the lifeworld, and John Dewey's insights into purposive agency through "dramatic rehearsal." To these established frameworks, we add our theoretical innovations, including the concept of a "space of pseudo-reasons" and expanded attention to how AI operates across multiple scales—from individual psychology through primary group dynamics to institutional transformation.
Our analysis critiques not "AI" in the abstract, but the institutional regime of AI deployment—the specific architectural, economic, and organizational arrangements that reward the displacement of deliberation in favor of efficiency. The danger we identify is not inevitable, but grows from incentives that privilege streamlined automation while masking agency transfer behind what we term the "user's illusion" of control.
As artificial intelligence systems become more sophisticated and pervasive—particularly with the emergence of "agentic AI" that can act autonomously across digital platforms—the stakes of this analysis intensify. We are not merely witnessing the automation of specific tasks, but the transformation of the fundamental conditions under which human agency operates. The question is not whether we will live with AI, but whether we can do so while preserving what is essentially human: the capacity for deliberation, reason-giving, and the creation of shared meaning through communicative action.
A Typology of Agency in Human-AI Systems
To understand these transformations, we must first distinguish between different types of agency that have emerged through the historical development of AI systems. Rather than treating "artificial intelligence" as a monolithic category, we propose a four-part typology that maps onto both technological capabilities and their chronological development:
1. Human Purposive Agency
At the heart of human life is a capacity for purposive agency—for deliberation, creative projection, and normative evaluation, what John Dewey called "dramatic rehearsal." Human persons are not mere bundles of impulse or products of optimization. We inhabit what Wilfrid Sellars termed the "space of reasons"—a domain where intentions, narratives, and justifications are forged and negotiated. We do this not as pure rational calculators, but as beings whose futures are shaped in part by imaginative rehearsal, memory, affect, and the lived negotiation of values.
When humans act, they typically do so for the sake of an end-in-view—surviving, competing, creating, connecting. Crucially, humans are generally aware of their purposes and can engage in what Dewey called "dramatic rehearsal"—the imaginative exploration of possible actions and their consequences before committing to a particular course.
This process involves the whole person, not just analytical cognition. When considering whether to change careers, individuals don't simply calculate costs and benefits. They imaginatively inhabit different possible futures, exploring how it might feel to do different kinds of work, how their relationships might change, what kinds of meaning they might find. This embodied, social exploration of possibilities is central to what makes human agency distinctively human rather than merely computational.
This is not a remote philosophical ideal but a functional prerequisite: without such capacities, human cooperation, normativity, and meaning would not be possible. Importantly, human agency operates simultaneously in what Sellars distinguished as the "space of reasons" and the "space of causes." Departing from Sellars' original formulation, we recognize that human actions are shaped by both deliberation and biological and social causation—hormones, emotions, and social pressures all influence our "reasoned" choices. A decision to participate in a protest might be driven both by moral conviction and by adrenaline or social conformity. Human agency emerges from this dynamic interaction rather than from pure reasoning.
2. Direct Artificial Agency
By contrast, AI systems operate solely within what Sellars called the "space of causes." Their "decisions"—no matter how sophisticated—are outputs of causal optimization, devoid of normativity or reflective grasp of meaning. Early twenty-first-century deployments of this kind—autonomous vehicles, weapons systems, predictive policing algorithms—were designed to execute specific tasks with direct effects on the physical world. These systems operate exclusively in the space of causes through algorithmic processes, without understanding, intention, or genuine reason-giving. Their "decisions" result from mathematical optimization designed to achieve specified objectives within defined parameters.
A lethal autonomous weapons system, for instance, can identify and engage targets based on programmed criteria, but it cannot engage in moral reasoning about whether such action is justified. It operates through chains of efficient causation—sensor data, pattern recognition, targeting algorithms—with no access to the space of reasons that would allow for ethical deliberation. For all the power of machine learning, current AI does not access, or even approximate, the space of reasons: its agency is direct, causal, and indifferent to meaning, justification, or value.
3. Indirect Artificial Agency: The Interpretive Turn
The emergence of large language models (LLMs) and sophisticated recommendation systems created a new form of artificial agency that operates indirectly through human interpretations of outputs, whether text, sound, images, suggestions, or explanations. Indirect Artificial Agency resides in the human act of interpreting any AI output—critically or superficially, reflectively or uncritically. Whether the LLM offers a "suggestion," a rationale, a diagnosis, a poetic completion, or a mundane prompt, it becomes agentically consequential only when a person reads, appropriates, and acts upon it—assigning it significance, credibility, or skepticism. The decisive moment is not at the point of output, but in the interpretive labor that follows.
[Footnote 1: Of course, there are cases where an AI output—like an autogenerated medical report or legal brief—receives no individual reading or reflection. In such situations, it might seem there is no interpretive act at all, and thus that the AI itself exerts “real and direct” agency akin to autonomous systems acting physically in the world. But in fact, the interpretive act is not missing; it has shifted from the individual to the institutional level. Institutional protocols and conventions treat absence of review as tacit approval. Unread outputs become consequential not by their mere production, but because the system has been designed to interpret “not read, not flagged” as “approved” or “fit for action.” The responsibility for outcomes thus lies not in the machine’s output alone, but in the organizational translation of inaction into permission—a transfer of interpretive agency to rules, defaults, and workflow artifacts. To mitigate the normalization of this unintended default, institutions must enact protocols that actively check and counteract negligence—ensuring that absence of review does not become tacit approval.]
When an AI assistant suggests a restaurant or writes a business report, it operates through what we term "indirect agency"—the system itself cannot directly book a table or send the report, but its outputs often guide human actions that do so. The human retains the final step of interpretation and implementation, but the quality and depth of that interpretation varies dramatically.
Agency, then, is not a property of the content; it is enacted in the loop of interpretation, judgment, and appropriation. "Labor-intensive" uses—recursive drafting, reflective analysis, repair of hallucinations—preserve and exercise purposive agency. These approaches treat AI outputs as raw material for further deliberation, maintaining human authority over meaning-making and decision-making processes.
Superficial, "labor-saving" uses, by contrast, short-circuit critical engagement, risking the atrophy and outsourcing of meaningful agency. The user does not simply become "passive"; they enter a new relation, where the locus and quality of agency shifts fundamentally. A student who prompts an AI to write an essay and submits it without careful reading represents an extreme case of labor-saving usage—taking themselves almost entirely "out of the loop" and treating AI as substitute rather than supplement.
4. Hybrid Artificial Agency: The Emergence of Infrastructural Entanglement
The newest development in AI systems—agentic AI platforms like advanced browsing agents—combines recommendation capabilities with direct execution powers. These systems can not only suggest actions but also carry them out: booking flights, making purchases, scheduling meetings, managing communications across platforms.
Almost all present-day AI deployment is hybrid: agency pulses and flows in dynamic assemblages spanning persons, machines, institutions, and platforms. This hybridization is not merely additive. It is infrastructural, akin to highways, power grids, or digital backbones: it enables, constrains, and distributes agency and value in ways that no longer map neatly onto the categories of "tool" or "user."
Hybrid systems represent a qualitative shift because they collapse the mediation step that previously allowed humans to maintain deliberative distance from AI outputs. When users can give permission for an AI system to "handle my travel planning" or "manage my social calendar," the system moves from offering suggestions to taking direct action in the world in real time, though still technically under human authorization.
We increasingly "navigate" these infrastructures rather than controlling them from without—adapting, responding, and contesting their effects from within. The question is not whether entanglement will happen, but how it will be structured, and to what ends.
This typology reveals a clear historical trajectory: from systems that act directly on the world without human mediation, to systems that influence human action through symbolic outputs requiring interpretation, to systems that combine both capabilities. Each type creates different patterns of agency displacement and requires different analytical approaches.
Methodological Note: Epistemic Humility and Live Hypotheses
Before proceeding to detailed analysis, we must acknowledge the limitations of current knowledge about human-AI interactions, particularly regarding the newest agentic systems. Much of our analysis of individual and small-group experiences with agentic AI represents live hypotheses rather than empirically established findings.
Agentic AI systems are so new that meaningful ethnographic data, longitudinal studies, and systematic surveys of user experiences simply do not yet exist. Our discussions of "interaction rituals," "frame shifts," and "responsibility attribution patterns" are theoretical constructs designed to guide empirical inquiry rather than settled conclusions about how these systems actually function in daily life.
This reflects our pragmatist commitment to ongoing, revisable, fallibilist inquiry. We offer these concepts as tools for organizing research and navigating emerging phenomena, with the explicit intention of updating and revising our framework as new information becomes available. The rapid pace of AI development requires this kind of theoretical scaffolding for empirical work, even as we remain humble about what we do and don't yet know.
Philosophical Foundations: Revised and Integrated
Sellars: The Space of Reasons, Causes, and the New "Pseudo-Reasons"
Wilfrid Sellars' classic distinction between the "manifest image" and the "scientific image" provides a foundation for understanding agency displacement. In Sellars' framework, the manifest image represents the world as we experience it—where people act for reasons, deliberate about choices, and explain themselves in terms of intentions and purposes. The scientific image describes the world as science reveals it—where events, including human actions, are explained through physical, chemical, and biological causes. The space of reasons belongs to the manifest image; the space of causes to the scientific image.
Our contemporary adaptation recognizes that humans actually operate in both spaces simultaneously, in complex interdependence that Sellars did not fully anticipate. Human actions are shaped by both reasons (deliberation, values, intentions) and causes (emotions, hormones, social pressures). AI systems, by contrast, operate exclusively in the space of causes through algorithmic processes—without understanding, intention, or genuine reason-giving.
However, contemporary AI deployment has given rise to a third category: the "space of pseudo-reasons." This domain encompasses AI-generated outputs that simulate deliberative reasoning through natural language or structured explanations, but derive from causal optimization processes lacking intentionality or normative judgment.
When AI systems offer "recommendations" complete with justifications, present "smart suggestions" that appear tailored to individual preferences, or provide "explanations" for their outputs, they create the appearance of operating in the space of reasons while remaining firmly within the space of causes. This simulation is not accidental but engineered—contemporary AI systems are explicitly designed to mimic human-like reasoning and communication.
The space of pseudo-reasons becomes particularly significant when humans treat AI outputs as if they were backed by genuine deliberation. This phenomenon—the "user's illusion"—occurs when people interact with AI systems as if they were reason-giving agents capable of genuine understanding and judgment. The more convincing these simulations become, the more effectively they transfer agency from human deliberation to algorithmic optimization while maintaining the appearance of collaborative reasoning.
Habermas: System, Lifeworld, and Algorithmic Colonization
Jürgen Habermas's analysis of "system" and "lifeworld" provides crucial insight into how AI deployment transforms social life. The lifeworld represents the background of shared meanings, cultural knowledge, and communicative practices where people interact, deliberate, and create social norms through language and mutual understanding. The system encompasses formal organizations, markets, and bureaucracies governed by instrumental rationality—efficiency, control, and goal-oriented action.
Habermas warned that system logic poses a threat to human freedom when it begins to "colonize" the lifeworld, crowding out spaces for genuine communication and shared meaning-making. In the AI era, this colonization has been radically intensified. Algorithmic infrastructures now extend system logic throughout virtually every sphere of social life, embedding instrumental rationality not only in formal organizations but in the most intimate spaces of daily experience.
This "System 2.0" operates differently from traditional bureaucratic encroachment because it penetrates directly into the micro-processes of daily life. Where traditional bureaucracies maintained relatively clear boundaries, AI systems integrate seamlessly into personal routines, family decisions, and intimate relationships. The colonization becomes invisible precisely because it presents itself as helpful assistance rather than institutional control.
Most significantly, algorithmic colonization operates through non-sentient processes that lack any capacity for communicative understanding or normative judgment. Traditional bureaucracies, however impersonal, were ultimately staffed by humans who could potentially be held accountable through reason-giving. Algorithmic systems cannot engage in communicative action at all—they can only simulate its appearance while operating according to optimization imperatives.
Dewey and the Preservation of Dramatic Rehearsal
John Dewey's concept of "dramatic rehearsal" captures what is most at stake in AI deployment. For Dewey, thinking is embodied experimentation—the imaginative exploration of possible actions and their consequences before committing to a course. This process is "dramatic" because it involves the whole person, not just analytical cognition, and is inherently social—people rehearse not only their own actions but others' responses.
AI deployment often undermines the conditions necessary for genuine dramatic rehearsal. The speed and apparent convenience of algorithmic solutions can short-circuit the deliberative process, encouraging people to accept AI outputs without fully exploring their implications. The opacity of AI systems makes it difficult to imagine meaningfully what delegation involves. As AI systems become more sophisticated at predicting preferences, they may reduce the felt need for dramatic rehearsal by providing solutions that appear obviously optimal.
The preservation of dramatic rehearsal thus becomes crucial for maintaining human agency in an AI-mediated world. This requires not only protecting spaces for deliberation but actively cultivating the imaginative and social capacities that make such deliberation meaningful.
Cybernetic Navigation: A Methodological Foundation
Rather than attempting to control AI systems from an imagined external position, we need what Andrew Pickering calls "cybernetic navigation"—learning to steer within the complex entanglements we already inhabit. Drawing on Stafford Beer's cybernetic theory of organization, this approach uses feedback loops to guide adaptive responses rather than trying to predict or control outcomes.
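To make navigation-by-feedback concrete, the toy sketch below (in Python; the involvement indicator, thresholds, and function names are our own illustrative assumptions, not Beer's model or any existing tool) shows the basic pattern: observe an indicator of meaningful human involvement, then nudge the degree of delegation in response rather than executing a fixed plan.

```python
# Toy sketch of cybernetic navigation: steer by feedback rather than by a
# fixed plan. Observe an indicator of meaningful human involvement, then
# nudge the degree of delegation to the AI system up or down in response.
# The indicator, thresholds, and step size are illustrative assumptions.

def adjust_delegation(level, involvement, floor=0.2, ceiling=0.8, step=0.1):
    """If observed human involvement drops too low, pull delegation back;
    if involvement stays high, delegation may cautiously expand."""
    if involvement < floor:
        return max(0.0, level - step)
    if involvement > ceiling:
        return min(1.0, level + step)
    return level  # within the viable band: hold steady

# A few rounds of the loop with made-up observations.
level = 0.5
for involvement in (0.9, 0.7, 0.1, 0.1):
    level = adjust_delegation(level, involvement)
    print(f"involvement {involvement:.1f} -> delegation {level:.1f}")
```

The point of the sketch is the loop itself: the rule can be crude, provided the system is continually observed and corrected from within rather than specified once from without.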
The User's Illusion: A Double-Edged and Context-Dependent Resource
The phenomenon of the "user's illusion"—the tendency to treat AI outputs as intentional, reasoned, and meaningful—is not merely a bug or pathology. It is at once a precondition for and a risk within productive human-AI engagement.
Simulated Deliberation
AI systems increasingly present their outputs using the linguistic and structural forms of human reasoning. They offer "explanations," provide "recommendations," and engage in "conversations" that mimic deliberative discourse while operating purely through causal optimization. Users experience these interactions as collaborative reasoning when they are actually engaging with sophisticated simulations of reasoning.
Retained Subjective Control
Users maintain the subjective experience of choice and control—they can accept or reject AI suggestions, ask for alternatives, customize parameters. This preserved sense of agency masks the deeper transformation occurring: the gradual transfer of the substantive work of preference formation, option evaluation, and decision-making to algorithmic processes.
Context-Dependent Assessment
The user's illusion functions differently across contexts:
- In labor-intensive, creative, or critical interaction, strategically adopting the "intentional stance" (Dennett) toward AI outputs allows us to interpret, repair, and integrate them within our own projects. The illusion sustains the space of reasons, even when we know, at some level, it is a fiction.
- In labor-saving, high-stakes, or inattentive uses (medicine, law, governance), the illusion can mask the displacement of real agency—giving algorithmic outputs the appearance of deliberative justification while eroding oversight, accountability, and value reflection.
Thus, contextual mindfulness is paramount. When we "toggle" between game frames (full suspension of disbelief, as in RPGs or entertainment) and justice frames (demanding oversight), the crucial question becomes: when is the user's illusion a creative asset, and when does it threaten to erode the very capacities that define and protect human life together?
Distributed Agency
Rather than simple replacement of human by artificial agency, we observe the emergence of what we term "distributed agency": the dynamic capacity for meaningful action that emerges from ongoing negotiations between human deliberative processes, algorithmic optimization systems, and institutional structures, where agency is constituted through their interactions rather than possessed by individual entities.
In distributed agency systems, meaning and intention exist only in the human components, but the capacity for effective action increasingly depends on algorithmic mediation. A person using an AI assistant to plan a vacation experiences agency and makes meaningful choices, but the range of options, evaluation criteria, and implementation pathways are substantially shaped by algorithmic processes they cannot fully understand or control.
This distribution creates new forms of vulnerability. When the algorithmic components of distributed agency systems fail, are manipulated, or operate according to hidden objectives, human users may find their capacity for effective action compromised in ways they cannot easily detect or remedy.
Personal Space and Group Dynamics: New Scales of Encroachment
The Intimate Revolution
Contemporary AI deployment marks a qualitative shift because it penetrates directly into the intimate spaces of daily life. Whereas previous AI primarily displaced human judgment in institutional settings, agentic AI systems embed themselves in the micro-processes through which people coordinate their personal lives and relationships.
Families use AI assistants to coordinate schedules, plan meals, and manage household routines. Friend groups consult AI systems for restaurant recommendations, entertainment choices, and social coordination. Intimate partners rely on algorithmic platforms for relationship advice, gift suggestions, and communication prompts. Each interaction may seem trivial, but their cumulative effect transforms the fundamental conditions under which human relationships develop.
Social Validation of Pseudo-Reasons
One significant development is how pseudo-reasons become socially validated through group interaction. When an AI assistant suggests a restaurant for family dinner, individual members might initially treat this as merely informational. However, as such suggestions prove convenient and satisfactory, they gradually acquire the status of legitimate input into family decision-making processes.
This progression from individual acceptance to social validation occurs through "interaction effects"—family members observe each other treating AI outputs as meaningful guidance and begin to mirror this behavior. Children learn that "asking Alexa" is normal family decision-making. Parents discover that AI suggestions can resolve conflicts by providing apparently neutral alternatives.
Frame Shifts and Interaction Rituals
Drawing on Erving Goffman's frame analysis, we can identify several ways that primary groups learn to interpret AI system involvement:
Tool Frame: AI systems are treated as sophisticated instruments providing information or executing commands without autonomous agency. "Let me check what the weather app suggests."
Social Actor Frame: AI systems are attributed quasi-human characteristics and treated as participants in social interaction. "Alexa thinks we should try that new restaurant."
Mediator Frame: AI systems serve as neutral arbiters helping resolve conflicts or provide authoritative guidance. "Let the AI decide since we can't agree."
These frame shifts often occur rapidly within single interactions and create new "interaction rituals"—routinized patterns generating solidarity and shared identity among group members. Families develop habits around when to consult AI assistants, how to interpret suggestions, and what decisions warrant algorithmic input.
Accountability Negotiation
AI integration into group dynamics complicates responsibility and accountability structures. When an AI-recommended restaurant proves disappointing, family members must negotiate whether this reflects poor human judgment in trusting the algorithm, algorithmic failure, or bad luck. These negotiations reveal how responsibility becomes distributed across human-AI networks in ways that can obscure rather than clarify moral accountability.
Note: These analyses of group dynamics represent theoretical hypotheses based on our framework rather than empirically established patterns. Systematic ethnographic research on how families and friend groups actually integrate AI systems into their decision-making processes remains to be conducted.
Societal Transformation: Institutional, Systemic, and Democratic Stakes
The Institutional Displacement of Deliberation
At the institutional level, AI deployment accelerates transformation extending far beyond simple automation. Contemporary organizations increasingly embed AI systems as infrastructural elements that reshape how decisions are made, problems are defined, and success is measured. This represents the "infrastructuralization" of AI—its evolution from discrete application to fundamental organizing principle.
Government agencies exemplify this transformation. Platforms now serve as central nervous systems for data integration and decision-making across multiple departments. These systems do not simply automate existing processes but reconstitute governance itself around algorithmic optimization. Traditional bureaucratic procedures, however imperfect, maintained space for human judgment, appeal, and revision. Algorithmic governance systems embed optimization imperatives directly into institutional decision-making structures.
Opacity and the Erosion of Democratic Accountability
Traditional democratic governance depends on holding public officials accountable through reason-giving. Citizens can demand explanations for policy decisions, challenge institutional logic, and vote officials out when their reasoning proves inadequate. This presumes that decisions are made by humans who can articulate and defend their reasoning in public discourse.
Algorithmic governance fundamentally disrupts these accountability mechanisms because AI systems cannot engage in genuine reason-giving. When citizens ask "Why was this decision made?" responses increasingly become "The algorithm determined..." rather than reasoned explanations that can be evaluated and challenged. Even when AI systems provide explanations, these typically consist of correlational patterns rather than principled reasoning that democratic accountability requires.
The opacity problem extends beyond technical inscrutability to "institutional opacity"—the inability of public officials themselves to understand or explain algorithmic decisions they implement. When immigration enforcement relies on AI systems to identify deportation targets, officials may be unable to provide substantive justification beyond pointing to algorithmic outputs. This creates situations where democratic accountability becomes structurally impossible rather than merely difficult.
Dehumanization and Narrative Coherence
Perhaps the most profound consequence of large-scale AI deployment is "systemic dehumanization"—the gradual transformation of individuals from moral agents deserving consideration into data points to be processed efficiently. This operates not through explicit cruelty but through systematic replacement of human-centered processes with optimization algorithms that treat people as variables in mathematical functions.
Immigration enforcement provides a stark example. When AI systems identify individuals for deportation based on algorithmic risk assessment, they reduce complex human stories to computational variables. The system cannot consider depth of community ties, nuance of family circumstances, or moral weight of separating children from parents. These human factors become externalities to be managed rather than central concerns guiding policy implementation.
This erosion of agency contributes to "narrative incoherence"—the inability of individuals and communities to provide meaningful accounts of their experiences and choices. When major life decisions are increasingly influenced by algorithmic mediation, people struggle to construct coherent stories about their agency and responsibility. The space of pseudo-reasons provides apparent explanations but not the substantive reasoning that supports authentic self-understanding.
Metrics: Tracking the Shifts in Agency
To empirically anchor our theoretical analysis, we propose a comprehensive framework for measuring agency displacement across multiple domains:
Institutional-Level Indicators
Job Displacement in Judgment-Based Roles: Track positions requiring human evaluation, deliberation, or moral reasoning that have been automated or eliminated. Distinguish between augmentation (humans retain final authority) and replacement (algorithmic outputs determine outcomes).
Human Review and Override Rates: In contexts where AI systems nominally assist rather than replace human decision-makers, measure how often humans meaningfully evaluate algorithmic outputs and exercise override authority. Low rates may indicate ceremonial rather than genuine human oversight.
Deliberation Time Allocation: Measure average time allocated for human deliberation in institutional processes before and after AI implementation. Significant reductions may indicate that thoughtful consideration is being sacrificed for algorithmic efficiency.
Appeal and Contestation Success Rates: Track how often appeals succeed and whether success rates change as institutions rely more heavily on algorithmic decision-making. Declining success rates may indicate that appeals processes are becoming ineffective against algorithmic determinations.
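As a rough illustration of how the institutional indicators above—particularly review and override rates and deliberation time—might be operationalized, the following sketch computes two of them from hypothetical decision records. The record structure, field names, and sample values are our assumptions for the sketch, not an existing data standard.

```python
# Illustrative computation of two institutional-level indicators from
# hypothetical decision records. Field names and values are assumptions.

from statistics import mean

records = [
    # stage: "pre_ai" or "post_ai"; deliberation time in minutes
    {"stage": "pre_ai",  "deliberation_min": 42, "human_reviewed": True,  "overridden": True},
    {"stage": "pre_ai",  "deliberation_min": 35, "human_reviewed": True,  "overridden": False},
    {"stage": "post_ai", "deliberation_min": 12, "human_reviewed": True,  "overridden": False},
    {"stage": "post_ai", "deliberation_min": 9,  "human_reviewed": False, "overridden": False},
]

def override_rate(recs):
    """Among decisions a human actually reviewed, how often was the
    algorithmic output changed? Very low rates may signal ceremonial oversight."""
    reviewed = [r for r in recs if r["human_reviewed"]]
    return sum(r["overridden"] for r in reviewed) / len(reviewed) if reviewed else None

def deliberation_shift(recs):
    """Change in average deliberation time after AI implementation (minutes)."""
    pre = [r["deliberation_min"] for r in recs if r["stage"] == "pre_ai"]
    post = [r["deliberation_min"] for r in recs if r["stage"] == "post_ai"]
    return mean(post) - mean(pre)

post_ai = [r for r in records if r["stage"] == "post_ai"]
print("post-AI override rate:", override_rate(post_ai))
print("deliberation time shift (min):", deliberation_shift(records))
```

In practice such records would have to come from institutional decision logs, and distinguishing ceremonial from genuine review would require qualitative judgment alongside the numbers.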
Individual and Group-Level Indicators
Decision Delegation Frequency: Track how often individuals defer to algorithmic recommendations versus making independent choices across different life domains, from entertainment to major life transitions.
Narrative Coherence Assessment: Evaluate individuals' ability to provide coherent, substantive accounts of their decision-making processes. Can people explain their choices in their own words and construct meaningful narratives about their agency?
Dramatic Rehearsal Capacity: Assess individuals' ability and inclination to imaginatively explore alternative courses of action before making decisions. Declining capacity may indicate that algorithmic optimization is short-circuiting deliberative processes.
Group Consultation Rituals: Measure frequency, formalization, and emotional investment in AI consultation practices within families and friend circles. Elaborate consultation rituals may indicate deep integration of algorithmic mediation into social identity.
Democratic and Civic Indicators
Public Deliberation Quality: Measure depth and substantiveness of public discourse about policy issues, tracking whether democratic debate maintains focus on values and principles or becomes dominated by technical discussions of algorithmic optimization.
Civic Efficacy Beliefs: Survey citizens' beliefs about their ability to influence government action and their understanding of how institutional decisions are made. Declining civic efficacy may indicate that algorithmic governance is undermining democratic participation.
Institutional Transparency: Measure the extent to which public institutions can provide meaningful explanations for their actions when challenged by citizens or oversight bodies.
Policy and Governance: Toward Adaptive, Empirical Vigilance
Beyond Traditional Regulatory Approaches
Contemporary policy discussions about AI governance often assume that traditional regulatory frameworks can be adapted through familiar mechanisms like transparency requirements and oversight agencies. However, our analysis suggests such approaches may be fundamentally inadequate because they presume human decision-makers who can be held accountable through reason-giving.
Algorithmic systems present novel challenges because they operate in the space of causes rather than reasons, making traditional accountability mechanisms structurally irrelevant. An AI system cannot meaningfully respond to demands for justification, modify its behavior in response to moral argument, or be held responsible for actions in ways that democratic governance requires.
This does not mean AI systems should be ungoverned, but rather that governance approaches must be redesigned around the distinctive characteristics of algorithmic agency. Rather than making AI systems accountable in human terms, policy should focus on the institutional contexts within which AI systems operate and the human decisions to deploy them.
Targeting Pseudo-Reason Production
One promising direction involves regulating the production and deployment of pseudo-reasons rather than attempting to govern AI systems themselves. This recognizes that the primary policy concern is not algorithmic accuracy or fairness in the abstract, but the systematic replacement of genuine reason-giving with simulated deliberation.
Interface Disclosure Requirements: Mandate clear disclosure when AI systems are simulating deliberation rather than providing genuine reasoning. Users should know when explanations are post-hoc rationalizations of optimization outcomes rather than principled justifications.
Limits on Anthropomorphic Design: Restrict design elements that encourage users to attribute human-like reasoning to AI systems. This might include limitations on conversational interfaces that simulate human dialogue and requirements for clearly identifying AI-generated content.
Algorithmic Explanation Standards: Rather than requiring AI systems to provide human-like explanations, mandate specific types of technical information that help users understand the causal processes behind algorithmic decisions.
Preserving Deliberative Spaces
A second priority involves actively protecting and fostering domains where human deliberation remains central to institutional functioning. Some contexts require genuine reason-giving and cannot be adequately served by even highly sophisticated algorithmic optimization.
Critical Domain Protection: Designate certain areas—judicial decision-making, democratic deliberation, educational assessment, healthcare consultation—as requiring human judgment and thus inappropriate for algorithmic optimization.
Deliberative Capacity Building: Actively support development of human deliberative capacities through education, institutional design, and cultural programming. This might include civic education curricula emphasizing critical thinking, professional training programs maintaining substantive expertise, and public media programming modeling thoughtful deliberation.
Institutional Design Innovation: Support experimentation with new institutional forms that maintain human agency while incorporating AI capabilities appropriately. This might include deliberative assemblies using AI for information processing while reserving decision-making authority for human participants.
Empirical Vigilance and Adaptive Governance
Most importantly, AI governance requires "empirical vigilance"—continuous monitoring of how AI deployment affects human agency and democratic participation, with governance frameworks that can adapt as these effects become better understood.
Dynamic Metric Systems: Rather than fixed regulatory standards, incorporate the kinds of metrics we have outlined to track agency displacement in real time, allowing governance to evolve as AI capabilities and deployment patterns change (a schematic sketch follows at the end of this subsection).
Sunset Clauses and Experimentation: Include automatic expiration dates for AI deployment in critical domains, requiring explicit reauthorization based on empirical evidence of effects. Treat AI deployment as experimental intervention that must prove its value.
Democratic Feedback Mechanisms: Include robust mechanisms for incorporating citizen input about experiences with AI-mediated institutions, focusing not only on satisfaction with outputs but on citizens' sense of agency and democratic efficacy.
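To suggest what dynamic metrics combined with sunset clauses could look like in practice, here is a minimal sketch; the system name, dates, thresholds, field names, and the reauthorization rule are all illustrative assumptions rather than a proposed standard.

```python
# Sketch of adaptive-governance checks: a deployment must be re-reviewed
# when its authorization lapses (sunset clause) or when a tracked indicator
# drifts past an agreed floor. Dates, thresholds, and fields are assumptions.

from datetime import date

deployment = {
    "name": "benefits-triage-assistant",        # hypothetical system
    "authorized_until": date(2026, 6, 30),      # sunset clause
    "appeal_success_rate": [0.31, 0.24, 0.18],  # recent quarterly values
    "appeal_success_floor": 0.25,               # agreed minimum
}

def needs_review(dep, today):
    """Return the reasons, if any, that this deployment requires human review."""
    reasons = []
    if today > dep["authorized_until"]:
        reasons.append("authorization expired (sunset clause)")
    if dep["appeal_success_rate"][-1] < dep["appeal_success_floor"]:
        reasons.append("appeal success rate below agreed floor")
    return reasons

for reason in needs_review(deployment, date(2026, 9, 1)):
    print(f"{deployment['name']}: review required - {reason}")
```

The point is not this particular rule but the design pattern: authorization is time-limited by default, and drift in an agreed indicator triggers human review rather than silent continuation.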
Research Agenda: Questions for Empirical Investigation
Our framework generates numerous questions for empirical researchers and ethnographers as agentic AI systems become more prevalent:
Individual and Household Studies
- How do families actually negotiate decisions about AI delegation across different life domains?
- What factors predict individual resistance to or embrace of algorithmic mediation?
- How do AI consultation practices vary across cultural, economic, and generational lines?
- What are the long-term effects of AI-mediated decision-making on individual deliberative capacities?
Group Dynamics and Social Networks
- What interaction rituals emerge around different types of AI systems in various social contexts?
- How do responsibility attribution patterns change as groups become more reliant on AI mediation?
- What role do AI systems play in conflict resolution and social coordination within primary groups?
- How do social networks adapt when AI systems become central nodes in communication and planning?
Institutional and Democratic Processes
- Which governance interventions most effectively preserve human deliberative capacities while incorporating AI capabilities?
- How do citizens experience and respond to increasing algorithmic mediation of government services?
- What new forms of democratic participation emerge in response to algorithmic governance?
- How do different institutional designs affect the preservation of human agency in AI-mediated contexts?
Longitudinal and Developmental Questions
- How do children socialized in AI-rich environments develop different relationships to agency and decision-making?
- What are the generational differences in adaptation to agentic AI systems?
- How do deliberative capacities change over time in individuals and communities with varying levels of AI integration?
- What cultural and educational interventions most effectively preserve human agency across generations?
A Minimal Humanism, Pragmatically Grounded
This framework stakes out a middle ground—acknowledging the distributed, non-autonomous, and entangled character of contemporary agency (with affinities to posthumanist accounts), but insisting on preserving spaces for dramatic rehearsal, reflection, and reason-giving. These are not metaphysical axioms or mere empirical regularities; they function as hypothetical imperatives and "thick concepts": if we are to continue living, cooperating, and creating meaning as humans always have, such capacities cannot be optional—they are functional bedrock.
Conclusion: On Remaining Human
The stakes in this new regime of AI-human entanglement extend far beyond technical questions of algorithmic accuracy or efficiency. At issue is nothing less than the preservation of human agency, meaning, and democratic participation in an era of increasingly sophisticated algorithmic mediation.
With each advance in agentic AI capability, the boundaries between human reasoning and algorithmic processing, between genuine deliberation and optimized simulation, between meaningful choice and efficient manipulation are redrawn. These boundary shifts occur not through dramatic disruption but through gradual displacement—the subtle outsourcing of judgment, the quiet erosion of deliberative spaces, the systematic replacement of genuine reason-giving with convincing simulation.
Our analysis has sought to provide conceptual tools for recognizing and responding to these displacements while they remain malleable rather than entrenched. The four-part agency typology, the concept of pseudo-reasons, the user's illusion, and the framework of distributed agency are not merely academic abstractions but practical instruments for diagnosing contemporary transformations and imagining alternative possibilities.
The framework suggests that the primary challenge is not learning to live with AI systems per se, but learning to navigate our entanglement with these systems in ways that preserve what is essentially human. This requires maintaining the capacity for genuine deliberation in an environment increasingly dominated by optimized simulation. It requires preserving spaces for authentic reason-giving in contexts where pseudo-reasons often prove more convenient and efficient. It requires sustaining the imaginative and social capacities that support dramatic rehearsal when algorithmic solutions promise to eliminate uncertainty and effort.
Most fundamentally, it requires recognizing that the choice is not between accepting or rejecting AI technology, but between different ways of organizing human-AI relationships. The current regime of AI deployment prioritizes efficiency, convenience, and optimization while systematically undermining the conditions that support human agency and democratic participation. However, alternative approaches remain possible—ways of incorporating AI capabilities that enhance rather than replace human deliberative capacities, that extend rather than constrain opportunities for meaningful choice, that support rather than substitute for the social processes through which communities create shared meaning.
Ultimately, AI is not just another tool; it is a pervasive infrastructure. Like bridges, highways, and electrical grids, AI conditions the field of attainable action and the shape of agency itself. We cannot simply opt out. The challenge is to develop pragmatic wisdom about when to insist on human oversight and reflection, how to modulate and distribute trust, and where to cultivate new norms and institutions to keep the space of reasons open and alive.
Realizing these alternatives requires both theoretical clarity about what is at stake and practical commitment to the difficult work of institutional and cultural change. It requires developing new forms of democratic participation appropriate to contexts of algorithmic mediation. It requires creating educational and social institutions that cultivate rather than erode deliberative capacities. It requires designing AI systems that serve human flourishing rather than merely optimizing defined objectives.
The empirical vigilance we have advocated is not merely a policy recommendation but a form of collective self-awareness appropriate to a moment of fundamental transition. Only by carefully tracking how current AI deployment affects human agency and democratic culture can societies make informed choices about the kinds of human-AI relationships they want to sustain and develop.
Our analysis suggests grounds for both concern and hope. The current trajectory of AI deployment poses genuine threats to human agency and democratic culture that deserve serious attention and active resistance. However, these threats are not inevitable consequences of technological development but products of particular institutional arrangements and deployment decisions that remain open to modification.
By understanding how AI systems actually operate in social contexts, how they affect human agency and relationships, and how their effects can be measured and governed, democratic societies retain the possibility of shaping technological change rather than merely adapting to it. The preservation of human agency in an age of AI is not a technical problem to be solved through better algorithms or more sophisticated optimization. It is a political and cultural challenge that requires ongoing commitment to the values and practices that sustain democratic life.
The frameworks and metrics we have outlined provide tools for this essential work, but their effectiveness depends ultimately on the willingness of individuals and communities to engage in the ongoing effort of democratic self-governance in an age of AI. The question we face is not whether we will live with increasingly sophisticated AI systems—that trajectory appears virtually certain. The question is whether we can do so while remaining recognizably human in our capacity for moral reasoning, democratic participation, and the creation of shared meaning through communicative action.
This requires not the rejection of AI technology but the development of wisdom about how to integrate such technology into human life in ways that honor rather than undermine the values and capacities that make democratic society possible. The alternative—continued drift toward a society organized around algorithmic optimization rather than human deliberation—represents not technological inevitability but collective choice. By making this choice explicit and creating institutions capable of implementing alternatives, democratic societies can preserve agency and meaning while adapting to technological change.