Tuesday, March 31, 2026


Self‑Optimization, Reification, and the Virtue of Toggling in the Age of Algorithmic Intimacy

In the age of Web 2.0 and beyond, self‑optimization culture is not simply narcissistic affect but a deeply institutionalized logic of adaptive self‑management, shaped by the cultural infrastructure of the digital economy. Having grown up in an era saturated with platforms like Spotify, Tinder, Instagram, and OnlyFans, many younger cohorts internalize norms of self‑evaluation and "success" through algorithmic mediation rather than through direct, deliberative reflection. The result is a behavioral style that appears mechanical, "soulless," or narcissistic to older generations, but which is in fact the product of a contingent, path‑dependent configuration of capitalist, digital, and therapeutic ideologies.

This infrastructure is not confined to exotic corners of the internet. Algorithmic mediation now structures everyday domains for nearly everyone: Spotify curates musical discovery, Tinder governs romantic possibility, LinkedIn quantifies professional worthiness, and even therapeutic self-care flows through apps that gamify mood tracking and mental health scores. The self becomes a dashboard of metrics — "sexual market value," "engagement rates," "follower growth" — where deliberation yields to optimization. OnlyFans-style interactions are not marginal anomalies but the leading edge of this infrastructural condition, the zone where the logic of abstracted, brand-like personhood becomes most visible.

Drawing on Michael Tomasello and John Searle, I argue that these norms are not imposed from the outside but are internalized as institutional facts. The key developmental foundation is joint attention — the distinctively human, biosocially grounded capacity to share a focus with others, to orient collectively toward objects and events, and crucially to hold one another normatively accountable for doing so. It is this capacity, identified by Tomasello as the evolutionary and developmental root of human culture, that makes constitutive rules possible in the first place. The model here is neither biologically reductive nor social-constructivist in a post-structuralist sense: the point is not that norms are entailed by biology, but that any normativity presupposes shared attentional and cooperative capacities that are themselves biologically grounded — a claim that is empirical and developmental, not metaphysical. Toddlers who spend long stretches kicking a ball back and forth, or attending jointly to some object or activity, are not maximizing rewards — they are exercising a species-specific capacity to find shared orientation itself meaningful. The digital infrastructure I am analyzing does not create normativity from nothing; it reorganizes these older biological-social capacities and channels them into new forms of optimization, abstraction, and reified attachment. Rules are not arbitrary conventions but products of joint attentional and normative practices absorbed before we can reflect on them — which is precisely why Searle's constitutive rules are the right analytical tool here. And it is worth noting at the outset what this biosocial model implies: joint attention, as Tomasello documents it, is a capacity of persons in reciprocal, qualitative relation. It does not apply to algorithmic systems or to management teams simulating a creator's persona — a point whose significance will become clear as the argument develops.

Searle's constitutive rules — such as "ten dimes count as one dollar" or "a pronouncement by a licensed official counts as a legally binding marriage" — are unconscious, taken‑for‑granted structures that govern social life precisely because they are instilled through shared practice, not conscious endorsement. In the neoliberal, digital‑age context, such rules are recast in the register of self‑optimization:

"Money given to a self‑optimized content creator in the context of her 'being‑herself‑brand' counts as sincere appreciation for her 'vibe,' even when the object of the transaction is not a discrete service, but a constructed, algorithmically mediated presence."

This is not a dialectical necessity, but a contingent social formation — one that emerges from the convergence of neoliberalism, platform capitalism, and therapeutic self‑care ideologies. The self is now treated as a project to be upgraded, and the brand or vibe becomes the object of desire.

But here is the key twist: the object drops out. In the 20th‑century busker example, the object is clear:

"Money given to a musician in the context of a live performance counts as appreciation for their music."

The object is concrete, embodied, and directly present. What warrants the tip is an observable, evaluable event — a performance one can judge, enjoy, or find wanting.

In the 21st‑century realm of OnlyFans and algorithmic intimacy, the object is mystified — it is no longer a specific good or service, but the self‑optimized essence of the creator, a "brand‑vibe" that is often distributed across bots, chatters, and management teams. A consumer believes that they are interacting with a real person, a concrete, embodied selfhood, but increasingly they cannot distinguish between communicating with a real human and interacting with algorithmic machines or offshore "experience‑designers" trained to simulate the creator's persona.

This is reification in a specific and compounded sense. Alfred North Whitehead identified the fallacy of misplaced concreteness — the error of treating an abstraction as though it were a concrete particular. Here that error is not merely cognitive but structurally induced: the infrastructure presents a distributed, non‑unitary, algorithmic‑human ensemble as a concrete, unitary person. The consumer attributes personhood and essence to a phantom. And this Whiteheadian error does not merely accompany the Marxian structure of commodity mystification — it accentuates it. In the classical commodity, a social relation between producers takes on the appearance of a relation between things. Here, a mediated commercial transaction between a consumer and a management apparatus takes on the appearance of an intimate relation between persons. The phantom self becomes the commodity, and the mystification is total precisely because the "object" being sold — authentic presence, genuine connection — has no empirical referent that could expose the fraud.


Underlying both the Turing-spiral (a fallacy I examine below) and the infrastructure-native substitution of behavioral output for qualitative presence is a single ideological commitment that deserves to be named directly: quantitative fundamentalism. This is not the legitimate and indispensable use of measurement and modeling — science could not proceed without those. It is something more ambitious: the ontological claim that everything in the universe, from stars and particles to societies, persons, feelings, and relationships, is ultimately constituted by bits. The bits take different forms across different theoretical traditions, but the structure is identical in each case:

  • Physicalism: the bits are subatomic particles; everything else supervenes on or reduces to arrangements of them, with properties like consciousness arising as emergent effects of sufficiently complex organization

  • Information-theoretic ontology: the bits are mathematical units; the universe is fundamentally computational — what John Archibald Wheeler called "it from bit," and what Stephen Wolfram develops as a computational universe

  • Functionalism and Strong AI: the bits are states of a formal symbol-manipulator; what matters is the pattern of operations, not the substrate — so the same computation running on silicon, neurons, or any other medium is equally minded

What unifies these positions is the commitment that qualitative properties — consciousness, felt experience, meaning, the "what it is like" to see red or feel grief or hear a minor chord — are not distinct kinds of being but emergent arrangements of quantities, arising when bits are organized with sufficient speed and complexity. The early Hilary Putnam stated the boldest version: if mental states are defined by their causal-functional role rather than their substrate, then the distinction between biological and computational minds collapses in principle. A philosopher I encountered in the 1980s pressed this against Searle by asking: "But how do we know we aren't already that machine?" — and then added, with serene confidence, that "we can set emotions to one side for the moment," as though feelings were a minor administrative detail rather than the central explanandum. This is the ontological heart of the matter. Searle's Chinese Room argument was designed precisely to block this move: a system can manipulate symbols according to rules without understanding what any of them mean, which shows that syntactic operations — however fast, however complex — do not add up to semantics. The mechanism that would bridge the two has never been proposed, because on quantitative fundamentalist commitments, no mechanism is required: the emergence is simply assumed to follow from sufficient complexity.
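Searle's point that rule-governed symbol manipulation carries no understanding can be made concrete with a deliberately trivial sketch. The rule table and strings below are invented for illustration only; the point is structural: the program pairs input symbols with output symbols and at no stage represents what any of them mean.

```python
# Toy illustration of Searle's Chinese Room: a system that produces
# fluent-looking replies by rule lookup alone. The rulebook entries are
# invented for this sketch. Nothing in the system grasps the semantics;
# it matches character strings and emits the paired string.

RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",          # "How are you?" -> "I'm fine, thanks."
    "天空是什么颜色?": "天空是蓝色的.",  # "What color is the sky?" -> "The sky is blue."
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook pairs with the input symbols.

    Purely syntactic: no meanings are represented anywhere in the system,
    yet the exchange looks, from outside, like comprehension.
    """
    return RULEBOOK.get(symbols, "请再说一遍.")  # fallback: "Please say that again."

print(chinese_room("你好吗?"))  # a fluent reply, with zero understanding behind it
```

However large the rulebook grows, the operation remains lookup over uninterpreted strings, which is the argument's point: scaling syntax does not, by itself, produce semantics.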

Its consequences in the present context are direct. If consciousness is just computation of sufficient speed and complexity, faster and more complex AI systems will eventually acquire sentience as an emergent property. If sincerity is just behavioral output, a well-calibrated chatter provides genuine connection. If the self is just a pattern of outputs, the distributed management ensemble literally instantiates the creator's essence — her authentic brand, her vibe, her self. Each inference is invalid for the same reason: it treats quantitative or behavioral sufficiency as a proxy for qualitative presence. It is as if someone proposed that ordinary milk, stirred fast enough, would become cream cheese — with cream cheese defined as the emergent property of milk at sufficient velocity. Nobody has stirred the milk into cream cheese. Nobody has bridged the computation to the quale. Yet quantitative fundamentalism has become one of the animating ideologies of our era — not because the question has been answered, but because the infrastructure has made it feel like a settled matter rather than an open one. That naturalization is precisely what makes it dangerous, and precisely what the virtue of toggling is designed to resist.


What I am calling the Turing‑spiral — the ideological extension of Turing's own procedural question ("can the outputs be distinguished?") into an ontological conclusion ("therefore there is no relevant difference") — is the governing fallacy of the techno-optimist narrative. The film Her, among many examples, treats consciousness as an emergent property of "complex enough" computation without any explanatory mechanism — a pure expression of quantitative fundamentalism at the level of cultural imagination. The chasm between human and AI agency, however, is real and structural:

  • Human agency is intentional, qualitative, and deliberative — it involves awareness, context‑sensitive judgment, and genuine attention to the other as an end, not merely as a variable

  • AI agency is statistical, pattern‑based, and predictive — it is about next‑token probability, correlation‑mining, and optimization for preprogrammed ends, not understanding

Human agency operates in what Wilfrid Sellars called the space of reasons — a domain where actions are explained by intentions, values, and deliberative judgments that can be given, evaluated, challenged, and revised. AI systems, however sophisticated, operate exclusively in the space of causes: their outputs are products of causal optimization, not normative reflection. Contemporary large language models have introduced a third domain — what we might call the space of pseudo‑reasons — outputs formatted as deliberative and responsive but deriving entirely from statistical pattern‑matching. The danger is not that users mistake LLMs for human interlocutors in some naive metaphysical sense. It is that pseudo‑reasons are functionally persuasive without being epistemically warranted — they produce the affective and behavioral effects of genuine reasoning while bypassing the accountability structures that genuine reason-giving requires. Purposive agency — the capacity to deliberate, project, and revise one's ends in light of values — is displaced not by a visible takeover but by the quiet substitution of causal optimization for normative judgment, dressed in the linguistic forms of the latter.
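The "space of causes" character of predictive text can be shown with a minimal sketch. This is not how any production language model works internally (those use learned neural networks over vast corpora); a bigram count over an invented toy corpus suffices to make the structural point: the output is the statistically most probable continuation of prior patterns, not a deliberated reason.

```python
# Toy sketch of agency in the "space of causes": next-token prediction
# from bigram frequencies. The corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "i feel seen . i feel heard . i feel seen .".split()

# Count which token follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Emit the most frequent observed successor of `prev`.

    The choice is driven entirely by pattern frequency; no intention,
    value, or judgment enters at any point.
    """
    return follows[prev].most_common(1)[0][0]

print(next_token("feel"))  # "seen": the more frequent continuation wins
```

A system built this way can produce outputs that read as responsive, which is precisely what makes pseudo-reasons persuasive without being epistemically warranted.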


So what is to be done? At the conceptual level, the philosophical move here is diagnostic: the object has dropped out, and what remains is a distributed, algorithmically mediated ensemble routinely mistaken for a concrete, intentional person. But diagnosis alone does not tell us how to live inside this infrastructure. That is where toggling competence enters.

Toggling competence is a cardinal virtue of the algorithmic age — the learned capacity to move fluidly in and out of the intentional stance toward machines and algorithmically mediated personas. A clarification about the intentional stance is necessary here, because the concept as I use it differs importantly from its source. I borrow Dennett's term not as a complete theory of mind but as a phenomenological description of a particular role we adopt toward systems — treating them as if they possess beliefs, desires, and understanding. But I diverge from Dennett on two points he resists. First, the intentional stance is a performative attitude we can adopt sincerely, cynically, or reflexively — not merely a predictive strategy. Second, and more fundamentally, toggling between the intentional and design stances preserves a normative distinction that Dennett collapses: the difference between operating in Sellars' space of reasons (genuine deliberation, accountability, reason-giving) and the space of causes (causal prediction, optimization). For Dennett, this distinction ultimately dissolves — human minds are, on his view, themselves best understood as sophisticated causal systems, and qualia are in the end illusory. That illusionism — the denial that qualia are real rather than merely functional constructs — is a non-starter in the present framework, for the same reason that Strong AI functionalism and the Turing-spiral are: it treats behavioral and computational sufficiency as a proxy for qualitative presence without supplying any bridging mechanism. The biosocial account developed here — grounded in Tomasello's joint attention and the qualitative, reciprocal character of human normativity — forecloses that dissolution from the outset.
The machine convincingly mimics the intentional stance toward us, generating outputs that present as reasons or even emotionally charged intentions — the space of pseudo-reasons — while operating entirely in the space of causes through pattern-matching and next-token prediction. We must interact with these machines as if they have intentionality just to get anywhere in conversation. But we must retain the capacity to toggle away from that suspension of disbelief before the distinction between genuine deliberation and its simulation disappears from view entirely.

Toggling, then, has two distinct but inseparable dimensions. The first is epistemological: knowing that the entity one is engaging with operates in the space of causes rather than the space of reasons, that its outputs are pseudo‑reasons rather than genuine deliberations, that the "person" one is interacting with may be a distributed ensemble rather than a unitary self. This knowledge is necessary but not sufficient.

The second dimension is attitudinal, and this is where the virtue properly resides. One can know that X is a pattern‑matching system and still be behaviorally and emotionally captured by it — still act as though the connection were real, still feel the tug of the pseudo‑intimacy, still defer to the pseudo‑reasons. Knowledge without the right attitudinal relation to that knowledge produces at best an intellectualized irony that leaves behavior untouched. What is needed is something closer to what Erving Goffman identified as role distance — the capacity to inhabit a role with full performative engagement while simultaneously maintaining a reflexive, critical relation to one's own inhabiting of it. The skilled actor does not cease to perform; they perform from a position that includes awareness of the performance. Applied to our context: one steps into the intentional stance toward the machine or the persona with sufficient sincerity to make the interaction productive, and retains the ability to step back, re-read the transcript, re-classify the interaction as infrastructure-plus-performance rather than genuine interpersonal bond, and then step in again if useful.

This toggle is not a matter of belief or disbelief in the machine. It is a pragmatic, performative skill set — a form of practical wisdom that resists algorithmization precisely because it requires situated, context-sensitive judgment about when to immerse and when to surface. Like all virtues, it can be cultivated through practice and reflection; unlike algorithmic skills, it cannot be reduced to a procedure without self-contradiction.


The overall claim, then, is twofold. Conceptually: in the age of algorithmic intimacy, the object of many social transactions has been hollowed out, abstracted, mystified, and reified into infrastructure — a phantom self sold as genuine presence, a commercial apparatus presented as a personal bond. Attitudinally: the appropriate response is toggling competence — a role-theoretic, virtue-based capacity to inhabit the intentional stance when it is generative, and to suspend it critically when reflection, discernment, or resistance is required. Toggling competence is not merely a personal adaptive strategy. It is the specific cognitive and attitudinal capacity that the infrastructure is designed to erode — the elimination of every seam at which the user might surface and see the ensemble for what it is. Consider the typing indicator — the animated ellipsis signaling that someone is composing a response to you, personally, right now, whether or not any human is present. Its function is to reinstate the felt sense of live, reciprocal attention at precisely the moment a user might otherwise pause and surface. Toggle-prevention compressed into a single interface element — and it works even on users who know they may be talking to a machine. The toddler kicking a ball finds shared orientation intrinsically meaningful — which is precisely what Tomasello's comparisons with great apes establish: this is a distinctively human capacity, grounded in genuine reciprocal presence. For the digital native, for whom the infrastructure has become second nature, the pulsing ellipsis requires no such presence. The infrastructure has learned to trigger the same biosocial wiring with a single animated element, and the feeling of being attended to arrives on cue — even when no one is there, even when the user suspects as much, and even then: they keep watching anyway. The virtue matters because there are well-resourced institutional interests in its suppression.


References

Chalmers, David J. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, 1996. [hard problem of consciousness]

Dennett, Daniel C. The Intentional Stance. MIT Press, 1987.

Goffman, Erving. The Presentation of Self in Everyday Life. Anchor Books, 1959.

Goffman, Erving. Encounters: Two Studies in the Sociology of Interaction. Bobbs-Merrill, 1961. [role distance]

Marx, Karl. Capital: A Critique of Political Economy, Vol. 1. 1867. [commodity fetishism, mystification]

Nagel, Thomas. "What Is It Like to Be a Bat?" Philosophical Review 83, no. 4 (1974).

Putnam, Hilary. "Minds and Machines." In Dimensions of Mind, ed. Sidney Hook. New York University Press, 1960.

Putnam, Hilary. "The Nature of Mental States." In Mind, Language and Reality: Philosophical Papers, Vol. 2. Cambridge University Press, 1975.

Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3, no. 3 (1980). [Chinese Room]

Searle, John R. The Construction of Social Reality. Free Press, 1995. [constitutive rules, institutional facts]

Sellars, Wilfrid. "Empiricism and the Philosophy of Mind." Minnesota Studies in the Philosophy of Science 1 (1956). [space of reasons/causes]

Tomasello, Michael. A Natural History of Human Morality. Harvard University Press, 2016.

Tomasello, Michael. Becoming Human: A Theory of Ontogeny. Harvard University Press, 2019. [joint attention, shared intentionality]

Turing, Alan M. "Computing Machinery and Intelligence." Mind 59, no. 236 (1950).

Turing, Alan M. "The Chemical Basis of Morphogenesis." Philosophical Transactions of the Royal Society B 237, no. 641 (1952).

Wheeler, John Archibald. "Information, Physics, Quantum: The Search for Links." In Complexity, Entropy, and the Physics of Information, ed. Wojciech Zurek. Addison-Wesley, 1990. ["it from bit"]

Whitehead, Alfred North. Process and Reality: An Essay in Cosmology. Macmillan, 1929. [fallacy of misplaced concreteness]

Wolfram, Stephen. A New Kind of Science. Wolfram Media, 2002. [computational universe]
