
Who the Machine Says You Are



Algorithmic character, population management, and the philosophers we forgot to invite

On how AI inherited rhetoric’s most dangerous art, and what it means for all of us

by Kem-Laurin Lubin


There is an ancient exercise, once practiced by students in the rhetorical schools of Greece and Rome, called ethopoeia: the art of character making. Derived from ethos (character) and poiein (to make or create), it trained the young orator to compose speech as if it issued from another person, capturing not merely what that person might say but how they would say it, revealing the texture of their moral and psychological disposition. Lysias, the great Athenian logographer, was its acknowledged master. He wrote courtroom speeches that his clients delivered as their own, yet those speeches were so precisely fitted to the character of each speaker that juries found them credible. The technique depended on intimate knowledge: of the speaker, the audience, the occasion, the entire web of circumstances in which a characterization could land as persuasive rather than false.

Plato, characteristically, regarded ethopoeia with suspicion; he saw in it a species of trickery, a form of mimesis that could deceive an audience into trusting a fabricated persona. Aristotle was more sanguine. He believed every speaker engaged in ethopoeia whether they knew it or not, that the construction of character was simply what happened in the act of public address. For both, however, the crucial point was that ethopoeia involved a human agent making deliberate rhetorical choices in a context they could perceive and to which they were accountable. A logographer who fabricated a persona bore responsibility for the fabrication. The audience, if sufficiently attentive, could detect the artifice.


Now transpose this art to the present. Today, the construction of your character, the rendering of who you are for the purposes of consequential decisions, is performed not by a logographer who has studied you but by computational systems that have ingested your data. Your credit score, your insurance risk profile, your hirability as determined by applicant tracking software, the advertisements you are shown, the news you are permitted to see, the probability that you will commit a future crime (recidivism): All of these are acts of characterization. They are, in a precise and not merely metaphorical sense, acts of ethopoeia. The difference is that no human rhetor stands behind them, no one has composed your character with knowledge of your situation, and no one is accountable for the portrait that results.


What computational rhetoric reveals

The term algorithmic ethopoeia, emerging from my recent work in computational rhetoric at the University of Waterloo, names exactly this phenomenon: the process by which computational systems convert human data into digital characterizations, which are then subjected to algorithmic procedures and made to stand in for the person from whom the data was derived. These characterizations are not neutral; they encode values, make moral judgments, and distribute consequences. They are, in the language of classical rhetoric, ethotic: They pertain to character. And they are being produced at a scale and speed that would have dazzled even Gorgias.


This framework sits within the broader field of computational rhetoric, an interdisciplinary enterprise that applies the tools and categories of rhetorical theory to the analysis of how algorithms construct, circulate, and enforce particular representations of reality. The field is not large; it is, one might say, the province of people who read both Aristotle’s Rhetoric and papers on natural language processing, which is to say a rather small club. But its central insight is powerful and, for anyone with philosophical training, immediately recognizable: Algorithms are not neutral instruments. They make arguments. They privilege certain outcomes over others. They construct the very categories through which we apprehend the world, and they do so in ways that are, in the fullest sense of the word, rhetorical.


Consider the implications. When a hiring platform’s screening algorithm evaluates your résumé, it is not simply matching keywords; it is constructing a characterization of you as a certain kind of worker, possessed of certain competencies and dispositions, and it is arguing, implicitly, that this characterization is a reliable basis for decision. When a recidivism prediction tool assigns a risk score to a defendant, it is performing an act of prosopopoeia: giving voice to a statistical construct that then speaks on behalf of the actual person before the court, often more persuasively than the person themselves. The algorithm says, in effect: This is who you are. And the institutions that deploy it accept this characterization as authoritative precisely because it arrives wearing the mask of mathematical objectivity.
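
The screening example can be made concrete with a few lines of code. What follows is a deliberately crude sketch, not any vendor’s actual system; every keyword, weight, and threshold is invented for illustration. But each constant is, visibly, a judgment of character:

```python
# A deliberately crude resume screener, invented for illustration;
# no real vendor works exactly this way. Every constant below is a
# characterization choice: which words count, how much they count,
# and where "worth an interview" begins.

KEYWORD_WEIGHTS = {
    "python": 2.0,
    "leadership": 1.5,
    "agile": 1.0,
}
GAP_PENALTY = 1.0   # an assumption about what an employment gap "means"
THRESHOLD = 3.0     # where the cutoff falls

def screen(resume_text: str, months_unemployed: int) -> bool:
    """Return True if the system characterizes the candidate as interview-worthy."""
    text = resume_text.lower()
    score = sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)
    score -= GAP_PENALTY * (months_unemployed // 6)
    return score >= THRESHOLD

# The same resume, read twice:
print(screen("Python developer, agile teams", months_unemployed=0))   # True
print(screen("Python developer, agile teams", months_unemployed=18))  # False
```

Nothing about the candidate changes between the two calls; only the system’s reading of an employment gap does. That reading is the ethopoeia.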


The consequences are not abstract

It would be convenient to treat all this as a matter of merely theoretical interest, a clever transposition of ancient categories onto modern technology, the sort of thing one publishes and then discusses at conferences. But the consequences of algorithmic ethopoeia are brutally concrete, and they fall disproportionately on those least equipped to contest them.


The COMPAS algorithm, used in American courtrooms to predict recidivism, was found to misclassify Black defendants as high risk at nearly twice the rate of white defendants. An AI-driven applicant-screening tool developed by Workday became the subject of a landmark class-action lawsuit after a plaintiff alleged that it systematically discriminated against applicants based on age, race, and disability, issuing rejection notifications during non-business hours in a pattern that suggested no human being had ever reviewed the applications. In 2025, a study published in collaboration with Cedars-Sinai found that leading large language models produced less effective treatment recommendations when a patient’s race was identified as African American; the diagnostic reasoning showed no comparable disparity, but the treatment recommendations, the part that determines what actually happens to the person, did.


These are not edge cases or glitches awaiting a technical fix. They are the predictable consequences of a system in which the characterization of human beings has been delegated to processes that have no understanding of character, no acquaintance with the persons they characterize, and no accountability for the characterizations they produce. They are, if you like, the dark side of Aristotle’s observation that ethopoeia is inescapable: If every act of public address constructs a character, then every algorithmic decision that bears on a person’s life is constructing one too. The question is whether we are paying attention to what kind of character is being constructed, by whom, for whose benefit, and on what basis.


The poverty of “ethics” without philosophy

The technology industry’s response to these problems has been to develop what it calls “AI ethics,” a phrase that, in practice, tends to denote a set of procedural commitments: fairness audits, bias detection tools, transparency requirements, responsible AI frameworks. These are not nothing. But they are, from a philosophical standpoint, remarkably thin. They operate almost entirely at the level of measurement and mitigation, asking whether an algorithm produces disparate outcomes across demographic categories, without ever asking the prior question: By what right does this system characterize persons at all? What conception of the human being underwrites the assumption that a person can be adequately represented by a data profile? What theory of justice governs the distribution of consequences that flow from such representations?


These are philosophical questions, and they cannot be answered by engineering. The fairness metrics that dominate the AI ethics discourse (demographic parity, equalized odds, calibration) are themselves value-laden choices that encode particular normative commitments, and the choice among them is not a technical matter but a moral and political one. To adopt demographic parity as your standard of fairness is to make a substantive claim about what justice requires, a claim that admits of philosophical contestation and cannot be validated by running another test. The same is true of the decision to use algorithmic characterization in the first place: The assumption that it is appropriate to reduce a person to a risk score, an employability metric, or a consumer profile is a metaphysical commitment, not a technical one. It presupposes a particular answer to the question of what a human being is, and that answer deserves to be examined, not merely operationalized.
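
The point about metrics can be shown with arithmetic. The numbers below are invented, but they describe a single set of predictions that satisfies demographic parity exactly while violating equalized odds; which verdict counts as “fairness” depends entirely on which standard one has already chosen:

```python
# Toy data, invented numbers: one set of predictions, two fairness verdicts.
# Group A has 4 qualified members out of 8; group B has 2 out of 8.
# The model selects exactly 4 people from each group.

def rates(y_true, y_pred):
    """Selection rate, true-positive rate, false-positive rate."""
    positives = sum(y_true)
    negatives = len(y_true) - positives
    selection = sum(y_pred) / len(y_pred)
    tpr = sum(p for t, p in zip(y_true, y_pred) if t) / positives
    fpr = sum(p for t, p in zip(y_true, y_pred) if not t) / negatives
    return selection, tpr, fpr

y_true_a = [1, 1, 1, 1, 0, 0, 0, 0]   # group A: who was in fact qualified
y_pred_a = [1, 1, 1, 1, 0, 0, 0, 0]   # whom the model selected
y_true_b = [1, 1, 0, 0, 0, 0, 0, 0]   # group B
y_pred_b = [1, 1, 1, 1, 0, 0, 0, 0]

sel_a, tpr_a, fpr_a = rates(y_true_a, y_pred_a)
sel_b, tpr_b, fpr_b = rates(y_true_b, y_pred_b)

print(f"demographic parity gap:  {abs(sel_a - sel_b):.2f}")  # 0.00 (passes)
print(f"equalized odds, TPR gap: {abs(tpr_a - tpr_b):.2f}")  # 0.00
print(f"equalized odds, FPR gap: {abs(fpr_a - fpr_b):.2f}")  # 0.33 (fails)
```

No further test adjudicates between the 0.00 and the 0.33. The choice of standard is the moral argument, and it is made before any code runs.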


This is where the concept of algorithmic ethopoeia proves its worth. By naming the characterization function of algorithms in the language of rhetoric, it makes visible something that the purely technical vocabulary of “bias” and “fairness” tends to obscure: that what is at stake is not merely the accuracy of a classification but the authority to define who someone is. In classical rhetoric, the question of who gets to speak, whose character is constructed and by whom, was understood as fundamentally political. The same is true in the algorithmic context, only more so, because the characterizations produced by computational systems operate at a scale and with a degree of institutional authority that Lysias could never have imagined.


Values are architectural decisions

There is a deeper point here, one that the humanities crowd will grasp intuitively but that remains oddly difficult to communicate to the engineers and product managers who build these systems. Values are not an add-on. They are not a layer of ethical review to be applied after the technical architecture has been determined. They are in the architecture. Every design decision, from the choice of training data to the selection of optimization targets to the granularity of the categories used to classify persons, embeds a set of assumptions about what matters, what counts as relevant, and what can be safely ignored. These assumptions are, in the precise sense, ideological: They reflect a particular way of seeing the world, one that is no less partial and contestable for being expressed in code rather than in prose.
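
A schematic example, with invented fields and weights, makes the architectural point: change nothing but the optimization target and the system’s “best candidate” changes with it. There is no bug here and no biased dataset; the divergence is the design:

```python
# Two candidates, two optimization targets; all fields and weights invented.

candidates = [
    {"name": "A", "years_experience": 10, "referral": True,  "career_switcher": False},
    {"name": "B", "years_experience": 3,  "referral": False, "career_switcher": True},
]

def optimize_for_speed(c):
    # Target: fast, low-friction hiring. Quietly privileges insiders.
    return c["years_experience"] + (5 if c["referral"] else 0)

def optimize_for_growth(c):
    # Target: long-run potential. Quietly privileges a different person.
    return c["years_experience"] + (8 if c["career_switcher"] else 0)

print(max(candidates, key=optimize_for_speed)["name"])   # A
print(max(candidates, key=optimize_for_growth)["name"])  # B
```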


The field of computational rhetoric, and specifically the framework of algorithmic ethopoeia, offers a language for making this visible. The concept of “ethotic heuristics” for AI-powered design represents something genuinely novel: a set of interpretive tools, drawn from rhetorical theory and feminist data studies, that can identify the points in an AI system’s lifecycle where ethical intervention is possible, where the characterization of persons can be interrogated, contested, and redirected. This is not a replacement for technical expertise; it is its necessary complement. It is the kind of work that can only be done by people who have been trained to read texts closely, to attend to the unstated premises of an argument, to recognize when a claim to neutrality is itself a rhetorical strategy. It is, in other words, the work of the humanities.


Why philosophy belongs at the table

The technology industry has, in recent years, made a series of gestures toward interdisciplinarity: ethics boards, responsible AI teams, consultations with social scientists. Many of these efforts have been half-hearted, underfunded, or quietly dissolved when they became inconvenient. Google’s dismissal of Timnit Gebru, a leading AI ethics researcher, remains the most visible example, but the pattern is widespread. The message, whether intended or not, is clear: The industry will tolerate ethical reflection so long as it does not interfere with the business of building and deploying systems.


What is needed is not another ethics board but a genuine recognition that the questions posed by algorithmic characterization are philosophical questions, requiring philosophical competence, and that this competence cannot be acquired by reading a summary or attending a workshop. The tradition that runs from the Rhetoric through the progymnasmata to contemporary rhetorical theory offers resources for understanding what it means to construct a character, to impersonate a person, to make an audience trust a representation. The tradition that runs from the Nicomachean Ethics through Kant to contemporary moral and political philosophy offers resources for evaluating whether such constructions are just. The tradition of phenomenology, from Husserl to Heidegger to the post-phenomenologists of technology, offers resources for understanding what it means for human experience to be mediated by systems that pre-interpret the world before we encounter it.


None of these traditions will tell an engineer how to tune a model. That is not the point. The point is that without the questions these traditions make it possible to ask, the tuning of models proceeds on assumptions that have never been examined, toward ends that have never been justified, producing characterizations of persons that no one has authorized and no one can contest. This is not a technical failure; it is a failure of the culture that produces the technology. And it will not be corrected by more technology.


The ancient question in modern dress

There is a line in Aristotle’s Rhetoric that deserves to haunt anyone who thinks seriously about these systems: the observation that ethopoeia works best when the audience does not know it is happening. The force of a characterization depends on its appearing natural, as though it merely reflects what is, rather than constructing what it purports to describe. This is the oldest insight of rhetorical theory, and it has never been more relevant. The algorithmic characterizations that shape our lives, our access to credit, to employment, to housing, to justice, present themselves as objective assessments, as neutral readings of data. They conceal their rhetorical nature behind the authority of computation. And because they do, they are extraordinarily difficult to challenge.


The task for philosophy, and for the humanities more broadly, is to make this concealment visible. To insist, against the prevailing technocratic common sense, that the question of how persons are characterized is a moral and political question, not a technical one. To bring to the table a vocabulary, a tradition of inquiry, and a set of intellectual habits that are uniquely suited to the analysis of persuasion, representation, and the construction of meaning. To refuse the assumption that these matters can be handled by adding a fairness constraint to an objective function.
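
For readers outside the field, the construction being refused has a standard schematic shape, given here with generic placeholders rather than any particular system’s notation:

```latex
\min_{\theta}\; L(\theta)
  \;+\; \lambda \,\Bigl|\,\Pr(\hat{y}=1 \mid A=a) \;-\; \Pr(\hat{y}=1 \mid A=b)\,\Bigr|
```

Everything this essay has argued lives in what the formula takes as given: that the person reduces to a prediction, that the groups reduce to a variable, that justice reduces to the penalty term, and that someone, unnamed, chooses the weight λ.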


Ethopoeia, the ancient art of character making, has been automated. The speech that once issued from the logographer’s careful study of a client now issues from a model trained on billions of data points, constructing characterizations of persons it has never met, for audiences that take its outputs as given. Plato would not have been surprised. He warned us, twenty-four centuries ago, that the power to fabricate character is the power to deceive. What he could not have foreseen is that this power would one day be exercised not by a human being, with all the fallibility and accountability that implies, but by a system that cannot be interrogated, cannot be shamed, and does not know what it is doing.


If philosophers are not at the table when these systems are designed, deployed, and regulated, it will not be because they had nothing to say. It will be because no one thought to ask them.



[You can read Kem-Laurin's PhD thesis on "Ethotic Heuristics in Artificial Intelligence" here.]
