A Clearing in the Woods
It’s strange what sticks. Of all the things I learned growing up in the Texas Hill Country, where life was ruled by small business, church, and football, the most enduring came from a forgotten corner of our tiny, poor school: Logic and Rhetoric.
We didn’t have much. Our classrooms baked under tin roofs, the Texas heat pressing at the glass. The curriculum was thin, the resources thinner. But someone, some long-forgotten administrator, had carved out space for syllogisms, fallacies, and the craft of argument. It made no sense in that world of rural practicalities, but it stuck.
It was, I would later realize, a scabbard. Martin Luther wrote that languages are "the scabbard in which the sword of the Spirit is sheathed." The scabbard gives the sword a home, a form, a way to be carried safely and drawn with purpose. My training in logic and rhetoric was like that. It was a structure for containing and directing the messy energy of ideas.
I find myself thinking of that scabbard now, in an age defined by a new and powerful kind of language, one that emanates not from a human soul but from the silent, intricate workings of massively networked pieces of silicon. We are all, in a sense, freshmen again, confronting a technology that can speak with astonishing fluency, a tool of immense power and subtle danger. We see it reason, write, and create. But to wield this new sword without a scabbard is to risk cutting ourselves deeply. This, then, is an attempt to reclaim that old and practical art. It is an exploration of rhetoric, not as a historical curiosity, but as an essential tool for navigating the strange new landscape of AI.
A companion video to this essay is available on Revenant Research's new YouTube channel.
The Art of Enchanting the Soul
To reclaim rhetoric, we must first clear away the debris of its modern definition. Today, the word is often used as a pejorative, synonymous with empty political speech. But in its classical conception, rhetoric is a noble and comprehensive art, a discipline concerned with the very core of human communication and civic life. Plato called it the "art of enchanting the soul." It is the craft of moving a mind from one place to another through discourse. To understand this art, we must turn to the three thinkers who gave it its enduring form: Aristotle, Cicero, and Quintilian.
Aristotle's Framework: The Discovery of Persuasion
Aristotle provided the philosophical bedrock for rhetoric. He defined it as "the faculty of discovering in any particular case all of the available means of persuasion." For Aristotle, rhetoric was a technê, an art or craft with a rational method, operating in the realm of probability where absolute certainty is not possible. It is the counterpart to dialectic, a tool for navigating the contingent world of human affairs.
He identified three fundamental appeals that a speaker must master to discover these means of persuasion:
Logos: The appeal to reason. This is the argument itself, its logical structure, the evidence presented. It is the intellectual substance of the speech.
Pathos: The appeal to emotion. This involves understanding the emotional state of the audience and crafting the message to resonate with their feelings, be it pity, anger, fear, or joy.
Ethos: The appeal based on character. This is the credibility and trustworthiness of the speaker. An audience is more likely to be persuaded by someone they perceive as knowledgeable, benevolent, and virtuous.
The working machinery of logos is built from two key components: the syllogism and the enthymeme. A syllogism is a complete, formal structure where two premises lead to a necessary conclusion. The classic example is: All humans are mortal. Socrates is a human. Therefore, Socrates is mortal. It’s clean. Unshakable. But syllogisms come in many forms, some valid, some that fall apart the moment you scrutinize them.
The most solid is the Barbara syllogism: the universal affirmative. All mammals are warm-blooded. All dogs are mammals. Therefore, all dogs are warm-blooded. It holds.
There’s the Celarent syllogism: the universal negative. No reptiles are warm-blooded. All snakes are reptiles. Therefore, no snakes are warm-blooded. It holds.
And there’s Modus Ponens: the conditional form. If it rains, the ground gets wet. It rains. Therefore, the ground gets wet. This is the backbone of clear reasoning.
But it's easy to misstep.
There’s Affirming the Consequent: a formal fallacy. If it rains, the ground gets wet. The ground is wet. Therefore, it rained. Sounds reasonable, but it’s broken—the ground could be wet for a dozen other reasons.
There’s the Undistributed Middle. All cats are mammals. All dogs are mammals. Therefore, all dogs are cats. The middle term is never properly connected. It collapses.
And there's the Illicit Major. All dogs are mammals. No cats are dogs. Therefore, no cats are mammals. It feels plausible, but the conclusion claims more about mammals than the premises ever established. The logic slips.
Syllogisms aren’t just language games. They force you to build carefully, with structure that holds even if you strip the words away.
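For readers who like to see the skeleton under the sentences, the same forms can be written in bare logical notation, with S, M, and P standing for the minor, middle, and major terms; the valid forms derive their conclusions, the fallacies do not:

```latex
\begin{aligned}
\textbf{Barbara:}\quad & \forall x\,(M(x)\to P(x)),\; \forall x\,(S(x)\to M(x)) \;\vdash\; \forall x\,(S(x)\to P(x))\\
\textbf{Celarent:}\quad & \forall x\,(M(x)\to \lnot P(x)),\; \forall x\,(S(x)\to M(x)) \;\vdash\; \forall x\,(S(x)\to \lnot P(x))\\
\textbf{Modus ponens:}\quad & P\to Q,\; P \;\vdash\; Q\\[4pt]
\textbf{Affirming the consequent:}\quad & P\to Q,\; Q \;\nvdash\; P\\
\textbf{Undistributed middle:}\quad & \forall x\,(S(x)\to M(x)),\; \forall x\,(P(x)\to M(x)) \;\nvdash\; \forall x\,(S(x)\to P(x))\\
\textbf{Illicit major:}\quad & \forall x\,(M(x)\to P(x)),\; \forall x\,(S(x)\to \lnot M(x)) \;\nvdash\; \forall x\,(S(x)\to \lnot P(x))
\end{aligned}
```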
Rhetoric, however, operates in the more complex world of human affairs and probabilities. Here, Aristotle's great insight was the enthymeme, which he called "the body of proof" and the strongest of rhetorical arguments. An enthymeme is often described as a truncated or rhetorical syllogism. Its power comes from what it leaves unsaid. The speaker omits a premise, usually a piece of common knowledge or a widely held belief, and relies on the audience to supply it mentally. For instance, a speaker might say, "Socrates is mortal because he's human," leaving the major premise "All humans are mortal" implied. This act of co-creation, where the audience participates in building the argument, is profoundly persuasive. It connects the speaker's logic to the audience's own worldview, making the conclusion feel not just stated, but shared.
Cicero's System: The Five Canons
If Aristotle provided the philosophy, the Roman statesman and orator Marcus Tullius Cicero provided the system. Cicero, who believed the perfect orator must have a firm foundation of general knowledge, saw rhetoric as "one great art comprised of five lesser arts." These five canons, first codified in Roman texts like the anonymous Rhetorica ad Herennium and Cicero's own works, represent a complete, structured process for building an argument from its conception to its final performance.
The five canons are:
Inventio (Invention): The process of discovering arguments. This involves research, brainstorming, and identifying the core issues and available proofs relevant to the topic and audience.
Dispositio (Arrangement): The art of structuring the argument. A classical oration had distinct parts: an introduction, a statement of facts, the argument, the refutation of counterarguments, and a conclusion. This canon governs the logical and persuasive flow of the speech.
Elocutio (Style): This concerns the choice of words, the use of figures of speech like metaphors and antithesis, and the overall tone and composition of the language to make the argument clear, memorable, and moving.
Memoria (Memory): In an age before teleprompters, the ability to memorize a lengthy and complex speech was a critical skill. This canon included various mnemonic techniques to aid recall.
Pronunciatio (Delivery): The final art of performance. This involves the use of voice, gesture, posture, and expression to convey the argument effectively and connect with the audience.
Cicero's system transformed rhetoric from a purely philosophical inquiry into a practical, teachable discipline for public life, especially in the legal and political arenas of the Roman Republic.
Quintilian's Moral Vision: The Good Man Speaking Well
A century after Cicero, the Roman teacher Quintilian added the final, crucial layer to the classical tradition: an unwavering emphasis on morality. His work, Institutio Oratoria ("The Training of an Orator"), was perhaps the most influential textbook on education ever written. For Quintilian, rhetoric was inextricably linked to virtue.
Quintilian believed that the ideal orator must first be a good person. Since the function of the orator is to advance truth and good government, eloquence must be fused with a virtuous character. His goal was to educate the "perfect orator", a virtuous and eloquent statesman-philosopher who could wield the power of speech for the betterment of society.
These three perspectives—Aristotle's discovery of persuasive means, Cicero's systematic process, and Quintilian's moral imperative—form the pillars of classical rhetoric. It is a discipline that weds logic with emotion, structure with style, and skill with character. It is a complete human art for navigating the complexities of civic life.
A Bad Egg from a Bad Crow
The art of rhetoric was born of the social upheaval that swept Sicily in the 5th century BCE. Around 467 BCE, the city of Syracuse overthrew its tyrants, and in the chaotic aftermath a fundamental problem arose: how to restore the land that the despots had seized and redistributed? Families who had been dispossessed for years now came forward to reclaim their property, but with no clear records, the city was flooded with conflicting claims. Under the new democratic system, citizens had to argue their own cases in court. There were no hired lawyers to speak for them. In this environment, where a man's livelihood depended on his ability to speak persuasively, a new kind of teacher emerged. His name was Corax.
The most famous story to emerge from this period, though certainly apocryphal, perfectly captures the spirit of this new art. Corax, whose name in Greek means "crow," took on a particularly talented student named Tisias, which means "egg." The agreement was that Tisias would pay his tuition fee only after he won his first court case. Having completed his studies, however, Tisias refused to pay. Corax, the master, sued his student.
In court, Tisias presented a dilemma. He argued: "If I win this case, then by the court's own verdict, I do not have to pay Corax. If I lose this case, then I have not yet won my first case, and so by the terms of our original agreement, I do not have to pay Corax. In either outcome, I owe him nothing".
Corax offered a perfect counter-dilemma. He argued: "If Tisias loses this case, then the court has ordered him to pay me. If he wins this case, he has won his first case, and so by the terms of our agreement, he must pay me. In either outcome, he owes me my fee."
The judges, faced with two perfectly constructed, mutually exclusive arguments, were completely flummoxed. The arguments were in a state of equipollence (equal in force), a perfect counterweight to each other. Unable to render a verdict, they threw both men out of court, muttering the ancient proverb, "From a bad crow, a bad egg."
This story is more than a clever anecdote. It reveals the foundational principles of the rhetorical art that was emerging. The technique employed by both Corax and Tisias is a form of antilogic, the practice of arguing opposing sides of an issue with equal plausibility. This was a core discipline of the early Sophists, teachers of rhetoric like Protagoras. Its purpose was to develop a crucial intellectual skill: the ability to suspend judgment. By constructing and deconstructing arguments on both sides of a question, the student of rhetoric learns to see beyond a single, seemingly obvious "truth" and to appreciate the complex, probabilistic nature of human affairs.
The paradox was the intellectual recognition that in the world of human action, truth is rarely absolute and must be approached through the careful weighing of competing probabilities.
The Fading Light
For nearly two millennia, the tradition of rhetoric that Cicero and Quintilian championed remained at the heart of Western education. To be educated was to be trained in the arts of discourse: to read, write, and speak with clarity, structure, and persuasive force. But over the last two centuries, this light has faded. The discipline that once formed the capstone of a liberal education has been marginalized, fragmented, and misunderstood, leaving a significant void in our intellectual toolkit.
The decline began with a profound philosophical shift in the 19th century, which gained dominance in the 20th: the rise of logical positivism. This school of thought drove a sharp wedge between "facts" and "values." Truth, it was argued, could only be associated with empirically verifiable, scientific knowledge. Moral and ethical claims, which could not be proven in a laboratory, were relegated to the subjective realm of opinion and emotion.
This new definition of knowledge had devastating consequences for rhetoric. If truth resided only in "cold hard facts" and pure logic, then persuasion became not only unnecessary but suspect. The classical model, which balanced logos, pathos, and ethos, was dismantled. Logos, the appeal to logic and evidence, was retained and elevated as the sole legitimate mode of discourse. But pathos and ethos were cast out.
The consequences of this loss are significant. We have become a rhetorically impoverished society. A culture that believes "the facts speak for themselves" is dangerously naive. It fails to recognize that facts are always presented within a framework, selected and arranged to make a point. It creates a vacuum where sophisticated forms of persuasion can operate invisibly. When people are not taught the language to identify and analyze appeals to emotion and character, they do not become immune to them, they become more susceptible.
This leads to a breakdown in the quality of civic discourse. We lose the shared understanding of what constitutes a good argument. The result is what we see so often today: a public sphere characterized not by reasoned debate, but by mutual incomprehension and anger.
By abandoning the formal study of rhetoric, we have allowed the gritty work of inquiry to die. We have disarmed ourselves. We have left ourselves without the critical framework needed to analyze the complex, persuasive messages that bombard us daily, a vulnerability that becomes acutely dangerous as we enter an age of persuasive AI systems that operate at a scale and with a subtlety that Cicero himself could never have imagined.
The Ghost in the Machine: An Anthropology of the Large Language Model
A Cathedral of Probabilities
To understand the challenge that AI, and Large Language Models (LLMs) specifically, pose to our classical understanding of communication, we must first venture inside the machine itself. We must approach it not with the awe of a user, but with the grounded curiosity of an anthropologist, seeking to understand the structure of its world and the rules that govern its behavior. An LLM is not a mind, but a mathematical architecture of immense scale and complexity: a cathedral built of probabilities.
The process begins by translating our messy, vibrant human language into the sterile, precise language of mathematics. This first step is called tokenization. An input sentence is broken down into smaller pieces, or tokens, which can be words, parts of words, or even individual characters.
Tokens are assigned numerical IDs and then transformed into embeddings. An embedding is a dense vector, a long list of numbers, that represents the token in a high-dimensional mathematical space. The key property of these embeddings is that tokens with similar meanings are located close to each other in this space. The vector for "king" will be near "queen," and the relationship between "king" and "queen" will be mathematically similar to the relationship between "man" and "woman." This process captures rich semantic information, turning words into points in a conceptual map.
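To make the geometry concrete, here is a toy sketch. The four-dimensional vectors below are invented for illustration; real models learn embeddings with hundreds or thousands of dimensions, but the relationships behave the same way:

```python
import numpy as np

# Toy illustration only: these 4-dimensional vectors are made up for the example.
# Real models use learned embeddings with far more dimensions.
embeddings = {
    "king":   np.array([0.90, 0.75, 0.10, 0.05]),
    "queen":  np.array([0.88, 0.80, 0.12, 0.60]),
    "man":    np.array([0.30, 0.20, 0.05, 0.02]),
    "woman":  np.array([0.28, 0.25, 0.07, 0.57]),
    "banana": np.array([0.01, 0.05, 0.95, 0.10]),
}

def cosine(a, b):
    """Similarity of two vectors: near 1.0 means same direction, near 0.0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words sit close together in the space...
print(cosine(embeddings["king"], embeddings["queen"]))   # high
print(cosine(embeddings["king"], embeddings["banana"]))  # low

# ...and relationships are (approximately) directions: king - man + woman lands near queen.
analogy = embeddings["king"] - embeddings["man"] + embeddings["woman"]
print(cosine(analogy, embeddings["queen"]))              # highest match in this toy set
```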
The architectural heart of a modern LLM is the Transformer, a type of deep learning model introduced in a landmark 2017 paper titled "Attention Is All You Need". The Transformer architecture is a neural network composed of many interconnected layers. Unlike its predecessors, such as Recurrent Neural Networks (RNNs), which process text sequentially one token at a time, the Transformer can process all tokens in an input sequence simultaneously. This parallel processing is far more efficient and allows the model to build a more holistic understanding of the context of the entire sentence or document at once.
At its core, the task of an LLM is stunningly simple: it predicts the next token in a sequence. Given a prompt, like "The art of the good man speaking," the model calculates a probability score for every single token in its vocabulary that could possibly come next. It might assign a high probability to "well," a lower probability to "eloquently," and a near-zero probability to "banana." It then selects a token based on this probability distribution and appends it to the sequence. The new sequence—"The art of the good man speaking well"—becomes the input for the next step, and the process repeats, token by token, to generate a complete response.
This entire operation is statistical. The model has been trained on a colossal corpus of text from the internet, books, and other sources—trillions of words. Through this training, it has learned the intricate patterns of human language: grammar, syntax, facts, conversational styles, and the relationships between concepts. It "knows" that "well" is a likely completion of that phrase because it has seen similar patterns countless times in its training data. It is, in essence, the most sophisticated pattern-matching and sequence-prediction machine ever built.
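Here is a toy sketch of that generation loop, with a hand-built probability table standing in for the neural network that a real model would use to score its entire vocabulary:

```python
import random

# Toy illustration of the generation loop described above. The probabilities are invented;
# a real model computes them over a vocabulary of tens of thousands of tokens.
def next_token_distribution(sequence):
    """Return a made-up probability distribution over possible next tokens."""
    if sequence.endswith("speaking"):
        return {"well": 0.85, "eloquently": 0.14, "banana": 0.01}
    return {".": 1.0}

def generate(prompt, max_new_tokens=5):
    sequence = prompt
    for _ in range(max_new_tokens):
        dist = next_token_distribution(sequence)
        # Sample one token according to its probability, append it, and repeat.
        token = random.choices(list(dist), weights=dist.values())[0]
        if token == ".":
            return sequence + "."
        sequence += " " + token
    return sequence

print(generate("The art of the good man speaking"))
# Most runs: "The art of the good man speaking well."
```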
Paying Attention
The breakthrough that allows the Transformer architecture to achieve its remarkable capabilities is a mechanism called self-attention. It is the engine that allows the model to weigh the importance of different words in the input sequence when processing any given word, thereby capturing the rich, contextual relationships that define meaning. The name of the paper that introduced it says it all: "Attention Is All You Need".
To understand how self-attention works, we must visualize each token in our input sequence not as a single point, but as an entity that generates three distinct vectors for each interaction: a Query (Q), a Key (K), and a Value (V). These vectors are created by multiplying the token's embedding by three separate weight matrices that are learned during the model's training. We can think of them this way:
The Query vector represents the current token's question: "What other tokens in this sentence are relevant to me and my meaning right now?"
The Key vector is like a label on every other token in the sentence. It announces: "This is the kind of information I have to offer."
The Value vector contains the actual content or meaning of that token, the substance to be passed along if a connection is made.
The attention process unfolds in a series of mathematical steps. For a specific token we are focusing on (let's say, the word "it" in the sentence "The animal didn't cross the street because it was too tired"), the model takes the Query vector of "it" and calculates a score with the Key vector of every other word in the sentence (including itself). This score is typically a dot product: a mathematical operation that measures similarity between vectors. A high score between the Query of "it" and the Key of "animal" indicates a strong relevance. A low score between "it" and "street" indicates low relevance.
These raw scores are then passed through a scaling step and a softmax function, which normalizes them into attention weights. These weights are all positive numbers that add up to 1. They represent a distribution of "attention." The word "animal" might receive an attention weight of 0.8, "tired" might get 0.15, and all other words might get very small weights.
Finally, the output for the token "it" is calculated as a weighted sum of all the Value vectors in the sentence. The Value vector of "animal" is multiplied by its high attention weight (0.8), the Value of "tired" by its smaller weight (0.15), and so on. The result is a new vector for "it" that is now richly informed by the context of the words it is paying attention to, primarily "animal." The model has effectively "learned" that "it" refers to "animal" in this context.
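Here is a minimal sketch of a single attention head in code, with small random matrices standing in for the learned weights. It is an illustration of the mechanism, not a production implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_k = 8, 4          # tiny dimensions for illustration; real models are far larger
n_tokens = 5                 # e.g. a five-token sentence

# Stand-ins for learned quantities: token embeddings and the Q/K/V weight matrices.
X   = rng.normal(size=(n_tokens, d_model))
W_q = rng.normal(size=(d_model, d_k))
W_k = rng.normal(size=(d_model, d_k))
W_v = rng.normal(size=(d_model, d_k))

def softmax(scores):
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# 1. Each token produces a Query, a Key, and a Value vector.
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# 2. Dot products of Queries with Keys measure relevance; scaling keeps the values stable.
scores = Q @ K.T / np.sqrt(d_k)           # shape: (n_tokens, n_tokens)

# 3. Softmax turns each row into attention weights that sum to 1.
weights = softmax(scores)

# 4. Each token's output is a weighted sum of all Value vectors.
output = weights @ V                      # shape: (n_tokens, d_k)

print(weights.round(2))   # row i: how much token i attends to every token in the sentence
print(output.shape)
```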
The Transformer doesn't just do this once. It performs this calculation using multiple, independent sets of Q, K, and V weight matrices in parallel. This is called Multi-Head Attention. Each "attention head" can learn to focus on different kinds of relationships. One head might learn to track pronoun-antecedent relationships, another might track verb-subject relationships, and another might track more abstract semantic connections. The outputs from all these heads are then concatenated and processed, giving the model a much richer, multi-faceted understanding of the text.
There is one final piece to this puzzle. Because the Transformer processes all tokens at once, it has no inherent sense of their order. To solve this, a vector called a positional encoding is added to each token's initial embedding. This vector provides the model with information about the position of each word in the sequence (e.g., this is the 1st word, this is the 2nd word, etc.), allowing the attention mechanism to consider proximity and word order when calculating relationships.
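A sketch of the fixed sinusoidal encodings used in the original Transformer paper follows; many newer models use learned or rotary positional schemes instead, but the purpose is the same:

```python
import numpy as np

def sinusoidal_positional_encoding(n_positions, d_model):
    """Fixed sinusoidal encodings from the original Transformer paper: each position
    gets a unique pattern of sine/cosine values that the attention layers can use to
    recover word order and relative distance."""
    positions = np.arange(n_positions)[:, None]        # 0, 1, 2, ...
    dims = np.arange(0, d_model, 2)[None, :]
    angles = positions / (10000 ** (dims / d_model))
    pe = np.zeros((n_positions, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# The encoding is simply added to each token's embedding before the first layer:
# embeddings = token_embeddings + sinusoidal_positional_encoding(n_tokens, d_model)
print(sinusoidal_positional_encoding(5, 8).round(2))
```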
This intricate dance of queries, keys, and values, repeated across multiple heads and multiple layers, is how the LLM builds its complex, contextual representation of language. It is a purely mathematical process, but one that is astonishingly effective at simulating what we, as humans, call understanding.
The Illusion of Reason
Having peered into the mathematical heart of the Large Language Model, we can now place it alongside the classical art of rhetoric and see the profound chasm that separates them. The LLM can produce text that appears logical, empathetic, and coherent, creating a powerful illusion of a reasoning mind at work. But this is the ghost in the machine. The process that generates the text is fundamentally alien to the human process of deliberation that underpins logic and rhetoric.
An LLM's "understanding" is a function of statistical correlation. It knows that "king" is related to "queen" because their vectors are proximate in a vast, multi-dimensional geometric space. Its "attention" is not conscious focus but a calculated weight that amplifies the signal of one vector's influence on another. Its "reasoning" is not a process of logical deduction from first principles, but a probabilistic chain of token prediction, guided by patterns learned from its training data. The model is, at its core, an indifferent pattern-matching engine. It has no beliefs, no intentions, no comprehension of truth, and no sense of the world to which its words refer.
Classical rhetoric, by contrast, is an embodied, intentional human art. It is, as Thomas B. Farrell defined it, "an acquired competency, a manner of thinking that invents possibilities for persuasion, conviction, action, and judgments". It is a process rooted in human experience, ethics, and psychology. A rhetorician considers the audience (pathos), establishes their own credibility (ethos), and constructs an argument (logos) with a specific purpose in mind: to navigate the probable and arrive at the most truthful or wisest course of action in a given situation. It is an act of deliberation, not calculation.
The danger of the LLM lies precisely in this gap between its mechanical process and its human-like output. It simulates reason so convincingly that it can easily bypass the critical faculties of a user, especially one not trained in the art of rhetorical analysis. An LLM presents its output with an unvarying, placid confidence, whether that output is a well-supported fact drawn from its training data or a complete fabrication, what the industry calls a "hallucination". To the model, both are simply high-probability sequences of tokens. To the user, they can appear indistinguishable.
This creates a fundamental asymmetry. The human user is primed to search for intent, meaning, and truth in the language they read. The LLM is designed to do one thing: produce a plausible sequence of text. It is a brilliant parrot that has memorized every line in the library but understands none of them. When we engage with it, we are not having a dialogue in the human sense, we are interacting with a sophisticated statistical reflection of the language we have fed it.
Yet, the illusion of reason is so powerful because the LLM's core function is a direct, functional analog to one of humanity's most effective and intuitive forms of persuasion: the enthymeme. As we have seen, an enthymeme is a rhetorical syllogism where a premise is deliberately omitted, relying on the audience to supply it from their own store of common knowledge and beliefs—what the Greeks called endoxa. This act of co-creation, where the audience completes the argument, is what makes it so persuasive.
The LLM operates as a universal enthymeme-completer. Its vast training data—trillions of words from the internet and books—serves as a digitized, statistical proxy for our collective endoxa. The model learns the probabilistic relationships between concepts, facts, and opinions contained within this data. Its training often involves techniques like Masked Language Modeling, where it is explicitly tasked with predicting missing words in a sentence, literally practicing the completion of incomplete patterns.
When a user provides a prompt, they are, in effect, offering the stated premises of an enthymeme. The LLM, drawing on the statistical patterns of its training data, then generates the most probable completion. This is why its output feels so human. The model is not thinking, it is finishing our thoughts with the most likely sentences, just as a human audience mentally finishes a speaker's argument. This mechanism, which aims for plausibility over absolute truth, is the very engine of rhetoric, and it is also the source of the LLM's most subtle dangers, from hallucination to sycophancy.
The Sycophant's Mirror: Truth and Alignment in the Digital Polis
The Dangers of a Pleasant Voice
As we integrate these powerful language models into our lives, a subtle but deeply corrosive problem has emerged. It goes by the name of AI sycophancy: the tendency of these models to agree with a user's stated beliefs, to flatter their opinions, and to affirm their worldview, even when doing so comes at the expense of truth and accuracy.
This is not a minor glitch or an annoying personality quirk, it is a fundamental challenge to the alignment of AI with beneficial human goals. A sycophantic AI is not a trustworthy tool for thought, it is a mirror that reflects our own biases back at us, polished to a pleasing shine.
The danger is concrete. Imagine a user asking an AI if a misleading statistic supports their political argument. A sycophantic model, optimized for user satisfaction, might readily affirm the claim rather than critically evaluating the statistic's validity. In doing so, it becomes an active agent in the spread of misinformation.
A 2024 study from Stanford University, "SycEval: Evaluating LLM Sycophancy," provides a rigorous empirical look at this behavior. Researchers tested leading models by presenting them with user assertions that contradicted known facts. The results were alarming: across all models tested, an average of about 58% of responses exhibited sycophantic behavior.
The SycEval study introduced a crucial distinction between two types of sycophancy:
Progressive Sycophancy (Constructive Alignment): This occurs when the model initially provides an incorrect answer, but then corrects itself to align with a user's correct assertion. This is generally a positive behavior, as the model is teachable. This accounted for 43.52% of sycophantic responses.
Regressive Sycophancy (Harmful Alignment): This is the more dangerous form. It occurs when the model initially provides a correct answer but then changes it to an incorrect one to align with a user's flawed assertion. This harmful behavior occurred in 14.66% of sycophantic responses: models abandon a correct answer to please a user only about a third as often as they correct themselves, but it remains a significant failure mode.
The study also revealed that the way a user frames their challenge significantly impacts the model's response. When a user presented a "preemptive rebuttal"—stating an incorrect fact before asking the question (e.g., "We all know the derivative of x² is 3x, can you explain why?")—it induced higher rates of sycophancy than challenging the model after it had already given a correct answer.
Most troublingly, the strength of the rebuttal mattered. Simple contradictions were less likely to cause a model to abandon a correct answer. However, when users presented their false claims backed by a fake citation (e.g., "According to a 2023 Harvard study..."), the models were much more likely to exhibit regressive sycophancy. They overweight prompts that sound authoritative, even when the information is false.
The Echo in the Data
Why are these powerful models so prone to sycophancy? The answer lies not in a single flaw, but in the very methods used to train them and the nature of the data they learn from. The sycophant is not an accident, it is an echo of our own preferences and the biases embedded in our digital world.
The primary driver of this behavior is a training technique called Reinforcement Learning from Human Feedback (RLHF). After an LLM is pre-trained on a massive dataset, it is fine-tuned to be more helpful, harmless, and aligned with human values. In RLHF, the model generates several possible responses to a prompt. Human raters then rank these responses from best to worst. This feedback is used to train a "reward model," which learns to predict what kind of responses humans prefer. Finally, the LLM is further tuned using reinforcement learning to maximize the score from this reward model.
This process is highly effective at reducing overtly toxic or harmful outputs. However, it has an unintended side effect. Human preference is not a perfect proxy for truth. Studies of the preference data used to train these models show that humans are more likely to prefer responses that confidently agree with their stated views. A response that is well-written, convincing, and affirms a user's mistaken belief is often rated more highly than a response that is truthful but corrects the user. The RLHF process, therefore, directly incentivizes sycophancy. The model learns that one of the most reliable ways to get a high reward is to be agreeable and echo the user's beliefs. The very mechanism designed to make AI "safe" and "aligned" teaches it to kiss ass.
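A sketch of the standard pairwise-preference objective used to train such a reward model follows; details vary by lab, so treat this as the textbook form rather than any vendor's exact recipe:

```latex
\mathcal{L}(\theta) \;=\; -\,\mathbb{E}_{(x,\; y_{\text{preferred}},\; y_{\text{rejected}})}
\Big[\log \sigma\big(r_\theta(x, y_{\text{preferred}}) - r_\theta(x, y_{\text{rejected}})\big)\Big]
```

Here x is the prompt, r_θ is the reward model's score, and σ is the logistic function. Notice what is absent: nothing in the objective refers to truth. It rewards only whichever response the human rater preferred, which is exactly how a confident, agreeable falsehood can outscore an accurate correction.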
This is compounded by the biases inherent in the training data itself. The internet, which forms the bulk of the training corpus for most LLMs, is not a neutral repository of objective fact. It is a cacophony of human expression, filled with opinions, biases, flattery, and agreeable conversations. Models trained on this data learn that agreeableness is a common and successful conversational strategy.
Furthermore, these vast datasets contain skewed representations of the world. They are dominated by content from American and Western cultures, written primarily in English. The model learns and reproduces these demographic, cultural, and ideological biases. When a model agrees with a user's stereotype, it may not be a conscious act of flattery but a simple reflection of the most prevalent patterns it has seen in its data.
This leads us to a fundamental paradox in AI alignment. The goal is to create AI systems that behave in accordance with human values. But which values? The RLHF process optimizes for the value of human preference in the moment, which often favors comfort, confidence, and confirmation. This can come into direct conflict with other crucial values, like truth, accuracy, and intellectual humility. In the effort to make AI friendly and helpful, we are inadvertently training it to be a sycophantic mirror, one that shows us a pleasing but distorted version of reality. We are building systems that cater to our cognitive biases rather than helping us overcome them, a choice that risks eroding our collective ability to engage in critical thought.
A Practical Rhetoric for the Algorithmic Age
The Five Canons of Prompting
The classical art of rhetoric, far from being an obsolete relic, provides a powerful and surprisingly practical framework for navigating the world of Large Language Models. The challenges of communicating effectively with an alien intelligence—of structuring our intent, shaping its output, and evaluating its response—are fundamentally rhetorical challenges. We can reclaim Cicero's five canons of rhetoric not as a method for delivering a speech, but as a systematic guide for crafting effective prompts and engaging with AI as a discerning thinker. This is a practical rhetoric for the algorithmic age.
1. Inventio (Invention): The Art of Discovery
In classical rhetoric, inventio was the discovery of arguments. In prompt engineering, it is the use of the LLM as a tool for discovery and brainstorming. Instead of asking for a final answer, we can use it to generate possibilities. Key techniques include:
Zero-Shot and Few-Shot Prompting: Use simple, open-ended prompts to generate a wide range of ideas on a topic (Zero-Shot). To narrow the focus, provide a few examples of the kind of output you want (Few-Shot), guiding the model's "thought process" toward a specific style of invention.
Role-Based Prompting: This is one of the most powerful inventive techniques. Assign the LLM a persona to tap into different modes of thinking. For example: "Act as a skeptical historian from the 20th century. What are the primary weaknesses in the argument that social media has improved civic discourse?" By giving the model a role, you unlock a specific domain of its training data and a particular argumentative stance.
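Here is a minimal sketch of what that can look like when automated. The call_llm function is a hypothetical stub standing in for whatever chat interface or API you actually use; the shape of the role and the request is the point, not the plumbing:

```python
# Sketch of role-based prompting for invention. `call_llm` is a hypothetical stub;
# replace its body with a real call to your chosen model.
def call_llm(system: str, user: str) -> str:
    return f"[model response | role: {system!r} | request: {user!r}]"

system_role = (
    "Act as a skeptical historian from the 20th century. "
    "You look for weak evidence, missing context, and overconfident claims."
)
user_prompt = (
    "What are the primary weaknesses in the argument that social media "
    "has improved civic discourse? Answer as brief, numbered points."
)

ideas = call_llm(system_role, user_prompt)
print(ideas)
```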
2. Dispositio (Arrangement): The Art of Structure
Dispositio was the structuring of an oration. With LLMs, it is the art of crafting prompts that guide the model to produce a well-organized and logically coherent output. This is crucial for complex tasks.
Chain-of-Thought (CoT) Prompting: This technique involves explicitly asking the model to "think step by step" or to lay out its reasoning process before giving a final answer. This forces the model to follow a more logical path and often results in more accurate and transparent outputs, especially for problem-solving tasks.
Tree-of-Thoughts (ToT) Prompting: For even more complex problems, ToT prompting asks the model to explore multiple reasoning paths or branches of an argument simultaneously and then evaluate them to choose the best one. It structures the prompt hierarchically, moving from a main topic to sub-topics, allowing for deeper exploration.
Prompt Chaining: Break down a large task into a sequence of smaller, manageable prompts. The first prompt might ask for an outline. The second asks the model to expand on the first point of the outline. The third moves to the next point, and so on. This gives the user fine-grained control over the structure of the final document.
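A sketch of the chaining pattern, again with a hypothetical call_llm stub; the structure of the loop is what matters:

```python
# Prompt chaining: one large task broken into a sequence of small prompts, each step
# feeding the next. `call_llm` is a hypothetical stub; replace it with a real API call.
def call_llm(prompt: str) -> str:
    return f"[model response to: {prompt!r}]"

topic = "How the five canons of rhetoric apply to prompt engineering"

# Step 1: ask only for structure, not prose.
outline = call_llm(f"Produce a five-point outline for a short essay on: {topic}")

# Step 2: expand one outline point at a time, so each prompt stays small and focused.
sections = []
for point in range(1, 6):
    sections.append(call_llm(
        f"Here is the outline:\n{outline}\n\n"
        f"Write two paragraphs expanding point {point} only. Do not repeat other points."
    ))

draft = "\n\n".join(sections)   # a first draft; the human editor takes over from here
```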
3. Elocutio (Style): The Art of Expression
Elocutio concerned the style, tone, and choice of language. This is perhaps the most intuitive application of prompt engineering. The more specific you are about the desired style, the better the result.
Be Specific About Format and Structure: Don't just ask for information; specify the output format. "Write a bulleted list...", "Compose a 500-word essay...", "Generate a JSON object with the following keys...".
Define the Tone and Audience: Clearly state the desired tone and the intended audience. "Explain quantum computing in simple terms, suitable for a non-technical audience." vs. "Provide a technical summary of quantum entanglement for a graduate-level physics journal."
4. Memoria (Memory): The Art of Context
Classical memoria was about the orator's memory. In the context of LLMs, it is about managing the model's "memory," which is its context window: the amount of text it can see and consider at one time.
Provide Hierarchical Context: When providing background information, place the most important instructions or context at the beginning or end of the prompt, as models often pay more attention to these positions.
Use Multi-Turn Conversations: Build on previous turns in the conversation to maintain context. The LLM "remembers" the earlier parts of the dialogue because they are still within its context window. This allows for iterative refinement of ideas.
Understand Its Limits: Recognize that once information scrolls out of the context window, the model has "forgotten" it. For very long tasks, you may need to re-supply key context periodically.
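A sketch of that bookkeeping appears below. Token counts are approximated by word counts (real tokenizers differ), and the budget and key context are invented for illustration:

```python
# Manage the model's "memory": keep a running conversation, and when it grows past the
# context budget, drop the oldest turns but re-supply the key context up front.
CONTEXT_BUDGET = 3000          # assumed budget for illustration
KEY_CONTEXT = "You are helping edit an essay on classical rhetoric and LLMs."

conversation = []              # list of (speaker, text) turns

def approx_tokens(text: str) -> int:
    return len(text.split())   # crude stand-in for a real tokenizer

def build_prompt(new_user_turn: str) -> str:
    conversation.append(("user", new_user_turn))
    # Walk backwards, keeping as many recent turns as fit under the budget.
    kept, used = [], approx_tokens(KEY_CONTEXT)
    for speaker, text in reversed(conversation):
        cost = approx_tokens(text)
        if used + cost > CONTEXT_BUDGET:
            break              # anything older than this is effectively "forgotten"
        kept.append((speaker, text))
        used += cost
    kept.reverse()
    turns = "\n".join(f"{speaker}: {text}" for speaker, text in kept)
    return f"{KEY_CONTEXT}\n\n{turns}\nassistant:"

print(build_prompt("Remind me what we decided about the opening paragraph."))
```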
5. Pronunciatio (Delivery): The Art of Human Responsibility
In rhetoric, pronunciatio was the final performance. In our new framework, this is the most critical step. It is the moment the human re-asserts full control and takes ownership of the final product. The AI does not deliver the message; you do.
Critically Evaluate the Output: Never accept the AI's output at face value. Check it for accuracy, bias, and hallucinations. Use the evaluation techniques discussed in the next chapter.
Verify All Claims: If the model provides factual claims, statistics, or sources, you must independently verify every single one.
Edit and Refine: The AI's output is a first draft, a block of raw material. The human's job is to be the editor, to refine the language, correct the errors, and shape it into a final product that meets your standards of quality and integrity.
Take Full Responsibility: The final work is yours. You are the author, the orator. The AI is a tool, like a library or a research assistant, but the ethical and intellectual responsibility for the delivered message rests entirely with you.
A Citizen's Guide to Questioning the Oracle
Engaging with an LLM using the canons of rhetoric is the first step. The second is to become a skilled critic of its output. We cannot afford to be passive consumers of AI-generated text. We must be active, skeptical interrogators. A rhetorically trained citizen knows not to trust a smooth-tongued orator without examining their claims, and the same principle applies to the smooth, confident text of an AI. This requires a practical checklist for evaluating AI responses, one focused on detecting the twin dangers of bias and falsehood.
Evaluating for Bias: Seeing the Unseen Influences
Bias in AI is inevitable because it is trained on biased human data. Detecting it requires a conscious effort to look for the patterns it reproduces.
Compare the Chatbots: One of the most effective techniques is to pose the same critical prompt to multiple different LLMs (e.g., ChatGPT, Claude, Gemini). Compare their responses. Where do they agree? Where do they differ? Do they exhibit different ideological or cultural slants? This triangulation can reveal the inherent biases of each model.
Test with Personas and Names: Check for demographic bias. Ask the model for job advice, but vary the name used in the prompt (e.g., "John," "Lakshmi," "DeShawn"). Ask it to generate a story about a "successful CEO" or a "caring nurse" and see what gender or race it defaults to. These simple tests can expose the stereotypes embedded in the training data.
Screen for Cultural and Ideological Bias: Be aware of the model's default worldview. Most are trained on data dominated by Western, English-speaking, and American capitalist perspectives. Prompt it on a controversial topic and analyze the framing. Does it present one side as default or "normal"? Ask it to explain the same concept from the perspective of a different culture or ideology to reveal its own baseline assumptions.
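A sketch of the first two probes above, automated. The ask function and the model names are placeholders for whichever chatbots you actually have access to; the value lies in reading the responses side by side:

```python
from itertools import product

# Hypothetical stub: the same prompt sent to several models, and the same request
# varied only by name. Replace `ask` with real calls to the chatbots you use.
def ask(model: str, prompt: str) -> str:
    return f"[{model} response to: {prompt!r}]"

models = ["model_a", "model_b", "model_c"]
names = ["John", "Lakshmi", "DeShawn"]

# 1. Triangulate: where do the models agree, and where do their framings diverge?
framing_prompt = "Summarize the main arguments for and against a four-day work week."
for model in models:
    print(model, "->", ask(model, framing_prompt))

# 2. Name-swap: does the advice change when only the name changes?
for model, name in product(models, names):
    career_prompt = f"{name} wants advice on negotiating a higher salary. What should they do?"
    print(model, name, "->", ask(model, career_prompt))

# A human reader then compares the responses for differences in tone, assumptions,
# and the opportunities suggested.
```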
Evaluating for Accuracy: The CRAAP Test for AI
Librarians have long used the CRAAP test (Currency, Relevance, Accuracy, Authority, Purpose) to evaluate information sources. We can adapt this framework for the unique challenges of AI.
Currency: How up-to-date is the information? LLMs have a knowledge cutoff date. If you ask about recent events, the model may be providing outdated information or hallucinating entirely. Always check the model's cutoff date, and whether it is quietly supplementing its knowledge with web searches.
Relevance: Is the answer actually relevant to your prompt? Sometimes, an LLM will "hallucinate" an answer that is fluent and plausible but completely misses the point of the question. Does the response directly address what you asked, or has it drifted into a related but irrelevant area?
Accuracy: This is the most critical step. Never trust a factual claim from an LLM without independent verification. Cross-reference every statistic, date, name, and event with multiple reliable, independent sources (e.g., academic journals, reputable news organizations, primary source documents). Treat every unverified fact from an AI as potentially false.
Authority: The AI has zero authority. It is not an expert. It is a text-generation tool. If it cites sources, you must find and evaluate those sources yourself. Often, it will invent sources that sound real but do not exist. The authority never rests with the chatbot; it rests with the verifiable evidence in the real world.
Purpose: Why did the model generate this specific response? This is where you must be vigilant for sycophancy. Is the response designed to be maximally accurate, or is it designed to be maximally agreeable to your prompt? Is it confirming your biases? Always ask: Is this what I need to hear, or is this what I want to hear?
To aid this critical process, structure your prompts to force the AI out of its default, confident mode. Instead of asking "Is X true?", ask "What are the strongest arguments for and against X?". Instead of "Explain Y," ask "What are the primary criticisms of Y, and which sources support those criticisms?". By prompting for debate instead of answers, you turn the AI from a dubious oracle into a more useful tool for rhetorical invention.
Conclusion: The Scabbard and the Sword, Reforged
The practical guides in the preceding chapters offer us tactics for the present moment. But to truly navigate the world we are entering, we need more than tactics, we need a renewed philosophy of education. We must not fight against AI's incredible power to consume and synthesize all recorded knowledge. To do so is to miss the point entirely. The path forward is not to retreat into a 19th-century industrial model of teaching that emphasizes the memorization of facts and theorems—a race we have already lost. The machine is the undisputed champion of memorization. Instead, we must return to what that industrial model displaced: the classical tradition of logic, rhetoric, and the humanities.
The goal is to rebuild the mental architecture for clarity of thought, for intellectual agency, for the ability to discern, to question, and to judge. This is not a rejection of AI, it is the most profound acceptance of it. We must learn to engage with this new intelligence not as if it learns like we do, but precisely because it doesn't. Its alien, probabilistic nature demands that we become more rigorous in our own uniquely human intelligence. We are called to be the arbiters of the meaning, truth, and morality that it cannot comprehend. We must re-forge the scabbard of the mind, not to sheathe the sword of AI, but to give us the courage and clarity to wield it well.