About the Discourse Depot
What this is
A public workshop for experiments in using large language models as research instruments—specifically, for analyzing how language frames and constructs meaning in popular, technical and political discourse.
Why it exists
This started with a simple annoyance: the relentless anthropomorphism in AI discourse. Words like "thinks," "understands," and "learns" obscure more than they explain. As I dug into metaphor theory, framing analysis, and discourse studies, a question emerged: Could an LLM be configured to systematically apply these same analytical frameworks to texts?
The answer is this site. Decide for yourself whether it works.
What's here
Outputs from my experiments. Each analysis represents a different configuration: different prompts, different theoretical frameworks, different texts. I'm sharing them publicly because:
- It's easier than sending individual examples to colleagues via Teams (lol)
- Transparency about the process matters when we're teaching AI literacy
- Seeing the full range of outputs—successes and failures—is more pedagogically useful than polished case studies
- Many of them are interesting “texts” to read in their own right
The bigger picture
These experiments feed directly into a syllabus I'm developing called "Prompt as Interpretive Instrument." The core pedagogical move: students engineer AI systems by translating theoretical research into executable analytical instructions. The prompt becomes the artifact of assessment, the schema becomes an argument, and iteration becomes visible metacognition. This is part of ongoing AI literacy initiatives at William & Mary Libraries.
About the Outputs
Structure and consistency
Each analysis follows the structure defined by its prompt, with standardized headings that map directly to the analytical tasks. This makes the outputs auditable. You can generally trace which instruction produced which section of the analysis.
Two forms of output
- Prose analyses (what you see in the output examples): These are human-readable analyses generated by the model following the prompt's instructions. They demonstrate what the framework surfaces in a given text and serve as examples for pedagogical purposes.
- Structured data schemas (used in the web application): Each framework also has a corresponding JSON schema that enforces structured output via the API configuration. These schemas are designed to be normalized, auditable, and scalable—not just for generating a single analysis, but for building a corpus that can be queried systematically.
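For illustration, here is a minimal sketch of what that API configuration can look like, assuming the google-generativeai Python SDK; the field names (lexical_unit, source_domain, and so on) are hypothetical stand-ins, not the schemas actually used on this site.

```python
# Minimal sketch of schema-enforced output, assuming the google-generativeai
# Python SDK. Field names are hypothetical stand-ins, not this site's schemas.
import typing_extensions as typing
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")


class MetaphorFrame(typing.TypedDict):
    """One metaphorical frame identified in the text."""
    lexical_unit: str
    source_domain: str
    target_domain: str
    agency_attributed: bool


user_text = "The model thinks carefully before it answers."

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    ["Identify the metaphorical frames in the following text.", user_text],
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",
        response_schema=list[MetaphorFrame],  # constrain output to this shape
    ),
)
print(response.text)  # a JSON array conforming to the schema
```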
Why schemas matter
The structured output approach transforms the tool from "text-in, text-out" to a genuine data analysis pipeline:
User Text → Gemini API (with Schema) → Structured JSON → Database
The idea is that this creates the foundation for comparative analysis across multiple texts: tracking metaphor patterns over time, comparing framing strategies across political speeches, or querying how agency is distributed across a corpus of policy documents.
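As a rough sketch of that last step, assuming a simple SQLite store (the table and column names are hypothetical, and the site's actual storage layer may differ), structured outputs could be loaded and then queried across a corpus:

```python
# Rough sketch of the "Structured JSON -> Database" step, assuming a simple
# SQLite store; table and column names are hypothetical.
import json
import sqlite3

conn = sqlite3.connect("discourse_corpus.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS frames (
        text_id TEXT,
        lexical_unit TEXT,
        source_domain TEXT,
        target_domain TEXT,
        agency_attributed INTEGER
    )
""")


def store_analysis(text_id: str, analysis_json: str) -> None:
    """Insert one model response (a JSON array of frames) into the corpus."""
    for frame in json.loads(analysis_json):
        conn.execute(
            "INSERT INTO frames VALUES (?, ?, ?, ?, ?)",
            (
                text_id,
                frame["lexical_unit"],
                frame["source_domain"],
                frame["target_domain"],
                int(frame.get("agency_attributed", False)),
            ),
        )
    conn.commit()


# Comparative query: which source domains recur most often across the corpus?
rows = conn.execute(
    "SELECT source_domain, COUNT(*) AS n FROM frames "
    "GROUP BY source_domain ORDER BY n DESC"
).fetchall()
print(rows)
```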
The schema design process is itself pedagogically valuable because it forces students to answer further questions: What are the essential, irreducible components of a metaphorical frame? What data type is "agency"—a string, a boolean, a relation to another object?
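To make that last question concrete, here are three hypothetical ways the same "agency" observation could be typed in a schema fragment. None is the "right" answer; the choice is the argument.

```python
# Three hypothetical ways to type "agency" in a schema fragment. Each commits
# the analyst to a different theory of what agency is.
from typing import Literal, TypedDict


# 1. A free-text string: maximally expressive, hard to query systematically.
class FrameWithAgencyString(TypedDict):
    agency: str


# 2. A boolean: trivially queryable, but flattens degree and kind.
class FrameWithAgencyFlag(TypedDict):
    agency_attributed: bool


# 3. A relation: agency as a link between an actor and an action, which
#    supports corpus questions like "who is granted agency over what?"
class AgencyRelation(TypedDict):
    actor: str
    action: str
    degree: Literal["full", "partial", "none"]


class FrameWithAgencyRelation(TypedDict):
    agency: AgencyRelation
```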
Extended Processing Summaries
Some outputs include an "Extended Processing Summary" section, typically at the end. These are the model's intermediate token generations before producing the final structured response—what Gemini's documentation, in semi-acknowledged metaphor framing, calls “thought summaries.”
Why I include them (sometimes): These summaries are diagnostically useful for evaluating prompt design. They show:
- How the prompt instructions were parsed and executed
- Which analytical tasks the model processed first or devoted more tokens to
- Where the generation process produced uncertainty markers or reformulations
- The computational sequence that led to the final output
This makes them valuable for evaluating prompt design, not for understanding "what the AI was thinking." Since it wasn’t thinking, there’s no thought to summarize.
The AI literacy caveat
TL;DR: Why first person? This language is a design choice by the LLM creators, not a technical requirement. The model could generate the same intermediate outputs in the passive voice ("Lexical units are being extracted") or as structured logs ("Task: Metaphor identification, Tokens: 1,203"). First-person framing positions the technology as cognitively sophisticated, which serves competitive and marketing interests more than it serves AI literacy.
The first-person framing of these “thought summaries” is a presentation choice for the user-facing output, not an inherent property of the extended thinking mechanism. These processing summaries read like someone deliberating—"I'm carefully analyzing," "my thinking has shifted," "I need to consider." The creators of the LLM could have chosen passive, process-oriented language, but instead they chose the language that feels most like intelligence. This is not evidence of cognition. It's the model generating text patterns statistically associated with analytical discourse, learned from training data that includes human reasoning chains and metacommentary.
The performance of deliberation substitutes for genuine interpretability. The model produces language that performs deliberation. We interpret that performance as evidence of deliberation. This gap between observed computational behavior and attributed mental states is exactly what this project is designed to make visible.
Why the language matters:
If I uncritically called these "thinking summaries" or "reasoning traces," I'd be perpetuating the same anthropomorphic framing the Metaphor & Anthropomorphism analyses are designed to critique. Language choices have consequences. Using "extended processing summaries" instead keeps the focus mechanistic: these are computational artifacts, not cognitive reports from a quirky, curious or conflicted mind. By calling this out explicitly in the Discourse Depot, I'm doing what the makers of these LLMs chose not to: making the anthropomorphic design choice visible as a choice, not an inevitable feature of the technology. The question "Why didn't they avoid this language?" has a clear answer: Because anthropomorphism sells, and precision doesn't.
Contact
Browse freely. Questions, feedback, and "hey, try analyzing this" suggestions welcome. TD | William & Mary Libraries | elusive-present.0e@icloud.com
Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0