
The Glass Box Syllabus


A course framework for Prompt as Interpretive Instrument

This document outlines a syllabus I'm developing for undergraduate students in a liberal arts context based on my own exploration done here on and through this site. The core pedagogical move: students engineer AI systems by translating theoretical research into executable analytical instructions. The prompt becomes a scholarly artifact, the schemas become arguments, and the iteration log becomes evidence of thinking.

I've always felt that AI pushes all the hot buttons of a liberal arts education because it raises questions of what it means to be human, to know, to explain, and to communicate. The course would be framed as interdisciplinary, drawing on philosophy of mind, discourse and critical theory, media studies, politics, economics, aesthetics, and metaphysics, attending to both the "Artificial" and the "Intelligence."

Course Title: Prompt as Interpretive Instrument
Format: One semester (15 weeks) for Module 1; Module 2 is an optional extension
Audience: Undergraduate students; no programming prerequisites
Primary Deliverable: A working analytical instrument (prompt + documentation) that operationalizes a critical framework


Big Ideas and Core Questions​

1. What are minds, and what are machines?

Students engage foundational texts in philosophy of mind, consciousness studies, and information theory to understand competing theories of cognition. We examine Integrated Information Theory (Tononi), computational theories of mind, embodied cognition frameworks, and critiques of the "Chinese Room" argument. This establishes the conceptual tools needed to evaluate claims about AI "understanding" or "reasoning."

2. How does language shape (and obscure) our understanding of AI systems?

Students analyze the pervasive anthropomorphic metaphors in AI discourse—systems that "think," "learn," "understand," "hallucinate," or "are creative." Drawing on cognitive linguistics and critical discourse analysis, students learn to distinguish between system behavior (observable outputs given inputs) and projected cognition (attributions of mental states). This metacognitive awareness is foundational AI literacy.

3. Can formalizing theory as executable systems deepen understanding?

The course's central pedagogical claim is that translating critical frameworks into AI prompts and data schemas forces unprecedented precision in theoretical understanding. Students discover that operationalizing concepts like "discourse," "framing," "ideology," or "positionality" requires confronting gaps in their own comprehension. The prompt becomes scholarship: a citable, testable, improvable artifact that demonstrates theoretical mastery.

Pedagogical Framework: Four Pillars​

Artificial + Intelligence - Transdisciplinary Riffs​

  • The Philosophy of Mind (The "Alien Intelligence" Track)
  • Discourse & Critical Theory (The "Power & Language" Track)
  • Media Studies & Propaganda (The "Simulation" Track)

Before the phase-by-phase breakdown, here's the theoretical backbone of this approach.

1. Formalization as Learning

Students translate theoretical frameworks from abstract concepts into precise, step-by-step logic that an AI system can execute. Academic theories are often expressed in complex, nuanced, sometimes ambiguous language. They provide lenses for interpretation, but do they provide definitive analytical methods? What does this even mean?

Deconstructing a theoretical framework into its smallest analytical units ("operational primitives") creates a mirror where learners articulate exactly what the theory instructs them to look for and how to classify it. For example: What does this theory mean by "hegemonic discourse" in terms of textual features? Does this theory actually provide a coherent, executable method of analysis, or does it only offer a lens?
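To make "operational primitives" concrete, here is a minimal sketch of what one might look like as data. The concept, features, and classification rule are illustrative placeholders, not course content:

```python
# A minimal sketch of one "operational primitive." The concept, features,
# and classification rule are illustrative placeholders, not course content.

hegemonic_discourse = {
    "concept": "hegemonic discourse",
    "working_definition": "language that presents a particular interest as natural or universal",
    "textual_features": [
        "appeals to 'common sense' or inevitability",
        "agentless constructions that hide who benefits",
        "presupposition of shared values ('we all agree that...')",
    ],
    "classification_rule": "flag a passage only if at least one feature appears AND no alternative view is named",
}

# Filling in these fields forces the student to say exactly what the theory
# instructs them to look for and how to classify it.
```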

2. Schema as Argument

A schema tells the model: "When you analyze this text, return results in exactly this format." By deciding which fields to include, the student is arguing: "To understand this text through Theory X, one must account for categories A, B, and C."

Approached in this way, the schema itself is a theoretical statement. It is an argument about what matters. Including instances_of_neocolonial_rhetoric but not mentions_of_domestic_policy, for example, is an analytical and political choice that asserts a hierarchy of significance. Students reflect on the politics of their own data structure: What does my schema try to get this LLM to "notice," and what does it allow it to "ignore"?
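As a sketch (using the field names from the paragraph above, plus one invented field), a schema-as-argument might look like this:

```python
# A hypothetical schema sketch. Including one field and omitting another is
# itself an analytical claim; the first two field names come from the text above,
# and "framing_devices" is invented for illustration.

analysis_schema = {
    "source_text_id": "string",
    "instances_of_neocolonial_rhetoric": "array of quoted passages",  # what the model MUST account for
    "framing_devices": "array of labeled examples",
    # "mentions_of_domestic_policy" is deliberately absent:
    # the schema allows the model (and the analyst) to ignore it.
}
```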

3. Iteration as Metacognition

The constant need to debug both prompt and schema externalizes thinking. The trial-and-error process becomes visible, measurable metacognition. The archived sequence of failed prompts, revised schemas, and successful outputs creates an "archive of doubt."

A prompt journal from a student might read: I made this change because the output from V1 was too vague, identifying 'power' in a metaphorical way without grounding it in specific textual evidence. This opens discussion of larger issues: LLMs, authorship, opacity.

4. Provenance as Scholarship

Provenance is the detailed, auditable history of analytical work. In this syllabus, it establishes standards for transparency. This level of provenance focuses scholarly assessment not only on a final argument but on the integrity of the method itself. The learner is accountable for the entire "system" they built, even the messy middle parts.

The process generates a complete, transparent chain of intellectual custody:

Source → Prompt → Schema → Iterations → Output
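If a student recorded one link in that chain as data, it might look something like the sketch below. The field names are illustrative, not a required format:

```python
# An illustrative record of one link in the chain of custody; one possible
# way the "archive of doubt" could be kept, not a prescribed format.

provenance_record = {
    "source": "speech_2021_03.txt",                 # the text analyzed
    "prompt_version": "v3",
    "schema_version": "v2",
    "iterations": [
        {"from": "v1", "problem": "outputs too vague", "change": "required direct quotations"},
        {"from": "v2", "problem": "evidence field often empty", "change": "made 'evidence' required"},
    ],
    "output": "analysis_v3_output.json",
}
```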

Assessment hits the librarian-ish highlights: systems thinking, architectural design, and critical awareness of tools, processes, and techniques used to produce knowledge.


The Scaffolded Question: "Can AI Help Here?"​

I want to avoid the "here are AI tools, now use them" approach and instead begin with the kinds of challenges that emerge naturally from intellectual work, paired with a question students learn to ask with increasing sophistication: Given what I understand about how these systems actually work, could they help here?

That "given what I understand" is where the learning lives. Students build conceptual vocabulary along the way (structured output, schema design, probabilistic generation) developing judgment about when and how to deploy these tools. Looking at and through their interfaces.

Also, and more interesting to me, the advent of code-generating AI ("vibe coding") shifts what "technical barrier" means. The question is no longer "is there an app for this?" or even "can I code this?" It's: "Can I describe what I'm trying to do with enough precision to make the code?" Conceptual clarity becomes the bottleneck, and that's exactly the kind of thinking humanistic training develops.

Here are the kinds of example challenges I envision that scaffold this type of learning:

Challenge 1: Pattern Recognition at Scale

"I've close-read 5 texts and I'm noticing recurring metaphors. But I have 45 more. How do I trace this systematically without losing analytical depth?"

Opens the door to: structured output, schema design, corpus analysis
The informed question: "Can I design an instrument that applies my analytical categories consistently across texts?"

Challenge 2: Coding Drift

"I'm applying this framework to multiple texts, but my own categories keep shifting. What counted as 'agentless passive' in text 3 doesn't match what I flagged in text 7."

Opens the door to: operational primitives, schema enforcement, explicit definitions
The informed question: "Can I formalize my categories precisely enough that a system applies them consistently—and what does that formalization reveal about my own understanding?"

Challenge 3: Making Qualitative Work Comparable

"I have rich notes on each text, but they're all prose. I can't compare across them because nothing is structured the same way."

Opens the door to: JSON structure, database normalization, queryable data
The informed question: "What would it mean to model my observations as data? What do I gain and lose?"

Challenge 4: From Consumer to Builder

"A colleague wants to apply my framework to a different corpus. And I want to build a simple interface for it. The question used to be 'is there an app for this?' Now it's: 'Do I understand what I'm trying to do well enough to describe it in language that makes the app?'"

Opens the door to: prompt as artifact, provenance, natural language as specification, shareable tools
The informed question: "Is my understanding precise enough to become a working system?"

Challenge 5: The "What Am I Looking For?" Problem

"I understand this theory conceptually, but when I sit down with a text, I'm not sure what textual features actually count as evidence."

Opens the door to: formalization as learning, operational primitives, evidentiary mapping
The informed question: "Can the act of instructing a system force me to articulate what I tacitly know?"

Since I've been working through these scenarios myself, they are no longer hypothetical; they are the actual friction points of scholarly work. The course would use them as on-ramps to technical concepts, always in service of humanistic inquiry. Rather than teaching discrete AI skills, the learning goals center on developing informed judgment about a class of tools by understanding their mechanics.


Why This Course?​

The Readiness Trap​

Many educational institutions find themselves caught in a "readiness trap," a cycle of forever-preparation that defers meaningful student engagement with AI until educators achieve some mythical state of mastery (or not). While we wait for the next professional development cycle or policy framework, our students are already accumulating countless AI experiences in spaces designed more for entertainment and efficiency than for deep, critical learning. They are interacting with "The Partner" (the marketing illusion) rather than "The Processor" (the technical reality).

This syllabus starts from a different premise: Reframe AI literacy from a static competency to be mastered into an ongoing interpretive practice to be cultivated.

From Black Box to Glass Box​

One goal is to teach students to move beyond using generative AI merely to perform single tasks. The course would grant pervasive permission to explore the utility of these models while doing complex academic work, as a kind of reflective experimentation.

Also, this is an overt exercise in demystification. Engaging with these systems through trial and error, system instructions, and schemas strips away the magic of the conversational interfaces. Students begin to understand what they (both themselves and the models) can and cannot do, moving beyond the unhelpful framings of "innovation" or "efficiency" and metaphors of a "magical" or "black box" technology.

Because the black box is really the glass box. It is a transparent but complex bureaucracy of vectors. "Using AI" becomes an active and intellectually rich exercise in mapping the latent terrain and understanding the constraints, biases, and "sycophancy features" that shape every output.

Ultimately, the glass box is more than just the neural network (the weights); it's the entire socio-technical apparatus it is embedded in.

Beyond the "Neutral Tool"​

Calling AI a "tool" is reporting a fact, but one that is either meaningless or uninteresting, or both. A tool's meaning is embedded in a world of purposes, relations, and practices. It only makes sense within that lived system. Saying something is "just a tool" tells us nothing about how it is made, used, encountered, understood or who made it.

In the hands of a student, I’ve learned that a tool is never neutral. It has constraints, affordances, assumptions. It pushes back. That tension is the pedagogical opportunity to explore:

  • The Reality: An LLM is a non-neutral instrument. It presses back. It has "sycophancy features" that want to agree with you. It has "refusal circuits" that block certain paths. It is marketed in ways that scaffold expectations about it.
  • The Pedagogy: The tension between the student's intent and the machine's optimization is a site of learning. This invites students to notice not just what they made, but how the machine shaped the making. Did the machine "impute" a generic idea because it was statistically safe? Did it "clamp" a controversial topic?

These "tools" are more than things we use; they shape how we make sense of the world. We can invite students to notice that too: what they've made and how they got there, what kind of thinking it required, what kind of thinking it made possible.

Pedagogical Principles Over Platform Mastery​

Instead of waiting for impossible-to-achieve platform mastery, this syllabus is organized around a core pedagogical principle: using the design of a prompt as a site for scholarly inquiry.

This approach allows instructors to position themselves as co-investigators. The focus shifts from knowing, for example, the exhaustive details of a transformer architecture to guiding students through a fundamentally human process: asking good questions, applying analytical frameworks, and critically evaluating evidence.


Part I: The Course - Super Draft Mode​

Module 1: The Humanities Loop​

Research → Blueprint → System Instructions → Deployment

This is the complete, semester-long experience. Students move through a series of disciplinary readings, from selecting a critical framework to deploying a working analytical instrument. No programming is required, though students will encounter technical concepts (JSON structure, API playgrounds) as tools in service of humanistic inquiry.


Phase 1: Framework Selection & Deep Research​

Weeks 1–3

Goal: Students identify a critical framework and build genuine scholarly understanding.

Learning Objectives:

  • Identify primary theoretical texts within chosen framework
  • Articulate the framework's core concepts, tensions, and debates
  • Distinguish between the theory's claims and its operational logic

Deliverables:

Framework Selection Memo

  • Which framework? Why this one?
  • What makes it suitable for operationalization?
  • What are its known limitations/blind spots?

Annotated Bibliography

  • Primary theoretical texts
  • Applied examples (how others use this framework)
  • Key concepts with preliminary definitions

Assessment Checkpoint: Does the student demonstrate reading comprehension AND critical awareness of the framework's scope?


Phase 2: Deconstruction into Operational Primitives​

Weeks 4–6

Goal: Translate abstract theory into discrete, executable analytical actions.

Learning Objectives:

  • Convert noun-based concepts into verb-driven tasks
  • Map each action to specific textual evidence types
  • Identify what the framework "sees" vs. what it ignores

Deliverables:

Operational Blueprint (structured document)

  • List of core concepts from the framework
  • For each concept:
    • Action verbs: What does one DO to apply this concept?
    • Evidentiary mapping: What textual features verify this?
    • Edge cases: When does this concept NOT apply?

Self-Assessment Reflection

  • Which concepts were hardest to operationalize? Why?
  • What did the operationalization process reveal about gaps in your understanding?

Assessment Checkpoint: Are the primitives specific enough to be machine-executable? Do they capture the framework's nuance or flatten it?
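One hypothetical Operational Blueprint entry, with placeholder values, might look like this; it simply renders the deliverable's three sub-parts (action verbs, evidentiary mapping, edge cases) as data:

```python
# One hypothetical Operational Blueprint entry; the concept and values are
# placeholders meant only to show the shape of the deliverable.

blueprint_entry = {
    "concept": "legitimation",
    "action_verbs": ["quote", "classify", "link each quote to an institutional authority"],
    "evidentiary_mapping": [
        "explicit appeals to law, tradition, or expertise",
        "passive constructions that obscure the acting institution",
    ],
    "edge_cases": [
        "does NOT apply to reported speech that the author goes on to contest",
    ],
}
```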


Phase 3: Prompt Engineering as Scholarship​

Weeks 7–11

Goal: Design system instructions that embody the theoretical framework.

Learning Objectives:

  • Construct effective persona/role assignments for the LLM
  • Write negative constraints that prevent generic outputs
  • Design chain-of-reasoning requirements for transparency
  • Iterate based on diagnostic output analysis

Deliverables:

System Instruction Document (v1)

  • Persona assignment
  • Contextual grounding
  • Operational primitives (from Phase 2) as explicit instructions
  • Negative constraints
  • Required output structure (even if not JSON yet)

Test Run Documentation

  • Sample text analyzed
  • Full output from LLM
  • Diagnostic notes: What worked? What failed? Why?

Prompt Journal (iterative, ongoing)

  • Version log with rationale for each change
  • Example: "V2: Changed 'identify power' to 'quote specific language that legitimizes institutional authority' because V1 outputs were too vague"

Assessment Checkpoint: Does the prompt demonstrate theoretical synthesis? Are constraints precise? Is there evidence of diagnostic iteration?
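For illustration, a bare-bones System Instruction Document might be drafted and versioned as plain text. The persona, tasks, constraints, and output structure below are placeholders, not prescribed content:

```python
# A bare-bones System Instruction Document, kept as a versioned string.
# The persona, tasks, constraints, and output structure are placeholders.

SYSTEM_INSTRUCTION_V1 = """\
ROLE: You are an analyst applying [Framework X] to political speeches.

CONTEXT: Treat each input text as a primary source in a study of
institutional legitimation.

TASKS (operational primitives from Phase 2):
1. Quote specific language that legitimizes institutional authority.
2. Classify each quote (e.g., appeal to law, tradition, or expertise).
3. Justify each classification in one sentence, citing the quoted words.

NEGATIVE CONSTRAINTS:
- Do not identify "power" in a metaphorical way without quoting the text.
- Do not summarize the text; analyze only.

OUTPUT STRUCTURE:
- One numbered finding per quote: quote, label, one-sentence justification.
"""

print(SYSTEM_INSTRUCTION_V1)
```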


Phase 4: Deployment & Documentation​

Weeks 12–15

Goal: Package the prompt as a reusable tool and document its intellectual/theoretical commitments.

Learning Objectives:

  • Deploy prompt in API playground OR no-code tool (Custom GPT, Claude Project, Google AI Studio, etc.)
  • Articulate what the system highlights and what it obscures
  • Test on multiple texts to verify consistency

Deliverables:

Final System Instruction (with deployment evidence)

  • Link to custom instrument, OR
  • Screenshot of API playground configuration

Critical Reflection Essay

  • What does your prompt force an analyst to notice?
  • What does it allow them to ignore?
  • How does this reflect the framework's inherent politics?
  • What surprised you about the outputs?

Final Assessment: Rubric TBD (4 criteria: Theoretical Synthesis, Operational Clarity, Reflective Iteration, Critical Awareness)


What Students Walk Away With (Module 1)​

By the end of the semester, students have:

  1. A working analytical instrument — a prompt that operationalizes a critical framework they researched deeply
  2. A documented design process — the prompt journal that traces their thinking
  3. A critical vocabulary — for talking about AI systems without anthropomorphism
  4. A transferable method — the Research → Blueprint → Prompt → Deploy loop applies to any framework

They have also practiced: close reading, theoretical synthesis, technical writing, iterative design, and metacognitive reflection.


Part II: The Pathway​

Module 2: Digital Scholarship Extension​

Schema → Database → Scalable Analysis

This is where the course could lead. Module 2 is aspirational scaffolding for students who want to continue—whether through independent study, a follow-on course, or self-directed learning. It builds on Module 1 by introducing structured data, API integration, and corpus-level analysis.

Who is this for?

  • Students pursuing Digital Humanities minors or certificates
  • Graduate students building research pipelines
  • Self-directed learners comfortable with technical documentation
  • Anyone who completed Module 1 and wants to go further

What additional skills does it require?

  • Comfort with JSON syntax (introduced in workshop)
  • Willingness to engage with API documentation
  • Basic data literacy (what is a database? what is a query?)

These skills are introduced through embedded workshops, but students should expect a steeper learning curve than Module 1.


Phase 5: Schema Design as Ontological Argument​

Goal: Transform qualitative primitives into a structured data model.

Learning Objectives:

  • Distinguish between entities and attributes
  • Use data types (string, array, object, boolean) to enforce rigor
  • Design nested structures that represent analytical relationships
  • Recognize schema as theoretical commitment

Deliverables:

Schema Design Document (JSON format)

  • Complete schema matching operational primitives
  • Required vs. optional fields (with justification)
  • Data type choices (with rationale)
  • Example: "Why is projection a string and not an array?"

Schema Critique Memo

  • What does this schema assume about the nature of the phenomenon?
  • What alternative schema designs were rejected? Why?
  • What would a different framework's schema look like?

Skills Workshop: JSON syntax basics, schema validation tools, mapping theory to data structures

Assessment Checkpoint: Is the schema complete? Does it enforce relationships? Can the student defend structural choices theoretically?
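A hypothetical Phase 5 schema, written as the Python dictionary a student would serialize to JSON. The field names and the string-versus-array decisions are illustrative; the point is that each structural choice is a defensible (and debatable) theoretical claim:

```python
# A hypothetical Phase 5 schema. Field names and data-type choices are
# illustrative; each one is a theoretical commitment a student should defend.

import json

analysis_schema = {
    "type": "object",
    "properties": {
        "source_text_id": {"type": "string"},
        "findings": {
            "type": "array",                        # many findings per text
            "items": {
                "type": "object",
                "properties": {
                    "quote": {"type": "string"},        # evidence must be quoted verbatim
                    "category": {"type": "string"},     # exactly one label per finding
                    "projection": {"type": "string"},   # a string, not an array: one projection per finding
                    "implications": {"type": "string"},
                },
                "required": ["quote", "category", "implications"],  # "projection" stays optional
            },
        },
    },
    "required": ["source_text_id", "findings"],
}

print(json.dumps(analysis_schema, indent=2))  # paste into an API playground or validator
```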


Phase 6: API Integration & Structured Output​

Goal: Connect system instructions + schema to API, handle responses.

Learning Objectives:

  • Configure API calls with system instructions and schema
  • Parse structured responses (JSON extraction, error handling)
  • Interpret "processing tokens" as diagnostic feedback
  • Iterate on both prompt AND schema based on malformed outputs

Deliverables:

Working API Configuration

  • Code/screenshots showing successful structured output
  • Evidence of schema validation (API accepts it)

Iteration Log

  • Document 3+ cycles of schema refinement
  • What broke? How did you fix it?
  • Example: "Schema initially made implications optional. API returned outputs without this field. Changed to required, outputs improved."

Skills Workshop: API authentication basics, reading API documentation, debugging malformed JSON responses

Assessment Checkpoint: Does the system reliably produce schema-conformant outputs? Is there evidence of prompt-schema co-evolution?
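A sketch of the API step, using the generic requests library and a placeholder endpoint. Real providers each have their own endpoints, request formats, and schema parameters, so this shows a shape, not a recipe:

```python
# A sketch of Phase 6's API step using `requests` and a placeholder endpoint.
# Real providers (OpenAI, Anthropic, Google) have their own endpoints and
# request formats; consult their documentation. The response shape is assumed.

import json
import os
import requests

API_URL = "https://api.example-llm-provider.com/v1/generate"  # placeholder, not a real endpoint
API_KEY = os.environ.get("LLM_API_KEY", "")                   # never hard-code keys

payload = {
    "system_instruction": "You are an analyst applying [Framework X]...",  # Phase 3 document
    "response_schema": {"type": "object", "properties": {"findings": {"type": "array"}}},  # Phase 5 schema
    "input_text": "We all agree that the reforms were unavoidable...",     # invented sample text
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Malformed output is diagnostic feedback: a cue to revise the prompt,
# the schema, or both, and to log the failure in the iteration journal.
try:
    result = json.loads(response.json()["output"])   # assumed response shape
except (KeyError, json.JSONDecodeError) as err:
    print(f"Malformed output; log it in the iteration journal: {err}")
else:
    print(result.get("findings", [])[:1])            # spot-check the first finding
```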


Phase 7: Multi-Text Pipeline & Comparative Analysis​

Goal: Scale analysis across a corpus, enable cross-text queries.

Learning Objectives:

  • Design database schema for storing multiple analyses
  • Understand relational vs. document databases (conceptually)
  • Perform aggregate queries (frequency, patterns, outliers)
  • Visualize findings (optional: charts, tables)

Deliverables:

Database Design Document

  • Tables/collections structure
  • Relationships between entities
  • Sample queries with expected results

Corpus Analysis

  • Batch process multiple documents
  • Store results in database
  • Run comparative queries

Findings Report

  • What patterns emerged across texts?
  • What did structured data enable that manual analysis wouldn't?
  • What did the structure miss or obscure?

Skills Workshop: Database normalization basics, SQL/NoSQL query fundamentals, data visualization principles

Assessment Checkpoint: Does the pipeline work at scale? Are queries theoretically meaningful? Does the student recognize what quantification affords/forecloses?
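One way Phase 7's storage and comparative queries might look, using Python's built-in sqlite3 module and an invented table layout:

```python
# A sketch of corpus-level storage and one comparative query, using the
# standard-library sqlite3 module. The table layout is one possible design.

import json
import sqlite3

conn = sqlite3.connect("corpus_analysis.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS findings (text_id TEXT, category TEXT, quote TEXT, raw_json TEXT)"
)

# Batch step: each `analysis` is one schema-conformant output from Phase 6 (invented here).
analyses = [
    {"source_text_id": "speech_01", "findings": [{"category": "appeal to expertise", "quote": "experts agree..."}]},
    {"source_text_id": "speech_02", "findings": [{"category": "appeal to expertise", "quote": "the data show..."}]},
]
for analysis in analyses:
    for finding in analysis["findings"]:
        conn.execute(
            "INSERT INTO findings VALUES (?, ?, ?, ?)",
            (analysis["source_text_id"], finding["category"], finding["quote"], json.dumps(finding)),
        )
conn.commit()

# A comparative query: how often does each category appear across the corpus?
for category, count in conn.execute(
    "SELECT category, COUNT(*) AS n FROM findings GROUP BY category ORDER BY n DESC"
):
    print(category, count)

conn.close()
```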


Phase 8: Capstone Project (Optional Extension)​

Goal: Build a shareable web application OR publish research findings.

Option A: Web Application

  • Frontend UI for inputting text
  • Backend API integration
  • Results display with export functionality
  • Deployment (Vercel, Netlify, etc.)

Option B: Research Publication

  • Use pipeline to analyze significant corpus
  • Write findings as conference paper / blog post
  • Share schema + prompts as reproducible methods

Deliverable: Either deployed app OR published research with full methodological transparency

Final Assessment: Expanded rubric (adds: Data Modeling, Technical Execution, Scalability, Methodological Transparency)
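For Option A, the backend could be as small as a single endpoint that accepts a text and returns the structured analysis. The Flask sketch below is one assumption-laden possibility, not a required stack:

```python
# One possible shape for Option A's backend: a single endpoint that accepts a
# text and returns the structured analysis. Flask is an assumption; any small
# web framework would work, and the frontend simply POSTs to /analyze.

from flask import Flask, jsonify, request

app = Flask(__name__)

def run_analysis(text: str) -> dict:
    """Placeholder for the Phase 6 call (system instruction + schema + API)."""
    return {"source_text_id": "user_submission", "findings": []}

@app.route("/analyze", methods=["POST"])
def analyze():
    data = request.get_json(force=True, silent=True) or {}
    text = data.get("text", "")
    if not text:
        return jsonify({"error": "no text provided"}), 400
    return jsonify(run_analysis(text))

if __name__ == "__main__":
    app.run(debug=True)  # local development; production deployment depends on the chosen host
```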


What This Course Is Not​

A few clarifications to preempt misreadings:

  • This is not a course in "prompt engineering for productivity." The goal is not to make students more efficient users of AI. The goal is to make them more critical thinkers about AI and about the frameworks they use to interpret the world.

  • This is not a course that assumes AI is good or bad. It assumes AI is a socio-technical phenomenon worth understanding on its own terms, without either utopian hype or dystopian panic.

  • This is not a programming course. Programming skills are introduced as needed (JSON, API calls, database queries), but always in service of humanistic inquiry. Students who want to go deeper into code can; students who don't can still complete Module 1 with no-code tools.

  • This is not about teaching students to trust or distrust AI outputs. It's about teaching them to read those outputs critically as rhetorical artifacts produced by probabilistic systems shaped by training data, prompt design, and hidden instructions.


Closing Note​

I'm not interested in telling students "AI is bad, don't use it." I'm more interested in participating in a discussion that frames AI as a tool with specific capabilities and limitations and, by the way, here's one way I'm trying to use it responsibly while maintaining some critical awareness.

The technology behind generative AI tools is, of course, innovative and interesting. But I'm more interested in the practice of making prompt architecture and output schemas the primary artifacts of assessment for a particular set of tools at a particular time in their development. The idea is to make the learner's entire analytical apparatus visible. Assessment focuses on the integrity of their designed system: coherence of logic, rigor of structure, sophistication of constraints, the why and the how.

This is what the glass box reveals: there is no mind inside, just a system. And systems can be understood.


This syllabus is a work in progress. Feedback welcome.


Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0