Discourse Depot


Experiments in LLM-driven discourse analysis.

Primary Framework

Metaphor, Anthropomorphism & Explanation Audits

Analyzing how cognitive metaphors—“hallucination,” “learning,” “reasoning”—construct the illusion of mind and obscure the mechanistic reality of AI systems. This is the core work of Discourse Depot: a sustained audit of the language we wrap around machines that predict text.

View Audits 📓

Resources & Deep Dives

Corpus-level extractions, pedagogical frameworks, and tools for thinking critically about AI discourse.

Corpus Libraries

Thematic extractions across all audits: reframings, source-target mappings, accountability patterns, and critical observations. Queried from the database and consolidated for cross-document exploration.

Browse Libraries →

What Survives?

The deconstruction experiment: strip the metaphors and see what remains. Each text receives a verdict: Preserved, Reduced, or No Phenomenon.

Explore →

Glass Box Syllabus

A course framework treating machine instructions as scholarly inquiry. Schema as argument. Iteration as metacognition. Provenance as scholarship.

View Syllabus →

Educator's FAQ

"Isn't this just plagiarism?" "Does AI really understand?" A working archive of recurring questions and not so recurring answers.

Read FAQ →

Slippage Tools

Interactive learning objects: walk through LLM training and inference step-by-step, seeing the gap between what the language implies and what systems actually do.

Try Tools →

Experimental Frameworks

Side explorations applying LLM-driven discourse analysis to other domains. These are works-in-progress, not the main event.

Political Framing

Deconstructing how political actors use language to shape policy agendas, define national interests, and manufacture consent.

View Frames 📝

Critical Discourse

A dual-track forensic/activist analysis of power relations, agency, and structural ideology embedded in corporate and media texts.