
AI Literacy in RTS

info

See my Discourse Depot for more about this...

RTS approaches AI literacy as a form of interpretive inquiry. Instead of framing large language models as tools to be mastered, I'm moving toward framing them as producers of texts to be read, questioned, and reflected upon. This shifts the center of gravity: from control to attention, from performance to interpretation.

The philosophical heart of RTS — the tension between illusion and interpretation — deserves its own pedagogical surface. A place where students are asked, explicitly, to wrestle with what they're experiencing. The interpretive labor has to be invited, staged, scaffolded. Otherwise, the tension between illusion and inquiry remains implicit. And the pedagogical payoff of the core insight — the synthetic output as mirror, not message — remains unrealized.


What Is AI Literacy in RTS?

AI literacy in RTS is defined as:

The capacity to read, interpret, and interrogate the outputs of generative models as artifacts or texts. Not for what they reveal about the machine, but for what they reveal about us as readers of machine language.

This framing resists instrumental approaches that reduce literacy to "how to prompt effectively" or "how to spot a hallucination." In RTS, literacy is rhetorical, reflective, and deeply concerned with the habits of mind we bring to mediated language.


Key Pedagogical Commitments

Interpretation Over Operation

RTS reframes literacy from how to use a model to how to read it, centering interpretive practice. Students are never trained to extract better outputs from the model. Instead, they're invited, at key moments, to examine why the outputs feel coherent, persuasive, or authoritative, and what that reveals about their own sense-making.

Literacy here means being able to ask: What am I seeing? Why does it feel coherent? What am I projecting?

Metaphor Awareness

Language like "thinking," "understanding," or "hallucinating" isn't neutral. These terms import human-centered assumptions into technical systems. They do ideological work and carry significant baggage. RTS encourages students to notice these metaphors, reflect on their implications, trace how they function, decide when they conceal more than they reveal, and recognize when metaphor begins to substitute for explanation.

For the full treatment of how RTS handles metaphor at the interface level, see AI Observability.

Tension as a Pedagogical Feature

RTS makes no attempt to resolve the contradiction between "this model is not intelligent" and "its outputs feel meaningful." Instead, it treats that dissonance as productive and builds around it. Students are encouraged to inhabit the tension and reflect through it. The illusion of authorship is not denied, but interrogated. The goal is not to eliminate dissonance but to teach and learn through it: AI literacy as more than a disclaimer.

Reading as Method

The student in RTS is positioned as a reader: of the model's outputs, of their own reactions, of the system's framing choices. This places AI literacy within a humanistic tradition, closer to media studies, library and information science, literary theory, and rhetoric than to software engineering.

Reflexivity as Outcome

The desired outcome of AI literacy in RTS is reflective awareness. A student with strong AI literacy in RTS won't say "I understand how GPT works." They might say something closer to:

"I'm becoming more aware of how I make sense of things, and how easily coherence can be simulated."

The goal is to come away with greater clarity about how we make meaning in the presence of simulated fluency. This is a type of critical reflexivity.


The Knot: Synthesis as Material, Not Understanding

On the one hand, RTS argues that AI is not "thinking," not authoring, not understanding. On the other, it uses AI to generate syntheses and invites students to reflect on them as meaningful. That tension is not a contradiction I care to resolve; it's the terrain the project is mapping.

So in this way, the synthesis generated at the end of Movement 1 is not posing as a claim. It's not a truth. It's not a demonstration of AI cognition. It's a provocation. A structure students can read, question, disagree with, revise. The fact that it came from a system without intention is part of what makes it interesting to read closely. What RTS stages is a relationship between interpretability and synthetic coherence.

Exposing token counts, prediction traces, and synthetic outputs doesn't reveal "how the model works." RTS is not a transparency engine. It's a frame-maker. It shows students what kinds of information we treat as evidence, what metaphors we use to narrate system behavior, and what illusions of intention we project onto coherent output. The claim isn't that the model is legible. The claim is that we can study how and why we experience it as if it were. That is a literacy problem. That is a reading problem.

RTS doesn't reveal how the model "really works." It doesn't claim to. Token counts, thought summaries, and prediction metadata are not keys to the black box. They're traces. They don't explain much. Maybe they disturb. They interrupt the illusion of seamless fluency and remind students that what they're reading is the output of statistical correlation, not intention. The syntheses the model generates are presented as texts themselves, as artifacts worth reading not because the model understands the student's thinking, but because the act of reading them reveals how coherence is constructed in the absence of understanding.

The goal is not to collapse the difference between human and machine. It's to slow it down. To let students ask not just "what does this synthesis say?" but "what kind of sense does this output make, and why do I believe it?"


How AI Literacy Is Scaffolded in RTS

Reflection Rounds

Students respond to AI-generated follow-up questions across four recursive rounds. Each response shapes the next. The goal is to practice inhabiting evolving questions and noticing how meaning accumulates. For the full pedagogical design, see Movement 1: Spark to Stakes.

Synthesis as Mirror

The model's synthesis is a provocation. It reflects back the student's language in a new form. Students are asked to read this output critically, as something that feels meaningful, and therefore must be questioned. For how synthesis is constrained architecturally, see A Note on AI Use.

The Mirror Check (In Development)

After viewing the synthesis, students would complete a module called the Mirror Check. It prompts them to:

  • Reflect on whether the synthesis felt authored, echoed, or generated
  • Evaluate whether the model "understood" them — and what makes them feel that way
  • Highlight language that felt "human," and explain why

This is a pause. A moment to surface the illusion and examine its force.

Metadata Exposure

RTS displays token counts, AI thought traces, and summary metadata. Since I can't make anything truly transparent (that's Google's job, and one they're not doing), I can at least destabilize the illusion of fluency. Students see that these outputs are statistical correlates, not signs of understanding. For the full implementation of metadata panels and progressive disclosure, see AI Observability.
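As a rough illustration of what "metadata exposure" means in practice, here is a minimal sketch of the kind of data a metadata panel might surface and how it could be rendered as a plain trace line. All names here are hypothetical, not the actual RTS implementation:

```typescript
// Hypothetical shape of the metadata surfaced alongside each generation.
// The thought summary is itself a generated text, not an explanation.
interface GenerationMetadata {
  promptTokens: number;
  outputTokens: number;
  model: string;
  thoughtSummary?: string;
}

// Render metadata as a flat, mechanical trace line. The wording is
// deliberately statistical ("tokens", not "thoughts") so the panel
// interrupts rather than reinforces the illusion of fluency.
function metadataLine(m: GenerationMetadata): string {
  return `${m.model} · ${m.promptTokens} prompt tokens · ${m.outputTokens} output tokens`;
}
```

The point of a renderer like this is less the numbers themselves than the register: the panel narrates computation, not cognition.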

Interface as Pedagogy

The RTS interface itself embeds AI literacy through deliberate design choices:

  • Mechanistic loaders that visualize computation rather than implying cognition
  • Round-aware spinner text that teaches through waiting — each loading state carries a micro-lesson about statistical pattern-matching
  • Round Education Cards that prime critical reading before students engage with AI-generated questions, progressing from token-prediction framing to human agency sovereignty
  • Non-anthropomorphic UI language maintained through a centralized registry, replacing phrases like "AI is thinking" with "Processing" and "AI reasoning" with "Statistical inference trace"
  • AI Literacy Lens annotations in exported reports that guide critical reading of every AI-generated element
  • Progressive disclosure of metadata through expandable panels, rewarding curiosity with deeper technical insight

These elements work as a layered system: the education card primes critical stance, the loader reinforces it during generation, the metadata panel offers evidence for inspection, and the export annotations carry the literacy framework beyond the app. For the complete design philosophy and implementation details, see AI Observability.
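The centralized registry mentioned above can be sketched in a few lines. This is an illustrative example under assumed names (the keys, phrases, and `uiPhrase` helper are hypothetical, not the actual RTS code), showing how routing all copy through one table keeps anthropomorphic language from creeping in one string at a time:

```typescript
// Hypothetical centralized registry of non-anthropomorphic UI language.
// Keys and phrases are illustrative, not the actual RTS implementation.
type UIPhraseKey = "generating" | "reasoningTrace" | "modelOutput";

const UI_LANGUAGE: Record<UIPhraseKey, string> = {
  // Instead of "AI is thinking..."
  generating: "Processing",
  // Instead of "AI reasoning"
  reasoningTrace: "Statistical inference trace",
  // Instead of "The AI wrote..."
  modelOutput: "Generated output",
};

// Components request copy by key rather than hard-coding phrases,
// so the vocabulary can be audited and changed in one place.
export function uiPhrase(key: UIPhraseKey): string {
  return UI_LANGUAGE[key];
}
```

The design choice is the single table: an anthropomorphic phrase can't ship unless it first passes through a file whose whole purpose is to police that register.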


Why I think it matters

Because students are already encountering AI-generated language — in search, in productivity apps, in institutional systems. Teaching them to use these systems without teaching them to read them is insufficient.

Because fluency simulates understanding. And in an educational context, that simulation can become dangerously convincing. Language that sounds like it understands can easily become language that we trust. And trust, unexamined, is the squishy spot where illusion becomes belief.

Because literacy has always been about more than decoding symbols. It's about knowing when language is performing and for whom.

Because AI literacy is not really about demystifying the model. It's about demystifying ourselves.

RTS is not concerned with eliminating illusion. It aims to be one more space in which to study it. And in doing so, to make visible the habits of mind we bring to machines that speak.


Further Reading