
A Note on Using AI

Orchestrating Inquiry: How RTS Uses AI to Support Student Storytelling

Educational AI tools are usually characterized as falling into familiar categories: homework shortcuts, plagiarism risks, or surveillance systems. But what if generative AI could serve a different purpose entirely?

RTS explores a fourth possibility: using AI as what we might call a sequencing mechanism to generate prompts and organize responses in predetermined patterns. So you might call RTS, among other things, an experiment in constraints. And like all experiments, it raises as many questions as it answers.

The Core Question: Can AI Be Pure Process?

Setting aside the problematic outcomes of using AI as a source of truth or as a verification tool, RTS functions as a prompt-generation and response-organization system within a carefully designed pedagogical framework. My aim is to constrain the model to specific, bounded operations and to make its processes as visible and traceable as possible.

But here's the contradiction we're sitting with: even when constrained to mechanical operations (generating questions, reorganizing text), we often interpret the outputs as pedagogically meaningful.

We can't escape the fact that these computational outputs get read as if they contain insight, even when we frame them as simple text processing.

What RTS Tries To Do

If we are not expecting the model to produce insights or verify truth claims, can we constrain it to a few basic mechanical operations?

  • Prompt generation – producing questions according to predetermined categories
  • Text reorganization – reformatting student responses in new configurations
  • Pattern recognition – identifying recurring terms and themes in student writing
  • Context accumulation – incorporating previous responses into subsequent prompts

Whether this constitutes a useful "thinking tool" without being framed as a "thinking partner" is what we're investigating.
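
To make these boundaries concrete, here is a rough sketch of how the four operations could be expressed as a narrow module surface. This is illustrative only; the helper names (callModel and the functions below) are hypothetical, not RTS's actual code:

// Illustrative sketch: the only kinds of operations the model is asked to perform.
const boundedOperations = {
  // Produce categorized questions from a topic plus the accumulated reflection trail
  generateQuestions: (topicText, reflectionTrail) =>
    callModel('questions', { topicText, reflectionTrail }),

  // Reorganize the student's own writing into a new configuration
  synthesizeResponses: (reflectionTrail) =>
    callModel('synthesis', { reflectionTrail }),

  // Surface recurring terms and themes already present in the student's text
  identifyPatterns: (reflectionTrail) =>
    callModel('patterns', { reflectionTrail }),

  // Fold earlier responses into the context for the next prompt
  accumulateContext: (previousContext, newResponse) =>
    `${previousContext}\n\n${newResponse}`,
};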

How It Works: Three Constrained Operations

Every AI interaction in RTS serves one of three specific functions, each structurally limited and (generally) traceable:

1. Question Generation

At the beginning of the journey, when a student enters a research topic, the model generates 10 questions organized into five predetermined categories:

  • Spark of Inquiry – Personal connections and early fascinations
  • Inquiry as Story – Narrative potential and human stakes
  • Stakes and Significance – Broader implications and relevance
  • Puzzles and Unknowns – Contradictions, gaps, unexpected angles
  • Listeners and Lens – Audience-centered thinking for the podcast medium

The Context-Building Problem

The model doesn't generate questions in isolation. I am feeding it an accumulating "reflection trail":

// On the first round the model sees only the topic; on later rounds it also
// receives the accumulated reflection trail of the student's earlier responses.
const userContext = isFirstRound
  ? `Topic: ${sessionData.topic_text}`
  : `Topic: ${sessionData.topic_text}\n\nReflection Trail:\n${trail}`;

This creates what appears to be responsiveness and continuity. But is this meaningful contextual processing, or sophisticated pattern matching operating on accumulated text? I'm not making claims either way; I'm interested in what happens when we treat computational text processing as a thinking tool rather than a thinking partner.
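
For illustration, the trail referenced above might be assembled by concatenating each round's questions and answers. This is a sketch under assumed field names (previousRounds, question_text, response_text), not the actual schema:

// Hypothetical sketch of assembling a reflection trail from earlier rounds.
const trail = previousRounds
  .map((round, i) => `Round ${i + 1}\nQ: ${round.question_text}\nA: ${round.response_text}`)
  .join('\n\n');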

Structural Constraints

Questions must conform to a rigid JSON schema:

// Structured-output schema (abridged): every response must contain an
// interpretive summary plus question arrays for all five categories.
const FollowupQuestionsSchema = {
  type: Type.OBJECT,
  required: ["Interpretive Summary", "Follow-up Questions"],
  properties: {
    "Follow-up Questions": {
      type: Type.OBJECT,
      required: ["Spark of Inquiry", "Inquiry as Story", "Stakes and Significance",
                 "Puzzles and Unknowns", "Listeners and Lens"],
      // Per-category property definitions omitted here for brevity.
    },
    // The "Interpretive Summary" property definition is omitted here for brevity.
  },
};

This could be read as an attempt to get "better" questions, but that's not my main interest. It's about seeing what happens when we constrain the model to predictable formats that serve specific pedagogical functions.
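
For a sense of what the constraint produces, a conforming response looks something like this. The content is invented purely for illustration; only the key structure is dictated by the schema:

{
  "Interpretive Summary": "The student keeps returning to questions of trust between platforms and their users.",
  "Follow-up Questions": {
    "Spark of Inquiry": ["What first drew you to this topic?", "..."],
    "Inquiry as Story": ["Whose experience could anchor this as a narrative?", "..."],
    "Stakes and Significance": ["Who is affected if nothing changes?", "..."],
    "Puzzles and Unknowns": ["What finding has surprised you so far?", "..."],
    "Listeners and Lens": ["What would make a listener stay past the first minute?", "..."]
  }
}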

2. Synthesis Generation

After multiple reflection rounds, the model produces what we call a "synthesis": text that reorganizes and reflects student ideas back in a new configuration.

The prompt explicitly constrains the model to:

  • Work only with what the student has written
  • Identify recurring themes and tensions
  • Highlight potential narrative opportunities
  • Pose one forward-looking question
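
Expressed as instructions, those constraints might look roughly like the sketch below. The wording is illustrative, not the prompt RTS actually sends:

// Illustrative sketch of a synthesis prompt built only from the student's own text.
const synthesisPrompt = `
Work only with the student's own words below. Do not add outside information.
1. Identify recurring themes and tensions in the writing.
2. Highlight potential narrative opportunities for a podcast episode.
3. End with exactly one forward-looking question.

Student writing:
${trail}
`;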

The central tension: We call this "synthesis" and treat it as pedagogically useful, but it's generated by computational text processing. The interesting question is why we respond to these syntheses as if they contain insight. Sometimes they seem to.

I'm not claiming the model produces understanding, but I'm also not dismissing the possibility that constrained text processing can function as a useful thinking tool.

3. Role Maintenance

All interactions are bounded by strict prompts and output formats. The model cannot:

  • Engage in open-ended conversation
  • Add external information or research
  • Deviate from predetermined response structures
  • Access previous sessions or other students' work

Whether this successfully creates a "thinking tool" rather than a "thinking partner" depends partly on how students and instructors interpret and use the outputs.
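
Those boundaries are enforced mainly through the system instruction and the response schema. As a sketch of what such an instruction might contain (the real SYSTEM_INSTRUCTION is longer and more specific):

// Illustrative sketch only; not the production system instruction.
const SYSTEM_INSTRUCTION = `
You are a question-generation and text-reorganization component inside RTS.
- Respond only in the required JSON structure.
- Use only the topic and reflection trail provided in this request.
- Do not add outside facts, research, or citations.
- Do not engage in open-ended conversation with the student.
`;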

The Technical Implementation: Making Constraints Visible

RTS makes AI operations as traceable and transparent as the model's configuration allows, partly for pedagogical reasons and partly because we want to study what's actually happening.

The Living Ledger

Every AI interaction creates a permanent, structured record:

// Flatten the category-to-questions map into one row per generated question,
// each tagged with the session, round, category, and model that produced it.
const questionsToInsert = Object.entries(followupQuestionsData).flatMap(
  ([category, questionsArray]) =>
    questionsArray.map((question) => ({
      session_id,
      user_id: user.id,
      movement_number: 1,
      round_number,
      category,
      question_text: String(question),
      model_used: ACTIVE_MODEL,
    }))
);

This serves multiple purposes:

  • Traceability: Every output linked to its input and context
  • Reproducibility: Interactions can be analyzed and replicated
  • Transparency: Students see the full computational trace (as much as possible)
  • Research: Data for studying constrained AI interactions
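
If the backing store is a Supabase-style Postgres client (an assumption on my part; the table name below is hypothetical), persisting the ledger can be a single batch insert:

// Sketch assuming a Supabase-style client; 'generated_questions' is a guessed table name.
const { error } = await supabase
  .from('generated_questions')
  .insert(questionsToInsert);

if (error) {
  // Surface storage failures rather than silently losing the ledger entry.
  console.error('Failed to record generated questions:', error.message);
}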

Making AI Operations Visible

Rather than hiding AI processes, RTS aims to expose them:

// The API response exposes not just the questions but the full trace:
// token counts, a summary of the model's internal processing, and the model used.
return res.status(200).json({
  interpretive_summary: parsedOutput["Interpretive Summary"],
  followup_questions: questionsForClient,
  token_usage: {
    prompt: usageMetadata?.promptTokenCount || 0,
    response: usageMetadata?.candidatesTokenCount || 0,
    thinking: usageMetadata?.thoughtsTokenCount || 0,
    total: usageMetadata?.totalTokenCount || 0
  },
  thought_summary: thinkingContent,
  model_used: ACTIVE_MODEL,
  workbench_mode: !!model_override
});

Students see the computational cost of their questions, traces of the model's processing, and explicit markers of which AI system generated each response. This transparency doesn't aim for demystification; we can't fully demystify these systems. It's about making the interaction legible enough to study and question.
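
On the client side, that trace can be surfaced to students directly. A minimal sketch, assuming the response shape shown above (the function name is hypothetical):

// Minimal sketch of rendering the trace for students; not the actual UI code.
function renderTrace(response) {
  const { token_usage, model_used } = response;
  return [
    `Model: ${model_used}`,
    `Prompt tokens: ${token_usage.prompt}`,
    `Response tokens: ${token_usage.response}`,
    `Thinking tokens: ${token_usage.thinking}`,
    `Total tokens: ${token_usage.total}`,
  ].join('\n');
}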

The Pedagogical Experiment

What RTS Tests

This approach attempts to serve several pedagogical goals simultaneously:

  • Student Agency: The model never initiates content generation; it only responds to student prompts and uses student-generated material
  • Learning Support: Generated questions and text reorganization may help students continue their inquiry, though we can't separate this from the effects of sustained attention and structured reflection
  • Learning Transfer: Students practice moving from research to narrative through guided reflection, though the "guidance" is computationally generated
  • Critical AI Literacy: All AI processes that can be made visible are visible, supporting reflection on human-AI interaction patterns

What We Don't Know

Whether constrained computational text processing constitutes useful pedagogical support or elaborate theater remains an open question. Students report finding the questions and syntheses useful, but this could reflect:

  • Genuine utility of the computational operations
  • The inherent value of structured, sustained reflection
  • Cognitive biases that make any systematic prompt seem insightful
  • Some combination of all three

RTS does not claim to resolve this uncertainty. I'm more interested in designing systems that make it possible to study.

A Working Example: Orchestration in Practice

Consider a student researching "the impact of social media on political polarization":

  • Round 1: Model generates initial questions based only on the topic
  • Round 2: Model incorporates the student's Round 1 reflection to generate new questions
  • Round 3: Model works with accumulated context to generate deeper questions
  • Round 4: Model synthesizes the full journey and poses forward-looking questions

At each step, the model processes increasingly complex textual inputs to generate increasingly contextual outputs. Whether this constitutes useful computational assistance or something more complex depends on how we interpret pattern-based text processing.

The student experiences continuity and responsiveness. The system logs computational operations. The relationship between these two descriptions is part of what I'm investigating.
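
As a conceptual sketch of that orchestration (helper names hypothetical; in practice each round is a separate request, with the student's writing collected in between):

// Conceptual sketch of how one round's context is built and dispatched.
async function runRound(roundNumber, topicText, trail) {
  const userContext = roundNumber === 1
    ? `Topic: ${topicText}`
    : `Topic: ${topicText}\n\nReflection Trail:\n${trail}`;

  // Rounds 1-3 ask for categorized questions; round 4 asks for a synthesis.
  return roundNumber < 4
    ? generateQuestions(userContext)   // hypothetical helper
    : generateSynthesis(userContext);  // hypothetical helper
}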

The Constraint Paradox

The technical constraints that make RTS possible also highlight its central contradiction:

const config = {
  thinkingConfig: {
    thinkingBudget: -1,     // Allow internal processing
    includeThoughts: true,  // Make processing visible
  },
  responseMimeType: 'application/json',    // Enforce structure
  responseSchema: FollowupQuestionsSchema, // Prevent deviation
  systemInstruction: SYSTEM_INSTRUCTION    // Define operational boundaries
};

I constrain the model to prevent it from acting as an autonomous agent, but these same constraints require the model to generate contextually relevant questions and text reorganizations. The boundaries between computational tool and interpretive "partner" become difficult to maintain.

What I'm Learning

RTS doesn't actually resolve questions about AI in education; it makes certain questions more visible:

  • Can computational text processing be pedagogically useful without our attributing internal dispositions to it?
  • What happens when we treat AI outputs as mechanical operations rather than knowledge sources?
  • How do students interpret constrained computational assistance, and does the framing matter?
  • What forms of computational text processing support learning without replacing human inquiry?

I'm building RTS as a way to explore these questions empirically, not to prove a particular point about AI capabilities or limitations.

Basically, the model generates questions and text reorganizations that students find useful. RTS constrains these operations as tightly as it can while preserving their apparent utility. Whether this constitutes a successful thinking tool or something more complicated remains an open investigation.