
Research to Story (RTS) Framework

Welcome to Research to Story​

This is the documentation home for Research to Story (RTS): a prototype web application wrapped in a pedagogical framework designed to intervene at the metacognitive level of research and to help learners think like researchers who communicate. This site documents what is pedagogically, technologically, and philosophically under the hood.

Imagine a student enters RTS with a research question about oyster decline in the Chesapeake Bay. They're not asking RTS to scan databases or find datasets since that's work for library search tools. Instead, RTS asks them a different set of questions:

  • What personal connection do you have to this environmental issue? Why does this matter to you specifically, not just academically?
  • If you were telling someone about oyster decline over coffee, what would you want them to understand that the data alone can't convey?
  • What assumptions are you bringing to this research? What are you hoping to find or afraid to find?

Why these questions instead of "where can I find more sources"?

This is the whole point: these questions open up the cognitive and rhetorical space that makes research storyable. They help students see their inquiry not merely as information to compile, but as experience to shape, perspectives to bring, contradictions to surface, and stakes to articulate.

Through Movement 1's four reflection rounds, the student engages with AI-generated Socratic questions based on their own evolving thinking. So instead of approaching the interaction wondering what an AI model "knows" about Chesapeake Bay ecology, the focus is on what the student has noticed, wondered about, and struggled with. Each round builds on the previous one: the student selects a question, writes a reflection, and the (heavily instructed) AI uses that reflection to generate deeper questions for the next round. They document what they're observing in their inquiry process, what feels uncertain, and what threads they want to carry forward. Alongside this core work, they experiment with small audio production tasks, normalizing technical friction and help-seeking.

After completing the four rounds, they receive an AI-generated synthesis, designed less as a summary of their topic and more as a reorganization of their own thinking that reveals the patterns, tensions, and narrative possibilities they've developed. This becomes the foundation for transforming their inquiry into a multimodal narrative that:

  • Is grounded in research
  • Is structured for listening
  • Brings new insights
  • Surfaces what matters
  • Connects to human stakes

They've created a complete, documented trail of their thinking process, evidence of becoming a researcher who communicates. That's the whole point of RTS.

What RTS Is (and Isn't)​

RTS is not:

  • A search engine or database discovery tool
  • An academic research assistant that locates sources
  • A literature review generator or citation finder
  • A way to outsource the "finding stuff" part of research

RTS is:

  • An intervention in habits of mind at the early stages of undergraduate research
  • A scaffold for developing research dispositions: curiosity, persistence, rhetorical awareness, comfort with uncertainty
  • A system for helping students think like researchers who communicate
  • A framework that treats the journey from curiosity to public narrative as intellectually rich work and not just "presentation skills"

If a student enters "I'm researching oyster decline in the Chesapeake Bay," RTS won't provide literature scans. Instead, it will generate questions that cultivate dispositions and open narrative possibility:

  • Personal Connection (Making it storyable through lived experience): What brought you to this topic? What story would you tell if you had to explain why oysters matter using only your own experience? What do you want to understand better?
  • Research Potential (Making it storyable through structure and tension): What tensions or contradictions have you noticed in how this issue is discussed? What are you assuming about causes that you haven't yet verified? Are there perspectives missing from the dominant conversation?
  • Audience Awareness (Making it storyable through sensemaking and conversation): Who needs to care about this beyond environmentalists? How would you make someone who eats oysters understand the stakes differently than someone who doesn't? What might change if people knew this?

These questions aren't better or worse than library research tools; they do different pedagogical work. RTS scaffolds the thinking that happens before, during, and after source-gathering: the metacognitive work of framing, questioning, and narrating that transforms research from compilation into communication.

The Research-to-Story Bridge​

Research results should be rigorous, dependable, clear, complete, and impactful. Compelling stories should capture attention, bring new perspectives, open minds, reveal truths, and connect to lived experience.1

RTS exists in the space where these two sets of qualities meet.

The framework recognizes that story isn't decoration added to research. It's how we arrange ideas, evidence, and perspectives in a way that makes sense to an audience. Story is:

  • Sensemaking - organizing complexity for understanding
  • Structure - highlighting tension, contradiction, stakes
  • Conversation - locating voice in relation to others
  • Experience - shaping pacing, tone, mood, surprise for listeners

Traditional research pedagogy often treats these as separate skills ("do your research, then present it"). RTS treats them as interdependent from the start. The questions that make research rigorous—What assumptions am I making? What contradictions exist? Who benefits from knowing this?—are the same questions that make stories compelling.

A Note on AI Integration​

RTS uses generative AI (currently Google Gemini) as an orchestration engine. Through intentional system instructions, the model generates questions, reorganizes student reflections, and surfaces patterns in the student's own thinking.

The system instructions for the AI model are elaborate instruments focused on habits of mind, and they come from my own reflective practice of working with undergraduate students in research communication courses. They're designed to function the way I often do with students: as a persistent Socratic interlocutor who asks (a minimal sketch of such an instruction-constrained call follows the list below):

  • What do you notice that others might not?
  • What feels hard about this, and why might that difficulty be productive?
  • How would you explain this to someone who isn't already invested?
  • Where's the tension, contradiction, or surprise in what you've discovered?
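
To make the orchestration concrete, here is a minimal sketch of what an instruction-constrained call could look like, assuming the Node Gemini SDK (@google/generative-ai). The model name, instruction text, and function names are illustrative stand-ins, not the actual RTS system instructions or codebase.

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

// Illustrative instruction only; the real RTS system instructions are far more elaborate.
const SOCRATIC_INSTRUCTION = `You are a Socratic interlocutor for undergraduate researchers.
Ask questions about the student's own thinking: what they notice, what feels hard,
how they would explain it to an outsider, and where the tension or surprise lies.
Never supply facts, sources, or answers about the research topic itself.`;

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");

// Illustrative model name; RTS logs the exact model version used for each interaction.
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  systemInstruction: SOCRATIC_INSTRUCTION,
});

// Generate questions grounded only in the student's topic and prior reflections.
export async function generateSocraticQuestions(
  topic: string,
  priorReflections: string[],
): Promise<string> {
  const prompt = [
    `Research topic (student's own words): ${topic}`,
    `Prior reflections:\n${priorReflections.join("\n---\n")}`,
    "Generate 10 Socratic questions grounded only in the text above.",
  ].join("\n\n");

  const result = await model.generateContent(prompt);
  return result.response.text();
}
```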

Every AI interaction is logged with as much transparency as possible: token counts, computational costs, model versions used, even traces of the model's internal processing. Students learn to read AI outputs as statistical artifacts rather than authoritative answers. When they see a synthesis of their reflections, they also see: "This reorganization required 12,847 tokens of processing. Here's what the model's chain-of-thought looked like. This is pattern-matching, not understanding."
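
As one way to picture that logging, here is a hedged sketch of a per-interaction record. The field names are hypothetical, not the actual RTS schema; in the Gemini SDK, token counts are reported in the response's usage metadata (prompt, candidate, and total token counts), which is presumably where figures like "12,847 tokens" come from.

```ts
// Hypothetical shape of one logged AI interaction; the actual RTS schema may differ.
interface AIInteractionLog {
  sessionId: string;
  movement: number;         // e.g., 1 for Movement 1
  round: number;            // 1-4 within a movement
  modelVersion: string;     // exact model identifier used for the call
  promptTokens: number;     // taken from the API response's usage metadata
  outputTokens: number;
  totalTokens: number;
  estimatedCostUSD: number; // derived from published per-token pricing
  thoughtSummary?: string;  // trace of the model's internal processing, when available
  createdAt: string;        // ISO timestamp
}
```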

This attempt at transparency, paired with UI language that intentionally resists anthropomorphism, builds critical AI literacy through direct engagement with computational evidence rather than abstract warnings about “hallucinations”.

Example UI language from loader text in RTS:
  • "Generating a summary by identifying the most statistically significant sentences..."
  • "This process mimics human synthesis by finding thematic patterns in your writing."
  • "The model is generating a summary designed to read like a human interpretation."
  • "Read this upcoming summary critically. It's a statistical reflection of your words, not an understanding of your ideas."

About This Site​

This site iteratively documents the flows, processes, and pathways of Research to Story (RTS). It emerges from my experience supporting undergraduate researchers as they navigate the often invisible space between research and storytelling, a space rich with narrative frameworks, audience awareness, rhetorical decision-making, and the challenging of false divides between "making" and "thinking."

I've concluded that in early research communication courses, creativity flourishes when it's tied to inquiry. Students often feel overwhelmed if we just tell them 'be creative.' What works best is a structured approach to research-driven storytelling where the story comes from the discoveries they're making. That's where research, writing, and media all reinforce each other.

The Goal of Communicating Research

Take your audience on the same journey of discovery you experienced, making them feel the stakes, understand the complexities, and arrive at the conclusion with a sense of earned insight.

While each movement within RTS has been tested formally and improvisationally through coursework and workshops, the current phase of development focuses on leveraging what AI can do for orchestration—sequencing and moving inquiry data dynamically across movements—while being explicit about what it cannot do: think, understand, or know.

It is important to note

This project is not a teaching framework bolted onto uncritical AI enthusiasm. It is a learning experiment. I am learning alongside the system—through its gaps, frictions, and possibilities.

Why Research to Story?​

Rather than treating research and storytelling as separate phases, RTS approaches them as interdependent acts: Inquiry shapes story. Story reveals inquiry.

RTS aims to scaffold research as a lived, recursive, and relational process full of probing, discovery, medium-specific negotiation, and public contribution. It encourages students to move:

  • From what research is about
  • Toward what research can do

Through a structured but flexible progression, students are guided to:

  • Connect personal curiosity to evolving research questions
  • Embrace friction and uncertainty as productive
  • Critically engage audiences, infrastructures, and mediums
  • Shape inquiry into communicative artifacts that retain complexity, tension, and rhetorical awareness
  • See "technology" as neither anti-intellectual nor mystical, but as another way to structure evidence, sequence ideas, and make rhetorical choices


How the Framework Works: Five Domains of Growth​

RTS scaffolds research-to-communication development through interwoven dimensions that map loosely onto domains of maker growth:

1. RTS Movements​

Progressive inquiry stages from personal connection to public storytelling, with prompts that encourage analytical and rhetorical exploration, expansion, and synthesis.

Example: In Movement 1, students complete four rounds of reflection in which AI generates 10 Socratic questions based on their research topic and previous reflections, such as "What personal experience sparked your curiosity about this topic?" or "What contradictions have you noticed that traditional sources don't address?" Students select questions, write reflections, and the cycle repeats—each round building on the last.

After completing Movement 1's four-round cycle, students can choose "deep dive" extensions—focused explorations through specific inquiry lenses like "Spark of Inquiry" or "Puzzles and Unknowns." Each deep dive follows the same 4-round structure but narrows attention to a particular dimension of their thinking.
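
A hedged sketch of that cycle as a data flow, with hypothetical types and function names rather than the actual RTS code:

```ts
// Hypothetical sketch of Movement 1's four-round cycle; all names are illustrative.
interface ReflectionRound {
  round: number;              // 1 through 4
  questionsOffered: string[]; // the 10 AI-generated Socratic questions
  questionSelected: string;   // the question the student chooses
  reflection: string;         // the student's written response
}

async function runMovementOne(
  topic: string,
  generateQuestions: (topic: string, prior: ReflectionRound[]) => Promise<string[]>,
  askStudent: (questions: string[]) => Promise<{ selected: string; reflection: string }>,
): Promise<ReflectionRound[]> {
  const rounds: ReflectionRound[] = [];
  for (let round = 1; round <= 4; round++) {
    // Each round's questions are grounded in the topic plus all prior reflections.
    const questionsOffered = await generateQuestions(topic, rounds);
    const { selected, reflection } = await askStudent(questionsOffered);
    rounds.push({ round, questionsOffered, questionSelected: selected, reflection });
  }
  return rounds; // handed to the AI synthesis step after round 4
}
```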

Emphasis: (structure, pacing, message) + (insights, evidence, significance)

2. Audience and Medium Awareness​

Throughout RTS movements, questions continuously refocus students on context:

  • Who needs to hear this story?
  • Where might your audience encounter this work?
  • How does audio change what you can communicate compared to a written essay?

Students engage with questions about sonic affordances, listening practices, and how medium shapes meaning. This is a constant reminder that the work is not just "presenting research" but understanding audio storytelling as a distinct rhetorical mode.

Emphasis: audience, platform, timeliness, stakes

3. Black Box Micro-Engagements (BBME)​

Small production tasks within each movement that normalize technical friction, demystify tools, nurture creative resilience, and foster digital scholarship. Example tasks: Record a 30-second research summary. Layer two audio sources. Create a transition between clips. After each task, students complete four-part reflections:

  • Action Step Completion - What did you make?
  • Personal Reflection - Tools used, frustrations encountered, problem-solving approaches
  • Relational Reflection - Who or what helped? (YouTube tutorial? Roommate? Library staff?)
  • Source Documentation - Cite/link the help you received

This builds habits of documenting production as a social, iterative process—not a mystical individual skill.
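
A possible record shape for one BBME reflection, sketched from the four parts above (field names are hypothetical, not the actual RTS schema):

```ts
// Hypothetical record for one Black Box Micro-Engagement; names are illustrative.
interface BBMEReflection {
  taskPrompt: string;            // e.g., "Record a 30-second research summary"
  actionStepCompletion: string;  // what the student made
  personalReflection: string;    // tools used, frustrations, problem-solving approaches
  relationalReflection: string;  // who or what helped (tutorial, roommate, library staff)
  sourceDocumentation: string[]; // citations or links for the help received
}
```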

Emphasis: planning, tools, challenges, problem-solving

4. Reflection Journal Companion​

A parallel process-tracing scaffold where students surface what they notice, name tensions, and carry forward living questions, thereby building a visible story of inquiry itself.

At each movement, students respond to three prompts:

  • What I Am Noticing — Observations, sparks, emerging insights
  • What Feels Hard or Unsettled — Points of discomfort, contradiction, or doubt
  • What I Want to Carry Forward — Threads of inquiry, tension, or discovery to nurture in future stages

These accumulate across movements, creating a metacognitive record of how thinking evolves: evidence of risk-taking, response to feedback, and growth as a researcher.

Emphasis: creative choices, feedback response, growth trajectory

5. Generative AI as Orchestration​

A generative AI model is used at the API level, where it can be configured to interact with structured data stored in a database. That database functions as a living ledger in which every reflection, AI-generated question, synthesis, and tool interaction is recorded.

This creates a complete audit trail of computational operations, enabling research into how students interact with AI-scaffolded inquiry. Again, the system isn't using a model to generate answers about research topics; rather, it uses AI to generate questions about the student's own thinking.

AI scaffolds forward movement through structured prompting, reframing, and tension-surfacing, but never introduces new content. It can only work with what students have already written.
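
One way to picture that constraint is prompt assembly that draws only on student-authored ledger entries. This is a hedged sketch under assumed names, not the actual RTS schema:

```ts
// Hypothetical append-only ledger row; every reflection, AI-generated question,
// synthesis, and tool interaction becomes one entry.
interface LedgerEntry {
  id: string;
  sessionId: string;
  kind: "reflection" | "ai_question" | "synthesis" | "tool_interaction";
  authoredBy: "student" | "model";
  content: string;
  createdAt: string;
}

// The synthesis prompt is built only from student-authored entries, so the model
// can reorganize the student's own words but never introduce new subject matter.
function buildSynthesisPrompt(ledger: LedgerEntry[]): string {
  const studentText = ledger
    .filter((entry) => entry.authoredBy === "student")
    .map((entry) => entry.content)
    .join("\n---\n");
  return `Reorganize the following student writing to surface patterns, tensions, and narrative possibilities:\n\n${studentText}`;
}
```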

Emphasis: documentation, iteration, traceability

What RTS Does vs. What Other Tools Do​

To clarify RTS's distinctive pedagogical position:

| Tool Type | What It Does | Example |
| --- | --- | --- |
| Search/Discovery Tools | Find sources, databases, literature | Google Scholar, library catalogs |
| Research Assistants | Summarize sources, generate citations | Zotero, EndNote, some AI tools |
| Writing Assistants | Generate or improve prose | Grammarly, ChatGPT for drafting |
| RTS | Scaffold metacognitive habits that make research storyable | Generate questions about your thinking that open narrative possibility |

RTS assumes students will use other tools for finding and managing sources. What RTS aims to provide is the thinking infrastructure that makes source work meaningful, the habits of mind that turn information into insight, insight into story, and story into impact.

Who Is This For?​

For Students:​

  • Undergraduates developing research projects into public-facing narratives
  • Anyone learning to compose in multimodal formats (RTS is currently focused on audio essays/stories, aka “podcasts”)
  • Students building critical AI literacy alongside research skills
  • Those who want to understand their own inquiry process, not just produce an end product

For Instructors:​

  • Writing program faculty integrating multimodal composition
  • Librarians teaching research as iterative inquiry
  • Educational technologists exploring AI-enhanced pedagogy
  • Anyone interested in scaffolding research as a lived, documented process

For Researchers:​

  • Those studying human-AI interaction in educational contexts
  • Scholars of digital rhetoric and composing processes
  • Anyone investigating AI literacy frameworks and pedagogical applications
  • Researchers interested in complete audit trails of student-AI interaction


Aligning RTS with the ACRL Framework

RTS lands firmly within the ACRL Framework for Information Literacy in Higher Education's conceptual territory. Rather than treating the six frames as isolated outcomes, RTS designs its "movements" as lived, recursive practices that help students embody the dispositions and knowledge practices the Framework describes.

Research as Inquiry: RTS treats inquiry as an unfolding posture across every movement. Structured prompts, recursive follow-up questions, and synthesis activities mirror the ACRL emphasis on research as an iterative process of asking, refining, and reframing questions.

Scholarship as Conversation: Through reflective journaling and audio micro-engagements, students see themselves as contributors to ongoing conversations. Movements invite students to situate their voice in relation to others—who has said what, what's missing, what matters.

Authority Is Constructed and Contextual: RTS foregrounds rhetorical stance and audience awareness, helping students notice how credibility and authority shift depending on context, medium, and purpose. Movements that focus on friction, bias, and language framing explicitly ask students to interrogate assumptions about sources and categories.

Information Creation as Process: By integrating Black Box Micro-Engagements, RTS positions tools, media, and production choices as generative parts of research rather than neutral containers. Students experience firsthand how different modes of creation shape meaning and value.

Searching as Strategic Exploration: Movements that guide keyword expansion, metadata awareness, and source mapping help students recognize searching as a rhetorical and interpretive act, aligning with the ACRL frame that values exploration, flexibility, and discovery.

Information Has Value: RTS emphasizes attribution and acknowledgment practices in its micro-engagements and reflective prompts. Students learn to value the labor of knowledge production, recognize information privilege, and document the relational networks that support their projects.

In short, RTS does not replicate the ACRL frames as a checklist. Instead, it animates the spirit of the Framework by treating research as an interpretive, rhetorical, and creative process. Students are invited to see themselves as learners, makers, and contributors whose work carries both intellectual and public stakes.

Current Implementation Status

Fully Operational:

  • Movement 1 (4-round Socratic reflection system with AI synthesis)
  • 6 Deep Dive categories (focused extensions of Movement 1)
  • AI “transparency” features (token analytics, thought summaries, model tracking)
  • Reflection Journal Companion infrastructure
  • Black Box Micro-Engagement framework

In Development:

  • Scaling Movement 1 patterns to Movements 2-6
  • Enhanced instructor analytics and course management
  • Cross-movement synthesis capabilities

The system architecture established in Movement 1 (session management, AI interaction patterns, metadata capture) provides the proven template for creating a multi-movement framework.

A Note on What's Up

The attentive reader will notice that the pedagogical moves described in RTS (scaffolding personal connection to research, surfacing tensions and contradictions, centering audience awareness) could live in many forms. A well-designed course with intentional prompts, structured class discussions, and reflective journaling could accomplish much of what RTS does as software.

So why build it as a web application? Three reasons: First, I'm genuinely curious about what generative AI can and can't do for education—specifically, whether constrained computational text processing can scaffold sustained inquiry without replacing student thinking. Second, I'm fascinated by the fact that rapid prototyping of pedagogically grounded web apps is now feasible in ways that would have required significant technical resources just a few years ago. Third, the complete audit trail created by a database-backed system enables research into student-AI interaction patterns that would be impossible to capture otherwise.

This is both a teaching framework and a research instrument. The documentation that follows treats it as such.



Footnotes​

  1. Source: Presenting Research Results: https://shiny.stats4sd.org/PresentingResults_Book/story.html#scientific-results-and-story-telling