
About Discourse Depot

An image from the static

  • What happens when our everyday metaphors make a tool sound like a thinker?
  • How do I teach literacy in a landscape where tools are routinely described as if they have inner lives?
  • What happens to meaning when no one intended it?
  • If eloquence is computable, what is left for the human writer?
  • How do I navigate a system that is transparent yet alien, accessible yet incomprehensible, and competent yet mindless?
  • Where is the human labor hidden inside the 'artificial' intelligence?
  • Can a machine built to predict the statistical past ever truly help us imagine a different future?

What This Essay Covers

The outputs on this site don't have authors. The discourse about AI keeps inventing them anyway. Discourse Depot is where I try to hold both facts in view at once.

This is a reflective essay, part origin story, part pedagogical argument. It's long because I'm working through ideas instead of presenting conclusions. Here's what you'll find:

  • How disappointment with AI outputs became a diagnostic tool
  • The strange problem of meaning without authorship
  • Why anthropomorphic language matters more than it seems
  • The difference between describing how something works and explaining why it acts
  • What this project is (and isn't) trying to do

Each output, particularly from the metaphor, anthropomorphism, and explanation audit, is itself a critical reading lesson. Readers see anthropomorphic language identified, its implications unpacked, and alternative framings proposed. The corpus is building a counter-language library: models of how to talk about AI without projecting minds onto machines.

Prefer to explore the analyses first? Start here. This essay will make more sense after you've seen what the prompts produce.


"Any decision made by the AI is a function of some input data and is completely derived from the code/model of the AI, but to make it useful an explanation has to be simpler than just presentation of the complete model while retaining all relevant, to the decision, information. We can reduce this problem of explaining to the problem of lossless compression." 1


Welcome​

For centuries, when people encountered a beautifully composed poem, a compelling legal argument, or an extraordinary image, it was taken as unmistakable proof of human ingenuity and conscious thought. Such works were understood to be shaped by intent and the unique experiences of their creators, and eloquence itself was viewed as a defining trait of humanity. Well, here we are. Generative AI has blown up this long-standing association, and will continue to. Moving on.

Recent advancements in these technologies have shown that eloquence, the very quality that once signaled human presence, can now be generated computationally. The good news is that this doesn't signal the end-times; it just signals that the connection between creative output and human consciousness is no longer absolute. Is it really that surprising that what once signaled intention and experience can now be produced without either?

Generative AI proves that expression can be decoupled from experience, which also means that my “He’s talking, he must be real” reflexes might need some rewiring. "Is it intelligent?" has now become a boring technical question for me. "Why does it fool me?" is the fascinating psychological question.

More excitingly, a new space opens up for a refreshed reckoning with how meaning, authorship, and understanding are recognized in the first place. Also, this is a great time to start mapping this weird terrain. No need to dunk on or deny the capability of the machine. My preoccupation is more about surfacing the gaps between the eloquence of its outputs and the reality of its mechanisms.

But even more than that, this project seeks to ground these systems in their material origins. By stripping away the mythology of the "emergent alien mind," it is possible to see them for what they actually are: manufactured technologies, shaped by specific economic incentives, extracted labor, and human design choices.

Ultimately, Discourse Depot is a report from a specific vantage point: my work as a librarian. It is like I'm documenting a high-speed collision where knowledge, authority, evidence, authorship, and literacy crash into a wall of statistical probability. From where I stand, this collision could be seen as a defining event of our information landscape. My goal is to observe, learn, share, and sift through the debris to distinguish what is real from what is merely probable.

Welcome to Discourse Depot. ~ Troy


Where This Started​

This project began with paying attention to a recurring moment in discussions about generative AI, particularly in the higher education and academic librarian setting I work in: disappointment with the outputs.

In faculty meetings and workshops, I kept hearing colleagues describe AI errors as if they were the personal failings of a collaborator. And then I noticed I was doing it too. When an LLM gave me a bad answer, I felt betrayed, which revealed an expectation I hadn't examined: somewhere, I was treating a probabilistic text generator as something that could betray me. I had this strange feeling that a house of cards was being built at the same time it was about to collapse.

What led me to think I was interacting with something like a reference librarian when I was actually interacting with something like an improv artist, but one who doesn't understand improvisation or art or "true" or "false" and yet still produces interestingly contextual outputs? Why, somewhere down deep, was I spinning a story of "intent" when on the other side of the screen there's only mechanistic probability?

And "improv artist," by the way, is still the wrong metaphor. An improv artist intends to entertain. Here I'm just dealing with a system optimized for completion.

The Mechanism of Disappointment​

The research on these models suggests they operate through constraint satisfaction and statistical prediction. If they are not minds seeking truth, what are they? They are "engines" seeking plausible completion.

  • When a librarian doesn't know an answer, they stop. They have professional ethics.
  • When an LLM doesn't have the data, it imputes. It fills the blank with a statistically likely pattern because its only mandate is to finish the sentence.
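To make that completion mandate concrete, here is a minimal toy sketch (the vocabulary and probabilities are invented for illustration, and nothing about it resembles a real model's implementation). What it shows is the shape of the final step: the function always returns a token, and declining to answer is not a separate code path unless abstention has itself been made a likely token.

```python
import random

# Toy stand-in for a model's final step: a probability distribution over
# candidate next tokens. A real model produces one of these over tens of
# thousands of tokens at every position; these numbers are invented.
next_token_probs = {"1954": 0.41, "1962": 0.33, "1948": 0.26}

def complete(probs: dict[str, float]) -> str:
    """Sample one token from the distribution. Note the shape of this
    function: it always returns something. There is no branch that says
    'I don't know'; the model has no notion of which candidate, if any,
    is true, and abstaining would have to be a token like any other."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print("The treaty was signed in", complete(next_token_probs))
```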

I was reading these imputations as "lies" because I was projecting intent where there is only a mechanism balancing a mathematical ledger.

That disappointment, like all disappointments, had more to do with my boomeranged expectations. It showed me that I was walking around with a tight (but often unconscious) grip on a curiously wrought category mistake: a view of the LLM as some kind of subject, scaffolded by the very language used to describe it (learning, thinking, understanding).

However, when I toss out the expectation of veracity and ease into plausibility, there's a fair amount of promise, invention and wonder. The dice won't always land on six, but LLMs do a decent job creating SQL queries or turning an idea into a workable app. They deal in likelihoods rather than certainties, and I'm no longer disappointed that a predictive text machine is not a scholar. I'm no longer disappointed because I no longer expect it to be. I’ve made peace with that.

  • Find me articles about X and summarize them - High disappointment
  • Here are three PDFs I found. Summarize them. - Lower disappointment

The Problem of Authorless Text​

Generating and reading the outputs from a generative AI model on this site has been a strange experience. Here you'll find intricate texts that meet all the formal expectations of authorship (some are downright insightful), but they all share one thing: they evade any sense of traditional attribution. They were generated by an LLM given a 4,000+ word prompt and a JSON schema to fit into.

What happens to meaning, interpretation, and authority when the origins are algorithmic?

This absence of a clear human will or consciousness behind these texts creates a problem for anyone who believes that meaning derives from an author's intent. It's tempting to re-anchor their meaning in some origin story, to salvage intentionality by appealing to the training data as a kind of distributed ghost author: “it was in the training data." But that's not how these systems work. They carry the texture of authorship without the presence of an author, and I think that distinction matters.

LLMs are trained on vast corpora that include novels, essays, Wikipedia pages, and codebases, all of which are authored artifacts out there in the world operating as descriptions of that world. But in producing their outputs, LLMs do not store or retrieve those texts in an attributable way. The model does not recall them directly the way a database would. It does not retain sentences or ideas in the form they appeared, and it does not intend to use an author's voice, argument, or idea. The training process builds a statistical model of language, not an archive of it.

And the outputs are generated based on probability. Although I might say that the model is shaped by authorship, its outputs are not authored in any conventional, traceable sense. At the point of generation, they are synthetic constructions, novel combinations formed in latent space, not quotations or summaries of prior works. Yes, the model's capacities are downstream of human creative labor. But when people say "it's just remixing human work," what they often mean is: it can't be original or it's plagiarized. LLMs don't know whose work they've been trained on, and they don't decide to use any particular idea or phrase.
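A toy example can make that distinction concrete. The bigram sketch below (an invented micro-corpus, nowhere near how a transformer is trained) keeps only co-occurrence statistics from its "training data": the sentences themselves are not stored anywhere, and what it emits are novel recombinations rather than retrievals or quotations.

```python
import random
from collections import Counter, defaultdict

# Invented micro-corpus standing in for "authored artifacts."
corpus = [
    "the librarian finds the source",
    "the model predicts the next word",
    "the librarian checks the next source",
]

# "Training": count which word follows which. Only adjacency statistics
# survive this step; the original sentences are discarded.
counts: dict[str, Counter] = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, cur in zip(words, words[1:]):
        counts[prev][cur] += 1

def generate(start: str, length: int = 5) -> str:
    """Emit a continuation by sampling from the learned statistics.
    Nothing here retrieves or quotes an authored sentence."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        choices = list(followers)
        weights = [followers[w] for w in choices]
        out.append(random.choices(choices, weights=weights, k=1)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the librarian finds the next word" -- plausible, novel, unauthored
```

The corpus shaped the statistics, but no author of those sentences is present in, or retrievable from, the output.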

The more pressing question is: how do I locate meaning in a text that has no discrete authorial will behind it?

The authors are "in" the data, but not in the output. Because meaning doesn't reside in the data. It resides in the act of reading. The perception of "meaningful authorship" is a contribution of the human reader, not the machine. LLMs, when they succeed, do so because they have mastered the statistical structures of human clichés. 2

Here's the crux. It feels intuitive to say: if the input is authored, the output must carry authorship too. But is authorship really a substance that "transfers," or is it more like a function—a relationship between a person and an utterance, bound by intention, responsibility, and context?

Barthes's "death of the author" was a provocation. The reader was recast as a sovereign interpreter. Without a mind behind these outputs, the question "what does this mean?" transforms into "how do I mean at all?" Classrooms have long sustained the author's ghost through the teacher's authority as the one who knows what the text "really" means. Reader-response pedagogy began to dissolve that model, and generative AI completes the collapse.

Enter generative AI: content without consciousness, form without a self. Not just the death of the author but the refusal to ever be born.


The Bridge: Projecting a "Who" Where There's Only a "What"​

The same absence of intent that makes authorship strange also makes anthropomorphic language so seductive.

When I read an LLM's output and ask "what did it mean by this?" am I reaching for an author who isn't there? When I hear that an LLM "understands context" or "decides" what to write, I’m hearing language that invents an author where none exists. Both moves are attempts to anchor meaning in a mind. Both founder on the same fact: there is no "who" behind the curtain.

Two preoccupations emerged:

  1. Authorless text: What happens to meaning when no one intended it?
  2. Anthropomorphic framing: Why do we describe systems that process as if they know?

I eventually realized these are the same kind of problem. The philosophical puzzle (meaning without intent) and the pedagogical problem (how to talk about systems that mimic intent) collapse into one question: What habits of mind do I need to engage with language that has no mind behind it?


Why It Exists​

This project started with a simple issue: the relentless anthropomorphism in AI discourse. Words like "thinks," "understands," and "learns" started to obscure more than they explained.

  • Was this slippage just part of a broader cultural pattern in how I try to narrate complex systems?
  • Is the "black box" really dark, or is it just full of too much light?
  • Does agency language help me avoid cognitive labor around complexity and causality?
  • Do I just not have access to mechanistic explanations of LLMs at scale yet? Does the sheer scale of the math make "understanding" impossible for a human mind?
  • Is anthropomorphism the problem, or does it point to another one: my lack of a more sophisticated conceptual vocabulary for algorithmic "agency"?
  • Is anthropomorphism just a bad habit, or is it the only way to compress millions of polysemantic features into a human-readable narrative?
  • Is my question "Why do we talk about AI like it's human?" or more like "Can we develop a vocabulary for a system that is transparent AND incomprehensible?"

As I dug into metaphor theory, consciousness studies, polysemanticity, superposition, framing analysis, and discourse studies, a question emerged: Could an LLM be configured to systematically apply these analytical frameworks to texts and help surface the anthropomorphism, the metaphors, the how/why slippage?

The answer is this site. Decide for yourself.

Discourse Depot is a public workshop for experiments in using Large Language Models as research instruments. I write elaborate system instructions, often 4,000+ words, that operationalize Critical Discourse Analysis frameworks. I then task the model with applying these frameworks to popular, technical, and academic texts about generative AI.


Exhibit A: Narrative Transportation in AI Discourse​

Take this video from Anthropic. It's a beautiful piece of content rhetorically engineered to portray an LLM as a kind of neuroscientific patient-meets-poet, complete with plans, thoughts, even a mind I can read. One of the best examples of using the language of biology to describe the mechanics of statistics.

Of course the issue is that in an attempt to solve the black-box problem they only deepen the anthropomorphic one.

Note: You can see a metaphor, anthropomorphism, and explanation audit of the research paper upon which this video is based—here.

Here’s where I kept getting tripped up: as a viewer, I was never invited to reflect on the metaphors as metaphors. There's no moment of clarification like "we can think of this as..." or "to simplify...". The model isn't just like a mind: it has one. It doesn't just simulate planning: it does it.

These metaphors also come in hot and asymmetrical: they foreground similarities between AI and human cognition while minimizing profound differences. "Planning," "thinking," and "reasoning" carry rich semantic histories tied to agency, intentionality, and context-awareness, none of which apply to a large language model. But the metaphor glosses over those gaps.

The hinge on the rhetorical door squeaks early in the video: "We want to open up the black box and understand why they do things." On the surface, neutral enough. But "why they do things" already slips into anthropomorphism. The implication is that the model acts with purpose, with reasons. What does it mean to ask why a large language model "does" something? Why, as in what purpose? What reasoning? The phrasing already assumes an agent with intentions that can be discovered.

The story of Claude in the video becomes almost epic: I've opened the black box, traced the circuits, located a mind. But have I? Or have I only seen into my own projection? That's not a description of the model's mechanics, but it is good narrative transportation. Instead of saying, "We found that the vector for 'rabbit' activates the vector for 'habit' before the sentence is finished," they say, "Claude plans its rhymes." "Planning" implies a desire to reach a future state, and that's just not what is happening. That's turning math into narrative, and it just feels philosophically dishonest to me.

Exhibit B: A Masterpiece of Mystification​

There is an interesting research article for which this video and a summary serve as CliffsNotes. The research article admits that the machine is a tad inscrutable but pivots to reframing that inscrutability as a hidden inner life. I think that constitutes a textbook definition of anthropomorphism.

"Language models like Claude aren't programmed directly by humans—instead, they're trained on large amounts of data... they learn their own strategies..."

What's being described sounds like optimization (not learning), but it's framed like a chess player pondering the next move. It shifts the status of the model from a tool (built by humans) to an organism (evolved from data).

"What language, if any, is it using 'in its head'?"

A great question about what's happening when a model processes a prompt, but it is still pure invention of a spatial interiority, some interior theater of consciousness where a "who" exists. There is a high-dimensional mathematical space (matrices of numbers between the input and output), sure, but there's no "head." And there is no "private language," a phrase that suggests Claude is experiencing language before it outputs it. I get that they are dancing around in the humbling terrain of incomprehensibility, but a mechanistic description that resists easy meaning is no reason to craft an explanation of an inner life in its place. You're still stuck in some kind of fallacy going down this road. How/why slip.

"Is it only focusing on predicting the next word or does it ever plan ahead?"

Rhetorically, this projects intent and planning, a future-oriented desire. Is an LLM a super-amped Markov chain, and does "planning" emerge as some kind of statistical artifact? Fair question.

"Does this explanation represent the actual steps it took... or is it sometimes fabricating a plausible argument for a foregone conclusion?"

This is close to a kind of bureaucratic confession, but again, reframed as "lying," as deceptive intent. Yes, they wrote the code for the ledger (causal chain of weights), but not the code for the output, and they can no longer audit the ledger in any manual way. The output might be post-hoc rationalization that looks mathematically correct but has no basis in the actual calculation. That's super interesting, but not mystical.

This is still a kind of framing of the Black Box problem. They don't say: The matrix multiplication is too complex to track. Instead: What is it doing in its head? Turning what sounds like an accounting problem into a psychological mystery. Turning what should sound like a textbook admission of product liability (we can't audit our own software) into an asset (we have created a new form of life).

In a traditional Black Box, we literally cannot see the mechanism inside. However, Anthropic's research demonstrates that they can map specific "circuits" for tasks like math, poetry, or refusals. If it were truly a black box, mechanistic interpretability would be impossible. The article shows that researchers can "clamp" a specific vector to force the AI to agree with a lie (sycophancy), which actually proves the box is "open." The lights are on, and it's a box we can see through, but what we see is basically unreadable to a human mind.

Products and Producers​

If AI systems were:

  • conscious
  • intentional
  • self-directing

Then harms could be misattributed to:

  • rogue agency
  • emergent will
  • uncontrollable intelligence

But because they are:

  • optimization systems
  • designed artifacts
  • trained under specific incentives

Responsibility collapses back onto designers, deployers, and institutions.

Stripped to the bones, the Anthropic article asks: "Does this explanation represent the actual steps?" It is a fair enough question, but there's still the grappling with what counts as an explanation of a phenomenon, which involves blurring the questions of how something happened and why it happened.

When it comes to AI discourse and my own preoccupations with literacy, my question is more basic:

Can we focus on authorship (what the developers did) and mechanism (what the code does), rather than intent (what the AI "wants")?

For the past decade, AI metaphors have relied on biology (brain, neuron, learning, hallucination). I've said we don't need better metaphors, but maybe we do. By foregrounding the metaphors in a particular text, can these system instructions produce outputs that grant them the status of rhetorical objects, not scientific descriptions?

Critical Literacy Kinds of Questions​

I’ve used this Anthropic video and article with students to just start some thinking:

  • What metaphors are used to describe how Claude "thinks" or acts? Where do they come from? What do they highlight and what do they hide?
  • If we say Claude "plans," "refuses," or "gets tricked," who are we imagining behind the scenes? How does this affect what we think the system can or should do?
  • Does comparing Claude to a brain or describing interpretability as a "microscope" help or hinder understanding? What's the alternative?
  • If Claude "bullshits" or shows "motivated reasoning," who is responsible? How might metaphor shape how we assign blame, trust, or authority?
  • If you had to explain what a language model is "like," what metaphor would you use? A genie? A search engine? A mirror? A parrot? A calculator? Something else?
    • How does that metaphor guide what you expect or how you use it? What does it leave out?
  • Choose one common word used to describe AI (e.g., learns, thinks, hallucinates). Where have you seen or heard it? What assumptions does it carry? How might that word shape what you expect the model to do or how much you trust it?
  • When you ask a model a question, what do you imagine is happening on the other side?
    • What kind of "being" or "process" do you picture—consciously or not?

Before AI Literacy: Explanation Literacy​

How is generative AI different from other "disruptive technologies"? I can't think of any technology in higher education that has prompted me to ask: Can I make sense of this thing if I haven't grappled with what I mean by sense-making itself? I never heard these concerns about Wikipedia or Google. Why? Because I never caught myself talking about those technologies via metaphors that carried a theory of mind, or deploying explanations that carried implicit assignments of agency.

So I'm starting upstream. Before I can teach what AI is, I need a refresher on what it means to explain anything.

A section of the prompt that performs a Metaphor, Anthropomorphism, and Explanation Audit relies heavily on the work of Robert Brown. When I ask "Why did the AI say that?" I might be unconsciously asking for:

  • A genetic answer – What data or patterns led to this?
  • An intentional answer – What was the model trying to say?
  • A dispositional answer – What tendencies shape its behavior?
  • A reason-based answer – What evidence supports the claim?
  • A functional answer – What does this response do in context?
  • An explanatory theory – How does this fit into how language models operate?

Instead of asking whether an explanation answers a "how" or a "why" question, Brown argues that explanation is defined by whether it resolves a puzzle by invoking causal connections. The typical anthropomorphism in AI discourse is effective because it often closes the puzzle of how with a why. Seen through Brown's lens, any argument over artificial intelligence about who has the most refined model can be set aside, and I can simply focus on what kind of answer feels like understanding. This reframes the dispute as methodological or rhetorical before it even becomes metaphysical. What might often look like a contest over the nature of intelligence might be better characterized as one about the boundaries of explanation itself.

Reading All the Therefores​

While reading articles on generative AI, I noticed something: often, in mid-sentence, a rhetorical drift would happen between how/why questions. This was signaling slippage between "grammars" of explanation. A sentence would start mechanical: "softmax converts logits to probabilities" and end anthropomorphic: "the model chooses the most likely word." The "aboutness" of AI that began in a mechanical register would slide into an anthropomorphic register, while altogether skipping the human-system register (who designed or profits from this framing).

Common examples:

  • "The model chooses a word" → Imports intentionality
  • "The model decides what to write" → Imports agency
  • "The model understands context" → Imports consciousness
  • "The AI was trained on biased data" → Erases the humans who selected the training data
  • "The algorithm discriminated" → Obscures the company that designed and deployed it

Discourse Depot started there. At the arrow. What explanations are we constructing in the "→"? And critically: whose decisions disappear in that arrow?
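To hold the two registers side by side, here is the mechanical half of that softmax sentence written out as a minimal sketch (the logits and the three-word vocabulary are invented; real decoding also involves sampling strategies, temperature, and vastly larger vocabularies). Nothing in it wants, decides, or chooses; "chooses" arrives at the moment of description.

```python
import math

# Invented raw scores (logits) over a tiny vocabulary. A real model emits
# tens of thousands of these at every position.
logits = {"word": 2.1, "token": 0.3, "sentence": -1.2}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Convert raw scores into a probability distribution."""
    exp = {t: math.exp(v) for t, v in scores.items()}
    total = sum(exp.values())
    return {t: v / total for t, v in exp.items()}

probs = softmax(logits)
selected = max(probs, key=probs.get)  # greedy decoding: take the arg max

# Mechanical register:       "the arg max of the softmaxed logits is 'word'."
# Anthropomorphic register:  "the model chose 'word'."
# Everything on the right side of the arrows above is supplied by the
# describer, not by this arithmetic.
print(probs)
print(selected)
```

The arithmetic is the whole event; the agency is supplied afterward, in language.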

I'm not on a "no metaphors" campaign. More like a "know what kind of world each metaphor builds" campaign.

Whatever I call this literacy practice, a significant component must be about detecting these shifts and then asking the follow-up: who benefits from them? AI literacy then becomes (or includes) something like building the capacity to notice framework drift: the ability to detect when an explanation stops being explanatory, when it drifts from mechanism into mind, and to ask who benefits from that drift.


The Larger Argument​

The argument is not against capability. After using generative AI tools, I'm convinced of their capabilities. But topping that capability off with a dose of "understanding" is the dangerous part. How we talk about these capabilities is the issue. Do responsibility and accountability disappear into mystery, or concentrate in design, deployment, and oversight? Is there an agent in the machine, or a product and its producers?

We're at a moment where AI literacy is still being defined. Many approaches focus on technical skills (how to write prompts), ethical warnings (don't plagiarize, cite AI use), or skepticism (AI is unreliable, always verify). These are valuable, but they miss something foundational: how these systems are framed and conceptualized will shape everything else.

If students understand generative AI as statistical pattern processors and probabilistic language machines, they'll tend to calibrate trust appropriately, design better prompts, evaluate outputs critically, and engage in informed policy discussions.

If students understand generative AI as quasi-conscious "partners," they'll tend to over-trust outputs, outsource intellectual work inappropriately, miss system limitations, form parasocial relationships, and struggle with accountability questions.

This project addresses the foundational issue by making language itself the object of study. I'm moving beyond "teaching students not to be fooled" to exploring how students and I together can be active participants in shaping how we collectively understand and talk about AI.


A Note on Outrage and the Politics of Refusal​

Is the use of generative AI ethical? What are the terms of participation here, and who gets to decide them?

This site makes it clear that I'm not exactly entertaining a position of AI refusal. I don't think critical consciousness always has to look like refusal, and not all refusal is ethical, just as not all use is uncritical.

My concern is not really whether individuals refuse AI (I totally respect that), but whether institutions and companies are allowed to deploy it without assuming some responsibility for its design, limits, and harms.

If I were to refuse to use AI, I think it would be easy to do so from a product liability position or a false marketing position. And I would probably do so because it is mundane, instead of mysterious, in particular ways that demand accountability.

And there’s already a well-developed vocabulary for this. If someone was selling me a vehicle, for example, with the disclosure that its brakes might not work reliably, or worse, that the system might unpredictably override them, I probably wouldn’t try to frame an ethical response as a matter of individual consumer virtue. I can’t rely on my individual refusal to make cars safe. I rely on standards, liability, inspection, recall, and regulation. The burden would fall on designers, manufacturers, and institutions, not on me as a driver to “use the brakes responsibly.”

What stands out to me about generative AI is not that it introduces entirely new ethical problems into the mix, but that so much of the discourse works to evade these familiar frameworks by re-casting engineered systems as autonomous, intelligent, or agentic. One thing about institutions like higher ed is that an emphasis on individual refusal can sometimes stand in for deeper conversations about structural accountability, particularly when discourse about AI centers autonomy and agency rather than design and governance.

The hard truth is that higher education is not immune to extractive systems. Generative AI didn't invent that kind of problem on its own; it queued up in a long line that includes predatory publishing, vendor control of infrastructure, contingent labor, surveillance edtech, and more.

This doesn't mean critique is pointless. I just have a different target here. For me, the urgency isn't refusal but the discourse: how generative AI gets talked about, explained, and naturalized. Because another hard truth: in hierarchical academic labor systems, the privilege to refuse remains unevenly distributed. Refusing AI has become a kind of social currency, but like all currencies, it circulates unevenly. The ability to say no to generative tools and to opt for "slow," "human," "organic" work is often the privilege of those whose time is already buffered by stability. For others, especially those further from the secure center of the academic machine, AI might feel less like a shortcut and more like a lifeline.

We are always working within imperfect infrastructures. I can't make my use of AI "pure" any more than I can make my use of an opaque, vendor-supplied discovery layer, or a paywalled scholarly database "pure." But I can continuously ask: What does thoughtful use look like under conditions of structural constraint? What kind of world makes AI feel necessary, and how do I teach within that world without reinforcing its harms?

I'm all for inclusivity, especially the kind that legitimizes forms of participation that may not conform to dominant discourses of refusal. There's a world of difference between someone refusing AI because they've done the work of grappling with its implications, and someone refusing it because their institutional role protects them from ever having to rely on it.

My approach is improvisational. If it seems like I'm figuring this out in public, you'd be right. I'm building my own capacity to make situated, ethical, and strategic choices about generative AI.

This is my corner of refusal: to refuse to let the discourse about generative AI slip past as neutral, inevitable, or inherently progressive. If there is a micro-manifesto underneath it all it is that infrastructure is never just technical, it is also ideological. My engagement with AI is engagement with the political, economic, and linguistic arrangements that shape, sustain, and conceal it. One way I try to decenter GenAI is to place it in context. When I teach and talk about AI, I try to do so as part of larger infrastructures of authorship, power, and knowledge. AI literacy, then, can't just be about knowing how to use the tool. It has to be about understanding where the tool comes from, how it's maintained, how it is defined through both description and explanation (and the slippage), and what assumptions are baked into its design and deployment. - Troy


Independent Project

Discourse Depot is a personal, exploratory project informed by my work as an academic librarian at William & Mary Libraries. It reflects my ongoing effort to understand how generative AI is being framed, used, and questioned in educational contexts—especially in relation to teaching, learning, and research practices. While shaped by professional experience, the interpretations offered here are provisional, experimental, and my own. This project operates independently and is not reviewed or endorsed by William & Mary Libraries.


Contact​

Browse freely. Questions, feedback, and "hey, try analyzing this" suggestions welcome.

TD | elusive-present.0e@icloud.com (This is a burner email address that forwards to my real one, in an attempt to filter the spam.)


Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0

updated 01/14/26

Footnotes​

  1. Yampolskiy, R. V. (2019). Unexplainability and incomprehensibility of artificial intelligence. arXiv preprint arXiv:1907.03869. ↩

  2. Durt, C., Froese, T., & Fuchs, T. (2023). Against AI understanding and sentience: large language models, meaning, and the patterns of human language use. Preprint. ↩