Pulse of the Library 2025
This document presents a Critical Discourse Analysis focused on AI literacy, specifically targeting the role of metaphor and anthropomorphism in shaping public and professional understanding of generative AI. The analysis is guided by a prompt that draws from cognitive linguistics (metaphor structure-mapping) and the philosophy of social science (Robert Brown's typology of explanation). All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs—not guarantees of factual accuracy or authorial intent.
Source Title: Pulse of the Library 2025
Source URL: https://clarivate.com/pulse-of-the-library/
Model: gemini-2.5-pro
Temperature: 1.15
Tokens: input=14111, output=19084, total=33195
Source Type: report
Published: 2025
Analyzed At: 2025-11-18T10:48:31Z
Framework: metaphor
Framework Version: 6.0
Task 1: Metaphor and Anthropomorphism Audit
For each of the major metaphorical patterns identified, this audit examines the specific language used, the frame through which the AI is being conceptualized, what human qualities are being projected onto the system, whether the metaphor is explicitly acknowledged or presented as direct description, and—most critically—what implications this framing has for trust, understanding, and policy perception.
1. AI as an Autonomous Force of Progress​
Quote: "Artificial intelligence is pushing the boundaries of research and learning."​
- Frame: AI as an exploring agent
- Projection: This metaphor projects the human quality of intentional exploration and ambition onto AI. 'Pushing boundaries' is an activity associated with conscious agents like explorers, scientists, or pioneers who actively seek to expand the limits of knowledge or territory. It suggests AI has its own momentum and a goal-oriented drive to overcome existing limitations. This is a profound consciousness projection because it attributes not just computation but a form of teleological striving to the system. It reframes the probabilistic generation of novel text strings as a conscious act of 'discovery' and 'advancement,' implying the system 'knows' where the boundary is and consciously 'intends' to move beyond it, rather than simply executing its programming on a larger scale or with more data.
- Acknowledgment: Presented as direct description.
- Implications: This framing inflates AI's perceived autonomy and inevitability. It positions AI not as a tool that humans direct but as an independent force that shapes human activity, which can lead to a sense of fatalism or diminished human agency in policy discussions. If policymakers believe AI is 'pushing boundaries' on its own, they may focus on adapting to its trajectory rather than actively shaping it through regulation. It creates unwarranted trust in the system's outputs as being inherently 'advanced' or 'boundary-pushing,' rather than as statistical artifacts of its training data. This obscures the responsibility of the developers and deployers for the system's impacts.
2. AI as a Trusted Chauffeur​
Quote: "Clarivate helps libraries adapt with AI they can trust to drive research excellence, student outcomes and library productivity."​
- Frame: AI as a trusted vehicle operator
- Projection: The metaphor projects the human qualities of trustworthiness and skillful control onto AI. 'Driving' implies a conscious agent is in control, navigating toward a destination ('research excellence') while making decisions along the way. Trust in a driver is relational and based on perceived competence, sobriety, and good intentions. By stating the AI can be trusted 'to drive,' the text projects these conscious attributes onto the software. It conflates the mechanistic process of executing code and processing queries ('processing') with the conscious, responsible act of steering a complex process toward a valuable goal ('knowing' how to get there safely). The projection suggests the AI possesses the judgment and reliability of a responsible human agent.
- Acknowledgment: Presented as direct description.
- Implications: This framing constructs trust by associating a statistical tool with a responsible human role. It encourages institutions (libraries) to cede control and oversight to the technology, believing it is a reliable 'driver' of desired outcomes. This creates significant risk by obscuring the probabilistic and often unpredictable nature of LLMs. Liability becomes ambiguous: if the AI 'driver' causes a 'crash' (e.g., provides harmful misinformation), is the passenger (the user) or the vehicle manufacturer (Clarivate) responsible? By framing the tool as a trusted agent, it shifts the perceived responsibility away from the manufacturer and fosters over-reliance on the system's outputs.
3. AI as a Human Assistant or Colleague​
Quote: "Research Assistants"​
- Frame: AI as a human employee
- Projection: This product naming convention directly projects the entire role of a human research assistant onto an AI system. A human research assistant possesses consciousness, understanding, critical thinking skills, and a sense of responsibility. They can 'know' the goals of a project, 'understand' a user's intent, and make justified judgments about information quality. By labeling the AI an 'Assistant,' the text projects this whole suite of conscious cognitive abilities onto a computational system that merely processes queries and generates statistically probable responses. This is a foundational consciousness projection that conflates pattern-matching with genuine comprehension and helpful intent.
- Acknowledgment: Presented as direct description (a product name).
- Implications: This naming convention fundamentally misrepresents the nature of the tool and creates a misleading mental model for the user. It encourages users to interact with the system as if it were a knowledgeable, intentional colleague, leading to unwarranted trust and a potential abdication of their own critical responsibilities. It inflates the perceived value of the product, suggesting a library is acquiring a quasi-employee rather than a software license. For policy, this framing makes it harder to regulate the technology as a product with clear manufacturer liability, as it anthropomorphizes it into a collaborator or partner in the research process.
4. AI as a Cognitive Guide​
Quote: "Alethea Simplifies the creation of course assignments and guides students to the core of their readings."​
- Frame: AI as a teacher or tutor
- Projection: The verb 'guides' projects the human cognitive process of pedagogy and mentorship onto the AI. A human guide or teacher consciously 'knows' the subject matter, 'understands' the student's current state of knowledge, and intentionally leads them toward a deeper comprehension ('the core of their readings'). This requires a theory of mind and an ability to make justified pedagogical choices. The AI, in contrast, processes text and generates summaries or highlights based on statistical patterns, without any conscious understanding of the content, the student, or the concept of 'learning.' The metaphor projects conscious intent and comprehension onto a mechanistic text-processing function.
- Acknowledgment: Presented as direct description.
- Implications: This framing positions the AI as an authority on par with a human educator, encouraging students to trust its outputs as pedagogically sound guidance. It creates a significant epistemic risk, as students may offload the critical task of interpreting and synthesizing information to a machine that has no genuine understanding. This can stunt the development of critical thinking and reading skills. For institutions, it suggests the tool can substitute for human instructional labor, potentially devaluing the role of librarians and teachers. It misrepresents a content summarization feature as a sophisticated educational intervention.
5. AI as a Conversational Partner​
Quote: "Enables users to uncover trusted library materials via AI-powered conversations."​
- Frame: AI as a thinking interlocutor
- Projection: This projects the human capacity for meaningful, reciprocal dialogue onto the AI. A conversation between conscious beings involves shared understanding, turn-taking based on comprehension, and the generation of novel ideas from a basis of justified belief. Attributing 'conversations' to an AI suggests it 'understands' the user's input, 'knows' about library materials, and 'formulates' responses based on this knowledge. This is a consciousness projection that replaces the mechanistic reality—processing input tokens to predict a statistically likely sequence of output tokens—with the far more sophisticated act of conscious, reasoned dialogue. It implies the system 'knows' what it is talking about.
- Acknowledgment: Presented as direct description.
- Implications: Framing the interaction as a 'conversation' primes users to lower their critical guard and engage with the system socially, extending relational trust to a computational process. This makes them more susceptible to confidently presented misinformation ('hallucinations'). It obscures the fact that the AI's responses are not grounded in knowledge or belief, but in statistical patterns from its training data. This can lead to inefficient or misleading research paths if the user believes they are 'conversing' with a knowledgeable entity. It also sets a false expectation about the system's capabilities, leading to frustration when the 'conversation' breaks down due to the system's lack of genuine understanding.
6. AI as an Evaluative Expert​
Quote: "[The Assistant] Helps users create more effective searches, quickly evaluate documents, engage with content more deeply..."​
- Frame: AI as a critical thinking partner
- Projection: The verb 'evaluate' projects a higher-order cognitive skill onto the AI. Human evaluation of a document requires conscious judgment, applying criteria, understanding context, and forming a justified belief about the document's worth or relevance. This is an act of 'knowing' what makes a source good. By claiming the AI 'helps evaluate documents,' the text suggests the system performs this conscious cognitive labor. It conflates the mechanistic process of extracting keywords, summarizing text, or flagging statistical features with the conscious act of critical assessment. The projection is of a system that not only retrieves information but also 'understands' its quality.
- Acknowledgment: Presented as direct description.
- Implications: This framing dangerously encourages users to outsource critical judgment to the machine. A user might accept the AI's implicit or explicit 'evaluation' without performing their own, eroding information literacy skills. It creates a powerful illusion of authority; the system isn't just a search tool, but an expert that can tell you which documents are worth your time. This can introduce biases from the training data directly into the user's research process, presented as objective 'evaluation.' For policy, it makes it difficult to hold either the user or the provider accountable for the use of poor-quality information, as the responsibility for evaluation was deferred to the 'intelligent' system.
7. AI as an Aid to Human Assessment​
Quote: "[The Assistant] helping students assess books' relevance and explore new ideas."​
- Frame: AI as a relevance judge
- Projection: This projects the highly contextual, subjective, and conscious process of assessing relevance onto the AI. A student assesses a book's relevance based on their specific research question, the course context, their prior knowledge, and their critical goals—all components of conscious 'knowing.' The metaphor suggests the AI can perform or assist in this complex cognitive act. It implies the AI 'understands' the student's unique intellectual needs and the book's content in order to make a judgment about their alignment. This is a significant consciousness projection, attributing a deep, contextual understanding to a system that can only correlate keywords and usage patterns.
- Acknowledgment: Presented as direct description.
- Implications: The implication is that the AI can act as a shortcut to the difficult intellectual work of determining a source's utility. This devalues and potentially deskills the core research process. Students may trust the AI's 'assessment' of relevance, leading them to overlook unconventional but highly relevant sources, or to focus on sources that are merely statistically similar to their query, not conceptually central. It creates a false sense of efficiency, where the 'processing' work of sifting through results is replaced by a premature judgment call outsourced to a non-conscious tool. This undermines the goal of teaching students how to assess relevance for themselves.
8. AI as an Agent of Discovery​
Quote: "Enables users to uncover trusted library materials..."​
- Frame: AI as a treasure hunter
- Projection: The verb 'uncover' projects the human act of revealing something hidden through effort and insight. It implies that the materials were obscured and that the AI has a special capability, akin to an archaeologist or detective, to find them. This suggests a form of active, intentional seeking, rather than the mechanistic process of matching query vectors to an index. A human who 'uncovers' something 'knows' what they are looking for and 'recognizes' it when they find it. This metaphor attributes that same intentionality and recognition to the AI, framing a database query as a moment of discovery guided by the system's intelligence.
- Acknowledgment: Presented as direct description.
- Implications: This framing makes the research process seem more exciting and magical than it is, but it also mystifies the underlying mechanics. It makes the AI's results seem more valuable, as if they were 'uncovered' rather than simply 'retrieved.' This can lead users to attribute more significance to the returned results than is warranted. It also obscures the limitations of the index; the AI cannot 'uncover' what is not in its database or what its algorithm is not weighted to find. This creates a risk that users will believe their search is comprehensive because the AI has 'uncovered' things for them, when in reality it has only accessed a fraction of available knowledge.
9. AI as a Force to Be Controlled​
Quote: "...how effectively AI can be harnessed to advance responsible learning, research and community connection."​
- Frame: AI as a natural force (like a river or horse)
- Projection: This metaphor projects the qualities of a powerful, wild, and non-human force onto AI. 'Harnessing' is what one does to a river to generate power or to a horse to pull a plow. It implies that AI has its own intrinsic energy and direction, and that the role of humans is to capture and direct this pre-existing power. Unlike a tool, which is inert until used, a harnessed force has its own momentum. This subtly attributes a form of primitive agency or energy to the system itself, separate from the human intentions that created it. It does not project full consciousness, but it does project a non-human vitality that needs to be managed.
- Acknowledgment: Presented as direct description.
- Implications: This framing acknowledges AI's power but also externalizes it, treating it as a feature of the world to be managed rather than as an artifact designed by humans. This can subtly shift responsibility. If AI is a natural force, then negative consequences can be framed as failures of 'harnessing' rather than failures of design. It encourages a focus on control mechanisms and guardrails rather than on fundamental questions about whether certain systems should be built at all. For policy, this can lead to a reactive stance, trying to 'harness' an ever-advancing technology, rather than a proactive one that sets design principles from the start.
10. AI's Cognition as Human Understanding​
Quote: "Librarians who say there is little to no institutional focus on AI literacy were significantly less likely to be implementing AI (either no plans or not actively pursuing: 58.2%)."​
- Frame: AI Literacy as Reading and Writing
- Projection: The term 'AI literacy' projects the human cognitive model of reading and writing onto the ability to use AI tools. Literacy implies a deep, generative understanding of a symbolic system (language), allowing one to comprehend meaning, create new expressions, and critically analyze texts. By applying this term to AI, it suggests that using these tools is a similarly deep cognitive skill. It subtly frames AI systems as new forms of 'text' or even 'interlocutors' that one must learn to 'read' and 'write' with. This projects a level of semantic depth and stability onto AI systems that they do not possess, equating prompt engineering with the nuanced act of linguistic communication.
- Acknowledgment: Presented as direct description; it's a widely accepted term.
- Implications: This framing elevates the skill of using AI from a technical competency (operating software) to a fundamental literacy on par with reading. While this may encourage training, it also mystifies the technology. It implies that the AI is a complex communicative agent to be 'understood' rather than a probabilistic tool to be operated and critically evaluated. This can lead to the 'curse of knowledge,' where users who master prompt engineering believe they are 'speaking the AI's language,' thereby attributing more understanding to the system than is warranted. It focuses training on interaction techniques rather than on the underlying mechanics, data, and biases of the system.
11. AI as a Cognitive Enhancer​
Quote: "Alma Specto Uncovers the depth of digital collections by accelerating metadata creation and enabling libraries to build engaging online exhibitions."​
- Frame: AI as a perception tool (like a microscope or telescope)
- Projection: This projects the human quality of deep perception and insight onto AI. 'Uncovering depth' implies going beyond surface-level information to reveal hidden meanings, connections, and significance—a conscious act of interpretation. By stating the AI 'uncovers the depth,' the text attributes this interpretive, knowledge-creating capability to the software. It conflates the mechanistic processing of accelerating a task (metadata creation) with the conscious outcome of that task (gaining a deeper understanding). The system is framed not just as a tool for efficiency, but as a partner in intellectual discovery that can perceive things humans might miss.
- Acknowledgment: Presented as direct description.
- Implications: This framing suggests the AI offers not just data but 'insight,' encouraging users to trust its outputs as meaningful interpretations rather than just processed information. This can lead to a reification of statistical patterns as profound 'depth,' potentially leading research or curatorial work in directions dictated by algorithmic artifacts rather than human expertise. It positions the AI as a source of knowledge, rather than a tool for managing it. This subtly undermines the authority and expertise of the human librarian or curator, whose job is to provide that interpretive depth, by suggesting the software can now perform that function automatically.
Task 2: Source-Target Mapping
For each key metaphor identified in Task 1, this section provides a detailed structure-mapping analysis. The goal is to examine how the relational structure of a familiar "source domain" (the concrete concept we understand) is projected onto a less familiar "target domain" (the AI system). By restating each quote and analyzing the mapping carefully, we can see precisely what assumptions the metaphor invites and what it conceals.
Mapping 1: Pioneering Explorer to AI system operation​
Quote: "Artificial intelligence is pushing the boundaries of research and learning."​
- Source Domain: Pioneering Explorer
- Target Domain: AI system operation
- Mapping: The relational structure of an explorer intentionally venturing into unknown territory to expand knowledge is mapped onto the AI's process. The source domain includes concepts like having a goal (discovery), understanding the current limits ('the boundary'), and taking deliberate action ('pushing'). This entire intentional structure is projected onto the AI's generation of outputs. This invites the inference that the AI has agency, goals, and a drive for progress, and that its outputs are not just probabilistic but are genuinely 'new' in a way that advances a frontier of knowledge. It maps the conscious state of ambition onto computational function.
- What Is Concealed: This mapping conceals the purely mechanistic and statistical nature of the AI's operation. It hides that the system has no concept of a 'boundary,' no intentionality, and no understanding of 'research' or 'learning.' It obscures the reality that the AI is simply generating high-dimensional statistical patterns based on its training data. The metaphor replaces the complex reality of algorithmic processes and massive datasets with a simple, heroic story of a conscious agent's journey.
Mapping 2: Trusted Driver to AI-powered search and retrieval​
Quote: "Clarivate helps libraries adapt with AI they can trust to drive research excellence..."​
- Source Domain: Trusted Driver
- Target Domain: AI-powered search and retrieval
- Mapping: The structure of a human driver navigating a vehicle to a destination is mapped onto the AI's function. The source domain includes elements like: the driver (agent with control), the vehicle (tool), the road (navigated environment), and the destination (goal). Trust is placed in the driver's conscious judgment and skill. This is mapped onto the AI, which becomes the trusted agent in control, 'driving' the process. It invites the inference that the AI possesses the necessary judgment, awareness, and reliability to successfully guide the user to their intellectual destination without crashing. It maps justified belief in a person's skill onto a software product.
- What Is Concealed: This conceals that the AI is not an agent separate from the tool; it is the tool. It has no consciousness, judgment, or intentions. It's not 'driving' in any meaningful sense; it's executing queries based on statistical models. The metaphor hides the system's inherent brittleness, its susceptibility to bias from training data, and the fact that its 'navigation' is probabilistic, not deterministic or based on a true 'map' of knowledge. It obscures manufacturer liability by personifying the product.
Mapping 3: Human Research Assistant (a job role) to AI Software Feature​
Quote: "Research Assistants"​
- Source Domain: Human Research Assistant (a job role)
- Target Domain: AI Software Feature
- Mapping: The entire social and cognitive role of a human assistant is mapped onto the AI. This includes the assumptions of: helpful intent, a collaborative relationship, communicative competence, and the ability to understand and execute complex, context-dependent tasks. The user is positioned as the 'researcher' and the AI as their 'assistant.' This mapping invites the user to interact with the software as if it were a person who shares their goals and possesses genuine understanding. It maps the justified belief that a human assistant 'knows' their job onto a piece of software.
- What Is Concealed: This mapping completely conceals the non-human, non-conscious nature of the system. It hides that the AI has no intentions, no understanding of the user's goals, and no beliefs or knowledge. It is a tool, not a colleague. The metaphor conceals the vast amount of human labor (data annotation, RLHF) that created the illusion of helpfulness. It also obscures the commercial relationship: this 'assistant' is a product sold by a corporation, and its operations are aligned with that corporation's interests, not necessarily the user's.
Mapping 4: Human Teacher/Mentor to AI Text Summarization/Analysis​
Quote: "Alethea ... guides students to the core of their readings."​
- Source Domain: Human Teacher/Mentor
- Target Domain: AI Text Summarization/Analysis
- Mapping: The relational structure of a teacher guiding a student is projected onto the AI's interaction with a user. The source domain implies an expert (teacher) who possesses deep knowledge and a novice (student) who needs direction. The 'guiding' action is intentional, responsive, and based on the teacher's conscious understanding of both the material and the student. This mapping invites the inference that the AI possesses expert knowledge and can intelligently direct the user's attention to the most important parts of a text, thus performing a pedagogical function based on 'knowing' what is significant.
- What Is Concealed: This conceals the mechanistic reality that the AI is likely performing statistical text analysis, such as topic modeling or summarization, without any comprehension of the text's meaning or 'core.' The AI doesn't 'know' what is important; it identifies statistically significant phrases or sentences based on its training. The metaphor hides the lack of any pedagogical model, theory of mind, or genuine subject matter expertise. It presents a statistical artifact as expert guidance.
Mapping 5: Human Conversation to User-prompt-to-system-output sequence​
Quote: "...AI-powered conversations."​
- Source Domain: Human Conversation
- Target Domain: User-prompt-to-system-output sequence
- Mapping: The structure of human conversation—a reciprocal exchange between two conscious minds involving shared context, intent, and understanding—is mapped onto the user's interaction with the AI. The mapping invites the user to see their prompts as 'utterances' and the AI's output as 'responses' from a thinking partner. It implies the AI 'understands' the user and is 'saying' something meaningful back, participating in a joint activity of making sense. It maps the cognitive state of communicative intent onto the process of token prediction.
- What Is Concealed: This conceals the one-way, non-conscious reality of the interaction. The user is thinking; the system is not. The AI does not 'understand' the prompt. It tokenizes the input and uses a massive statistical model to calculate the most probable sequence of tokens to generate next. The 'conversation' is an illusion created by pattern-matching on a vast corpus of actual human conversations. The mapping hides the absence of shared reality, belief, or consciousness.
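As a concrete contrast, the following is a deliberately toy sketch of what a "conversational" turn reduces to mechanistically: sampling each next token in proportion to learned frequencies. The bigram table is hypothetical stand-in data; production LLMs use transformer networks over vocabularies of tens of thousands of tokens, but the structural point (frequency-driven prediction, not comprehension) is the same.

```python
import random

# Hypothetical bigram counts standing in for a trained model's learned
# probability distributions over a vastly larger vocabulary.
BIGRAM_COUNTS = {
    "<bos>":   {"the": 5, "search": 3},
    "the":     {"library": 6, "catalog": 4},
    "library": {"catalog": 5, "<eos>": 3},
    "catalog": {"<eos>": 7, "the": 1},
    "search":  {"the": 4, "<eos>": 2},
}

def next_token(prev: str) -> str:
    # Sample in proportion to counts: pattern-matching, not understanding.
    tokens, weights = zip(*BIGRAM_COUNTS[prev].items())
    return random.choices(tokens, weights=weights, k=1)[0]

def reply(max_len: int = 10) -> str:
    out, tok = [], "<bos>"
    while len(out) < max_len:
        tok = next_token(tok)
        if tok == "<eos>":
            break
        out.append(tok)
    return " ".join(out)

print(reply())  # e.g. "the library catalog"
```

Nothing in this loop represents the user, a belief, or a topic; the "response" is an artifact of the counts.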
Mapping 6: Expert Reviewer/Critic to AI-based text analysis and feature extraction​
Quote: "[The Assistant] ... quickly evaluate documents..."​
- Source Domain: Expert Reviewer/Critic
- Target Domain: AI-based text analysis and feature extraction
- Mapping: The cognitive process of expert evaluation, which involves applying criteria, making judgments, and assessing quality based on deep knowledge, is mapped onto the AI's function. The source domain implies a conscious agent with standards and the ability to form a justified opinion. This is projected onto the AI, inviting the user to believe that the system can make qualitative assessments about documents. The inference is that the AI 'knows' what constitutes a good or relevant document and can apply this knowledge on the user's behalf. It maps conscious critical judgment onto an algorithmic process.
- What Is Concealed: This conceals that the AI is not performing a qualitative evaluation but a quantitative analysis. It might be extracting metadata, counting citations, identifying keywords, or summarizing content based on statistical heuristics. It has no concept of 'quality,' 'truth,' or 'rigor.' The metaphor hides the fact that any 'evaluative' output is a proxy based on data features, not a judgment based on understanding. It obscures the biases embedded in these proxies (e.g., citation counts favoring older, established fields).
Mapping 7: Knowledgeable Librarian or Advisor to AI system matching query to document features​
Quote: "...helping students assess books' relevance..."​
- Source Domain: Knowledgeable Librarian or Advisor
- Target Domain: AI system matching query to document features
- Mapping: The source domain is a human expert (like a librarian) who engages in a reference interview to understand a student's conscious, specific need and then uses their deep knowledge of a subject and collection to recommend relevant books. This process of judging relevance is collaborative and based on a shared understanding of context. This complex, conscious social process is mapped onto the AI, suggesting it can perform a similar function of 'assessing relevance' for the student. It projects the librarian's conscious state of 'knowing the collection and the user's need' onto the software.
- What Is Concealed: This conceals that the AI has no understanding of the student's context, research question, or cognitive state. 'Relevance' for the AI is a statistical similarity score between the user's query and the text of a book or its metadata. It's a calculation, not a judgment. The mapping hides the absence of any real-world knowledge or contextual awareness, making the probabilistic output seem like a considered, expert recommendation. It erases the dialogic and interpretive nature of genuine relevance assessment.
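A minimal illustration of relevance-as-calculation, assuming a simple bag-of-words representation (the query, book titles, and vectors below are invented for the example): the "assessment" is nothing more than a ranked list of cosine similarity scores.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Similarity between two term-frequency vectors; a calculation, not a judgment.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = Counter("renaissance printing history".split())
books = {
    "A History of the Printing Press": Counter("printing press history europe".split()),
    "Cloud Data Pipelines":            Counter("cloud data pipelines".split()),
}

# The "relevance assessment" is just this ranked list of scores.
for title in sorted(books, key=lambda t: cosine(query, books[t]), reverse=True):
    print(f"{cosine(query, books[title]):.2f}  {title}")
```

The score knows nothing of the student's assignment, prior reading, or goals; those contexts exist only in the human user.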
Mapping 8: Archaeologist or Detective to Database Query Execution​
Quote: "Enables users to uncover trusted library materials..."​
- Source Domain: Archaeologist or Detective
- Target Domain: Database Query Execution
- Mapping: The source domain involves a conscious agent actively searching for something specific that is hidden or lost. The act of 'uncovering' implies insight, breaking a code, or digging through layers to find a valuable artifact. This narrative of discovery and revelation is mapped onto the simple technical process of a user typing a query and a system returning results from an index. It invites the inference that the AI has a special power of insight that allows it to find things that would otherwise remain hidden. It maps the 'aha!' moment of conscious discovery onto a standard information retrieval task.
- What Is Concealed: This conceals the mundane reality of database indexing and retrieval. The materials are not 'hidden'; they are indexed. The AI is not using 'insight'; it is using algorithms to match query terms to the index. The metaphor hides the limitations of the system—it cannot 'uncover' anything that isn't indexed or that the algorithm is not designed to find. It obscures the fact that the results are a function of the database's content and the search algorithm's parameters, not an act of intelligent discovery.
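The mundane reality described above fits in a few lines: a toy inverted index (built over an invented three-document corpus) in which "uncovering" is a set intersection over indexed terms.

```python
# Invented corpus; in production this would be the library's catalog records.
docs = {
    1: "annual report on library budgets",
    2: "guide to special collections",
    3: "special collections access policy",
}

index: dict[str, set[int]] = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def retrieve(query: str) -> set[int]:
    # "Uncovering" is an intersection of pre-built postings lists.
    postings = [index.get(term, set()) for term in query.split()]
    return set.intersection(*postings) if postings else set()

print(retrieve("special collections"))  # {2, 3}
```

A document absent from `docs`, or described in terms the query does not share, can never be "uncovered," which is exactly the limitation the metaphor hides.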
Mapping 9: Taming a Wild Animal or Natural Force to Deploying a Software System​
Quote: "...how effectively AI can be harnessed..."​
- Source Domain: Taming a Wild Animal or Natural Force
- Target Domain: Deploying a Software System
- Mapping: The relational structure of a human asserting control over a powerful, non-human entity (like a horse or a river) is mapped onto the implementation of AI. The source domain separates the agent (human) from the powerful force (AI) that must be controlled and directed. This mapping invites the inference that AI is an exogenous force with its own power and agency, which humans must struggle to manage. It projects a kind of wild, untamed energy onto the technology, making the act of controlling it seem heroic and necessary.
- What Is Concealed: This conceals that AI is not a natural force; it is an artifact, a product of human design, investment, and labor. It has no intrinsic energy or will. All of its 'power' comes from the data it was trained on and the computational resources it runs on, all of which are supplied and controlled by humans. The metaphor conveniently obscures the developers' and corporations' responsibility for the system's design and effects, reframing it as a problem of 'control' for the user.
Mapping 10: Human Literacy (Reading & Writing) to Competence in Using AI Tools​
Quote: "AI literacy"​
- Source Domain: Human Literacy (Reading & Writing)
- Target Domain: Competence in Using AI Tools
- Mapping: The mapping projects the features of linguistic literacy onto the use of AI. Source domain elements include understanding symbols, grammar, semantics, and pragmatics to both decode and encode meaning. This deep, generative, and critical cognitive ability is mapped onto the skill set for interacting with AI. This suggests that AI outputs are like 'texts' to be interpreted and that user prompts are like 'writing' that requires skill. It invites the inference that interacting with AI is a communicative act requiring a similar level of cognitive engagement as reading a book.
- What Is Concealed: This conceals the fundamental difference between language as a medium of conscious thought and an LLM's output as a statistical artifact. The AI has no semantics or meaning to 'encode.' The user is not 'communicating with' the AI but providing an input string to a function. The 'literacy' metaphor hides the system's lack of grounding, belief, or any communicative intent. It obscures the fact that critically 'reading' an AI's output requires evaluating it against external knowledge, not interpreting its non-existent authorial intent.
Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")
This section audits the text's explanatory strategy, focusing on a critical distinction: the slippage between "how" and "why." Based on Robert Brown's typology of explanation, this analysis identifies whether the text explains AI mechanistically (a functional "how it works") or agentially (an intentional "why it wants something"). The core of this task is to expose how this "illusion of mind" is constructed by the rhetorical framing of the explanation itself, and what impact this has on the audience's perception of AI agency.
Explanation 1
Quote: "Artificial intelligence is pushing the boundaries of research and learning. Clarivate helps libraries adapt with AI they can trust to drive research excellence, student outcomes and library productivity."
- Explanation Types:
  - Intentional: Refers to goals or purposes and presupposes deliberate design, used when the purpose of an act is puzzling
  - Functional: Explains a behavior by its role in a self-regulating system that persists via feedback, independent of conscious design
- Analysis (Why vs. How Slippage): This explanation is primarily agential, framing AI's role in terms of 'why' it acts. The first sentence presents AI itself as an agent with the purpose of 'pushing boundaries.' This is a classic Intentional explanation, attributing a goal to the technology. The second sentence reframes AI as a tool, but one whose function is explained by its purpose ('to drive research excellence'). This hybrid explanation shifts agency. First, AI is an autonomous agent of progress. Second, it is a functional component within the library system, deployed by Clarivate for the purpose of achieving excellence. The explanation emphasizes AI's role as a driver of outcomes, obscuring the mechanistic 'how' (how do statistical correlations in a model 'drive' excellence?) in favor of a teleological 'why' (it acts this way because its purpose is excellence). It completely obscures any explanation rooted in the system's technical architecture or training data.
- Consciousness Claims Analysis: This passage heavily attributes conscious states to the AI system, blurring the line between processing and knowing. It uses no mechanistic verbs; instead, it uses agentic verbs that imply consciousness.
- Consciousness Verbs: The key verbs and concepts are 'pushing boundaries,' 'trust,' and 'drive.' While not classic consciousness verbs like 'knows' or 'believes,' in this context they function as proxies for conscious agency. 'Pushing boundaries' implies an awareness of where the boundaries are and an intent to surpass them. 'Trust' is a conscious state extended by humans to agents they believe are competent and have good intentions. 'Driving' implies a conscious, goal-oriented agent is in control.
- Knowing vs. Processing: The text treats AI operations as a form of 'knowing.' An AI that can be 'trusted to drive' is framed as 'knowing' the route to research excellence. It is presented as a conscious, reliable agent, not a mechanistic processor of information. The claim isn't just that the tool processes data that can lead to excellence, but that the tool itself actively 'drives' this outcome, implying a form of steering intelligence.
- 'Curse of Knowledge' Dynamic: The author understands the desired outcome of using the AI (research excellence) and projects the intentionality required to achieve that outcome onto the AI itself. The author knows what good research looks like, and they conflate this knowledge with the AI possessing a form of knowledge that allows it to 'drive' towards that goal. The system's ability to generate statistically correlated outputs that look like research excellence is mistaken for the system knowing how to produce it.
- Concealed Distinction: The phrase 'AI they can trust to drive research excellence' obscures the actual mechanistic process. What the system actually does is: process a user's natural language query, convert it into vector embeddings, search a high-dimensional index of documents, retrieve documents with vectors statistically similar to the query vector, and generate a textual response or ranked list based on learned probability distributions. There is no 'driving' or 'trustworthiness,' only mathematical operations. (A code sketch of this pipeline follows this list.)
- Rhetorical Impact: This framing powerfully shapes the audience's perception of AI as an autonomous, reliable, and almost inevitable force for good. By attributing agency and trustworthiness to the AI, it encourages libraries to adopt the technology not as a mere tool but as a strategic partner. This increases the perceived value and authority of Clarivate's products. The consciousness framing (a trusted, driving agent) specifically fosters reliability. An audience is more likely to invest in and cede control to a system they believe 'knows' how to achieve their goals. A decision-maker (e.g., a library director) hearing that AI can be 'trusted to drive outcomes' might allocate budget differently, prioritizing this 'agent' over other resources, believing it offers a more direct path to success than a mere 'database' or 'tool' that requires extensive human effort to use effectively.
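To ground the concealed distinction above, here is a minimal sketch of the retrieve-and-generate pipeline the analysis describes: embed the query, score indexed documents by vector similarity, and assemble an output. The `embed` function is a crude character-frequency stand-in for a learned embedding model, and the corpus is hypothetical; the report does not disclose Clarivate's actual architecture.

```python
# Hypothetical sketch: query -> vector -> similarity search -> assembled output.
# No step involves judgment or trust; each is arithmetic over numbers.

def embed(text: str) -> list[float]:
    # Crude stand-in for a learned embedding model: character-frequency vector.
    return [text.lower().count(c) / max(len(text), 1) for c in "abcdefghijklmnopqrstuvwxyz"]

def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    q = embed(query)
    # "Trustworthy driving" reduces to sorting documents by a similarity score.
    return sorted(corpus, key=lambda d: dot(q, embed(corpus[d])), reverse=True)[:k]

def generate_answer(query: str, corpus: dict[str, str]) -> str:
    hits = retrieve(query, corpus)
    # Stand-in for the generation step; a real model samples tokens from
    # learned probability distributions conditioned on the retrieved text.
    return "Top match: " + "; ".join(f"{d}: {corpus[d]}" for d in hits)

corpus = {
    "doc-a": "methods for citation analysis in bibliometrics",
    "doc-b": "campus parking regulations and permits",
}
print(generate_answer("how do I analyse citations", corpus))
```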
Explanation 2
Quote: "ProQuest Research Assistant Helps users create more effective searches, quickly evaluate documents, engage with content more deeply, and explore new topics with confidence."
- Explanation Types:
  - Reason-Based: Gives the agent's rationale or argument for acting, which entails intentionality and extends it by specifying justification
  - Intentional: Refers to goals or purposes and presupposes deliberate design, used when the purpose of an act is puzzling
- Analysis (Why vs. How Slippage): This explanation is entirely agential, framing the AI as a helpful human collaborator. It answers the question 'Why use this tool?' by listing the purposive actions it performs ('Helps,' 'evaluate,' 'engage,' 'explore'). This is a form of Reason-Based explanation, but from the system's perspective; it acts in order to help the user. The AI's 'rationale' is user success. This framing completely elides the 'how'—the algorithmic processes that underpin these functions. It emphasizes the intended user experience, making it seem as if the AI's actions are motivated by a desire to assist. The alternative mechanistic explanation—describing the query expansion algorithms, the summarization techniques, or the topic modeling functions—is obscured by this intentional, agentic language that focuses solely on the 'why' of helpfulness.
- Consciousness Claims Analysis: This passage is saturated with consciousness claims, presenting mechanistic processes as conscious cognitive acts.
- Consciousness Verbs: 'Helps,' 'evaluate,' 'engage... deeply,' and 'explore' are all verbs that, in this context, imply conscious cognitive states. 'Evaluate' implies conscious judgment and the application of criteria. To 'engage deeply' with content is an act of conscious interpretation and critical thinking. 'Helping' implies an understanding of another's needs and an intention to assist. These are presented as actions the AI performs.
- Knowing vs. Processing: The passage consistently frames the AI's functions as 'knowing.' To 'evaluate documents' is to 'know' what makes a document good. To help a user 'engage deeply' is to 'know' how to facilitate understanding. The AI is positioned as a cognitive partner that possesses knowledge about research methods and content quality, rather than as a tool that processes text strings. The user's resulting 'confidence' is presented as a direct outcome of the AI's knowing assistance.
- 'Curse of Knowledge' Dynamic: The author knows what a human research assistant does to help a student: they help them refine searches, they teach them how to evaluate documents, they encourage deep engagement. The author then observes the AI performing functions that are superficial proxies for these actions (e.g., query suggestion, text summarization) and projects the full cognitive depth of the human analogue onto the AI. The author's understanding of the goal of the user's activity is attributed to the AI as an ability.
- Concealed Distinction: The claim that the AI 'helps... evaluate documents' obscures the mechanistic process of the system extracting features like citation counts, author affiliations, journal rankings, or keywords, and presenting them to the user. 'Engaging more deeply' likely refers to functionalities like hyperlinking entities, generating summaries, or suggesting related topics, which are all based on statistical analysis of the text, not a comprehension of its meaning. (A sketch of such query expansion and feature extraction follows this list.)
- Rhetorical Impact: This framing dramatically increases the perceived competence and authority of the AI. It positions the tool not as a simple search interface but as a sophisticated research partner that actively participates in cognitive tasks. This shapes the audience's (librarians, students) behavior by encouraging them to offload cognitive labor—like evaluation and deep reading—onto the system. If a user believes the AI can 'evaluate documents,' they are less likely to apply their own critical judgment, leading to a degradation of information literacy skills. It fosters an inflated sense of trust and dependency on a product whose actual mechanisms are completely hidden by the anthropomorphic language.
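As an illustration of the concealed mechanics, here is a hypothetical sketch of query expansion and document feature extraction, the kinds of functions the analysis suggests underlie 'helping users evaluate documents.' The synonym table and feature proxies are invented; ProQuest's actual implementation is not disclosed in the report.

```python
# Invented synonym table; real systems derive expansions from co-occurrence
# statistics or curated thesauri, not from any grasp of the topic.
SYNONYMS = {"teen": ["adolescent", "youth"], "screen": ["display", "device"]}

def expand_query(query: str) -> list[str]:
    # "Creating more effective searches" as table lookup over query terms.
    terms = query.split()
    return terms + [syn for t in terms for syn in SYNONYMS.get(t, [])]

def document_features(doc: dict) -> dict:
    # Surface proxies a user may read as an "evaluation" of quality.
    return {
        "citation_count": doc.get("citations", 0),
        "peer_reviewed": doc.get("peer_reviewed", False),
        "years_old": 2025 - doc.get("year", 2025),
    }

print(expand_query("teen screen time"))
print(document_features({"citations": 42, "peer_reviewed": True, "year": 2019}))
```

The features are data lookups, not verdicts; any sense of "the system evaluated this for me" is supplied by the reader.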
Explanation 3
Quote: "Alethea Simplifies the creation of course assignments and guides students to the core of their readings."
- Explanation Types:
  - Intentional: Refers to goals or purposes and presupposes deliberate design, used when the purpose of an act is puzzling
- Analysis (Why vs. How Slippage): This is a purely agential explanation focused on 'why' the AI acts. Its purpose is twofold: to 'simplify' a task for instructors and to 'guide' students. The verb 'guides' is particularly intentional, presupposing the AI has a goal (leading the student to 'the core') and a method for achieving it. This framing presents the AI as an active, intelligent agent in the educational process. It emphasizes the beneficial outcome and the AI's purposeful role in achieving it. What is obscured is any sense of 'how' it works. How does the algorithm define or identify 'the core' of a reading? Is it based on keyword frequency, topic modeling, or some other statistical proxy? The agential frame makes these mechanistic questions seem irrelevant; we are simply told the AI has the pedagogical purpose of guiding.
- Consciousness Claims Analysis: This statement makes a significant consciousness claim by attributing pedagogical understanding to the AI.
- Consciousness Verbs: 'Guides' is the key verb implying consciousness. A guide 'knows' the destination, 'understands' the terrain, and consciously leads another. It implies not just providing information, but curating an experience toward a specific learning outcome. It is an act of knowing and teaching.
- Knowing vs. Processing: The text frames the AI's function as 'knowing.' To 'guide' a student to the 'core' of a reading, the AI would need to 'know' what the core concepts are, 'understand' their significance, and 'believe' that directing the student's attention to them is pedagogically valuable. This is a claim of deep, contextual, and justified knowledge. The reality is a mechanistic process of text processing being mislabeled as a conscious act of knowing.
- 'Curse of Knowledge' Dynamic: The author, an educator or instructional designer, knows what it means to guide a student to the core of a reading. They are an expert in this cognitive and pedagogical task. They then see a tool that can, for instance, generate a summary or a list of keywords, and they project their own deep understanding of the pedagogical goal onto the tool. The tool's output is interpreted through the author's expert lens and then described as if the tool itself possesses that expertise. The author's knowledge about teaching is conflated with the AI 'knowing how' to teach.
- Concealed Distinction: The claim that the AI 'guides students to the core of their readings' obscures the technical process of statistical text analysis. The system likely uses algorithms like TextRank or latent Dirichlet allocation (LDA) to identify statistically important sentences or topics within a document. It is generating a statistical summary. This computational process is reframed as the conscious, intentional act of pedagogical guidance. The system isn't 'guiding'; it is extracting high-scoring sentences based on a statistical model. (A minimal extractive-summarizer sketch follows this list.)
- Rhetorical Impact: This framing positions the AI tool as a legitimate pedagogical agent, an assistant teacher. For an audience of instructors or library administrators, this suggests the tool can reliably handle parts of the teaching workload, increasing its perceived value. For students, it establishes the AI's outputs as authoritative guidance, encouraging them to trust its summaries or highlights as representing 'the core' of a text. This could lead students to skip reading the full text, trusting the AI's interpretation, and thereby miss crucial nuance, context, or counterarguments. It promotes a passive approach to learning, mediated by a non-conscious statistical tool presented as a wise guide.
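To make the concealed distinction tangible, here is a minimal frequency-based extractive summarizer, a simpler cousin of the TextRank-style methods mentioned above. The scoring scheme is illustrative only; Alethea's actual algorithm is not disclosed in the report.

```python
from collections import Counter

def summarize(text: str, n: int = 1) -> list[str]:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(w.lower() for s in sentences for w in s.split())
    # A sentence's "importance" is just the sum of its word frequencies;
    # no notion of meaning, pedagogy, or "the core" is involved.
    return sorted(
        sentences,
        key=lambda s: sum(freq[w.lower()] for w in s.split()),
        reverse=True,
    )[:n]

reading = (
    "The printing press changed scholarship. Scribes copied texts by hand. "
    "The press made scholarship reproducible and cheap."
)
print(summarize(reading))  # the top-scoring sentence, not "the core"
```

Whether the extracted sentence actually is the core of the reading remains a judgment only the (human) reader can make.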
Explanation 4
Quote: "generative AI tools are helping learners, educators and researchers accomplish more, with greater efficiency and precision."
- Explanation Types:
  - Functional: Explains a behavior by its role in a self-regulating system that persists via feedback, independent of conscious design
  - Dispositional: Attributes tendencies or habits, such as 'is inclined to' or 'tends to,' subsuming actions under propensities rather than momentary intentions
- Analysis (Why vs. How Slippage): This explanation frames AI's role functionally and dispositionally ('how' it typically behaves within a system). The AI tools are explained by their function within the academic ecosystem: 'helping... accomplish more.' It's a Dispositional claim because it describes what these tools 'tend to do' as a general propensity. It's a mechanistic 'how' explanation in that it focuses on the outcome (efficiency, precision) rather than a deeper 'why' of intentionality. However, the verb 'helping' introduces a shade of agency. While a hammer can 'help' drive a nail, the use of 'helping' with cognitive agents (learners, researchers) personifies the tool slightly. It emphasizes the tool's positive systemic effect, obscuring alternative explanations, such as how these tools might also hinder deep learning or introduce new forms of error.
- Consciousness Claims Analysis: This is a softer form of consciousness projection, but it still subtly blurs the processing/knowing distinction.
- Consciousness Verbs: The key verb is 'helping.' While tools can be said to 'help,' when the task is cognitive ('accomplish more' in research and learning), 'helping' implies a degree of understanding of the task. A calculator 'helps' with math, but it's a closed system. A research tool that 'helps' implies it understands something about the research process itself.
- Knowing vs. Processing: The statement leans towards processing but anthropomorphizes it. The AI isn't explicitly said to 'know' anything, but its ability to 'help' suggests an alignment with human goals that borders on understanding. The illusion is subtle. The AI mechanistically processes information in a way that results in efficiency, but the language suggests the AI is an active participant in the act of accomplishing, rather than a passive tool being used. The line is blurred between the system's function and a purported intent to be helpful.
- 'Curse of Knowledge' Dynamic: This is a very common example of the curse of knowledge. The author knows that when they use the tool, it helps them accomplish more. They are a conscious agent using a tool to achieve a goal. They then project the agency of the entire human-tool system onto the tool itself. Instead of saying 'Researchers can use AI tools to accomplish more,' they say 'AI tools are helping... researchers accomplish more.' The agency is transferred from the user to the tool because the author is so familiar with the outcome that they attribute the success to the tool's properties rather than the user's skillful application of it.
- Concealed Distinction: The general claim of 'helping... accomplish more' conceals the specific mechanistic processes. These include: rapid text generation based on prompts, document summarization, code generation, and data extraction. These are computational functions that can, when used correctly by a knowledgeable human, lead to greater efficiency. The language obscures the user's essential role in directing the tool and validating its output, making it seem as if the tool itself is the source of the increased accomplishment.
- Rhetorical Impact: This framing presents AI in a positive, non-threatening light as a helpful amplifier of human capability. It encourages adoption by focusing on universally desired outcomes like efficiency and precision. It minimizes perceived risks by framing the AI as an assistant ('helping') rather than a replacement. This language is effective marketing because it aligns the technology with the user's existing goals without making overly strong claims of autonomy that might be perceived as threatening. It builds a general sense of positive utility, making audiences more receptive to the more specific, agential claims made elsewhere about 'Research Assistants.'
Explanation 5
Quote: "Librarians understand that AI will require significant upskilling or reskilling of teams. However, structured professional development opportunities remain limited."
- Explanation Types:
  - Empirical Generalization (Law): Subsumes events under timeless statistical regularities, emphasizes non-temporal associations rather than dated processes
  - Genetic: Traces origin or development through a dated sequence of events or stages, showing how something came to be
- Analysis (Why vs. How Slippage): This explanation is almost entirely mechanistic, focusing on the 'how' of institutional adaptation. The first sentence is an Empirical Generalization based on the survey data: it states a general condition that librarians 'understand' a need. The verb 'understand' here refers to the consciousness of the human librarians, not the AI. The explanation is about the state of the library field. The second sentence presents another empirical fact. This passage explains 'how' the situation is unfolding: there's a recognized need for skills, but a lack of opportunity. This is a rare example in the text of a non-agential explanation regarding AI. It treats AI's impact as a causal force that requires a human response, but does not attribute agency to the AI itself. It emphasizes the human side of the equation—skills, training, and development.
- Consciousness Claims Analysis: This passage makes no consciousness claims about AI. In fact, it serves as a crucial baseline for comparison with other passages.
- Consciousness Verbs: The verb 'understand' is used, but its subject is 'Librarians,' not 'AI.' This is a clear and accurate description of a human cognitive state.
- Knowing vs. Processing: The passage is entirely about humans 'knowing' and 'understanding' the implications of AI. The AI itself is treated as an object or a phenomenon whose effects need to be managed. The distinction between a conscious human knower (the librarian) and the non-conscious technological object (AI) is perfectly clear in this sentence. It demonstrates that the authors are capable of making this distinction, which makes its absence elsewhere appear more strategic than accidental.
- 'Curse of Knowledge' Dynamic: There is no curse of knowledge dynamic at play here regarding AI. The description is a straightforward reporting of survey findings about human perceptions.
- Concealed Distinction: No mechanistic process of AI is being obscured here because the passage is not about what AI does, but about how humans are reacting to it. The focus is on the social and professional context surrounding the technology, not the technology itself. This passage provides a moment of clear, mechanistic, and sociotechnical framing. This clarity is rhetorically important. By including these sober, mechanistic descriptions of the challenges, the report builds credibility with its professional audience of librarians. This credibility then lends weight to the more agential and metaphorical language used in the product descriptions, making those claims seem more plausible.
- Rhetorical Impact: This framing shapes the audience's perception of the report itself as credible, well-researched, and empathetic to their professional challenges. By accurately reflecting the anxieties and needs of librarians ('upskilling,' 'limited opportunities'), the report builds trust with its readers. This creates a receptive frame of mind for the solutions proposed later in the document—namely, the adoption of Clarivate's 'Research Assistant' products. The sober, mechanistic framing of the problem makes the highly agential, consciousness-attributing framing of the solution seem more compelling and less like marketing hype. It's a classic rhetorical move: demonstrate you understand the problem in realistic terms, then present your solution in idealized terms.
Task 4: AI Literacy in Practice: Reframing Anthropomorphic Language
Moving from critique to constructive practice, this task demonstrates applied AI literacy. It selects the most impactful anthropomorphic quotes identified in the analysis and provides a reframed explanation for each. The goal is to rewrite the concept to be more accurate, focusing on the mechanistic processes (e.g., statistical pattern matching, token prediction) rather than the misleading agential language. Additionally, for quotes with consciousness claims (e.g., "the AI knows"), this section provides a technical reality check that explicitly states what the system actually does at the mechanistic level.
| Original Quote | Mechanistic Reframing | Technical Reality Check |
|---|---|---|
| "Artificial intelligence is pushing the boundaries of research and learning." | The application of large-scale computational models in academic work is generating outputs, such as novel text syntheses and data analyses, that fall outside the patterns of previous research methods. This allows researchers to explore new possibilities and challenges. | This statement anthropomorphizes the technology. The AI is not an agent 'pushing' anything. Instead, its underlying technology, such as the transformer architecture, processes vast datasets to generate statistically probable outputs that can be novel in their combination, a phenomenon often referred to as emergent capabilities. |
| "Clarivate helps libraries adapt with AI they can trust to drive research excellence..." | Clarivate provides AI-based tools that, when used critically by librarians and researchers, can help automate certain tasks, leading to gains in efficiency that may contribute to improved research outcomes. The reliability of these tools is dependent on the quality of their training data and algorithms. | The AI does not 'drive' excellence nor is it inherently 'trustworthy.' The system executes algorithms to retrieve and generate information. 'Trust' should be placed in verifiable processes and transparent systems, not in a black-box tool. The system processes queries to produce outputs whose statistical correlation with 'excellence' is a function of its design and training data. |
| "[The] ProQuest Research Assistant Helps users create more effective searches, quickly evaluate documents, engage with content more deeply..." | The ProQuest search tool includes features that assist users by suggesting related keywords to refine queries. It also provides extracted metadata and, in some cases, generated summaries to help users preview and filter content more efficiently. | The AI does not 'evaluate' documents or 'engage' with content. It uses natural language processing techniques to perform functions like query expansion, keyword extraction, and automated summarization. These are statistical text-processing tasks, not conscious acts of critical judgment or deep reading. |
| "[The] Ebook Central Research Assistant ... helping students assess books' relevance and explore new ideas." | The Ebook Central tool includes features that correlate a user's search terms with book metadata and content to provide a ranked list of results. It may also generate links to related topics based on co-occurrence patterns in the data, which can serve as starting points for further exploration. | The AI does not 'assess relevance' in a cognitive sense. Relevance is a judgment made by a conscious user. The system calculates a statistical similarity score between the query and the documents in its index. This score is presented as a proxy for relevance, but the system has no understanding of the user's actual research needs or the conceptual content of the books. |
| "Alethea ... guides students to the core of their readings." | Alethea is a software tool that uses text analysis algorithms to generate summaries or identify statistically prominent keywords and phrases from assigned texts. These outputs can be used as a supplementary study aid. | The AI does not 'guide' students or understand the 'core' of a reading. It applies statistical models, such as summarization algorithms like TextRank, to identify and extract sentences that are algorithmically determined to be central to the document's generated topic model. The output is a statistical artifact, not pedagogical guidance. |
| "...uncover trusted library materials via AI-powered conversations." | The system features a natural language interface that allows users to input queries in a conversational format. The system then processes these queries to retrieve indexed library materials that statistically correlate with the input terms. | The system is not having a 'conversation.' It is operating a chat interface that parses user input to formulate a database query. The AI model generates responses token-by-token based on probabilistic calculations derived from its training data of human text and dialogue. It has no understanding, beliefs, or conversational intent. |
| "Alma Specto Uncovers the depth of digital collections by accelerating metadata creation..." | Alma Specto is a tool that uses machine learning models to automate and speed up the process of generating metadata for digital collections. This enhanced metadata can improve the discoverability of items for researchers. | The AI does not 'uncover depth.' It performs pattern recognition on digital objects to classify them and extract relevant terms for metadata fields. This is an efficiency tool for a human-curated process. Any 'depth' is a result of human interpretation of the more easily discoverable materials. |
| "generative AI tools are helping learners... accomplish more..." | Learners are using generative AI tools to automate tasks such as drafting text, summarizing articles, and generating code. When used appropriately, these functions can increase the speed at which users complete their work. | The tool is not 'helping' in an agentic sense. It is being operated by a user. The user directs the tool to perform specific computational tasks (e.g., text generation). The increased accomplishment is a result of the human agent using a powerful tool, not of the tool's own helpful agency. |
| "...how effectively AI can be harnessed to advance responsible learning..." | The responsible integration of AI tools into educational workflows requires careful planning and policy development. Institutions must determine how to use these computational systems effectively to support learning goals. | AI is not a natural force to be 'harnessed.' It is a category of software products designed and built by people and corporations. Framing it as a force of nature obscures the accountability of its creators for its capabilities, biases, and limitations. |
| "[The] Summon Research Assistant Enables users to uncover trusted library materials..." | The Summon search interface allows users to find and access library materials that have been curated and licensed by the institution. The interface includes features designed to improve the discoverability of these pre-vetted resources. | The AI does not 'uncover' materials. It executes a search query against a pre-existing and indexed database of sources. The 'trust' comes from the human librarians who selected the materials for the collection, not from any property of the AI search tool itself. The AI is simply the retrieval mechanism. |
Critical Observations​
This section synthesizes the findings from the previous tasks into a set of critical observations. It examines the macro-patterns of agency slippage (the shift between treating AI as a tool vs. an agent), how cognitive metaphors drive trust or fear, and what actual technical processes are obscured by the text's dominant linguistic habits.
Agency Slippage​
The 'Pulse of the Library 2025' report exhibits a systematic and strategic oscillation between mechanical and agential framings of AI, a process I call 'agency slippage.' This is not random linguistic carelessness but a rhetorical pattern that serves to build credibility and then exploit it to promote a product. The text begins with a sober, mechanistic tone when analyzing survey data about librarians' concerns. In sections like 'What's changed since 2024?' and 'A clearer understanding of AI's challenges and risks,' AI is framed as an object: a topic for discussion, a cause of budget constraints, and something requiring 'upskilling.' The explanations here are primarily Empirical Generalizations, describing 'how' librarians feel about AI. This builds trust with the professional audience by acknowledging their reality. However, a dramatic slippage occurs when the text transitions from analyzing the problem to presenting the solution. In the introduction and especially in the 'Clarivate Academic AI' section (pp. 27-28), the language shifts abruptly from mechanical to agential. The explanation type moves from Brown's Empirical and Functional categories to the Intentional and Reason-Based. AI is no longer a topic but an agent that is 'pushing the boundaries.' Clarivate's products are not described as software but as 'Research Assistants' that 'help,' 'guide,' 'evaluate,' and 'uncover.' This slippage from object to agent is foundational to the report's persuasive architecture. The 'curse of knowledge' dynamic is central to this mechanism. The authors, understanding the intended use and desired outcome of their software, project this teleology onto the software itself. They know a researcher's goal is to 'engage deeply' with a text, so they describe their summarization tool as one that 'helps' the user do so. The author's knowledge of the human user's consciousness is transferred to the non-conscious tool. The consciousness projection pattern begins by establishing a social role for the AI—the 'Assistant'—which implies a baseline of helpful intent, a conscious state. Once this foundation is laid, specific functions are described using verbs that fit this agential role ('guides,' 'evaluates'). The text establishes the AI as a 'knower' in a social sense first, which makes subsequent claims about its cognitive abilities seem natural. This systematic oscillation—mechanical realism about the problem, agential idealism about the solution—is what makes the illusion of mind so effective. It disarms the critical reader with relatable challenges before presenting a magical, personified solution.
Metaphor-Driven Trust​
The Clarivate report masterfully employs metaphorical and consciousness-attributing language to construct the authority of its AI products and build an unwarranted form of trust. The core strategy is to systematically encourage the audience to shift from performance-based trust, which is appropriate for a tool, to relation-based trust, which is appropriate for a conscious agent but dangerously misplaced when directed at a statistical system. Performance-based trust is about reliability and predictability: 'Does the tool execute the function as specified?' The report touches on this with language about 'efficiency and precision.' However, its primary rhetorical effort is focused on building relation-based trust, which relies on perceived vulnerability, sincerity, and shared intentions. This is achieved through the central metaphor of the 'Research Assistant.' An assistant is someone you have a relationship with; you trust their intentions to be helpful. The report doubles down on this by explicitly using the word 'trust' in a relational context: 'AI they can trust to drive research excellence.' This is not the trust one has in a calculator's accuracy, but the trust one has in a chauffeur's judgment and good faith. Consciousness language is the critical mechanism for this trust transfer. Claiming an AI 'helps students assess relevance' or 'guides students to the core' functions as a powerful trust signal. It suggests the AI shares our most important educational and research goals. It implies the AI 'knows' what is valuable and 'wants' to help us achieve it. This framing positions the AI as a sincere, benevolent partner. This is far more persuasive than the mechanistic claim that the 'AI processes queries to return statistically correlated documents.' The former invites relational trust, while the latter only invites performance-based testing. This conflation of trust types is perilous. When users extend relation-based trust to a system incapable of reciprocating—a system without intentions, beliefs, or sincerity—they become vulnerable to manipulation. They are more likely to accept the AI's output without verification, believing it was generated in good faith. Moments of failure are also managed through this lens. An error from a 'trusted assistant' might be forgiven as a mistake, whereas an error from a 'probabilistic text generator' is correctly seen as a systemic property. The report's language systematically encourages the former interpretation, thereby preserving trust even in the face of failure and obscuring the fundamental unreliability of non-conscious systems.
Obscured Mechanics​
The anthropomorphic and consciousness-attributing language deployed in the 'Pulse of the Library 2025' report functions as a powerful rhetorical engine for obscuring the material, technical, and economic realities of AI systems. By framing AI as a helpful, knowing 'Research Assistant,' the text systematically renders invisible the complex and often problematic mechanics that produce the illusion of intelligence. The most significant concealment is technical. When the text claims an AI 'helps students assess books' relevance,' it hides the statistical and probabilistic nature of its operations. What is obscured is the fact that the system has no concept of 'relevance'; it performs a mathematical calculation of vector similarity between a query and an indexed document. The language of 'knowing' and 'assessing' conceals the system's utter dependence on its training data, including all the biases, stereotypes, and limitations inherent within that data. It hides the absence of any ground truth verification; the AI doesn't 'know' if a source is accurate, only if it is statistically similar to other sources. This consciousness obscuration is the central magic trick. Labor realities are also erased. The 'Research Assistant' did not spring into existence fully formed. Its seeming coherence is the product of vast, often hidden, human labor. This includes the academic labor that produced the millions of articles in the training data, the low-wage labor of data annotators and RLHF workers who cleaned and structured that data, and the ongoing work of moderators who deal with harmful content. The agential frame presents the AI as an autonomous worker, making the human labor that underpins it invisible. Furthermore, the material and economic realities are masked. Describing AI as an agent that 'pushes boundaries' mystifies the massive energy consumption and environmental cost of the data centers required for its training and operation. It is not an ethereal mind but a physically demanding industrial process. The economic motive is also sanitized. The 'assistant' is framed as a benevolent partner in research and learning. This obscures its true nature as a commercial product developed by Clarivate, a publicly traded company. Its functions are not primarily designed for pedagogy but for market capture, user engagement, and maximizing shareholder value. The entire metaphorical system works to replace the messy reality of a statistical, labor-intensive, energy-hungry commercial product with the clean, appealing fantasy of a disembodied, conscious, and helpful mind.
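The 'mathematical calculation of vector similarity' mentioned above is worth seeing in its entirety, because its brevity is the argument. The sketch below assumes queries and documents have already been reduced to numeric vectors (TF-IDF counts, neural embeddings, or similar); the vectors themselves are hypothetical toy values, not any product's actual index.

```python
# A minimal sketch of similarity-based "relevance" (toy vectors, assumed setup).
import numpy as np

def cosine_similarity(q: np.ndarray, d: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    denom = float(np.linalg.norm(q) * np.linalg.norm(d))
    return float(q @ d) / denom if denom else 0.0

def rank_documents(q: np.ndarray, docs: np.ndarray) -> list[int]:
    """Return document indices sorted by similarity score, highest first."""
    scores = [cosine_similarity(q, d) for d in docs]
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)

query = np.array([1.0, 0.0, 1.0])           # hypothetical embedded query
docs = np.array([[0.9, 0.1, 0.8],           # statistically close to the query
                 [0.0, 1.0, 0.0]])          # statistically distant
print(rank_documents(query, docs))          # -> [0, 1]
```

This is the entirety of the 'assessment of relevance': a ranked list of angles between vectors. Ground truth, accuracy, and the user's actual research need appear nowhere in the computation.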
Context Sensitivity​
The deployment of anthropomorphic and consciousness-attributing language in the Clarivate report is not uniform; it is highly context-sensitive and strategically distributed to maximize rhetorical effect. The density and intensity of these metaphors vary dramatically across different sections of the text, revealing a sophisticated understanding of audience and purpose. The report can be divided into two main registers: the 'diagnostic' and the 'promotional.' In the diagnostic sections, which analyze the survey data and discuss the challenges librarians face (e.g., pages 5, 6, 18), the language is markedly more mechanistic and reserved. Here, AI is treated as an external factor or a technical challenge. This is where the report builds its credibility, speaking the language of professional concern. The metaphor density is low, and consciousness claims are virtually absent. This technical grounding serves as a rhetorical foundation. Having established itself as a sober, realistic observer of the field, the report then 'spends' this credibility in the promotional sections. The intensity of anthropomorphism escalates sharply in the introduction (p. 7), the conclusion (p. 25), and most dramatically, in the product showcase for 'Clarivate Academic AI' (pp. 27-28). It is here that 'processing' becomes 'understanding' and the tool becomes an 'assistant.' The language shifts from describing AI's limitations to celebrating its capabilities, and this shift is always marked by a move from mechanical to agential framing. Capabilities are described in profoundly conscious terms ('guides students to the core,' 'quickly evaluate documents'), while limitations are framed mechanistically or socially ('budget constraints,' 'need for upskilling'). This asymmetry is strategic: it attributes successes to the AI's innate agency and failures to external human factors. The shift from acknowledged metaphor ('AI is like a tool') to literalized metaphor ('our AI Research Assistant guides you') also occurs in this promotional context. The product names themselves—'Research Assistant,' 'Alethea'—are the ultimate literalization. This strategic variation reveals the text's primary goal: to leverage the professional anxieties it accurately identifies in the diagnostic sections to market its products as the agential solution. The anthropomorphism is a tool for persuasion, deployed most aggressively when a purchase decision is being implicitly solicited.
Conclusion​
This final section provides a comprehensive synthesis of the entire analysis. It identifies the text's dominant metaphorical patterns and explains how they construct an "illusion of mind." Most critically, it connects these linguistic choices to their tangible, material stakes—analyzing the economic, legal, regulatory, and social consequences of this discourse. It concludes by reflecting on AI literacy as a counter-practice and outlining a path toward a more precise and responsible vocabulary for discussing AI.
Pattern Summary​
The discourse within Clarivate's 'Pulse of the Library 2025' is built upon a sophisticated and interconnected system of metaphors, dominated by two overarching patterns: 'AI AS A HELPFUL HUMAN COLLABORATOR' and 'AI AS AN AUTONOMOUS FORCE OF PROGRESS.' The first pattern is most explicitly realized in the branding of AI tools as 'Research Assistants' and in the persistent use of verbs suggesting helpful, conscious intent ('helps,' 'guides,' 'enables'). This pattern projects a social role onto the AI, inviting users to interact with it as a partner rather than a tool. The second pattern, framing AI as an agent that is 'pushing the boundaries' and 'driving' outcomes, works in tandem to create a sense of powerful inevitability. The AI is not just a passive assistant waiting for commands; it is an active force shaping the future of research. These patterns are deeply interconnected. The 'Autonomous Force' metaphor creates the context of a rapidly changing landscape, inducing a sense of urgency and a need for adaptation among librarians. The 'Helpful Collaborator' metaphor then presents Clarivate's products as the perfect solution—a friendly, intelligent agent that can help navigate this new terrain. The foundational, load-bearing pattern is the projection of consciousness that underpins the 'Helpful Collaborator.' The entire edifice rests on the conflation of mechanistic processing with conscious knowing. By naming the tool an 'Assistant' and claiming it 'understands' or 'evaluates,' the text establishes the AI as a 'knower.' This epistemic claim is the necessary precondition for all subsequent agential claims. If you remove the illusion that the AI 'knows' what it's doing, the idea that it can be a 'guide' or a trusted 'driver' collapses into nonsense. The metaphorical system would crumble if 'the AI guides' were replaced with the more accurate 'the AI extracts statistically significant sentences.'
Mechanism of Illusion: The "Illusion of Mind"​
The 'illusion of mind' in this report is constructed through a subtle yet powerful rhetorical architecture. The central sleight-of-hand is the strategic blurring of the distinction between processing and knowing, enabled by the 'curse of knowledge' dynamic. The text doesn't begin with the most extreme consciousness claims. Instead, it builds the illusion gradually. The process starts by establishing a social role for the AI: the 'Assistant.' This simple act of naming immediately primes the reader to expect human-like, intentional behavior. Following this, the text describes the AI's functions using carefully chosen verbs that are ambiguously situated between mechanism and agency, like 'helps' or 'enables.' This creates a soft foundation of personification. The 'curse of knowledge' is the mechanism that powers this entire process. The authors, being fully aware of the intended purpose of a feature (e.g., to help a user find relevant sources), project this purpose onto the feature itself. They conflate their own comprehension of the system's utility with comprehension by the system. This leads them to describe a statistical relevance ranking algorithm as a system that 'helps students assess relevance.' The author's knowledge is laundered through the description of the tool, emerging as the tool's own intelligence. This creates a causal chain of belief for the reader: once you accept the AI as a helpful 'assistant' (a social role), you are more likely to accept that it 'guides' (a cognitive action), and once you accept that it 'guides,' accepting that it 'evaluates' or 'understands' becomes a smaller leap. The explanation audit reveals how Intentional and Reason-Based explanations are used exclusively in the promotional sections to amplify this illusion, focusing on the 'why' of helpfulness while completely obscuring the 'how' of computation. The audience, already primed by anxiety about AI's impact, is particularly vulnerable to this illusion as it offers a simple, powerful, and friendly solution to a complex professional challenge.
Material Stakes​
- Selected Categories: Epistemic, Economic, Regulatory/Legal
- Analysis: The metaphorical framing in this report has concrete, material consequences across several domains. The stakes are highest in the epistemic category. When the text claims an AI 'helps students assess books' relevance' or 'guides them to the core,' it promotes an epistemically dangerous practice: the outsourcing of critical judgment. A student who believes the AI 'knows' what is relevant is less likely to develop the crucial information literacy skill of assessing sources for themselves. This leads to a degradation of research quality, an increased vulnerability to algorithmic bias presented as objective guidance, and a potential atrophy of the very critical thinking skills libraries aim to foster. The behavior that changes is the student's—from active, critical researcher to passive recipient of machine-generated 'insights.' Economically, the stakes are enormous. By framing AI as a 'Research Assistant'—a quasi-employee—Clarivate inflates the perceived value of its software subscription. Library directors are encouraged to conceptualize their purchase not as a tool but as an investment in a digital staff member that can 'drive outcomes.' This justifies higher price points and influences budget allocation, potentially diverting funds from human librarians to software vendors. The winner is Clarivate's bottom line; the losers are libraries with tighter budgets and potentially librarians whose roles are devalued by the suggestion that their cognitive labor can be automated. Finally, there are significant regulatory and legal stakes. The personification of AI as a trusted agent deliberately blurs lines of accountability. If a student relies on the AI's 'evaluation' of a source that contains harmful misinformation, who is liable? The 'Helpful Collaborator' frame makes the AI seem like a partner, obscuring the fact that it is a product. This ambiguity serves the manufacturer, making it more difficult to apply standard product liability law. Replacing 'the AI knows' with 'the AI processes' re-establishes the system as a product and places liability squarely with the manufacturer, a clarification that the current framing actively resists.
Literacy as Counter-Practice: AI Language Literacy​
Practicing AI literacy, as demonstrated by the reframing exercises in Task 4, serves as a direct counter-practice to the material risks created by anthropomorphic discourse. This precision is not mere pedantry; it is a form of intellectual resistance with tangible effects. The core principle demonstrated in the reframings is the disciplined replacement of consciousness verbs with mechanistic ones. This move from 'knowing' to 'processing' directly counters the identified stakes. For example, reframing 'the AI helps students assess relevance' to 'the AI correlates query terms with document metadata to produce a ranked list' performs a critical function: it shifts the locus of cognitive responsibility. The original quote invites the student to trust the machine's judgment; the reframed explanation forces the student to recognize the output as a statistical calculation that they, the conscious human, must still critically assess. This practice directly shores up the epistemic foundations of research that the original language erodes. Similarly, replacing 'AI they can trust to drive research excellence' with 'a tool whose outputs must be critically verified' counters the economic and legal stakes. It reframes the product from a trusted, autonomous partner to a fallible tool, undermining the justification for premium pricing based on inflated agential capabilities. Crucially, it clarifies liability. A 'tool' has a manufacturer who is responsible for its defects, whereas a 'partner' shares responsibility. The systematic adoption of such precision would require significant changes in discourse communities. Journals and professional organizations like the ALA could establish standards for AI product descriptions, requiring vendors to provide mechanistic translations for any agential or consciousness claims. Researchers could commit to a 'mechanistic-first' principle in their writing. However, this would face immense resistance. Marketing departments would protest that mechanistic language is dry and unappealing. Companies would resist the legal clarity it imposes. The anthropomorphic language serves the powerful commercial interest of selling a product by mystifying its function and inflating its capabilities. Therefore, the practice of linguistic precision is not just a scientific or academic commitment; it is a political act that challenges established market narratives and reasserts human agency and accountability.
Path Forward​
The path forward to a more responsible and transparent discourse about AI in the library community requires a deliberate and collective shift in vocabulary, supported by structural changes in professional practice. The primary audience for this reform is the community of librarians, library vendors, and academic publishers. The essential vocabulary shift is to move from consciousness-based metaphors to process-based descriptions. For example: instead of 'the AI knows,' use 'the model retrieves information based on vector similarity'; instead of 'the AI understands intent,' use 'the model classifies input prompts to generate a statistically probable response'; instead of 'the AI believes,' use 'the system assigns a high probability score to a specific output.' Adopting this vocabulary is superior because it promotes accuracy, manages expectations, and clarifies accountability. It enables librarians to make better procurement decisions and to teach patrons how to use these tools critically, and it forces vendors to be honest about their products' capabilities. To support this shift, several institutional changes are necessary. Professional library associations could develop and promote a 'Standard for Mechanistic Description of AI Products,' which vendors would be encouraged to adopt. Library consortia could make adherence to this standard a condition of purchasing negotiations. Academic journals in library and information science could require authors to justify any use of anthropomorphic or consciousness-attributing language to describe AI systems, perhaps in a supplementary 'mechanistic disclosure' statement. Educational institutions must integrate this linguistic precision into their AI literacy curricula, teaching students to deconstruct marketing claims by asking 'But what does it actually do at a technical level?' The trade-off is clear: we might lose some of the seductive simplicity and marketing appeal of anthropomorphic language. In return, we gain clarity, foster genuine critical thinking, and build a more robust framework for AI governance and accountability. This is not just about words; it is about shaping a future where these powerful computational tools are understood and used as what they are—sophisticated artifacts of human ingenuity, not nascent minds—thereby ensuring they remain subordinate to human values and conscious human judgment.
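As a final illustration of the recommended vocabulary, the phrase 'the system assigns a high probability score to a specific output' can be shown literally. The sketch below uses a hypothetical three-token vocabulary and made-up model scores (logits); real systems perform the same arithmetic over vocabularies of tens of thousands of tokens.

```python
# What "assigns a high probability score" means mechanically (toy values).
import math

def softmax(logits: list[float]) -> list[float]:
    """Convert raw model scores into a probability distribution."""
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["library", "assistant", "banana"]   # hypothetical token vocabulary
logits = [4.1, 2.3, -1.0]                    # hypothetical model outputs
for token, p in zip(vocab, softmax(logits)):
    print(f"{token}: {p:.3f}")
# The "chosen" next token is simply the highest-probability entry (or a sample
# from the distribution); no belief, intent, or understanding is involved.
```

Saying 'the model assigned "library" a probability of 0.85' is longer than saying 'the AI knows the answer,' but it is the description that keeps accountability, and critical judgment, where they belong: with humans.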
Raw JSON: 2025-11-18-pulse-of-the-library-2025-metaphor-2252c5.json
Analysis Framework: metaphor v6.0
Generated: 2025-11-18T10:48:31+00:00
Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0
Extended Processing Summary​
Gemini refers to the text below as “thought summaries.” This is an overt consciousness projection, because 'thoughts' and 'intentions' are hallmarks of a conscious mind that 'knows' what it is doing and why. The concealed mechanistic process is probabilistic text generation. Treat this as just another rhetorical artifact: a way of making the model’s processing legible. The first-person framing of these “thought summaries” is a presentation choice for the user-facing output, not a window into “real” thoughts. These are computational artifacts, not cognitive reports from a quirky, curious, or conflicted mind.
Defining JSON Structure
I'm currently structuring my approach to the JSON output, ensuring it aligns perfectly with the specified schema. My primary goal is to capture the nuances of "Pulse of the Library 2025" through a critical discourse analysis. I'm focusing on metaphors, anthropomorphism, and claims of AI consciousness within the PDF. I'm working to extract relevant information.
Refining the Framework
I'm now fully immersed in the "Pulse of the Library 2025" document, meticulously identifying sections ripe for critical discourse analysis. My primary emphasis remains on extracting instances of metaphor, anthropomorphism, and claims of AI consciousness. I'm focusing on defining the critical parameters for "processing" versus "knowing" to ensure consistency in my interpretation. I am now working on identifying and organizing relevant quotes.
Analyzing the Text
I've initiated a thorough reading of the "Pulse of the Library 2025" report, marking relevant passages for analysis. My focus now is on identifying the initial set of 10-12 metaphors and instances of anthropomorphism. I'm focusing on the "projection" field to analyze claims related to consciousness and associated risks. I'm working on the initial entries for the metaphorAudit and sourceTargetMapping sections, grounding them in the critical framework. The goal is to provide a comprehensive and nuanced analysis, adhering to all schema specifications.
I've begun to flesh out the initial metaphor and source-target mappings, paying close attention to the implications of consciousness claims. I've focused on metaphors depicting AI as a helper or tool, examining how these frames shape our understanding. I'm carefully documenting the "projection" field to analyze the risks associated with these framings. My focus remains on distinguishing "processing" from "knowing".
Defining Tasks & Outputs
I'm now fully immersed in the project, meticulously mapping out each task, ensuring complete schema adherence. I'm focusing on identifying the 10-12 key metaphors, anthropomorphic instances, and consciousness claims. I've broken down each task to ensure detailed output. I'm prioritizing the epistemicClaimAnalysis to ensure robust distinctions between processing and knowing.
Creating Comprehensive Framework
I'm now deep-diving into the 'Pulse of the Library 2025' report, meticulously identifying and categorizing the core metaphors and anthropomorphic instances. I'm focusing particularly on claims of AI consciousness and the risks associated with these portrayals. My goal is to craft a complete framework to distinguish between "processing" and "knowing" to ensure each critical observation is grounded in a rigorous methodology. I am working diligently on fleshing out the metaphorAudit and sourceTargetMapping sections for each instance I identify.
Analyzing Core Arguments
I'm now deep into the critical discourse analysis of the Clarivate document, meticulously identifying and categorizing the key metaphors, anthropomorphic instances, and associated risks. I'm prioritizing the agencySlippage and metaphorDrivenTrust essays, which require the most comprehensive evidence. I'm focusing on the reframing tasks to ensure the output is of high quality.
Drafting the Metaphor Audit
I'm now deeply immersed in the PDF, compiling the initial examples for the metaphorAudit and sourceTargetMapping sections. I am focusing on identifying the primary instances of AI as a helper or assistant, closely examining the risks and associated consciousness claims. I've begun to flesh out detailed entries for these instances.
Clarifying Metaphorical Trends
I'm now fully immersed in the document, closely analyzing the recurring metaphors and anthropomorphic phrasing. My primary goal is to identify common patterns, such as AI being cast as a helpful assistant. I'm focusing on the risks associated with this framing, particularly regarding trust and the obscuring of underlying mechanisms.
Constructing JSON Output
I'm diligently progressing, starting with the initial text scan to identify instances of metaphorical language and anthropomorphism. My focus is now on the first 10-12 instances, extracting relevant quotes and laying the groundwork for the metaphorAudit and sourceTargetMapping sections. I'm focusing particularly on claims of "knowing" vs. "processing."
Developing the JSON Structure
I'm presently building the initial JSON structure, meticulously populating the metaphorAudit and sourceTargetMapping sections based on my initial text analysis. I've pinpointed key instances of anthropomorphism and consciousness claims. I'm focusing on "processing" versus "knowing" to ground my analysis. I am now working on fleshing out the projection and implications fields for each metaphor.