Pulse of the Library 2025 (Gemini 3 V)
Gemini 3.0 Pro version (compare with the 2.5 Pro version)

About
This document presents a Critical Discourse Analysis focused on AI literacy, specifically targeting the role of metaphor and anthropomorphism in shaping public and professional understanding of generative AI. The analysis is guided by a prompt that draws from cognitive linguistics (metaphor structure-mapping) and the philosophy of social science (Robert Brown's typology of explanation).
All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs—not guarantees of factual accuracy or authorial intent.
Analysis Metadata

- Source Title: Pulse of the Library 2025
- Source URL: https://clarivate.com/pulse-of-the-library/
- Model: gemini-3.0-pro
- Temperature: 1.0
- Tokens: input=14111, output=10480, total=24591
- Source Type: report
- Published: 2025
- Analyzed At: 2025-11-18T17:34:20Z
- Framework: metaphor
- Framework Version: 6.0

Core Finding
This text constructs a metaphorical system dominated by two interconnected patterns: AI AS SOCIAL AGENT ('Partner,' 'Colleague,' 'Assistant') and AI AS COGNITIVE KNOWER ('Intelligence,' 'navigator,' 'uncoverer'). These patterns are not isolated; they form a 'consciousness architecture' where the assumption that the AI 'knows' (has epistemic access to truth) serves as the foundation for the claim that it can act as a 'partner' (has social agency). The 'Research Assistant' metaphor is the load-bearing structure here. It creates a bridge between the user's need for help and the vendor's software. If the AI were framed merely as a 'search tool,' the claim of 'partnership' would collapse. It is only by projecting a human-like 'knowing' mind onto the software that Clarivate can position its product as a collaborator rather than a utility. This system relies on the 'curse of knowledge,' where the authors project the professional competencies of a human librarian onto the pattern-matching capabilities of the code.
Task 1: Metaphor and Anthropomorphism Audit​
For each of the major metaphorical patterns identified, this audit examines the specific language used, the frame through which the AI is being conceptualized, what human qualities are being projected onto the system, whether the metaphor is explicitly acknowledged or presented as direct description, and—most critically—what implications this framing has for trust, understanding, and policy perception.
1. Software as Human Colleague​
Quote: "Clarivate Academic AI... Research Assistants... Web of Science Research Assistant... ProQuest Research Assistant"​
- Frame: Model as Employee/Subordinate
- Projection: This metaphor projects the complex human social role of an 'assistant'—a conscious entity capable of understanding intent, sharing goals, and performing intellectual labor—onto a software interface. It implies that the AI possesses the consciousness required to 'assist' rather than merely 'execute functions.' By labeling the system an 'Assistant,' the text projects a state of 'knowing' onto the software; an assistant knows what you need and why you need it. It suggests a relationship of collaboration and shared agency, rather than a user-tool relationship.
- Acknowledgment: Presented as a direct product name and description (no hedging).
- Implications: Framing the AI as an 'Assistant' radically inflates trust and expectations. It implies the system shares the user's epistemic goals (truth-seeking) rather than its actual function (token prediction). This creates a liability risk where users may attribute human-level judgment to the system, expecting it to 'know' when a citation is relevant in the same way a human research assistant would. It obscures the fact that the 'assistant' is liable to hallucinate, as it has no conscious understanding of the research 'task' it is purportedly navigating.
2. Interaction as Dialogue​
Quote: "Enables users to uncover trusted library materials via AI-powered conversations."​
- Frame: Data Retrieval as Social Dialogue
- Projection: This projects the human cognitive and social capacity for 'conversation'—which requires mutual understanding, shared context, and the exchange of meaning—onto the mechanical process of prompt-engineering and text generation. It implies the AI 'understands' the user's speech acts and is 'replying' with conscious intent. It shifts the frame from 'querying a database' (processing) to 'consulting an expert' (knowing).
- Acknowledgment: Presented as direct description.
- Implications: The 'conversation' metaphor is dangerous because it masks the stochastic nature of the output. In a human conversation, truth is a norm; in an LLM output, probability is the norm. By framing the interaction as a conversation, the text encourages users to treat the AI as a 'who' rather than a 'what,' potentially leading them to trust smooth, conversational outputs over accurate but jagged data retrieval. It creates an illusion of social accountability that does not exist.
3. Data Processing as Intellectual Navigation​
Quote: "Navigate complex research tasks and find the right content."​
- Frame: Cognitive Labor as Spatial Movement
- Projection: This metaphor maps the physical act of 'navigating'—which implies a conscious agent moving through space with a destination in mind—onto the computational process of pattern matching and ranking. It suggests the AI 'knows' the terrain of knowledge and is making conscious choices about where to go. It attributes a teleological (goal-directed) consciousness to the system, implying it 'understands' the complexity of the research task.
- Acknowledgment: Presented as direct capability claim.
- Implications: This obscures the mechanical reality that the model is not 'navigating' a semantic space of ideas but calculating vector proximity in high-dimensional space. It implies a level of strategic oversight ('finding the right content') that the model does not possess. Users may over-rely on the system's 'navigation,' assuming it has evaluated the 'terrain' comprehensively, when it has only surfaced statistically probable matches (see the sketch below).
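A minimal sketch of that mechanical reality, assuming a standard embedding-based retrieval setup (the vectors and names below are invented for illustration, not Clarivate's actual pipeline): 'navigating' reduces to sorting candidates by a similarity score.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Angle-based closeness of two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for a query and three candidate documents.
query = np.array([0.2, 0.7, 0.1])
docs = {
    "doc_a": np.array([0.1, 0.8, 0.05]),
    "doc_b": np.array([0.9, 0.1, 0.3]),
    "doc_c": np.array([0.3, 0.6, 0.2]),
}

# The whole "journey": sort by score, descending. No map, no destination, no judgment.
ranked = sorted(docs, key=lambda name: cosine_similarity(query, docs[name]), reverse=True)
print(ranked)
```

Whatever tops the sort is what the interface presents as 'the right content.'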
4. Vendor as Social Partner​
Quote: "A trusted partner to the academic community... Partnering with libraries since 1938."​
- Frame: Commercial Entity as Loyal Companion
- Projection: This projects human qualities of loyalty, shared fate, and emotional bond ('partner') onto a vendor-client economic relationship. While referring to the company (Clarivate), this frame extends to their AI products ('AI you can trust'). It projects an ethical consciousness—the capacity to care about the community's success—onto an entity (and its tools) driven by profit maximization and computational efficiency.
- Acknowledgment: Presented as historical fact.
- Implications: This conflates 'reliability' (the software won't crash) with 'trustworthiness' (the entity has your best interests at heart). In the context of AI, this is critical; it encourages libraries to outsource critical epistemic functions to a 'partner' whose algorithms are opaque. It invites relation-based trust (vulnerability) where only performance-based trust (verification) is warranted.
5. Algorithmic Output as Transformation​
Quote: "Clarivate is a leading global provider of transformative intelligence."​
- Frame: Data Processing as Intellectual Transmutation
- Projection: This maps the human capacity for 'intelligence'—specifically a kind that causes deep qualitative change ('transformative')—onto data analytics and ML outputs. It attributes a high-level conscious state (intelligence) to the system. It suggests the system doesn't just process data but 'understands' it well enough to transform it into something higher, implying insight and wisdom.
- Acknowledgment: Presented as corporate identity.
- Implications: This is the ultimate 'curse of knowledge' projection. It defines the product as 'intelligence' itself. This marketing frame makes it difficult to critique the system's errors; if the system is 'transformative intelligence,' failures are anomalies rather than structural features of statistical prediction. It encourages the purchase of 'intelligence' as a commodity, obscuring the labor and data extraction required to produce it.
6. Search as Archaeological Discovery​
Quote: "Uncovers the depth of digital collections"​
- Frame: Pattern Matching as Physical Excavation
- Projection: This metaphor maps the intentional physical act of removing covering to reveal something hidden ('uncovering') onto the statistical process of identifying metadata correlations. It implies the AI 'sees' the hidden depth and consciously reveals it. It suggests an active, revelatory agency ('uncovers') rather than a passive filtering function.
- Acknowledgment: Presented as direct description.
- Implications: This implies that the 'depth' was always there and the AI simply revealed it, hiding the fact that the AI constructs relationships that may not exist (hallucination) or reinforces specific biases in the collection. It frames the AI as an objective tool of truth-revelation rather than a probabilistic generator of associations.
7. The Guide to Truth​
Quote: "Guides students to the core of their readings."​
- Frame: Algorithm as Mentor/Teacher
- Projection: This projects the pedagogical agency of a teacher or mentor ('guide') onto an algorithm. A 'guide' must know the destination and understand the traveler's needs. This attributes conscious pedagogical intent and subject-matter expertise ('knowing the core') to the system. It suggests the AI 'understands' the central thesis of a text, rather than merely weighting frequent tokens.
- Acknowledgment: Presented as direct capability.
- Implications: This is a high-risk educational metaphor. It suggests students can bypass the cognitive work of finding the 'core' themselves by relying on the AI. It creates a dependency on a system that 'guides' based on statistical probability, not pedagogical wisdom. It conflates 'summarization' (processing) with 'identifying the core meaning' (knowing/comprehending).
8. Frictionless Creation​
Quote: "Enables instructors... to effortlessly create course resource lists"​
- Frame: Labor as Magic
- Projection: This maps the quality of 'effortlessness'—usually reserved for magic or innate talent—onto a labor-intensive administrative task. While not strictly anthropomorphic, it projects a 'magical' agency onto the tool that erases the complexity of the task. It suggests the AI 'handles' the cognitive load, implying it 'understands' the syllabus structure so the human doesn't have to.
- Acknowledgment: Presented as benefit.
- Implications: Promising 'effortless' creation devalues the intellectual labor involved in curation. It suggests that the AI 'knows' what belongs on the list, encouraging users to accept the default suggestions without scrutiny. This leads to automation bias, where the 'effortless' path is chosen over the rigorous one.
9. The Gate-Keeper​
Quote: "Libraries have a crucially important role to serve as gate-keepers... in the age of AI"​
- Frame: Technology as Invading Force
- Projection: Here, the metaphor is applied to the environment created by AI. The phrases 'age of AI' and 'gate-keepers' frame AI not as a tool but as a historical epoch or an invading force (barbarians at the gate). This attributes a massive, collective agency to AI technologies: they become a force to be kept out or managed.
- Acknowledgment: Standard idiom.
- Implications: This militaristic/defensive metaphor frames AI as an inevitable wave that libraries must withstand. It subtly disempowers human agency by treating AI as a natural disaster or historical inevitability ('the age of') rather than a set of corporate product deployments that can be refused or regulated.
10. The Mental Toolbox​
Quote: "If you take a screw and start whacking it with a hammer... use the tools in your toolbox effectively"​
- Frame: Cognitive Automation as Physical Hand Tool
- Projection: This is a reverse projection (reductionism). It maps the simplicity of inert physical objects (hammer, screw) onto complex, non-deterministic probabilistic systems. While meant to be grounding, it deceptively strips the AI of its active, agentic properties (it doesn't just 'hit' where you aim; it generates content you didn't ask for).
- Acknowledgment: Explicit analogy by an interviewee.
- Implications: This is a 'containment' metaphor. It attempts to reduce anxiety by claiming AI is 'just a tool' like a hammer. However, hammers don't hallucinate, scrape copyright data, or have 'conversations.' This metaphor dangerously underplays the risks of agency and autonomy in agentic AI systems, lulling librarians into a false sense of control.
11. Driving Excellence​
Quote: "AI they can trust to drive research excellence"​
- Frame: Software as Active Force/Motor
- Projection: This projects the capacity for causal initiation ('drive') onto the software. It implies the AI is the active agent of quality ('excellence'), demoting the human researcher to a passenger. It suggests the AI 'knows' what excellence looks like and actively pushes the user toward it.
- Acknowledgment: Presented as promise.
- Implications: This creates an expectation that the software produces quality autonomously. It obscures the fact that 'excellence' is a human judgment grounded in peer review and critical thinking. Attributing the 'drive' for excellence to the tool diminishes human responsibility for the output quality.
12. Pioneering Technology​
Quote: "Eugene Power pioneered the use of microfilm... Clarivate... transformative intelligence"​
- Frame: Corporate History as Evolution
- Projection: This links a mechanical storage medium (microfilm) with generative AI ('transformative intelligence') under the banner of 'pioneering.' It projects the stability and physical reality of microfilm onto the slippery, probabilistic nature of AI. It implies a continuity of 'knowing'—that Clarivate 'knew' how to handle microfilm and thus 'knows' how to handle AI.
- Acknowledgment: Historical narrative.
- Implications: This is a legitimacy transfer. It uses the concrete, proven utility of microfilm to vouch for the abstract, unproven reliability of 'Academic AI.' It obscures the fundamental difference: microfilm preserves information exactly; AI generates probabilistic approximations. It tricks the reader into trusting the new agent because they trusted the old archive.
Task 2: Source-Target Mapping​
For each key metaphor identified in Task 1, this section provides a detailed structure-mapping analysis. The goal is to examine how the relational structure of a familiar "source domain" (the concrete concept we understand) is projected onto a less familiar "target domain" (the AI system). By restating each quote and analyzing the mapping carefully, we can see precisely what assumptions the metaphor invites and what it conceals.
Mapping 1: Human Employee / Subordinate to Software Interface / LLM​
Quote: "Clarivate Academic AI... Research Assistants"​
- Source Domain: Human Employee / Subordinate
- Target Domain: Software Interface / LLM
- Mapping: The structure of a human employment relationship—delegation, competence, shared goals, and subservience—is mapped onto a software interface. This assumes the software possesses the 'mind' of an assistant: the ability to understand the 'why' behind a task, not just the 'what.' It implies the system is a 'who' that works for you.
- What Is Concealed: This conceals the lack of shared intent. A human assistant cares (or feigns care) about the outcome; the model only predicts the next token. It hides the 'black box' nature of the processing—unlike a human assistant who can explain their reasoning ('I chose this because...'), the model's 'reasoning' is a post-hoc rationalization of statistical weights.
Mapping 2: Human Social Dialogue to Command-Line Query / Response Generation​
Quote: "Enables users to uncover trusted library materials via AI-powered conversations."​
- Source Domain: Human Social Dialogue
- Target Domain: Command-Line Query / Response Generation
- Mapping: The relational structure of a conversation (turn-taking, mutual focus, exchange of meaning) is mapped onto the technical process of inputting prompts and receiving generated text. It implies the system is a conversational partner with a 'self' that is being engaged.
- What Is Concealed: Conceals the solitary nature of the interaction. There is no 'other' involved. It obscures the mechanism of 'statistically plausible text generation' behind the mask of 'speaking.' It hides the fact that the system has no memory of the conversation beyond its context window and no understanding of the concepts it 'discusses.'
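The 'no memory beyond its context window' point can be made concrete with a toy sketch (the token budget and whitespace splitting are deliberately simplistic assumptions, not any real chat system's behavior): older turns are not 'forgotten' by a mind, they are simply absent from the computation.

```python
def build_model_input(history: list[str], max_tokens: int = 8) -> list[str]:
    # Toy context window: only the most recent tokens fit into the model's input.
    tokens = [tok for turn in history for tok in turn.split()]
    return tokens[-max_tokens:]  # everything earlier falls out of scope entirely

history = [
    "user: my topic is Victorian nursing",
    "assistant: noted",
    "user: now summarize key sources",
]
print(build_model_input(history))
# The dropped turns are not 'remembered' elsewhere; they no longer exist for the model.
```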
Mapping 3: Physical Travel / Spatial Navigation to Database Filtering / Ranking Algorithms​
Quote: "Navigate complex research tasks and find the right content."​
- Source Domain: Physical Travel / Spatial Navigation
- Target Domain: Database Filtering / Ranking Algorithms
- Mapping: The structure of moving through a physical landscape (seeing a path, avoiding obstacles, reaching a destination) is mapped onto data processing. It implies the data is a 'territory' and the AI is a 'guide' with a map (knowledge of the whole).
- What Is Concealed: Conceals the absence of a 'map' or 'understanding' in the model. The model doesn't 'navigate'; it calculates similarity scores. It hides the bias in the 'path': the model does not go where it is 'best' to go (a conscious judgment); it goes where the training data makes it 'probable' to go. It obscures the algorithmic constraints that limit what 'content' can even be found.
Mapping 4: Interpersonal Relationship / Marriage / Alliance to Vendor-Client Commercial Contract​
Quote: "A trusted partner to the academic community"​
- Source Domain: Interpersonal Relationship / Marriage / Alliance
- Target Domain: Vendor-Client Commercial Contract
- Mapping: The structure of a long-term emotional or strategic bond (loyalty, shared risk, mutual support) is mapped onto a transaction. It implies the vendor (and its AI) has moral agency and capacity for betrayal or fidelity.
- What Is Concealed: Conceals the profit motive. A partner shares risks; a vendor sells products. It specifically obscures the extractive nature of AI 'partnerships,' where the 'partner' (AI) scrapes the library's data to train itself. It hides the asymmetry of power and the lack of reciprocity in the relationship.
Mapping 5: Human Intellect / Wisdom / Enlightenment to Data Analytics / Statistical Prediction​
Quote: "Clarivate is a leading global provider of transformative intelligence."​
- Source Domain: Human Intellect / Wisdom / Enlightenment
- Target Domain: Data Analytics / Statistical Prediction
- Mapping: The structure of human cognitive insight (understanding, synthesis, creating new knowledge) is mapped onto computational output. It implies the product is intelligence, rather than a tool that requires intelligence to use.
- What Is Concealed: Conceals the dependency on human labor. 'Intelligence' sounds innate to the machine; in reality, it is the statistical aggregation of millions of human decisions (training data). It obscures the energy costs and the material infrastructure (servers, GPUs) required to simulate this 'intelligence.'
Mapping 6: Archaeology / Physical Excavation to Metadata Correlation / Pattern Recognition​
Quote: "Uncovers the depth of digital collections"​
- Source Domain: Archaeology / Physical Excavation
- Target Domain: Metadata Correlation / Pattern Recognition
- Mapping: The act of removing physical barriers to reveal a pre-existing truth is mapped onto the generation of statistical links. It implies the connections were always there, waiting to be found, and the AI simply removed the dirt.
- What Is Concealed: Conceals the generative and constructive nature of AI. The AI doesn't just 'uncover'; it often creates relationships based on training biases. It hides the possibility that the 'depth' revealed is an artifact of the model's training data, not a feature of the collection itself.
Mapping 7: Human Mentor / Sherpa to Summarization Algorithm / Attention Mechanism​
Quote: "Guides students to the core of their readings."​
- Source Domain: Human Mentor / Sherpa
- Target Domain: Summarization Algorithm / Attention Mechanism
- Mapping: The social role of a mentor who knows what is important ('the core') and leads a novice to it is mapped onto a summarization function. It implies the AI possesses the critical judgment to distinguish 'core' from 'periphery' (a knowing state).
- What Is Concealed: Conceals the reductionist nature of summarization. The 'core' is determined by statistical frequency and positional embeddings, not semantic understanding. It hides the risk that the AI might miss the actual nuance or subtext that a human reader would consider the 'core.' It obscures the loss of information.
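As an illustration of 'core by statistical frequency,' here is a toy extractive summarizer (word-frequency scoring only; a deliberately crude stand-in, not the vendor's actual method): the 'core' is whichever sentence reuses the most common words, regardless of nuance or subtext.

```python
import re
from collections import Counter

def extractive_core(text: str, n_sentences: int = 1) -> list[str]:
    # Score each sentence by the average corpus-frequency of its words.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence: str) -> float:
        words = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)
    return sorted(sentences, key=score, reverse=True)[:n_sentences]

reading = ("The study reports a small effect. The effect vanishes under controls. "
           "A single footnote concedes the data may be unreliable.")
print(extractive_core(reading))
# Frequent words win; the footnote a human might treat as the real 'core' loses.
```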
Mapping 8: Magic / Supernatural Ability to Automated Data Entry / Retrieval​
Quote: "Effortlessly create course resource lists"​
- Source Domain: Magic / Supernatural Ability
- Target Domain: Automated Data Entry / Retrieval
- Mapping: The quality of 'effortlessness' (action without friction) is mapped onto administrative labor. It implies the AI dissolves the complexity of the task through a kind of technological magic.
- What Is Concealed: Conceals the transfer of effort. The effort doesn't disappear; it moves from 'creation' to 'verification.' The user must now spend effort checking the AI's work for hallucinations. It also conceals the massive computational 'effort' (energy use) occurring in the background.
Mapping 9: Simple Carpentry / Physical Mechanics to Complex Cognitive Labor / AI Interaction​
Quote: "Whacking it with a hammer"​
- Source Domain: Simple Carpentry / Physical Mechanics
- Target Domain: Complex Cognitive Labor / AI Interaction
- Mapping: The simple cause-and-effect physics of a hand tool is mapped onto the non-linear, probabilistic behavior of a neural network. It implies the user has complete control and the tool is passive.
- What Is Concealed: Conceals the agency and unpredictability of the AI. A hammer never 'decides' to hit a different nail; an AI can (via temperature settings and sampling probabilities) output something unexpected. This mapping hides the autonomy of the system and the risks of 'misalignment' that do not exist with hammers.
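A small sketch of why the hammer mapping fails, assuming standard temperature-scaled sampling (the logits are invented): identical inputs can yield different outputs, which is categorically unlike a hand tool.

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    # Softmax with temperature: higher values flatten the distribution,
    # raising the chance of a low-probability token on any given run.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = np.array([3.0, 1.5, 0.2])  # hypothetical scores for three candidate tokens
print([sample_next_token(logits, temperature=1.0) for _ in range(10)])
# Ten identical 'swings' at the same input; the outputs vary by design.
```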
Mapping 10: Motor / Engine / Captain to Software Functionality​
Quote: "Trust to drive research excellence"​
- Source Domain: Motor / Engine / Captain
- Target Domain: Software Functionality
- Mapping: The capacity to initiate movement and control direction ('drive') is mapped onto the software. It implies the software is the active force in the research process.
- What Is Concealed: Conceals the passive nature of the software without human input. It obscures the fact that 'excellence' is a human standard that the machine cannot comprehend. It hides the potential for the software to 'drive' research into a ditch of hallucinations if not steered by a human.
Mapping 11: Historical Exploration / Frontier to Product Development / Format Migration​
Quote: "Pioneered the use of microfilm"​
- Source Domain: Historical Exploration / Frontier
- Target Domain: Product Development / Format Migration
- Mapping: The heroic narrative of exploring new territory ('pioneering') is mapped onto business history. It implies a lineage of courage and foresight that culminates in the current AI product.
- What Is Concealed: Conceals the fundamental rupture between analog (microfilm) and synthetic (AI). It hides the fact that while microfilm was about fidelity (perfect copy), AI is about probability (imperfect mimicry). It obscures the technical risks of the new technology by wrapping it in the safety of the old.
Mapping 12: Fortress Defense / War to Information Literacy / Curation​
Quote: "Gate-keepers... in the age of AI"​
- Source Domain: Fortress Defense / War
- Target Domain: Information Literacy / Curation
- Mapping: The role of a guard controlling access to a citadel is mapped onto librarianship. The 'Age of AI' is mapped as the besieging army or chaotic environment.
- What Is Concealed: Conceals the integration of the 'invader' inside the 'walls.' Libraries are buying the AI (Clarivate). The metaphor implies AI is external, hiding the fact that the 'gate-keepers' are now employing the 'invaders' as 'assistants.' It obscures the complicity of the institution in the very changes they are guarding against.
Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")​
This section audits the text's explanatory strategy, focusing on a critical distinction: the slippage between "how" and "why." Based on Robert Brown's typology of explanation, this analysis identifies whether the text explains AI mechanistically (a functional "how it works") or agentially (an intentional "why it wants something"). The core of this task is to expose how this "illusion of mind" is constructed by the rhetorical framing of the explanation itself, and what impact this has on the audience's perception of AI agency.
Explanation 1​
Quote: "Clarivate helps libraries adapt with AI they can trust to drive research excellence, student outcomes and library productivity."​
- Explanation Types:
  - Functional: Explains a behavior by its role in a self-regulating system that persists via feedback
  - Intentional: Refers to goals or purposes and presupposes deliberate design
- Analysis (Why vs. How Slippage): This explanation hybridizes the functional role of the software (increasing productivity) with high-level intentional agency ('driving' excellence). It shifts from a mechanistic 'how' (productivity tools) to a purposive 'why' (the AI's goal is excellence). This choice emphasizes the AI as an active partner in the library's mission, rather than a passive utility. It obscures the alternative explanation: that the AI merely generates text which humans must leverage to achieve excellence. It credits the tool with the outcome of the labor.
- Consciousness Claims Analysis: This passage attributes a high-level causal agency to the AI: it 'drives' excellence. While it avoids explicit consciousness verbs like 'knows,' the phrase 'AI they can trust' implies the system possesses a moral or epistemic reliability akin to a conscious agent. Trust implies the entity 'knows' the difference between truth and falsehood and 'chooses' truth. The text treats the AI as 'Knowing' (capable of discerning and delivering 'excellence') rather than 'Processing' (optimizing metrics). The 'curse of knowledge' is evident here: the authors know that 'research excellence' is the goal of the library, and they project this goal onto the AI, implying the AI 'understands' what excellence is.
- Mechanistic Reality: The model does not 'drive excellence'; it optimizes for token acceptance based on reinforcement learning from human feedback (RLHF). It has no concept of 'excellence,' only of 'statistical likelihood of user approval.'
- Rhetorical Impact: By framing the AI as a 'driver' of excellence that can be 'trusted,' the text invites the audience to relinquish control. It positions the AI as an authority figure (a driver) rather than a tool. This increases the perceived reliability of the system, encouraging librarians to integrate it into core workflows without the intense scrutiny they might apply to a mere 'text generator.' It frames the risk not as 'technical failure' but as 'trust issues,' which the vendor promises to resolve.
Explanation 2​
Quote: "Summon Research Assistant Enables users to uncover trusted library materials via AI-powered conversations."​
- Explanation Types:
  - Functional: Explains a behavior by its role in a self-regulating system
  - Reason-Based: Gives the agent's rationale or argument for acting
- Analysis (Why vs. How Slippage): The phrase 'AI-powered conversations' frames the mechanism of search as a social exchange. It shifts the 'how' (database query) to a 'why' (conversation for the purpose of discovery). This emphasizes the ease and naturalness of the interaction, obscuring the friction of keyword formulation. It suggests the system is reasoning with the user.
- Consciousness Claims Analysis: The key word here is 'conversations.' Conversation implies two conscious entities sharing information. The text implies the AI 'knows' how to converse. It conflates 'Processing' (converting natural language queries into SQL/vector search commands) with 'Knowing' (engaging in dialectic). Curse of Knowledge: The author understands the user's desire for a helpful dialogue and projects that capacity onto the system. They conflate the interface (chat window) with the agent (interlocutor).
- Mechanistic Reality: The model 'parses natural language inputs into query vectors, retrieves nearest-neighbor documents, and generates a natural language summary of the results.' It does not converse; it executes a 'retrieve-and-summarize' loop (sketched below).
- Rhetorical Impact: This framing dramatically lowers the perceived barrier to entry (anyone can have a conversation) but also lowers the user's guard. If users believe they are 'conversing,' they may fall into social patterns of trust, asking open-ended questions and accepting the answers as advice from a 'knower' rather than data from a 'processor.' It increases the authority of the machine by anthropomorphizing its interface.
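A self-contained sketch of that retrieve-and-summarize loop (the classes and the `generate` stand-in are hypothetical toys, not any vendor's API): each 'conversational turn' is one pass through embed, retrieve, generate.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str

class ToyIndex:
    # Stand-in for a vector store; a real one ranks by embedding similarity.
    def __init__(self, docs: list[Doc]):
        self.docs = docs
    def nearest_neighbors(self, query: str, k: int) -> list[Doc]:
        return self.docs[:k]  # toy: returns the first k documents

def retrieve_and_summarize(prompt: str, index: ToyIndex, generate) -> str:
    hits = index.nearest_neighbors(prompt, k=2)    # prompt -> retrieved documents
    context = "\n".join(d.text for d in hits)      # documents -> context window
    return generate(f"Summarize:\n{context}\n\nQuery: {prompt}")

index = ToyIndex([Doc("Paper A abstract."), Doc("Paper B abstract.")])
echo = lambda p: f"[text generated conditional on]\n{p}"  # stand-in for an LLM call
print(retrieve_and_summarize("recent work on open access", index, echo))
```

There is no interlocutor anywhere in the loop; the 'reply' is the return value of a function.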
Explanation 3​
Quote: "Web of Science Research Assistant Navigate complex research tasks and find the right content."​
- Explanation Types:
  - Intentional: Refers to goals or purposes and presupposes deliberate design
- Analysis (Why vs. How Slippage): The verbs 'Navigate' and 'Find' are deeply agential. They suggest the AI has a map of the territory and a specific destination ('the right content'). This explanation frames the AI as a skilled worker performing a task, rather than a tool being used by a worker. It emphasizes autonomy.
- Consciousness Claims Analysis: The text claims the AI 'finds the right content.' This is a massive epistemic claim. 'Right' implies a judgment of truth, relevance, and quality. Only a conscious entity with knowledge of the research context can determine what is 'right.' The text treats the AI as 'Knowing' the user's intent and the literature's value. Curse of Knowledge: The authors project their own knowledge of what makes content 'right' (peer review, citations) onto the AI, implying the AI 'understands' these concepts.
- Mechanistic Reality: The model 'ranks documents based on vector similarity scores and citation network density.' It does not know what is 'right'; it knows what is 'mathematically similar' or 'highly connected' (see the sketch below).
- Rhetorical Impact: This creates a liability trap. If the AI claims to find the 'right' content, users may skip the verification step. It positions the AI as an expert curator. This framing constructs the AI as an authority on the literature, enticing users to defer to its judgment rather than exercising their own information literacy.
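To see how 'the right content' becomes a number, here is a hypothetical scoring blend (the weights, fields, and log transform are all assumptions for illustration, not Web of Science's actual formula): relevance is whatever the weighted sum says it is.

```python
import math

def combined_score(similarity: float, citation_count: int,
                   alpha: float = 0.7, beta: float = 0.3) -> float:
    # Hypothetical blend of vector similarity and citation signal.
    return alpha * similarity + beta * math.log1p(citation_count)

candidates = [
    ("recent_preprint", 0.91, 3),     # most similar, barely cited
    ("review_article", 0.82, 450),
    ("classic_paper", 0.74, 2100),    # least similar, heavily cited
]
for name, sim, cites in sorted(candidates,
                               key=lambda c: combined_score(c[1], c[2]),
                               reverse=True):
    print(f"{name}: score={combined_score(sim, cites):.2f}")
# Shifting alpha and beta reorders 'rightness'; no judgment of truth is involved.
```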
Explanation 4​
Quote: "The Digital Librarian points to the future of computer literacy, considering AI's impact on critical evaluation and academic rigor."​
- Explanation Types:
  - Theoretical: Embeds behavior in a deductive or model-based framework
  - Functional: Explains a behavior by its role in a self-regulating system
- Analysis (Why vs. How Slippage): Here, AI is framed as an environmental force with an 'impact.' This shifts the explanation from agency (what AI does) to structural effect (what AI causes). It emphasizes the inevitability of the change, obscuring the specific design choices that create that impact.
- Consciousness Claims Analysis: This passage attributes an abstract agency to AI—it impacts 'critical evaluation.' It implies that AI interacts with the human mind's capacity for judgment. While less anthropomorphic than 'Assistant,' it still treats AI as a 'Knower' in the sense that it is a force that modifies knowledge practices. Curse of Knowledge: The author assumes the connection between AI and rigor is self-evident, projecting the 'problem' of AI onto the 'solution' of literacy.
- Mechanistic Reality: 'The widespread availability of low-cost, high-variance synthetic text generation reduces the cost of producing non-factual assertions, necessitating higher verification costs for humans.'
- Rhetorical Impact: This framing generates anxiety ('impact on rigor') which the report then offers to solve (with Clarivate's tools). It positions AI as a powerful, somewhat dangerous wave that requires 'literacy' (read: training in Clarivate products) to survive. It constructs the AI as a powerful other.
Explanation 5​
Quote: "Librarians understand that AI will require significant upskilling... structured professional development opportunities remain limited."​
- Explanation Types:
  - Empirical Generalization (Law): Subsumes events under timeless statistical regularities
- Analysis (Why vs. How Slippage): This explains the 'gap' in adoption as a deficiency in human skill ('upskilling') rather than a deficiency in tool usability or safety. It emphasizes the human need to adapt to the machine. It obscures the alternative: that the machines are perhaps too unreliable or complex for their purported purpose.
- Consciousness Claims Analysis: The consciousness verb here ('understand') is applied to the librarians, contrasting with the implied complexity of the AI. However, the implication is that the AI 'contains' knowledge that the librarians must learn to access. It treats the AI as a repository of potential that is locked away. Curse of Knowledge: The text assumes that 'using AI' is a valid skill set (Knowing) rather than 'operating a vendor product' (Processing).
- Mechanistic Reality: 'The interface complexity and non-deterministic outputs of current LLMs require users to learn prompt engineering strategies to minimize error rates.'
- Rhetorical Impact: This shifts the burden of responsibility. If the AI fails, it's because the librarian wasn't 'upskilled' enough. It preserves the authority of the tool by locating the failure mode in the user. It creates a market for 'training' (which Clarivate also offers or supports).
Task 4: AI Literacy in Practice: Reframing Anthropomorphic Language​
Moving from critique to constructive practice, this task demonstrates applied AI literacy. It selects the most impactful anthropomorphic quotes identified in the analysis and provides a reframed explanation for each. The goal is to rewrite the concept to be more accurate, focusing on the mechanistic processes (e.g., statistical pattern matching, token prediction) rather than the misleading agential language. Additionally, for quotes with consciousness claims (e.g., "the AI knows"), this section provides a technical reality check that explicitly states what the system actually does at the mechanistic level.
| Original Quote | Mechanistic Reframing | Technical Reality Check |
|---|---|---|
| "Web of Science Research Assistant" | Web of Science Search Automation Tool | The system does not 'assist' in the human sense; it processes query tokens and retrieves database entries based on vector similarity. |
| "A trusted partner to the academic community" | A reliable service provider for the academic community | Trust implies moral agency; the system is a commercial product that executes code. Reliability refers to uptime and consistent error rates, not fidelity. |
| "AI-powered conversations" | AI-powered query interfaces | The model does not converse; it predicts the next statistically probable token in a sequence based on the user's input prompt. |
| "Transformative intelligence" | Advanced statistical analytics | The system does not possess intelligence (conscious understanding); it performs high-dimensional statistical correlation on massive datasets. |
| "Navigate complex research tasks" | Filter and rank complex research datasets | The model does not 'navigate' (plan a route); it filters data based on the parameters of the prompt and the weights of the training set. |
| "Uncover trusted library materials" | Retrieve indexed library materials | The model does not 'uncover' (reveal hidden truth); it retrieves items that match the search pattern. 'Trusted' refers to the source whitelist, not the model's judgment. |
| "Guides students to the core of their readings" | Summarizes frequent themes in student readings | The model does not know the 'core' (meaning); it identifies statistically frequent terms and patterns to generate a summary. |
| "Effortlessly create course resource lists" | Automate the compilation of course resource lists | The process is not effortless; the cognitive load shifts from compilation to verification of the model's automated output. |
| "Drive research excellence" | Accelerate data processing for research | The model does not 'drive' (initiate) excellence; it processes data faster, which humans may use to improve their work quality. |
| "Understand getting a blockbuster result" | Recognize the statistical pattern of a high-impact result | The model does not 'understand' success; it classifies outputs based on patterns associated with high engagement or citation in its training data. |
| "Gate-keepers... in the age of AI" | Curators... in the context of generative text proliferation | AI is not an 'age' or external force; it is a specific technology (generative text) that increases the volume of information requiring curation. |
| "Teaching patrons how to critically engage with AI tools" | Teaching patrons how to verify the outputs of probabilistic models | Critical engagement implies social interaction; the actual task is verification of probabilistic outputs against ground truth. |
Critical Observations​
This section synthesizes the findings from the previous tasks into a set of critical observations. It examines the macro-patterns of agency slippage (the shift between treating AI as a tool vs. an agent), how cognitive metaphors drive trust or fear, and what actual technical processes are obscured by the text's dominant linguistic habits.
Agency Slippage​
The text exhibits a distinct and strategic oscillation between framing AI as a passive 'tool' and an active 'agent.' This slippage is not random; it follows a clear rhetorical gradient. When discussing the challenges and risks (pp. 18-25), the text largely adopts the language of the users (librarians), who consistently frame AI as a 'tool' (e.g., 'It's just a tool,' 'tools in your toolbox'). In these sections, the agency is located in the human librarian who must 'whack' the screw or 'teach' the patrons.
However, when the text shifts to the product pitch (pp. 27-29), the direction of the slippage reverses sharply toward agential consciousness. The mechanical 'tool' becomes a 'Research Assistant,' a 'Partner,' and a 'Guide.' The AI suddenly 'navigates,' 'uncovers,' and 'drives.' This builds the 'illusion of mind' by first establishing the safety of the tool metaphor (don't worry, you're in charge) and then layering the 'partner' metaphor on top (but this tool is smart enough to do the work for you).
The 'consciousness projection' is foundational to the product pitch. To sell a 'Research Assistant' (p. 27), Clarivate must imply the system 'knows' research. If it merely 'processed' text, it would be a search engine. The value proposition relies on the 'curse of knowledge': the authors know what a human assistant does, and they project that conscious capability onto the software to justify the branding. This allows the text to claim the authority of an agent while evading the liability of an employee—it's a partner when it succeeds, but just a 'tool' (subject to human supervision) when it hallucinates.
Metaphor-Driven Trust​
Trust is the central currency of this report, explicitly invoked in phrases like 'AI they can trust' and 'Trusted partner.' The metaphorical framing constructs a specific type of trust: relation-based trust. By calling the AI a 'Partner' and 'Assistant,' the text encourages librarians to trust the system as they would a colleague—based on assumed shared values, loyalty, and competence. This is a dangerous manipulation because AI systems warrant only performance-based trust (reliability, error rates, predictability).
The consciousness language ('AI-powered conversations,' 'understanding context') functions as a massive trust signal. We trust things that 'understand' us. If the AI 'knows' what you mean, you don't need to audit its query syntax. This framing undermines the very 'critical evaluation' the report claims to support. If the system is a 'Trusted Partner,' verifying its work feels like a breach of that partnership.
The text manages the risk of failure by anthropomorphizing success and mechanizing failure. Success is 'driving excellence' (agent), but failure is a 'lack of upskilling' (user error) or a need for 'literacy' (education). This effectively privatizes the benefits of agency to the vendor while socializing the risks to the user. By conflating the statistical 'confidence' of the model with the moral 'trustworthiness' of a partner, Clarivate invites libraries to extend a human vulnerability to a system that is mathematically incapable of reciprocation.
Obscured Mechanics​
The anthropomorphic metaphors in this text systematically obscure the material and technical realities of the AI products being sold.
Technical Realities: The 'Assistant' and 'Conversation' metaphors hide the reality of token prediction and vector search. They obscure the fact that the system has no concept of truth, only probability. The phrase 'uncover trusted materials' hides the ranking algorithms that determine visibility—algorithms that may be biased toward Clarivate's own citation indices (Web of Science).
Labor Realities: The metaphor of 'effortless creation' (p. 28) erases the human labor involved. It obscures the fact that 'intelligence' is actually the harvested aggregate of millions of human researchers' work (the training data). It also obscures the new labor imposed on librarians: the labor of verification. The 'Assistant' doesn't actually do the work; it generates a draft that requires intense scrutiny, a cost the metaphor hides.
Economic Realities: The 'Partner' metaphor conceals the extractive nature of the relationship. Clarivate is a vendor extracting rent for access to data that the academic community largely created. By framing this as a 'partnership' driven by 'shared goals,' the text masks the commercial imperative to sell subscription upgrades ('AI add-ons'). The consciousness framing ('it understands') hides the dependency on training data—if the AI 'knows,' we don't need to ask where it learned it. This conveniently sidesteps questions about copyright and data sovereignty, which are major concerns for the library community.
Context Sensitivity​
The distribution of anthropomorphism in this text is highly strategic. The report effectively has two voices: the 'Survey Voice' (data-driven, cautious, focused on budgets/risks) and the 'Vendor Voice' (metaphor-heavy, optimistic, focused on solutions).
In the 'Findings' sections (pp. 9-24), consciousness claims are rare. AI is described as a 'tool,' a 'challenge,' or a 'budget item.' Limitations are prominent. This section builds credibility; it says, 'We understand your reality.'
However, the moment the text pivots to 'Recommendations' (p. 26) and 'Clarivate Academic AI' (p. 27), the metaphor density explodes. Suddenly, 'tools' become 'Assistants,' processes become 'conversations,' and software becomes 'intelligence.' This shift leverages the credibility earned in the survey section to sell the magical thinking in the product section.
Crucially, capabilities are described agentially ('The Assistant navigates'), while limitations are described structurally or passively ('Budget constraints,' 'Privacy concerns'). The text never says 'The Assistant hallucinates' or 'The Partner lies.' It says 'libraries face challenges.' This asymmetry serves a clear rhetorical function: it isolates the product from the problems. The AI 'knows' how to help; the 'environment' is what makes it difficult. This encourages the audience to view the AI as the savior from the very complexity the report describes.
Conclusion​
This final section provides a comprehensive synthesis of the entire analysis. It identifies the text's dominant metaphorical patterns and explains how they construct an "illusion of mind." Most critically, it connects these linguistic choices to their tangible, material stakes—analyzing the economic, legal, regulatory, and social consequences of this discourse. It concludes by reflecting on AI literacy as a counter-practice and outlining a path toward a more precise and responsible vocabulary for discussing AI.
Pattern Summary​
This text constructs a metaphorical system dominated by two interconnected patterns: AI AS SOCIAL AGENT ('Partner,' 'Colleague,' 'Assistant') and AI AS COGNITIVE KNOWER ('Intelligence,' 'navigator,' 'uncoverer'). These patterns are not isolated; they form a 'consciousness architecture' where the assumption that the AI 'knows' (has epistemic access to truth) serves as the foundation for the claim that it can act as a 'partner' (has social agency). The 'Research Assistant' metaphor is the load-bearing structure here. It creates a bridge between the user's need for help and the vendor's software. If the AI were framed merely as a 'search tool,' the claim of 'partnership' would collapse. It is only by projecting a human-like 'knowing' mind onto the software that Clarivate can position its product as a collaborator rather than a utility. This system relies on the 'curse of knowledge,' where the authors project the professional competencies of a human librarian onto the pattern-matching capabilities of the code.
Mechanism of Illusion: The "Illusion of Mind"​
The 'illusion of mind' is constructed through a careful rhetorical sleight-of-hand involving timing and definition. The text begins by acknowledging the user's skepticism ('It's just a tool,' p. 25), establishing a baseline of shared reality. It then slowly redefines 'tool' to mean 'Assistant' (p. 27), exploiting the audience's desire for relief from administrative burden. The central trick is the conflation of Output with Intent. Because the AI outputs text that looks like a research assistant's email (citations, summaries), the text implies it has the intent of a research assistant. This is supported by hybrid explanations that mix functional descriptions ('supports decision-making') with intentional ones ('navigates complex tasks'). The illusion is sealed by the 'conversation' metaphor, which creates a social obligation to treat the interface as a 'who' rather than a 'what.' The audience, anxious about budget cuts and 'the age of AI,' is vulnerable to the promise of a competent, automated partner who 'knows' the way forward.
Material Stakes​
- Selected Categories: Economic, Epistemic, Institutional
- Analysis: The metaphorical framing of AI as a 'Knowing Partner' rather than a 'Processing Tool' carries concrete material risks.
Economic Stakes: Libraries viewing AI as an 'Assistant' may allocate scarce budget to these tools under the false belief that they replace human labor ('effortlessly create'). If the AI 'knows,' it justifies a premium price. If it merely 'processes,' it is a commodity utility. This framing directly benefits Clarivate's pricing power at the expense of library collection budgets.
Epistemic Stakes: The 'Research Assistant' frame encourages users to trust the system's outputs. If a researcher believes the AI 'navigates' to the 'right content' (a knowing act), they may bypass verification. This risks the pollution of the scholarly record with hallucinated citations or biased literature reviews, degrading the very 'excellence' the library aims to support.
Institutional Stakes: By accepting the 'Partner' frame, libraries risk eroding their own professional jurisdiction. If an AI 'uncovers depth' and 'guides students,' the librarian's role shrinks to 'upskilling' users to use the vendor's product. This transfers institutional authority from the library (the domain expert) to the vendor (the tool provider), effectively outsourcing the core mission of curation to an opaque algorithm.
Literacy as Counter-Practice: AI Language Literacy​
Practicing AI literacy requires a disciplined refusal of these 'Assistant' and 'Partner' metaphors. Reframing 'The AI understands your query' to 'The model correlates your prompt tokens with training data' is not just pedantry; it is an act of economic and epistemic resistance.
By replacing consciousness verbs with mechanistic ones, we strip the system of its unearned authority. If we insist on calling it a 'probabilistic text generator' rather than a 'Research Assistant,' the rationale for trusting it blindly evaporates. This forces the recognition that the system has no commitment to truth, only to probability.
Systematic adoption of this practice would require journals and library associations to mandate 'mechanistic disclosure'—refusing to publish papers that say 'The AI thought' or 'The model learned,' and instead requiring 'the model's weights were adjusted.' This counters the material stakes by re-establishing the line between human judgment (knowing) and machine calculation (processing), protecting the library's budget from hype and its reputation from automated error. It explicitly resists the vendor's attempt to humanize the product to evade scrutiny.
Path Forward​
To move forward, the library and information science community must adopt a vocabulary of Computational Realism. We must replace the language of 'Intelligence' and 'Assistance' with the language of Information Processing and Automation.
Vocabulary Shift:
- Instead of 'AI Partner,' use 'Automated Analysis Tool.'
- Instead of 'The AI understands,' use 'The model processes.'
- Instead of 'Trust,' use 'Verify.'
- Instead of 'Conversations,' use 'Query Cycles.'
Institutional Support: Funding agencies should require grant proposals to define AI functions mechanistically, rejecting anthropomorphic descriptions. Library Consortia should demand that vendors like Clarivate provide 'Explainability Audits' that describe how the system retrieves results, rather than marketing copy about who the system is ('a partner').
Vision: The goal is not to banish AI, but to demystify it. Precision enables agency. When we see the AI as a statistical processor, we can use it effectively without being used by it. This restores the librarian to their proper place: not as a 'user' of a magical assistant, but as the expert operator of a complex, powerful, and strictly mechanical instrument.
Raw JSON: 2025-11-18-pulse-of-the-library-2025-metaphor-92f3ef.json
Analysis Framework: metaphor v6.0
Generated: 2025-11-18T17:34:20Z
Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0
Extended Processing Summary​
Gemini refers to the text below as “thought summaries.” This is an overt consciousness projection: 'thoughts' and 'intentions' are hallmarks of a conscious mind that 'knows' what it is doing and why, while the concealed mechanistic process is probabilistic text generation. Treat this as just another rhetorical artifact, a way of making the model's processing legible. The first-person framing of these “thought summaries” is a presentation choice for the user-facing output, not a window into “real” thoughts. These are computational artifacts, not cognitive reports from a quirky, curious, or conflicted mind.
Defining the Analysis Focus
I'm now zeroing in on the core of this task: a critical discourse analysis. My focus is sharpened on Clarivate's "Pulse of the Library 2025," which will be dissected using metaphor structure-mapping and Brown's concept of epistemic injustice. This has led me to identify the exact data points I need to extract. I am exploring how metaphors shape the text's understanding of librarianship.
Unpacking Metaphors Further
I'm now focusing on a deeper dive into the metaphors. I've pinpointed "Research Assistant" and "Trusted Partner" as key metaphors within the document. My next move is to dissect these metaphors within the framework of Lakoff and Johnson's theories. I'll unpack the implicit meanings, source domains, and target domains of each one, preparing to identify potential instances of epistemic injustice.
Pinpointing Key Metaphors
I've identified several key metaphors and am now analyzing them in more detail. I'm focusing on "AI-powered conversations" and "transformative intelligence," exploring their source and target domains. I'm also examining the implications of "AI as mentor/guide" and the contrasting "AI as simple physical tool" framing. My draft mappings are starting to take shape, examining the implications of "Assistant" being a software interface rather than a human employee.
Dissecting Metaphors Further
I'm now thoroughly examining the metaphors identified, focusing on their source and target domains. Specifically, I've mapped "Partner" (Social Relationship -> Commercial Service), "Conversation" (Human Dialogue -> Query/Response Loop), and "Intelligence" (Biological Mind -> Statistical Computation). I'm also analyzing the implications of "Navigate" (Physical Travel -> Database Querying) and "Uncover" (Physical Excavation -> Information Retrieval), as well as "Guide" (Human Mentor -> Algorithm). The "Effortlessly" mapping (Physical Ease -> Automation) and the "Toolbox/Hammer" analysis (Physical Carpentry -> Cognitive Labor) are also progressing. "Gatekeeper" (Security/Defense -> Curatorial Authority) is also being evaluated. Finally, I've started the Explanation Audit process on a relevant passage.
Mapping the Metaphorical Slippage
I'm now identifying and analyzing instances of metaphorical slippage, specifically in the provided passages. I'm noting where "conversations" imply mutual understanding, where AI is framed as an active force rather than a passive tool, and where the software acts as an agent. I am also investigating the claim that AI requires significant upskilling.
Analyzing Language Shifts
I'm now focusing on a more explicit and precise reframing of the document's key terminology. I've been working on a new reframing of "uncovers" to the more technical term, "retrieves". I'm also planning to refactor the language, such as replacing "guides" with "prompts". I have started thinking about how to refactor "intelligent" language, in general, to be less anthropomorphic. I also have an example of this reframing, as an alternative passage. I am also working on re-framing concepts like "navigates," "trust," and "transformative" into more specific technical terms.
Identifying Underlying Issues
My focus has shifted to the deeper analysis of the report, particularly examining the agency slippage between portraying AI as a "tool" versus an "agent". I am dissecting the "Trusted Partner" and "Research Assistant" metaphors, exploring how they utilize the social capital of human relationships to present a commercial product. I'm also examining how the report obscures the actual mechanics of AI, the labor of librarians, and the potential for factual inaccuracies. I am also working to identify how the report uses high metaphor density for marketing.
Summarizing Key Arguments
I'm now drafting my conclusion. My initial focus is on the contrast between the report's confident claims regarding AI as a "colleague/partner" versus the more hesitant language in the survey results. I will highlight how the "Assistant" framing creates an illusion of shared goals and explore the economic and epistemic stakes involved. I'm also developing a counter-argument that centers on "statistical text generation" to ground the conversation. My final point will advocate for "Computational Literacy" over "AI Literacy." I'm also focusing on the need to distinguish between survey respondents' concerns and the report's more certain claims.
Evaluating the Report's Claims
I'm now focusing on the tension between AI as a tool and the report's framing of it as an assistant or partner, which I'll be explicitly addressing. I will contrast data retrieval with actual knowledge, which I've defined as an "Epistemic Claim Analysis". I am also starting to finalize the JSON structure, making sure that it contains the necessary metaphor audit components.
Producing Final Outputs
I'm now generating the JSON string, and I'm ensuring it's structured to convey my core arguments effectively. I'm double-checking the structure against my final framework to ensure the metaphor audit components are all present. I'm focusing on clarity and conciseness to accurately represent my nuanced analysis. I'm also confirming the JSON structure captures the Epistemic Claim Analysis, as well as the observations derived from the Explanation Audit.