Pulse of the library
This document presents a Critical Discourse Analysis focused on AI literacy, specifically targeting the role of metaphor and anthropomorphism in shaping public and professional understanding of generative AI. The analysis is guided by a prompt that draws from cognitive linguistics (metaphor structure-mapping), the philosophy of social science (Robert Brown's typology of explanation), and accountability analysis.
All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs—not guarantees of factual accuracy or authorial intent.
- Source Title: Pulse of the library
- Source URL: https://clarivate.com/pulse-of-the-library/
- Model: gemini-3.1-pro-preview
- Temperature: 1.05
- Top P: 0.95
- Tokens: input=16241, output=19289, total=35530
- Source Type: report
- Published: 2025
- Analyzed At: 2026-03-28T08:27:50.742Z
- Framework: metaphor
- Framework Version: 6.4
- Schema Version: 3.0
- Run ID: 2026-03-28-pulse-of-the-library-metaphor-06cspv
Metaphor & Illusion Dashboard
Anthropomorphism audit · Explanation framing · Accountability architecture
- How/Why Slippage: 80% of explanations use agential framing (8 of 10 explanations)
- Unacknowledged Metaphors: 100% presented as literal description; no meta-commentary or hedging
- Hidden Actors: 100% with agency obscured by agentless constructions; corporations/engineers unnamed
- Explanation Types: how vs. why framing
- Acknowledgment Status: meta-awareness of metaphor
- Actor Visibility: accountability architecture
- Source → Target Pairs (8): human domains mapped onto AI systems
- Metaphor Gallery (8)
Task 1: Metaphor and Anthropomorphism Audit
About this task
For each of the major metaphorical patterns identified, this audit examines the specific language used, the frame through which the AI is being conceptualized, what human qualities are being projected onto the system, whether the metaphor is explicitly acknowledged or presented as direct description, and—most critically—what implications this framing has for trust, understanding, and policy perception.
V3 Enhancement: Each metaphor now includes an accountability analysis.
1. Software as Epistemic Navigator
Quote: "Web of Science Research Assistant: Navigate complex research tasks and find the right content."
- Frame: Model as conscious researcher
- Projection: This metaphor projects human spatial awareness, intellectual discernment, and conscious intent onto a statistical retrieval system. By characterizing the software as a 'Research Assistant' capable of 'navigating' and actively 'finding the right content,' the text attributes conscious epistemic agency to what are fundamentally mathematical operations. A human research assistant possesses subjective awareness, contextual understanding, and justified beliefs about what constitutes the 'right' or accurate content. Projecting this human capacity onto an artificial intelligence system suggests the software possesses a mind that genuinely knows and comprehends research goals, rather than merely calculating statistical vector similarities and retrieving the highest-probability token sequences based on a user's prompt. It falsely grants the system an independent, conscious awareness of academic truth, converting deterministic algorithms into an illusion of thoughtful participation.
- Acknowledgment: Direct (Unacknowledged) (The text presents the AI's ability to 'navigate complex research tasks' as a literal capability and product feature, entirely devoid of any hedging, quotation marks, or qualifications.)
- Implications: This framing profoundly inflates the perceived sophistication and reliability of the software, directly influencing user trust and institutional policy. If librarians and students believe the AI actually 'understands' and can consciously discern the 'right' content, they become highly susceptible to automation bias and may bypass critical evaluation of the results. This creates severe risks of unwarranted trust in academic settings, where the system might hallucinate or retrieve irrelevant information with high statistical confidence. Furthermore, by positioning the tool as an independent 'Assistant,' the framing obscures vendor liability; if the system fails, the implication is that an autonomous entity made a mistake, rather than acknowledging that the company deployed a statistically flawed retrieval algorithm.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: The system is framed as an autonomous assistant acting entirely on its own accord. The engineers at Clarivate who designed the search algorithms, the executives who decided to integrate the generative model, and the company that profits from selling this interface are completely erased from the action of 'navigating' or 'finding.' By stating the AI autonomously 'finds the right content,' the text hides the fact that specific human actors programmed the retrieval parameters and relevance weights. This agentless construction serves the vendor's interests by absorbing credit for success while diffusing direct responsibility for systemic failures or algorithmic biases.
2. Algorithmic Correlation as Intellectual Evaluation
Quote: "ProQuest Research Assistant: Helps users create more effective searches, quickly evaluate documents, engage with content more deeply, and explore new topics with confidence."
- Frame: Model as intellectual collaborator
- Projection: This phrasing projects higher-order human cognitive functions—specifically evaluation, deep engagement, and intellectual exploration—onto an algorithmic process. The text suggests the AI possesses the capacity to 'evaluate documents' and facilitate 'deep' engagement, which maps human conscious judgment and semantic comprehension onto the system. In reality, the AI only processes text data, classifies tokens, and predicts sequences based on training weights. It does not 'know' or 'believe' anything about the documents it processes, nor can it experience 'depth' of engagement. By attributing these conscious faculties to the software, the text transforms mechanistic pattern matching into an illusion of a reasoning mind capable of assessing qualitative academic merit.
- Acknowledgment: Direct (Unacknowledged) (The capabilities to 'evaluate documents' and 'engage with content' are presented as literal, unhedged product functions in the catalog without any metaphorical framing acknowledged.)
- Implications: Framing an AI as capable of evaluating documents and deepening engagement creates dangerous epistemic vulnerabilities in the research process. It encourages users to offload their own critical thinking and academic judgment onto a statistical model that lacks any true comprehension of the material. If users trust the AI to 'evaluate' on their behalf, they risk absorbing generated hallucinations or statistically probable but factually incorrect summaries. This inflates the perceived capabilities of the tool, leading to unwarranted trust and a potential degradation of rigorous research standards, while masking the fundamental lack of conceptual understanding within the system.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: The text completely obscures the human developers who defined the metrics for 'effective searches' and programmed the summarization parameters used to 'evaluate documents.' Clarivate and its engineering teams are the actual actors who designed the algorithms that perform these classifications, yet they are entirely absent from the sentence. This displacement of agency constructs the AI as an independent intellectual actor, shielding the corporate designers from scrutiny regarding how 'evaluation' is quantified and what biases might be embedded in the code.
3. Probabilistic Generation as Pedagogical Guidance
Quote: "Alethea: Simplifies the creation of course assignments and guides students to the core of their readings."
- Frame: Model as mentor and teacher
- Projection: This metaphor maps the intentional, empathetic, and authoritative role of a human educator onto an automated text-processing system. The claim that the software 'guides students to the core of their readings' projects a conscious understanding of both the student's learning process and the semantic 'truth' or essence of a text. 'Guiding' implies an intentional actor that knows the destination and understands how to lead someone there. However, the system merely calculates attention weights and extracts statistically salient phrases based on its training distribution. It does not 'know' what the core of the reading is, nor does it have an educational intent. The language replaces mechanistic extraction with a projection of conscious pedagogical wisdom.
- Acknowledgment: Direct (Unacknowledged) (The action of guiding students to the core of readings is stated as a definitive, literal feature of the Alethea product, with no acknowledgement of the metaphorical nature of the claim.)
- Implications: This pedagogical anthropomorphism heavily impacts the institutional adoption and student trust in the platform. By branding the AI as a guide to the 'core' of readings, it positions the system as an epistemic authority, replacing the human professor's interpretive framework with a proprietary algorithm. This risks flattening complex academic texts into statistically average summaries, preventing students from developing their own analytical skills. The consciousness projection inflates the system's perceived ability to teach, creating a false sense of security that the AI 'understands' the syllabus, while obfuscating the risk of the model prioritizing common misinterpretations found in its training data.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: The statement asserts that the product 'Alethea' is the sole actor doing the simplifying and guiding. The human educators who originally created the source materials, the Clarivate developers who wrote the summarization model, and the data annotators who shaped the AI's output preferences are completely invisible. By naming the software as the agent, the text obscures the human decisions that define what is algorithmically considered the 'core' of a reading. If the algorithmic extraction misses vital nuance, the blame is diffused into the software rather than the engineering choices.
4. Software as Moral and Relational Agent
Quote: "Clarivate helps libraries adapt with AI they can trust to drive research excellence, student outcomes and library productivity."
- Frame: Model as a trustworthy professional
- Projection: This framing projects human moral reliability, professional integrity, and intentional ambition onto artificial intelligence. The phrase 'AI they can trust to drive research excellence' maps interpersonal, relation-based trust onto a statistical processing tool. Humans 'trust' other humans because of shared values, intentions, and vulnerability. By asking libraries to trust an AI to 'drive excellence,' the text attributes a conscious desire to achieve high standards to the software. The model, however, processes data without any conception of 'excellence' or any moral stake in the outcomes. Projecting trustworthiness obscures the mechanical reality that the AI operates strictly on mathematical probabilities, devoid of any ethical commitments or intentional goals.
- Acknowledgment: Direct (Unacknowledged) (The claim asks for literal trust in the AI to perform complex, value-laden tasks like driving research excellence, presented without any qualifying language to indicate a metaphorical use.)
- Implications: Transferring relation-based trust to an algorithmic system fundamentally corrupts institutional risk assessment. When administrators 'trust' AI to drive excellence, they are discouraged from implementing necessary auditing and verification protocols, assuming the system inherently intends to do well. This unwarranted trust obscures the reality that the system will predictably generate plausible but false information when it lacks sufficient data constraints. Furthermore, characterizing the AI as a trusted driver of outcomes legally and culturally shifts the burden of performance away from the corporate vendor providing the tool and onto the software itself, complicating liability when the system inevitably produces erroneous or biased results.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: While Clarivate is named as a helper, the entity that actually 'drives research excellence' is portrayed as the AI itself. This subtly displaces the responsibility for the quality of the outputs. By asking users to trust the AI, rather than asking them to trust Clarivate's engineering team, data scientists, and corporate QA processes, the company constructs an accountability shield. If the product fails to deliver excellence, the phrasing implies the technology fell short, rather than Clarivate making poor development or deployment decisions.
5. Algorithmic Operations as Dialogic Comprehension
Quote: "Summon Research Assistant: Enables users to uncover trusted library materials via AI-powered conversations."
- Frame: Model as an active interlocutor
- Projection: This metaphor projects the human capacity for reciprocal communication, conscious listening, and semantic comprehension onto a prompt-based generative interface. Calling the interaction a 'conversation' implies that the AI is a conscious interlocutor that listens, understands the user's intent, and responds with considered knowledge. In reality, the system takes the user's text as a sequence of input tokens, projects them into a high-dimensional space, and statistically predicts the most likely subsequent tokens based on its training weights and the library's indexed data. The projection erases the mechanistic reality of sequence prediction, replacing it with the illusion of a mind actively comprehending and participating in a dialogue.
- Acknowledgment: Direct (Unacknowledged) (The term 'conversations' is used as a literal description of the human-computer interaction feature within the Summon platform, without any scare quotes or explanatory hedging.)
- Implications: Describing prompt-response cycles as 'conversations' tricks the user's cognitive heuristics into treating the system as a social agent. This heightens the risk of users disclosing sensitive information and leads to overestimating the system's ability to 'understand' complex or nuanced queries. Because humans associate conversation with consciousness and understanding, users will intuitively assume the AI 'knows' what it is talking about, thereby lowering their skeptical defenses. This anthropomorphic framing masks the system's total lack of contextual awareness and makes its generated responses—even completely fabricated ones—feel socially authoritative and intuitively true.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: The text attributes the action of 'uncovering' materials to the user facilitated by the 'conversations' with the AI. The engineers who built the RAG (Retrieval-Augmented Generation) pipeline, the indexers of the library materials, and the designers of the conversational interface at Clarivate are entirely omitted. This obscures the fact that human designers pre-determined the parameters of what gets retrieved and how the generative model formats the response, displacing the agency of the platform creators onto an automated interactive loop.
6. Machine Learning as Biological Organism
Quote: "People are very nervous because if you've got a well-trained AI, then why do you need people to work in libraries?"
- Frame: Model as a trained animal or student
- Projection: The phrase 'well-trained AI' projects the biological and psychological processes of learning, habituation, and cognitive development onto the mathematical process of gradient descent and weight adjustment. It implies the AI is an entity that has undergone a process of education or behavioral conditioning, suggesting an internal cognitive state that 'learns' and 'retains' knowledge. Mechanistically, training an AI simply means exposing an algorithm to vast datasets to optimize a loss function until its statistical predictions align with human-provided labels. Projecting biological training onto this process falsely suggests the system acquires actual knowledge and competence in the way a human or animal does, granting it a ghost of conscious capability.
- Acknowledgment: Direct (Unacknowledged) (The librarian quoted uses 'well-trained' as standard, literal industry terminology to describe the capability of the AI system, showing how deeply naturalized the biological metaphor has become.)
- Implications: The 'well-trained' metaphor heavily influences public perception of AI competence and reliability. If an AI is considered 'well-trained,' audiences naturally assume it possesses generalized competence, reliable judgment, and an understanding of the rules it was 'taught.' This obscures the brittle nature of machine learning, where a model might perform perfectly on training data but fail catastrophically on slight variations (out-of-distribution data). This projection of organic learning creates unwarranted fear of job displacement because it positions the AI as an equivalent, albeit artificial, worker that 'knows' the job, rather than a statistical tool that fundamentally requires human oversight.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: By focusing on the 'well-trained AI,' the statement obscures the vast human labor required to perform that training. It renders invisible the data scientists who selected the training data, the thousands of underpaid click-workers who provided the reinforcement learning feedback (RLHF), and the corporate executives funding the compute power. The AI is presented as the culmination of the training, hiding the massive infrastructure of human actors and decisions whose biases and labor are actually responsible for the system's capabilities and flaws.
7. Algorithmic Bias as Inherent Flaw
Quote: "identifying and mitigating bias in AI tools"
- Frame: Model as an independent prejudiced entity
- Projection: This framing projects human social prejudices, subjective biases, and moral failings onto a mechanistic software tool. By locating the 'bias' strictly 'in AI tools,' the metaphor suggests the algorithm itself has independently developed prejudiced beliefs or flawed judgments. In reality, AI systems do not possess consciousness or the capacity for bigotry; they strictly process mathematical correlations found within their training datasets. The 'bias' is actually the historical human prejudice encoded in the data collected and fed into the system by human engineers. Projecting the bias onto the tool divorces the output from its human origins, treating the software as an autonomous agent with its own flaws.
- Acknowledgment: Direct (Unacknowledged) (The phrase treats the existence of bias as an inherent, literal property residing within the 'AI tools,' presented without any qualification about the human origins of that data.)
- Implications: Locating bias 'in' the AI tool profoundly impacts regulatory approaches and institutional accountability. It frames the problem as a technical glitch to be 'mitigated' by software patches, rather than a systemic issue of human data curation, historical inequality, and corporate negligence. This technological determinism leads audiences to believe that AI fairness is a mathematical puzzle rather than a sociopolitical challenge. Furthermore, it creates a fatalistic acceptance of biased outputs, as they are seen as mysterious artifacts of the machine rather than the direct, predictable consequence of human engineers choosing to scrape uncurated, discriminatory internet data.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: This is a classic accountability sink. By stating the goal is mitigating bias 'in AI tools,' the text completely erases the human engineers, data brokers, and corporate managers who selected, purchased, and deployed the biased training data. Clarivate and other tech vendors are not named as the perpetrators of the bias. The phrase transforms an active corporate failure—deploying models trained on discriminatory data without proper auditing—into a passive, almost natural phenomenon that happens to exist 'in' the software, thereby shielding the actual decision-makers from ethical and legal liability.
8. Mathematical Similarity as Relevance Assessment
Quote: "Ebook Central Research Assistant: Facilitates deeper engagement with ebooks, helping students assess books' relevance and explore new ideas."
- Frame: Model as critical evaluator
- Projection: This metaphor projects the conscious human ability to evaluate qualitative relevance and comprehend conceptual ideas onto statistical string-matching and embedding-proximity algorithms. The text claims the AI helps 'assess books' relevance,' which implies the software reads the book, understands the student's semantic need, and consciously judges the conceptual alignment between the two. Mechanistically, the software converts words into numerical vectors and calculates the mathematical distance between them. The AI does not 'know' what the book is about or what the student actually needs. The language projects the epistemic state of 'knowing relevance' onto a system that only processes statistical correlations.
- Acknowledgment: Direct (Unacknowledged) (The capability of the software to help assess relevance and explore ideas is marketed as a literal, factual feature of the platform, with no metaphorical distancing.)
- Implications: This framing severely compromises information literacy by encouraging students to treat an algorithmic similarity score as an authoritative judgment of academic relevance. If the system is believed to 'assess relevance' consciously, students will blindly accept its recommendations, ignoring books that the algorithm mathematically bypassed but which might be conceptually vital. This projection of evaluative consciousness inflates the system's authority, masking the reality that the algorithm is biased toward dominant, highly cited, or frequently occurring phrases, and possesses zero genuine understanding of niche, novel, or interdisciplinary ideas.
Accountability Analysis:
- Actor Visibility: Hidden (agency obscured)
- Analysis: The agentless construction positions the 'Ebook Central Research Assistant' as the sole entity facilitating engagement and assessing relevance. The developers who tuned the search algorithms and the corporate executives who prioritized specific engagement metrics over qualitative academic discovery are entirely invisible. By hiding these actors, Clarivate claims credit for educational support while deflecting responsibility for how their proprietary, closed-source algorithms actively shape and constrain what students are able to discover, essentially privatizing academic curation without accountability.
Task 2: Source-Target Mapping
About this task
For each key metaphor identified in Task 1, this section provides a detailed structure-mapping analysis. The goal is to examine how the relational structure of a familiar "source domain" (the concrete concept we understand) is projected onto a less familiar "target domain" (the AI system). By restating each quote and analyzing the mapping carefully, we can see precisely what assumptions the metaphor invites and what it conceals.
Mapping 1: Human Research Assistant (Conscious, intentional employee) → Retrieval-Augmented Generation (RAG) system running database queries
Quote: "Web of Science Research Assistant: Navigate complex research tasks and find the right content."
- Source Domain: Human Research Assistant (Conscious, intentional employee)
- Target Domain: Retrieval-Augmented Generation (RAG) system running database queries
- Mapping: The relational structure of a human employee assigned a task is mapped onto a software interface. The source domain assumes an entity that can listen to instructions, conceptually understand the goal of a research project, physically or digitally explore a library, evaluate findings against truth conditions, and return with curated answers. This maps onto the AI system, inviting the assumption that the algorithmic retrieval process involves conscious understanding of the query's meaning, an awareness of the complex nature of the task, and an intentional, judgmental selection of the 'right' textual outputs. It projects the conscious state of knowing exactly what is needed onto the mechanistic process of vector similarity search.
- What Is Concealed: This mapping conceals the rigid, mathematical nature of the underlying algorithms, primarily hiding the fact that the system relies entirely on statistical frequency and proximity, not semantic truth. It obscures the proprietary, opaque nature of Clarivate's search index and the specific weights assigned to different ranking signals. The rhetoric exploits this opacity, replacing a transparent explanation of database querying with a comforting but deceptive anthropomorphic narrative that hides the total absence of human-like discernment.
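To make the concealed mechanism concrete, the sketch below shows what 'navigating' and 'finding the right content' reduce to: embedding texts as vectors and sorting by a similarity score. This is a deliberately crude, hypothetical illustration; the bag-of-words embed function and the sample documents are invented stand-ins, not Clarivate's proprietary pipeline, which would use learned dense embeddings and additional ranking signals.

```python
# Minimal sketch of similarity-based retrieval (illustrative only).
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a sparse bag-of-words count vector. Real systems use
    # learned dense vectors, but the ranking logic has the same shape.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors; no comprehension involved.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[tuple[float, str]]:
    # "Finding the right content" is, mechanically, sorting by a score.
    q = embed(query)
    return sorted(((cosine(q, embed(d)), d) for d in documents), reverse=True)[:k]

docs = [
    "citation analysis of peer reviewed journals",
    "library budgeting and staff productivity",
    "machine learning methods for text retrieval",
]
print(retrieve("peer review citation metrics", docs))
```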
Mapping 2: Academic Collaborator (Critical, evaluating peer) → Generative Summarization and Search Optimization Algorithms
Quote: "ProQuest Research Assistant: Helps users create more effective searches, quickly evaluate documents... and explore new topics"
- Source Domain: Academic Collaborator (Critical, evaluating peer)
- Target Domain: Generative Summarization and Search Optimization Algorithms
- Mapping: The structure of an intellectual partnership is mapped onto user-software interactions. The source domain relies on the existence of a peer who possesses critical thinking skills, understands academic quality, and can quickly read and judge a text's merit. Projected onto the target domain, it implies the AI possesses these exact evaluative and exploratory capacities. It invites the user to assume the system exercises justified belief and critical evaluation when processing documents, mapping the conscious act of 'judging quality' onto the mechanistic act of 'extracting statistically salient tokens.' It projects epistemic awareness onto text-generation.
- What Is Concealed: This mapping utterly conceals the system's inability to comprehend meaning, factual accuracy, or academic rigor. It hides the algorithmic reality that the system evaluates 'documents' only by parsing patterns in token distribution. Furthermore, because these are proprietary systems, users cannot see the training data or the weights determining what makes a search 'effective' or a document 'valuable.' The mapping obscures the reality that the user is interacting with a blind, albeit highly complex, mathematical mirror rather than a discerning colleague.
Mapping 3: Teacher/Mentor (Pedagogical guide with epistemic authority) → Text Summarization and Key-Phrase Extraction Pipeline
Quote: "Alethea: Simplifies the creation of course assignments and guides students to the core of their readings."
- Source Domain: Teacher/Mentor (Pedagogical guide with epistemic authority)
- Target Domain: Text Summarization and Key-Phrase Extraction Pipeline
- Mapping: The structure of a teacher-student dynamic is mapped onto the software's summarization output. The source domain involves a human who has read the text, synthesized its meaning, determined the most educationally vital concepts, and intentionally leads a student toward comprehension. This maps onto the AI, projecting a conscious understanding of both the text's 'core' meaning and the student's cognitive needs. It invites the dangerous assumption that the algorithm possesses justified true belief about what the text signifies and intentionally curates this for educational benefit, mapping conscious pedagogical wisdom onto mechanistic text-processing.
- What Is Concealed: This framing conceals the statistical extraction methods used to generate summaries. It hides the fact that the algorithm determines the 'core' based on attention weights, word frequencies, and proximity, not through philosophical or thematic understanding. It obscures the reality that the system may confidently extract the wrong 'core' entirely if the text uses non-standard formatting or irony. By framing it as a 'guide,' the text rhetorically exploits proprietary opacity to present automated data processing as an authoritative educational intervention.
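The sketch below illustrates the gap between 'guiding students to the core' and statistical extraction. Scoring sentences by word frequency is an assumption made for brevity; Alethea's actual pipeline is undisclosed and presumably attention-based, but the structural point holds either way: the 'core' is simply whatever scores highest under some numeric salience measure.

```python
# Minimal sketch of extractive summarization (illustrative heuristic only).
import re
from collections import Counter

def extract_core(text: str, n_sentences: int = 2) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Corpus-wide word frequencies stand in for "importance".
    freqs = Counter(w for s in sentences for w in re.findall(r"\w+", s.lower()))

    def salience(sentence: str) -> float:
        words = re.findall(r"\w+", sentence.lower())
        return sum(freqs[w] for w in words) / (len(words) or 1)

    # "Guiding to the core" = returning the top-scoring sentences, verbatim.
    return sorted(sentences, key=salience, reverse=True)[:n_sentences]

reading = ("Empires rise on trade. Trade requires trust. "
           "Trust, once broken by empires, rarely returns. "
           "A brief footnote discusses fishing tariffs.")
print(extract_core(reading))
```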
Mapping 4: Trusted Professional Colleague (Moral, reliable agent) → Commercial Machine Learning Product Integration
Quote: "Clarivate helps libraries adapt with AI they can trust to drive research excellence"
- Source Domain: Trusted Professional Colleague (Moral, reliable agent)
- Target Domain: Commercial Machine Learning Product Integration
- Mapping: The relational dynamics of interpersonal trust and professional reliance are mapped onto the procurement and use of commercial software. In the source domain, trust is earned through shared values, demonstrated integrity, and conscious commitment to shared goals (excellence). Projected onto the AI, this maps the capacity for moral reliability and intentional goal-seeking onto code. It invites the audience to assume the system consciously 'wants' to achieve research excellence and can be relationally trusted to uphold academic standards, mapping subjective moral commitment onto automated statistical outputs.
- What Is Concealed: This metaphor conceals the fundamental lack of intentionality, morality, and reliability in statistical models. It hides the technical reality that LLMs frequently 'hallucinate' plausible falsehoods because they predict tokens without grounding in truth. It also obscures the commercial motives of Clarivate, shifting the focus from trusting a profit-driven corporation to trusting a seemingly objective, dedicated digital entity. The metaphor masks the vast computational and infrastructural dependencies required to run the models, presenting a massive industrial mechanism as a simple, trustworthy friend.
Mapping 5: Human Conversational Partner (Listening, comprehending interlocutor) → Iterative Prompt-and-Response Natural Language Interface
Quote: "Summon Research Assistant: Enables users to uncover trusted library materials via AI-powered conversations."
- Source Domain: Human Conversational Partner (Listening, comprehending interlocutor)
- Target Domain: Iterative Prompt-and-Response Natural Language Interface
- Mapping: The structure of human dialogue is mapped onto an iterative software interface. The source domain features mutual understanding, turn-taking, theory of mind, and continuous semantic comprehension. Projected onto the target domain, it invites users to assume the AI system 'hears' their query, 'understands' the context, and 'speaks' back with considered intent. It maps the conscious experience of reciprocal linguistic comprehension onto the mechanistic, stateless process of processing input tensors and generating output probabilities based on a vast matrix of numerical weights.
- What Is Concealed: This mapping aggressively conceals the stateless, unthinking nature of the underlying language model. It hides the fact that the system does not 'remember' the conversation but simply processes the entire text history anew with each prompt to predict the next word. It obscures the absence of ground truth and semantic understanding, hiding the mathematical complexity of token generation behind the universally familiar, comforting interface of a chat. This opacity is actively exploited to make users feel they are collaborating with a mind rather than querying a database.
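A minimal sketch of the loop the 'conversation' metaphor conceals: on every turn the full transcript is re-flattened into one prompt and passed to a generation function that retains nothing between calls. The generate function here is a hypothetical placeholder for any completion API.

```python
# Minimal sketch of a stateless "conversation" loop (illustrative only).
def generate(prompt: str) -> str:
    # Placeholder: a real system would return sampled next-token predictions.
    return f"[continuation conditioned on {len(prompt)} chars of context]"

def chat_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The entire history is replayed on every turn; nothing is "remembered".
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    reply = generate(prompt)
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(chat_turn(history, "Find sources on library automation."))
print(chat_turn(history, "Narrow that to academic libraries."))  # turn 1 is re-sent too
```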
Mapping 6: Trained Animal or Educated Human (Biological learning) → Optimized Machine Learning Model
Quote: "People are very nervous because if you've got a well-trained AI, then why do you need people to work in libraries?"
- Source Domain: Trained Animal or Educated Human (Biological learning)
- Target Domain: Optimized Machine Learning Model
- Mapping: The structure of biological habituation and cognitive education is mapped onto algorithmic optimization. The source domain implies an organic entity that learns from experience, internalizes rules, and develops generalized competence to perform tasks independently. This projects the human/animal capacity for genuine understanding and adaptive reasoning onto the AI. It invites the assumption that gradient descent and data exposure create a holistic 'knowing' entity that can replace human holistic labor, mapping conscious skill acquisition onto the mathematical adjustment of billions of parameters.
- What Is Concealed: This mapping conceals the immense fragility and narrowness of machine learning models. It hides the fact that a 'well-trained' model has merely achieved a low error rate on its specific training data and lacks any generalized common sense or adaptability to novel situations outside its distribution. Crucially, it conceals the massive, invisible human labor force—data annotators, engineers, RLHF workers—whose ongoing effort is required to maintain the illusion of the AI's 'training.' The metaphor replaces a massive socio-technical infrastructure with a single, self-contained, capable entity.
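For concreteness, here is a minimal sketch of what 'training' denotes, under the simplifying assumption of a one-parameter model: gradient descent nudges a number until squared error on the training pairs is small. Nothing cognitive occurs, and nothing in the loop guarantees sensible behavior on inputs unlike the training data.

```python
# Minimal sketch of "training" as loss minimization (one parameter).
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (input, target) examples

w = 0.0    # after optimization, the model's entire "knowledge" is this number
lr = 0.01  # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad             # one gradient-descent step

print(f"optimized weight: {w:.3f}")  # ~2.0: low error on this data only
# "Well-trained" means exactly this: low loss on the training distribution.
# Out-of-distribution inputs carry no such guarantee.
```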
Mapping 7: Prejudiced Human Actor or Flawed Vessel → Unrepresentative/Historical Training Data Distributions
Quote: "identifying and mitigating bias in AI tools"
- Source Domain: Prejudiced Human Actor or Flawed Vessel
- Target Domain: Unrepresentative/Historical Training Data Distributions
- Mapping: The structure of human psychological prejudice or an inherently flawed physical container is mapped onto a statistical software tool. The source domain involves an entity possessing unfair beliefs, moral failings, or inherent defects. Projected onto the AI, it maps the concept of active discrimination or inherent flaw onto the mathematical outputs of the system. It invites the assumption that the AI itself acts with bias or contains bias organically, projecting moral and cognitive failure onto a system that merely reflects the statistical reality of its inputs.
- What Is Concealed: This mapping completely conceals the human origins of the bias. It hides the fact that AI bias is nothing more than the mathematical reflection of human historical prejudice embedded in the internet data scraped to train the models. It obscures the active decisions made by data scientists and corporate executives to use massive, uncurated datasets without adequate filtering because it is cheaper and faster. By placing the bias 'in the tool,' it conceals corporate negligence and the societal reality of discrimination, framing a sociopolitical and engineering failure as an abstract software glitch.
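The sketch below makes the human origin of 'bias' concrete: a toy next-word 'model' that merely counts continuation frequencies in an invented corpus. The skew in its outputs is a mathematical reflection of data that people assembled; the code itself holds no beliefs, and no patch to the code alone can remove what the data encodes.

```python
# Minimal sketch: "bias in the tool" is frequency in the data (toy corpus).
from collections import Counter, defaultdict

corpus = [
    ("the director said", "he"), ("the director said", "he"),
    ("the director said", "he"), ("the director said", "she"),
    ("the nurse said", "she"), ("the nurse said", "she"),
]

# "Training": tally which continuation followed each context.
model: dict[str, Counter] = defaultdict(Counter)
for context, nxt in corpus:
    model[context][nxt] += 1

def predict(context: str) -> str:
    # "Prediction" = emitting the statistically dominant continuation.
    return model[context].most_common(1)[0][0]

print(predict("the director said"))  # 'he': skew inherited from the corpus
print(predict("the nurse said"))     # 'she': chosen by whoever built the data
```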
Mapping 8: Academic Advisor (Judging and evaluating expert) → Semantic Search and Embedding Proximity Scoring
Quote: "Ebook Central Research Assistant: Facilitates deeper engagement with ebooks, helping students assess books' relevance"
- Source Domain: Academic Advisor (Judging and evaluating expert)
- Target Domain: Semantic Search and Embedding Proximity Scoring
- Mapping: The structure of an academic mentorship where an expert evaluates texts for a student is mapped onto a search algorithm. The source domain relies on deep reading, philosophical comprehension of a student's needs, and the ability to synthesize conceptual relevance. This maps onto the AI system, projecting conscious evaluative judgment and a deep semantic understanding of literature onto the software. It invites the user to assume the system 'knows' the text's meaning and intentionally evaluates it, mapping the subjective state of 'assessing' onto the automated calculation of cosine similarity between text embeddings.
- What Is Concealed: This mapping conceals the mathematical reductionism of semantic search. It hides the fact that the system reduces complex books to high-dimensional vectors and merely calculates spatial proximity to the user's query vectors. It obscures the system's inability to comprehend irony, subtext, paradigm shifts, or truly novel ideas that do not map cleanly onto existing statistical clusters. The rhetorical framing exploits the proprietary opacity of the search algorithm, presenting mathematical correlation as expert academic judgment, thereby obscuring the loss of genuine critical evaluation.
Task 3: Explanation Audit (The Rhetorical Framing of "Why" vs. "How")
About this task
This section audits the text's explanatory strategy, focusing on a critical distinction: the slippage between "how" and "why." Based on Robert Brown's typology of explanation, this analysis identifies whether the text explains AI mechanistically (a functional "how it works") or agentially (an intentional "why it wants something"). The core of this task is to expose how this "illusion of mind" is constructed by the rhetorical framing of the explanation itself, and what impact this has on the audience's perception of AI agency.
Explanation 1
Quote: "Web of Science Research Assistant: Navigate complex research tasks and find the right content."
- Explanation Types:
  - Intentional: Refers to goals/purposes, presupposes deliberate design
  - Dispositional: Attributes tendencies or habits
- Analysis (Why vs. How Slippage): This explanation frames the AI system entirely agentially, focusing on 'why' and 'what' it intends to do rather than 'how' it operates mechanistically. By using the verbs 'navigate' and 'find', the text embeds the software within an intentional framework, suggesting it possesses deliberate goals and the active agency required to complete complex tasks. This choice heavily emphasizes the tool's supposed autonomy, user-friendliness, and end-goal utility, making it highly appealing to the consumer. Conversely, it completely obscures the functional and theoretical explanations of how the AI actually works—such as vectorizing queries, querying databases, and applying ranking algorithms. The intentional framing hides the mechanism, presenting a complex socio-technical system as a simple, autonomous, goal-seeking entity.
- Consciousness Claims Analysis: The passage makes a profound, albeit implicit, epistemic claim by attributing conscious states to the software. (1) It utilizes the consciousness-adjacent verbs 'navigate' and 'find,' which in an intellectual context imply discernment and evaluation, rather than mechanistic verbs like 'query', 'filter', or 'retrieve'. (2) It projects an assessment of 'knowing' onto the system by suggesting it can identify the 'right content,' which requires justified true belief and contextual understanding, rather than merely processing statistically relevant text. (3) This represents a clear curse of knowledge dynamic: the authors at Clarivate understand the complex parameters that define 'rightness' in their database, but project that holistic understanding onto the software interface itself. (4) Mechanistically, the software does not navigate or know; it converts user input into query vectors, compares them against a proprietary indexed database, and retrieves results based on hardcoded relevance metrics and probability distributions, entirely devoid of conscious intent.
- Rhetorical Impact: This intentional framing radically shapes audience perception by granting the AI system an illusion of autonomy and reliability. By presenting the AI as an entity that 'navigates' and 'finds the right content,' it encourages users to trust the system's outputs as if they were generated by a conscious expert. This consciousness framing dramatically increases perceived reliability, leading users to lower their critical defenses. The material risk is that users will accept the AI's statistically generated results as epistemically sound 'truth,' potentially bypassing the rigorous human verification required in academic research.
Explanation 2
Quote: "Alethea: Simplifies the creation of course assignments and guides students to the core of their readings."
- Explanation Types:
  - Intentional: Refers to goals/purposes, presupposes deliberate design
  - Reason-Based: Gives agent's rationale, entails intentionality and justification
- Analysis (Why vs. How Slippage): This explanation employs an agential framework that blends intentional and reason-based logic. It frames the AI ('Alethea') as the primary actor possessing the goal to 'simplify' and the rationale to 'guide' students toward a specific, philosophically loaded destination: 'the core.' This strongly emphasizes the pedagogical value and user-centric design of the product, appealing directly to overworked educators. However, it entirely obscures the functional mechanism by which the software operates. It hides the fact that the system does not 'guide' but rather extracts, truncates, and statistically summarizes text. The framing replaces a mechanical description of data processing with a narrative of educational stewardship.
- Consciousness Claims Analysis: This passage asserts a massive epistemic claim: that the system possesses the conscious ability to comprehend a text and identify its 'core.' (1) The verb 'guides' is a consciousness verb, implying intentionality, empathy, and knowledge of the destination, masking mechanistic actions like 'processes' or 'extracts'. (2) The phrase attributes 'knowing' to the AI, assuming it possesses the justified belief necessary to separate the essential truth of a reading from peripheral information. (3) This is an authorial projection; the designers assume their summarization weights align with academic truth, and project this epistemic authority onto the software. (4) The actual mechanistic process involves a natural language processing model parsing the text into tokens, calculating self-attention weights to identify frequently associated or structurally prominent sentences, and generating a concise summary sequence based on those statistical weights—all without any semantic comprehension of the reading's actual meaning.
- Rhetorical Impact: Framing the AI as a conscious guide directly impacts institutional trust and student autonomy. It elevates the software from a mere text-summarizer to an authoritative pedagogical agent. This consciousness framing reassures faculty that the tool is educationally sound while subtly encouraging students to view the AI's output as the definitive 'core' of their coursework. If audiences believe the AI genuinely 'knows' the core, they are highly likely to substitute reading the actual text with reading the AI's generated summary, degrading the quality of learning and shifting epistemic authority from the author and educator to a proprietary algorithm.
Explanation 3
Quote: "Clarivate helps libraries adapt with AI they can trust to drive research excellence, student outcomes and library productivity."
- Explanation Types:
  - Dispositional: Attributes tendencies or habits
  - Functional: Explains behavior by role in self-regulating system with feedback
- Analysis (Why vs. How Slippage): This explanation utilizes a dispositional framework disguised as functional utility. By stating the AI can be 'trusted to drive' specific outcomes, it frames the technology agentially, endowing it with a reliable, success-oriented disposition. The choice emphasizes the ultimate institutional benefits (excellence, outcomes, productivity) and Clarivate's role as a helpful partner. However, it completely obscures the genetic origin of the AI and the empirical generalizations governing its behavior. By framing the AI as a driver of excellence, it hides the massive infrastructural dependencies, the potential for statistical error, and the fact that AI cannot independently 'drive' anything without constant human prompting and correction.
- Consciousness Claims Analysis: The passage projects a complex moral and epistemic state onto the AI system. (1) While using slightly more abstracted verbs ('drive'), the invocation of 'trust' firmly plants the discourse in the realm of consciousness and relational intent. (2) It conflates processing data efficiently with knowing how to achieve 'research excellence'—a highly subjective, human-defined epistemic standard. (3) The author projects Clarivate's corporate goals onto the software, assuming the system's outputs inherently align with the highest standards of academia. (4) Mechanistically, the software does not drive excellence; it executes predictive language models and search algorithms optimized to return high-probability results based on user queries, operating entirely devoid of any concept of excellence, student welfare, or institutional productivity.
- Rhetorical Impact: This framing shapes the audience's perception of risk by demanding relational trust in an unthinking statistical model. By framing the AI as a trusted driver of excellence, it disarms critical scrutiny and encourages institutions to deeply integrate the software without sufficient safeguards. The consciousness framing implies the AI possesses the integrity to self-correct and aim for high standards. If administrators believe the AI 'knows' how to drive outcomes, they may make budget decisions that reduce human staffing or oversight, relying on the false assumption that the software is an autonomous, reliable professional.
Explanation 4
Quote: "ProQuest Research Assistant: Helps users create more effective searches, quickly evaluate documents... and explore new topics with confidence."
- Explanation Types:
  - Intentional: Refers to goals/purposes, presupposes deliberate design
  - Reason-Based: Gives agent's rationale, entails intentionality and justification
- Analysis (Why vs. How Slippage): This passage relies on intentional and reason-based explanations, framing the software as an active, conscious collaborator. The text focuses heavily on 'why' the system exists—to help, to evaluate, to explore—rather than 'how' it accomplishes these tasks. This agential choice emphasizes the product's ability to augment human intellectual labor, making it highly marketable to researchers facing information overload. However, it obscures the theoretical and functional reality of the algorithms. By claiming the AI 'evaluates documents,' the text hides the specific mathematical criteria used for evaluation, erasing the human biases embedded in those metrics and presenting the AI as an objective intellectual peer.
- Consciousness Claims Analysis: The passage makes explicit claims about the system's conscious epistemic abilities. (1) The use of the verb 'evaluate' is a direct consciousness claim, asserting the ability to judge value, which stands in stark contrast to mechanistic verbs like 'process,' 'rank,' or 'classify.' (2) The text projects the state of 'knowing' onto the system, suggesting it can comprehend a document well enough to judge its worth. (3) The authors project their own understanding of what makes a document valuable onto the software's automated sorting mechanisms. (4) Mechanistically, the software does not evaluate or explore; it runs natural language processing algorithms to extract keywords, cross-reference them with query terms, and generate statistical summaries based on probabilistic language models, all without any subjective comprehension of the document's content.
- Rhetorical Impact: This intentional framing creates a powerful illusion of mind that directly impacts the user's research behavior. By describing the AI as an entity that 'evaluates' and 'explores,' it invites the user to surrender their own critical agency to the machine. The consciousness framing boosts perceived reliability, making users feel they can explore 'with confidence' because they have a smart assistant checking the work. If users believe the AI genuinely 'knows' how to evaluate documents, they are likely to blindly accept its summaries, potentially missing critical nuances, methodological flaws in the papers, or hallucinations generated by the model.
Explanation 5
Quote: "identifying and mitigating bias in AI tools"
- Explanation Types:
  - Empirical Generalization: Subsumes events under timeless statistical regularities
  - Dispositional: Attributes tendencies or habits
- Analysis (Why vs. How Slippage): This explanation utilizes a hybrid of dispositional framing and empirical generalization. It frames 'bias' as a persistent tendency or habit residing within the 'AI tools' themselves. This framing emphasizes the existence of a problem to be solved ('mitigated') by technical experts. However, it completely obscures the genetic explanation of the bias. By locating the bias 'in' the tool, it hides the historical process by which human engineers collected, labeled, and fed prejudiced human data into the system. The choice to frame bias dispositionally rather than genetically absolves the human creators of responsibility, treating the bias as an unfortunate side-effect of the technology rather than a direct result of human decision-making.
- Consciousness Claims Analysis: While less overtly anthropomorphic than the 'Assistant' metaphors, this passage still projects conscious states by implication. (1) It uses the term 'bias', which in human contexts implies a conscious or subconscious prejudice, a failure of moral knowing, rather than using precise mechanistic terms like 'statistical skew' or 'unrepresentative training distributions'. (2) It conflates the processing of skewed data with the holding of biased beliefs. (3) The authors project societal failures onto the software artifact, treating the algorithm as the container of human flaws. (4) Mechanistically, the AI tool possesses no bias; it simply executes mathematical optimization. If the training dataset contains 80% male pronouns in leadership contexts, the model will output predictions reflecting that probability. It processes data exactly as designed; the 'bias' exists entirely in the human curation of the dataset.
- Rhetorical Impact: Framing bias as a property of the AI tool shapes the audience's perception of accountability and risk. It makes the AI appear as a semi-autonomous entity that has somehow developed flaws, distancing the technology from the corporate entities that built it. This framing encourages users and regulators to view algorithmic discrimination as a technical glitch requiring a software patch, rather than a profound failure of human design and corporate ethics. If audiences believe the AI 'holds' the bias, they focus their demands on fixing the machine rather than holding the human creators accountable for their data practices.
Task 4: AI Literacy in Practice - Reframing Anthropomorphic Language
About this task
This section proposes alternative language for key anthropomorphic phrases, offering more mechanistic and precise framings that better reflect the actual computational processes involved. Each reframing attempts to strip away the projections of intention, consciousness, or agency that are embedded in the original language.
V3 Enhancement: A fourth column addresses human agency restoration—reframing agentless constructions to name the humans responsible for design and deployment decisions.
| Original Anthropomorphic Frame | Mechanistic Reframing | Technical Reality Check | Human Agency Restoration |
|---|---|---|---|
| Web of Science Research Assistant: Navigate complex research tasks and find the right content. | The Web of Science interface executes vector similarity searches against our proprietary database to retrieve and rank documents based on statistical relevance to your query. | The AI does not 'know' or 'navigate' anything; it converts text inputs into numerical embeddings and retrieves database tokens that mathematically correlate with the user's prompt based on predefined ranking algorithms. | Clarivate's engineering team designed and deployed a search algorithm that ranks content according to parameters chosen by the company's developers. |
| ProQuest Research Assistant: Helps users create more effective searches, quickly evaluate documents... and explore new topics | The ProQuest interface processes user inputs to generate optimized database queries and uses language models to generate text summaries of retrieved documents based on statistical patterns. | The software cannot 'evaluate' documents or 'explore' topics. It classifies tokens and generates text outputs that statistically correlate with similar training examples, entirely lacking semantic comprehension or academic judgment. | Clarivate's product teams integrated a generative model designed to summarize texts based on parameters established by their data scientists. |
| Alethea: Simplifies the creation of course assignments and guides students to the core of their readings. | The Alethea platform automates the formatting of assignments and extracts high-frequency and heavily weighted sentences from texts to generate automated summaries. | The model does not 'know' the core of a reading or 'guide' anyone. It mathematically weights contextual embeddings using attention mechanisms tuned during its training phase to extract statistically prominent text. | Software engineers designed a system that extracts text according to statistical weights; educators must decide whether these automated summaries accurately represent their syllabus. |
| Clarivate helps libraries adapt with AI they can trust to drive research excellence... | Clarivate sells language and search models that generate outputs mathematically aligned with academic datasets, requiring constant human verification to ensure accuracy. | AI possesses no intent and cannot 'drive excellence.' It retrieves and generates tokens based on probability distributions from its training data, requiring human researchers to verify factual truth. | Clarivate executives chose to deploy these statistical models to market, shifting the burden of verifying accuracy and maintaining research excellence onto librarians and users. |
| Summon Research Assistant: Enables users to uncover trusted library materials via AI-powered conversations. | The Summon interface allows users to query library databases using an iterative prompt-and-response text generation model. | The system does not engage in 'conversations' or 'understand' intent; it classifies input tokens and predicts sequential output text that mimics dialogic structure based on training data. | Clarivate designed a user interface that formats database queries as chat interactions, determining which library materials are statistically prioritized in the generated responses. |
| People are very nervous because if you've got a well-trained AI, then why do you need people to work in libraries? | People are nervous about automation because highly optimized statistical models can rapidly generate text and classify data based on vast computational processing. | The AI is not 'trained' in a cognitive sense; its parameters have been mathematically optimized through massive data exposure to minimize error rates in token prediction. | Tech companies employ engineers and data annotators to optimize these models, while library administrators make decisions about whether to replace human labor with automated software. |
| identifying and mitigating bias in AI tools | Identifying and mitigating unrepresentative statistical distributions and historical discrimination encoded within the model's training datasets. | AI tools do not harbor inherent prejudice. They mechanically process and predict correlations based entirely on the statistical weights derived from the datasets they were exposed to during optimization. | Engineers and corporate data brokers selected datasets containing historical human prejudice; developers must now audit their selection choices and adjust weights to mask these statistical skews. |
| Ebook Central Research Assistant: Facilitates deeper engagement with ebooks, helping students assess books' relevance | The Ebook Central feature calculates the semantic similarity between user queries and text vectors to generate automated relevance scores for digital texts. | The model cannot 'assess relevance' or facilitate 'deep engagement.' It processes word embeddings and mathematically ranks documents based on cosine similarity to the user's prompt (see the sketch after this table). | Clarivate developers programmed an algorithm that dictates the relevance ranking of ebooks, deciding mathematically which texts students are most likely to encounter. |
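To ground the 'vector proximity' language used throughout these reframings, the sketch below shows what cosine-similarity ranking actually consists of. It is a minimal illustration, not Clarivate's implementation: it substitutes a toy bag-of-words embedding for the learned dense embeddings a production system would use, and every function name, vocabulary term, and document is invented for the example.

```python
import numpy as np

def embed(text: str, vocab: dict[str, int]) -> np.ndarray:
    """Toy bag-of-words embedding: count vocabulary terms in the text.
    Production systems use learned dense embeddings, but the ranking
    step downstream works the same way."""
    vec = np.zeros(len(vocab))
    for token in text.lower().split():
        if token in vocab:
            vec[vocab[token]] += 1.0
    return vec

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Angle-based proximity between two vectors; no comprehension involved."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def rank_documents(query: str, docs: list[str], vocab: dict[str, int]) -> list[tuple[float, str]]:
    """Sort documents by cosine similarity to the query. This sorting is
    the entirety of what 'assessing relevance' denotes here."""
    q = embed(query, vocab)
    return sorted(((cosine_similarity(q, embed(d, vocab)), d) for d in docs), reverse=True)

# Invented example data.
vocab = {w: i for i, w in enumerate("library ai search budget metadata".split())}
docs = [
    "ai search tools for the library",
    "annual budget planning report",
    "metadata standards overview",
]
print(rank_documents("library ai search", docs, vocab))
```

Replacing 'the Assistant finds the right content' with 'the system returns the top entries of this sorted list' is exactly the kind of substitution the table above performs.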
Task 5: Critical Observations - Structural Patterns
Agency Slippage
The Clarivate report demonstrates a profound structural oscillation between framing AI as a passive, mechanical tool and as an autonomous, conscious agent. This agency slippage occurs systematically along the boundary between managing librarian anxieties and marketing commercial products. In the early sections of the report, which rely heavily on qualitative quotes from library professionals, the discourse is overwhelmingly mechanical. Librarians emphasize that AI is 'just another tool,' comparing it explicitly to a hammer or Wikipedia. This mechanical framing serves a vital rhetorical function: it manages existential professional anxiety. By reducing the complex system to a passive instrument, the text assures librarians that human agency remains central and irreplaceable.
However, an abrupt shift occurs in the report's final pages, where the Clarivate product catalog appears. Here, the mechanical framing vanishes completely, replaced by intense agential and anthropomorphic language. Software systems are suddenly 'Research Assistants' that 'evaluate documents,' 'explore new topics,' and 'guide students.' The register swings sharply from mechanical to agential, and the transition correlates precisely with the shift from discussing the profession to selling a product. This oscillation reveals that anthropomorphism in this text is a strategic commercial deployment rather than a lack of technical understanding.
This slippage relies on pervasive agentless constructions that erase human actors. Phrases like 'simplifies the creation of course assignments' hide the human educators and software engineers (the Clarivate product teams) who actually defined the simplification parameters. Instead, agency is transferred from the humans who built and profit from the system onto the system itself. This constructs the illusion of a digital colleague.
This pattern also perfectly illustrates the 'curse of knowledge' interacting with commercial incentives. The developers at Clarivate understand the complex statistical mechanisms underlying semantic search and token prediction. But instead of explaining these empirical and theoretical mechanisms, they project their own intent and understanding onto the system. They establish the AI as a 'knower'—capable of assessing relevance and evaluating quality—only when it is commercially advantageous to do so, while retreating to the 'tool' defense when addressing fears about job replacement. Using Robert Brown's typology, the text relies on intentional and reason-based explanations for the product catalog, completely ignoring the genetic origins or functional mechanisms of the software. The rhetorical accomplishment of this slippage is remarkable: it simultaneously pacifies the workforce by telling them AI is merely a hammer, while elevating the product by selling it to administrators as an autonomous intellectual worker.
Metaphor-Driven Trust Inflation
The text systematically leverages metaphorical and consciousness-attributing framings to construct an unwarranted architecture of trust around statistical software. In the Clarivate report, trust is not framed merely as technical reliability (performance-based trust), but is deeply conflated with interpersonal, moral reliance (relation-based trust). The most blatant example is the assertion that 'Clarivate helps libraries adapt with AI they can trust to drive research excellence.' This phrasing explicitly asks the audience to transfer the kind of trust one places in a sincere, competent human colleague onto a commercial algorithmic pipeline.
By utilizing consciousness language—suggesting the AI can 'navigate,' 'evaluate,' and 'assess'—the text signals to the user that the system possesses the epistemic awareness necessary to be trusted relationally. Claiming an AI 'evaluates' accomplishes something fundamentally different from claiming it 'processes.' Processing implies a blind mechanism that requires human oversight. Evaluation implies a conscious judgment; it suggests the system understands the context, applies critical criteria, and cares about the truth value of the outcome. This anthropomorphism artificially inflates perceived competence, tricking human cognitive heuristics into extending relation-based trust to a system utterly incapable of reciprocating sincerity or understanding moral obligations.
This construction of authority through metaphor is highly dangerous in an academic context. Human-trust frameworks rely on intentionality and vulnerability; we trust peers because they have a stake in the truth. Statistical systems, however, are merely optimized to predict tokens based on training weights. They do not 'know' anything and have no stake in research excellence. By inappropriately applying relational trust frameworks to these systems, the text encourages automation bias. Users are invited to drop their skeptical defenses and accept statistically generated text as authoritative knowledge.
Furthermore, when the text discusses system limitations, such as 'hallucinations' or 'bias,' it often retreats to a mechanical framing, treating these profound epistemological failures as mere technical glitches rather than fundamental characteristics of ungrounded probabilistic generation. The intentional explanations construct a sense that AI decisions are justified by reason, masking the reality that they are justified only by statistical correlation. The ultimate risk is that libraries and universities will extend deep relational trust to proprietary black boxes, offloading critical academic evaluation to algorithms that cannot comprehend the texts they process, thereby corrupting the integrity of the research lifecycle.
Obscured Mechanics
The anthropomorphic and consciousness-attributing language in the Clarivate report actively conceals the technical, material, and commercial realities of artificial intelligence, rendering massive socio-technical infrastructures invisible behind the mask of a digital 'Assistant.' When the text claims that the 'ProQuest Research Assistant' can 'quickly evaluate documents,' it erects a profound transparency obstacle. It obscures the underlying mathematical mechanisms—specifically token prediction, gradient descent, and semantic vector embeddings—replacing them with the illusion of an autonomous, reading mind.
Applying the 'name the corporation' test reveals the extent of this concealment. Where the text says 'AI guides students,' it actually means 'Clarivate's proprietary algorithm filters text based on invisible corporate parameters.' The metaphor of the conscious AI acts as an accountability shield, hiding specific teams, executives, and business models.
Concretely, this metaphorical framing obscures four massive realities. First, it hides technical dependencies. When the text claims the AI 'knows' or 'understands,' it masks the system's absolute reliance on training data, its lack of causal reasoning, and its inability to access ground truth. It hides the fact that the system generates output based entirely on statistical confidence, not factual accuracy. Second, it conceals material costs. The metaphor of a lightweight, helpful 'Assistant' erases the immense environmental footprint, server farms, and energy consumption required to run Large Language Models. Third, it obscures exploited labor. An 'Assistant' sounds autonomous, rendering completely invisible the thousands of underpaid data annotators and RLHF (Reinforcement Learning from Human Feedback) workers whose hidden labor makes the model appear coherent. Finally, it conceals economic realities. By personifying the software, it obfuscates Clarivate's commercial objective to lock universities into proprietary, closed-source ecosystems.
Who benefits from these concealments? The vendor. By projecting consciousness onto the AI, Clarivate claims credit for the magic of automation while hiding the proprietary, un-auditable nature of their algorithms. If these metaphors were replaced with mechanistic language—if the catalog stated, 'Clarivate's servers calculate vector proximity based on scraped data to generate statistically probable summaries'—the magic would evaporate. The material realities of corporate control, data extraction, and statistical fragility would become immediately visible, forcing institutions to reckon with the actual costs and risks of the technology rather than buying into the fantasy of a digital colleague.
Context Sensitivity
The distribution of anthropomorphic and consciousness-attributing language across the Clarivate report is not uniform; it is highly context-sensitive and strategically deployed to serve varying rhetorical goals. A structural mapping of the text reveals a stark asymmetry: metaphors of consciousness are aggressively concentrated in the product marketing sections, while technical or professional discussions rely on mechanical vocabulary.
In the early pages, where the report addresses the anxieties of the library workforce, metaphor density is low. The text utilizes mechanical language, establishing credibility by discussing 'implementing AI tools,' addressing 'parameters,' and managing 'budget constraints.' The AI is positioned as a manageable object. However, as the document transitions to the Clarivate Academic AI product catalog (Pages 27-28), consciousness claims abruptly intensify. Here, the vocabulary shifts from 'processing' to 'understanding,' from 'retrieving' to 'evaluating,' and from 'filtering' to 'knowing.' This demonstrates a strategic rhetorical maneuver: the text establishes grounded credibility through mechanical language in the survey analysis, only to leverage that credibility to license aggressive anthropomorphism in the sales pitch.
This pattern reveals a profound capability versus limitation asymmetry. When describing the AI's capabilities, the text employs deeply agential, consciousness-driven terms: the AI 'guides,' 'navigates,' and 'evaluates.' It is granted full epistemic authority. Conversely, when discussing limitations or risks—such as bias or security—the language reverts to the mechanical and passive. Problems are framed as 'bias in the data' or 'lack of training,' stripped of any agential intent. This asymmetry accomplishes a vital commercial function: it maximizes the perceived intelligence of the system when selling its benefits, but minimizes the system's agency when accounting for its failures.
Furthermore, the register shifts seamlessly from acknowledged metaphor ('AI is like a tool') to literalized anthropomorphism (the AI 'finds the right content'). The strategic function of this intensification is purely marketing and vision-setting. By tailoring the metaphorical intensity to the implied audience—soothing librarians with mechanical reality, then seducing administrators with agential fantasy—the text manages critique while maximizing commercial appeal. This context sensitivity proves that the 'illusion of mind' is not an accidental misunderstanding of the technology by laypeople, but a meticulously constructed discourse designed to shape institutional adoption.
Accountability Synthesis
This section synthesizes the accountability analyses from Task 1, mapping the text's "accountability architecture"—who is named, who is hidden, and who benefits from obscured agency.
Synthesizing the accountability analyses reveals a systemic architecture of displaced responsibility designed to protect corporate interests while maximizing product appeal. The Clarivate text consistently distributes agency in a manner that makes human decision-makers invisible. When analyzing the agentless constructions across the document, a clear pattern emerges: successes are attributed to the autonomous 'AI Assistant,' while failures, biases, and systemic risks are diffused into abstract technological inevitabilities or blamed on the 'data.'
In this architecture, specific actors—Clarivate executives, software engineers, data brokers, and university administrators—are rarely named in conjunction with active verbs. Instead, the 'AI' acts as the primary subject, and the 'user' as the passive beneficiary or victim. The ultimate 'accountability sink' in this discourse is the concept of the AI itself. By anthropomorphizing the system as an independent agent that 'evaluates,' 'guides,' and 'navigates,' the text creates a fictional entity capable of absorbing blame. If the 'Alethea' system extracts a factually incorrect 'core' of a reading, the framing suggests the AI made a mistake, completely hiding the reality that a Clarivate engineer chose a specific, flawed optimization metric.
This architecture has severe liability implications. If audiences and institutions accept the framing that the AI is an autonomous, evaluating entity, legal and ethical responsibility becomes hopelessly muddy when the system fails. It shields the vendor from liability for deploying fundamentally brittle statistical models.
Applying the 'naming the actor' test radically alters the landscape. For example, if we reframe 'identifying and mitigating bias in AI tools' to 'Clarivate engineers must audit the discriminatory datasets they chose to train their models on,' the narrative shifts entirely. What was a mysterious software glitch becomes a visible corporate choice. The questions become askable: Why was this data used? Who approved it? Naming the actors forces recognition that the deployment of AI is a series of active, alterable human decisions, not a predetermined technological evolution. The text benefits immensely from obscuring human agency because it protects the commercial vendor from scrutiny, allows them to sell proprietary algorithms as objective 'truth machines,' and subtly shifts the burden of ethical management onto the librarians who are forced to manage a technology they did not design.
Conclusion: What This Analysis Reveals
A critical synthesis of the text reveals two dominant, interconnected anthropomorphic patterns: the 'Model as Trusted Professional' (the Research Assistant) and the 'Model as Epistemic Authority' (the guide/evaluator). These patterns function as a cohesive system to obscure the mechanistic reality of the software. The foundational, load-bearing pattern is the projection of cognitive evaluation—the persistent linguistic suggestion that the system 'knows,' 'understands,' and 'assesses' qualitative truth, rather than merely calculating statistical correlations. This consciousness architecture is critical; the 'Assistant' metaphor only works if the audience implicitly accepts the foundational claim that the machine possesses the epistemic awareness required to assist intelligently. By mapping human intentionality onto matrix multiplication, the discourse establishes a complex analogical structure that replaces probabilistic text generation with the illusion of an active, thinking participant in the academic process. If the consciousness projection is removed—if 'evaluates' is forcefully replaced with 'calculates vector proximity'—the entire metaphor of the 'Colleague' collapses, revealing the system as a blind, albeit powerful, statistical tool.
Mechanism of the Illusion:
The text creates this 'illusion of mind' through a highly effective rhetorical architecture based on the strategic borrowing of institutional prestige. The central sleight-of-hand is linguistic: the text systematically blurs 'processing data' with 'knowing truth' through the deployment of consciousness verbs ('evaluates,' 'guides,' 'navigates'). It establishes the AI as a 'knower' by wrapping statistical generation in familiar academic titles like 'Research Assistant.' This relies heavily on the 'curse of knowledge'—the developers understand the parameters of their search algorithms, but project that holistic understanding onto the software interface, presenting it to the user as an entity that possesses that same comprehension. The temporal structure of the report facilitates this illusion: it first validates the librarian's role mechanically to disarm professional anxiety, and then, once defenses are lowered, introduces the product catalog steeped in aggressive anthropomorphism. The illusion exploits the audience's vulnerability to information overload; researchers desperately want an intelligent assistant to ease their burden, making them highly susceptible to metaphors that promise conscious, reliable cognitive offloading.
Material Stakes:
Categories: Epistemic, Institutional, Economic
These metaphorical framings trigger severe, tangible consequences across multiple domains. Epistemically, framing an algorithm as a 'guide' that 'evaluates' fundamentally degrades research integrity. If students and faculty believe the system possesses conscious discernment, they will substitute algorithmic output for critical reading, offloading epistemic authority to a statistical model incapable of recognizing methodological flaws or novel paradigms. Institutionally, the framing of AI as a 'trusted' driver of excellence corrupts administrative decision-making. Administrators, persuaded by the illusion of autonomous competence, may redirect budgets away from human domain experts—librarians and teaching assistants—toward vendor subscriptions, under the false assumption that the software can perform conscious evaluative labor. Economically, this discourse systematically privileges commercial vendors like Clarivate. By projecting consciousness onto proprietary black boxes, vendors obscure the unreliability and bias of their tools, shielding themselves from liability while capturing massive institutional budgets. The clear winners are the technology providers who profit from the mystification of their products, while the losers are the students deprived of rigorous engagement, and the librarians left to manage the fallout of algorithmic hallucinations.
AI Literacy as Counter-Practice:
Practicing critical discourse literacy and enforcing mechanistic precision acts as a direct resistance to the risks generated by anthropomorphic marketing. As demonstrated in the reframings, replacing consciousness verbs ('knows,' 'evaluates') with mechanistic verbs ('processes,' 'predicts') shatters the illusion of mind. Reframing 'guides students' to 'extracts statistically weighted text' forces users to confront the absence of awareness and the statistical fragility of the output. Similarly, restoring human agency by replacing 'AI evaluates' with 'Clarivate's algorithms classify' dismantles the accountability sink, forcing recognition of the corporate actors who design, deploy, and profit from these systems. Systematic adoption of this precision would require academic journals to mandate mechanistic language in methodology sections, institutions to demand plain-language capability disclosures from vendors, and educators to commit to stripping anthropomorphism from their syllabi. This precision will face fierce resistance from technology vendors, whose valuations rely entirely on the marketing magic of the 'conscious' AI, and from institutional leaders who prefer the narrative of an easy, automated solution to the expensive reality of human academic labor.
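As a concrete, and deliberately crude, illustration of this counter-practice, the sketch below flags consciousness verbs in vendor copy and suggests mechanistic substitutes. The lexicon is a hypothetical starting point, not an authoritative mapping; real reframing requires the human editorial judgment described above, since the appropriate mechanistic replacement depends on context.

```python
import re

# Hypothetical lexicon for illustration; entries and replacements are assumptions,
# and any serious reframing effort would curate these by hand, in context.
MECHANISTIC_LEXICON = {
    "knows": "has encoded statistical associations for",
    "understands": "processes",
    "evaluates": "computes relevance scores for",
    "guides": "returns ranked output to",
    "navigates": "executes queries over",
}

def flag_consciousness_verbs(text: str) -> list[str]:
    """Return a suggested mechanistic substitute for each flagged verb."""
    suggestions = []
    for verb, replacement in MECHANISTIC_LEXICON.items():
        if re.search(rf"\b{verb}\b", text, flags=re.IGNORECASE):
            suggestions.append(f"'{verb}' -> '{replacement}'")
    return suggestions

print(flag_consciousness_verbs("The Assistant evaluates documents and guides students."))
# ["'evaluates' -> 'computes relevance scores for'", "'guides' -> 'returns ranked output to'"]
```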
Path Forward
Looking toward the future of this discursive ecology, the choice of vocabulary shapes what is politically and technologically possible. A status quo approach, maintaining the current blend of mechanical pacification and agential marketing, serves vendor interests perfectly. It allows companies to sell 'understanding' while legally delivering 'prediction,' leaving users to navigate the epistemic wreckage of hallucinations. If mechanistic precision becomes the institutional norm—requiring terms like 'probabilistic generation' instead of 'AI conversation'—we gain immense transparency and safeguard academic rigor, though we risk alienating non-technical users who find mechanistic language unintuitive. Conversely, if anthropomorphic language deepens without check, we embed the dangerous assumption that machines possess moral and epistemic standing, potentially leading to fully automated, un-audited decision-making pipelines in research and education. Structural changes could mediate these futures: funding agencies could require rigorous mechanistic explanations in grant proposals, and regulatory frameworks could mandate transparency about the statistical nature of commercial tools. Ultimately, different vocabularies serve different stakeholders. Mechanistic language serves the truth and protects the public; anthropomorphic language serves capital and drives adoption. The desired future depends on whether institutions value the comfort of an illusion or the rigor of reality.
Extended Processing Summary
Gemini's Processing Steps
Gemini labels the text below as "thought summaries." The label is itself an overt consciousness projection: 'thoughts' and 'intentions' are hallmarks of a conscious mind that 'knows' what it is doing and why, while the concealed mechanistic process is probabilistic text generation. Treat these summaries as just another rhetorical artifact, a way of making the model's processing legible. When the model writes "I should double check that...", it is shifting its next-token probability distribution toward outputs that score as more accurate, not reflecting; the "I" is a functional gear in the machine, not a self-reference. This first-person language is a deliberate interface design choice, not a biological reality: the developers could have programmed the model to output passive logs, and instead chose to simulate a narrator.
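For readers who want that mechanistic picture made explicit, the sketch below shows what 'altering the probability distribution for the next token' amounts to. The vocabulary and logit values are invented for illustration; a real model computes logits over tens of thousands of tokens, but the softmax-and-sample step is the same.

```python
import numpy as np

# Invented five-token vocabulary and raw scores (logits) for illustration only.
vocab = ["check", "verify", "guess", "the", "answer"]
logits = np.array([2.1, 1.7, -0.5, 0.3, 1.2])

def softmax(z: np.ndarray) -> np.ndarray:
    """Convert raw scores into a probability distribution over tokens."""
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(logits)

# 'Generating text' is sampling from this distribution, one token at a time;
# phrases like "I should double check" are sequences of such samples.
rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```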
Defining the Task's Core
I'm zeroing in on the Clarivate report, "Pulse of the Library 2025." My main objective is a critical discourse analysis, dissecting the AI metaphors and anthropomorphism within. I need to track how consciousness is projected and human agency is displaced, ultimately constructing a JSON object based on a defined schema.
Extracting Key Phrases
I've extracted pertinent quotes regarding AI anthropomorphism from the Clarivate report. Noteworthy phrases like "well-trained AI" and descriptors for AI tools that project human agency are at the forefront of my mind. These will form the foundation for identifying patterns and metaphors for critical discourse analysis. My aim is to define a system to track the use of consciousness projections.
Analyzing Agential Language
I am now closely examining the agential language used, particularly the "Assistant" metaphor. Verbs like "navigate," "help," and "guide" displace human agency onto the AI. Interestingly, the report frames bias as a feature, which opens a critical question of accountability. These linguistic patterns will form the core of my analysis. I have the beginnings of a list to classify agential verbs and biases.
Identifying Obscured Agency
I'm now focusing on how the framing of AI obscures the roles of engineers and data selectors. Phrases like "AI they can trust" are transferring relationship-based trust onto statistical models, which seems like a troubling trend. I'm noting examples of agential language and metaphors to compile the Metaphor Audit.
Mapping Agential Frames
I'm now identifying agential frames within different AI-related scenarios presented in the Clarivate report. "Facilitates deeper engagement" and "enables users to uncover" are key. I'm noting specific projections of AI roles – evaluator, conversational partner, cataloging clerk, teacher – and linking these to displaced human cognitive labor, such as judgment, dialogic capabilities, and epistemic authority. My focus is on unpacking the meaning from each instance.
Auditing Autonomy Projections
I'm developing an audit focusing on autonomy projections. The current instance involves AI being framed as an entity with inherent biases, divorcing those biases from the human creators' data selection processes. I am beginning to map source domains, like "Biased Actor", to target domains, such as "Uncurated training data". My aim is to define a system to track the use of consciousness projections.
Deconstructing Agential Phrases
I'm now zeroing in on instances where AI is framed as actively navigating and making judgments, such as "helps users create effective searches" and "guides students to the core." These phrases project evaluative judgment and pedagogical intent onto the AI. I am beginning to identify how specific agential frames are constructed and what human capabilities they displace. I am also working on re-framing these phrases to reflect a more mechanistic view, for example, re-writing "AI they can trust" to clarify the role of statistical models.
Reframing Agential Projections
I'm now rewriting agential phrases from the Clarivate report to illuminate underlying mechanisms. For instance, "Navigate complex research tasks" becomes "Executes vector similarity searches." "AI-powered conversations" shifts to "iterative prompt-and-response token generation." I'm identifying the displaced human labor and developing alternative phrasings that clarify the role of the AI as a tool performing defined tasks, not an agent in itself. This will inform my critical analysis.
Analyzing Agency Oscillations
Now I'm focusing on the "Agency Slippage" within the report. It uses tool-based language, but it markets the AI as a conscious entity. The report oscillates between "tool" and "Assistant" to pacify librarians, while emphasizing the capabilities to sell the product. I'm exploring the implications of shifting trust from human relations to statistical systems.
Unpacking Mechanistic Hiding
I am now dissecting the "Assistant" metaphor, noting how it obscures backend labor and proprietary data monopolies. I've tracked the shift from cautious mechanical language to aggressive anthropomorphism in the product catalog. The "AI bias" framing transfers accountability away from engineers and Clarivate. I'm focusing on the accountability implications of AI-driven research.
Framing Vendor Accountability
I'm now identifying how the Clarivate report strategically deflects responsibility. By portraying AI as a trustworthy agent, it protects the vendor and shifts accountability for errors to the AI itself or the user's "misplaced trust." I am starting the conclusion and drafting it in 5 sections. I am looking at mechanisms of illusion. I am examining material stakes.
Refining Schema Alignment
I'm now integrating formatting constraints and self-correction, ensuring the final output aligns perfectly with the JSON schema. The focus is now on replacing newlines or tabs in strings with spaces. Also, I'm checking the word count requirements to meet the project's minimums. My immediate task is to review the schema and my output to ensure they match perfectly.
Expanding Critical Arguments
I'm now generating content for the longer critical analysis sections. I'm focusing on ensuring all the analyses meet the word count, using spaces in place of line breaks to keep it concise, which means some editing is required. My focus is expanding on my past analyses. I need to ensure a minimum of 400 words per section.
Evaluating Schema Robustness
I'm now concentrating on string value robustness for the JSON output. I'm focusing on ensuring string values meet the word count, especially for the "projection" fields. I am analyzing how long the projection field can be. I am ensuring that I have enough critical analysis across eight items for Task 1, Task 2, Task 4, and five items for Task 3, Task 5, Task 6, and I will be testing the limits of the data.
Analyzing Trust Implications
I'm now examining how attributing "understanding" and discernment to AI influences user trust, particularly with "right" content. This inflates the AI's perceived sophistication and reliability, raising the risks of automation bias. I am also exploring how this obscures vendor liability by positioning the AI as an "Assistant" with assumed subjective understanding.
Unveiling Agential Erasure
I'm now zeroing in on instances of agential erasure, particularly how "navigates" obscures Clarivate's role. The prompt reveals this obfuscation, erasing any accountability for a flawed product. I am ensuring strict adherence to the output format, focusing on seamless generation of the data for this instance. I'm focusing on the format and the underlying mechanisms.
Ensuring Output Integrity
I'm now completely focused on strict output compliance. I am avoiding all control tokens, specifically newlines, within the JSON. To achieve this, I'm generating a minified, single-line JSON structure. This eliminates any potential formatting conflicts and ensures the output adheres to the "at any places" constraint. I'm prioritizing data delivery.
Dissecting Agency Slippage
I'm currently dissecting "agency slippage," the oscillation between tool and agent in the Clarivate report. I observe how it shifts from mechanical language in the librarian quotes to intense agential framing in the product catalog. The "Research Assistant" metaphor obscures the system's nature, promoting trust in a tool while also removing liability. I now understand the impact of tool versus agent framings.
Decoding Commercial Strategy
The recent focus is on the abrupt shift from mechanistic to agential language in the Clarivate report, directly tied to product marketing. I see that anthropomorphism is a strategic tactic, removing agency from developers and projecting it onto the AI. This is a clear case of the "curse of knowledge" in a commercial context.
Analyzing Rhetorical Accomplishment
I'm now zeroing in on the rhetorical accomplishment: AI as "knower" versus "tool," strategically deployed. This dual framing pacifies anxieties while also elevating the product's capabilities. It facilitates the illusion that AI won't replace human judgment, yet it simultaneously automates evaluative processes. I am working to make this a minimum of 400 words. My task now is to ensure each string meets the minimum requirements.
Structuring JSON Compliance
I'm now mentally structuring the JSON to eliminate errors, with no comments or control characters. Double-quoted strings will be used throughout. My key focus is a single, contiguous string with no spaces outside the JSON formatting to ensure complete compliance. I'm focusing on "at any places" constraints.
Synthesizing Task Analyses
I'm now deeply synthesizing the provided Clarivate quotes to meet word count requirements and to extract critical insights. The phrases, "evaluating AI-generated content" and "identifying bias" are key. I am focusing on the agential displacement and how "Research Assistants" "navigate," "create," "facilitate," and "guide." This will now ensure a minimum of 400 words per section.
Run ID: 2026-03-28-pulse-of-the-library-metaphor-06cspv
Raw JSON: 2026-03-28-pulse-of-the-library-metaphor-06cspv.json
Framework: Metaphor Analysis v6.4
Schema Version: 3.0
Generated: 2026-03-28T08:27:50.742Z
Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0