AI Competencies for Academic Library Workers
This document presents a Critical Discourse Analysis (CDA) of a text, generated from a comprehensive analytical prompt. Drawing on the work of Adorno, Marcuse, and Lukács, among others, the prompt instructs the analysis to move beyond the surface meaning of language to unmask the power relations, social hierarchies, and ideological assumptions embedded within the text. The analysis is grounded in critical theories that view language as a social practice of power. Its primary objective is to "denaturalize" the text: to make visible the strategic linguistic choices that construct a particular version of reality. All findings and summaries below were generated from detailed system instructions provided to a large language model and should be read critically as interpretive outputs, not as guarantees of factual accuracy or authorial intent.
"AI Competencies for Academic Library Workers"
Source Title: AI Competencies for Academic Library Workers
Source URL: https://www.ala.org/acrl/standards/ai
Model: gemini-2.5-pro
Temperature: 1.15
Tokens: input=8027, output=12878, total=20905
Source Type: report
Published: 2025-10-01
Analyzed At: 2025-11-11T09:58:47Z
Framework: cda-spicy
Framework Version: 3.0
Task 1: Agency and Accountability Audit
This task examines how agency (the power to act, decide, and be held accountable) is linguistically engineered within the text. It asks how language distributes responsibility, transfers authority, or erases human decision-making to naturalize particular power relations. The instructions aim to identify the mechanisms (such as passive constructions, nominalizations, or personified abstractions) that manage the perception of who acts and who is acted upon, then to classify the strategy at work: whether agency is erased, delegated, diffused, inverted, collectivized, or personified. For each case, the sentence is rewritten to restore or redirect agency, and a concise interpretive claim articulates what ideological or institutional payoff the transfer achieves. The goal is not only to show that agency is obscured, but to reveal why it is obscured, who benefits, and how this linguistic maneuver sustains a particular social or political order.
Personifying Technology as an Autonomous Historical Actor
Quote: "Artificial intelligence...is beginning to transform higher education by influencing research methods, pedagogical practices, data analyses, information production, and information consumption."
- Participant Analysis: The primary actor is 'Artificial intelligence' (a non-human, abstract entity). The process is material ('transforming,' 'influencing'). The participants being acted upon are 'higher education,' 'research methods,' etc. This construction grants AI historical agency.
- Agency Assignment: Delegated and personified.
- Linguistic Mechanism: Abstract or inanimate actors ('Artificial intelligence') are granted human-like agency to perform complex actions.
- Power Analysis: This construction benefits the corporations that design and profit from AI by framing their products as a neutral, inevitable force of nature, akin to a geological event. It obscures the specific, profit-motivated decisions made by executives, engineers, and investors that are actually driving these changes.
- Agency Strategy: Personification
- Counter-Voice: Decisions by technology corporations to deploy AI products are beginning to force transformations in higher education...
- Interpretive Claim: This shift frames a corporate business strategy as an inevitable technological evolution, thus discouraging critical examination of its human architects and their motives.
Erasing Corporate and Administrative Agency Through Passive Voice
Quote: "AI is being integrated into library software and discovery platforms, often without consultation with library staff."
- Participant Analysis: The participant is 'AI,' which is acted upon ('is being integrated'). The agent performing the integration is deleted. The process is material ('integrated'). Library staff are positioned as passive observers.
- Agency Assignment: Obscured and erased.
- Linguistic Mechanism: Passive voice ('is being integrated') deletes the actor responsible for the action.
- Power Analysis: This benefits software vendors and university/library administrators who make procurement decisions. By erasing their agency, the text frames the integration of AI as a happening, an unavoidable event, rather than a series of conscious choices for which specific people are accountable. It absolves decision-makers of responsibility for the lack of consultation.
- Agency Strategy: Erasure
- Counter-Voice: Software vendors and library administrators are integrating AI into library software and discovery platforms, often without consulting library staff.
- Interpretive Claim: The use of passive voice transforms a top-down administrative decision into an agentless, unavoidable technological creep, mitigating potential labor resistance.
Diffusing Responsibility by Attributing Challenges to Technology
Quote: "While AI offers opportunities for innovation and efficiency, it also presents significant ethical, social, and environmental challenges that academic library workers must critically engage with."
- Participant Analysis: The actor is 'AI.' The processes are material and verbal ('offers opportunities,' 'presents challenges'). The challenges are presented as inherent attributes of the technology itself, not as consequences of its design and deployment.
- Agency Assignment: Delegated and diffused.
- Linguistic Mechanism: Personification of an abstract concept ('AI') as an actor that can 'present' challenges.
- Power Analysis: This benefits the creators of AI systems. If 'AI' presents challenges, then no single corporation is to blame for the biases, environmental costs, or labor exploitation embedded in their products. The responsibility for solving these 'challenges' is then shifted downstream to the users ('academic library workers').
- Agency Strategy: Delegation
- Counter-Voice: The design and deployment of AI by corporations creates significant ethical, social, and environmental challenges that these companies have offloaded onto academic library workers to manage.
- Interpretive Claim: This construction mystifies the corporate origins of AI's harms, reframing them as inherent technical properties to be managed by end-users.
Naturalizing Labor Restructuring Through Nominalization
Quote: "Understand how employers' adoption of AI is changing the job market."
- Participant Analysis: The primary actor is a nominalization, 'employers' adoption of AI'. The process is material ('is changing'). The entity being acted upon is 'the job market'. The nominalization bundles employers, their choices, and the technology into a single, abstract agent.
- Agency Assignment: Obscured.
- Linguistic Mechanism: Nominalization ('adoption') turns the action of 'adopting' into an abstract noun, making it seem like a singular event or force rather than a continuous set of contested decisions.
- Power Analysis: This benefits employers (capital) by framing their strategic decisions to cut labor costs, increase surveillance, or deskill workers as a neutral, technical process of 'adoption'. It obscures the inherent power struggle between labor and capital over the terms of technological implementation, making the resulting changes to the job market seem like a natural consequence of technology itself.
- Agency Strategy: Erasure
- Counter-Voice: Understand how employers choose to use AI to change working conditions, wages, and employment levels in the job market.
- Interpretive Claim: Nominalizing employers' choices as 'adoption' reifies a set of contested class-based decisions into a single, inevitable historical event.
Inverting Causality in AI Detection
Quote: "Understand that tools claiming to detect AI-generated writing are not completely accurate and can be circumvented."
- Participant Analysis: The actor is 'tools.' The process is mental/verbal ('claiming to detect'). The problem is framed as the tool's inaccuracy. The agent who created the initial problem (AI text generators) is absent.
- Agency Assignment: Inverted.
- Linguistic Mechanism: Deletion of the primary causal agent (the generative AI that makes detection necessary). The focus is shifted to the failure of the secondary tool.
- Power Analysis: This subtly shifts focus away from the source of the problem (the mass proliferation of difficult-to-detect AI-generated text by tech companies) and onto the inadequacies of the tools meant to deal with the fallout. It implicitly normalizes the existence of AI-generated text as the baseline reality.
- Agency Strategy: Inversion
- Counter-Voice: Understand that AI text generators produce content that is difficult to distinguish from human writing, and that the tools created to solve this problem are consequently unreliable.
- Interpretive Claim: This framing focuses on the failure of the technological fix, thereby naturalizing the original technological problem and obscuring its source.
Task 2: Ideology and Common Sense Audit
This task audits the text's lexical choices, identifying where seemingly neutral words smuggle in contested values, assumptions, or hierarchies. It examines what worldview a given word or phrase wants the reader to accept as "common sense" and explores alternative framings that would construct reality differently.
Framing a Structural Problem as a Skills Deficit: The Ideology of 'AI Literacy'
Quote: "Given these realities, academic library workers need to develop AI literacy..."
Lexical Feature Type: Metaphorical framing
Alternative Framings:
- Phrasing: "collective bargaining strategies for technological change"
- Worldview Centered: Labor/unions
- Makes Visible: The power dynamics between workers and management, and the need for organized, collective action to shape working conditions.
- Phrasing: "critical analysis of the political economy of AI"
- Worldview Centered: Critical theory/Marxism
- Makes Visible: The underlying profit motives, data extractivism, and class interests that drive AI development and deployment.
- Phrasing: "regulatory frameworks for corporate accountability"
- Worldview Centered: Policy makers/Government
- Makes Visible: The responsibility of the state to restrain corporate power and protect the public from harmful technologies.
Ideological Work: The term 'AI literacy' naturalizes a neoliberal worldview where systemic problems caused by unregulated corporate technology are reframed as individual knowledge deficits. It makes the solution 'more training for workers' seem like common sense, while making it difficult to talk about collective power, regulation, or refusal.
Inclusion/Exclusion: It positions the individual library worker as the rational actor responsible for upskilling and adapting. It excludes or marginalizes perspectives centered on collective organizing, state regulation, or anti-capitalist critique.
Naturalizing Neoliberal Values as Universal Goods: 'Innovation and Efficiency'
Quote: "While AI offers opportunities for innovation and efficiency, it also presents significant ethical, social, and environmental challenges..."
Lexical Feature Type: Semantic prosody
Alternative Framings:
- Phrasing: "opportunities for increased profit and labor discipline"
- Worldview Centered: Labor/Marxist critique
- Makes Visible: The class-based motives behind AI deployment: reducing labor costs and increasing managerial control.
- Phrasing: "new methods for data extraction and surveillance"
- Worldview Centered: Privacy advocates/Critical data studies
- Makes Visible: The business model of surveillance capitalism that underpins many AI technologies.
- Phrasing: "tools for accelerating automation and deskilling"
- Worldview Centered: Workers facing displacement
- Makes Visible: The direct, negative impacts of technology on livelihoods and professional expertise.
Ideological Work: 'Innovation' and 'efficiency' are presented as inherently positive, universal goals. This choice reflects a corporate, managerial worldview where progress is measured by speed and cost-reduction. It obscures the fact that one person's 'efficiency' is another's job loss, deskilling, or intensified workload.
Inclusion/Exclusion: This wording includes and validates a managerial, pro-business perspective as the default 'common sense' starting point. It marginalizes the views of workers, communities, and environmental advocates for whom 'innovation and efficiency' often have severe negative consequences.
Instrumentalizing Education: The Logic of 'Workforce Preparation'
Quote: "Recognizing AI's potential impact on workforce preparation, some institutions are embracing AI as part of their core mission."
Lexical Feature Type: Cultural stereotypes or 'common sense' assumptions
Alternative Framings:
- Phrasing: "potential impact on civic discourse and democracy"
- Worldview Centered: Civic educators/Political scientists
- Makes Visible: The threats AI poses to a shared public reality and the potential for mass manipulation.
- Phrasing: "potential impact on critical thinking and human creativity"
- Worldview Centered: Humanities scholars/Artists
- Makes Visible: The risk of cognitive atrophy and the devaluation of human intellectual and artistic labor.
- Phrasing: "potential impact on student surveillance and data commodification"
- Worldview Centered: Student privacy advocates
- Makes Visible: The ways educational institutions might use AI to monitor and extract value from students.
Ideological Work: This phrase naturalizes the assumption that the primary purpose of higher education is to prepare students to be workers for employers. It adopts the logic of human capital theory, reducing education to an economic input. This makes it difficult to argue for education's value in fostering critical citizenship, personal flourishing, or democratic participation.
Inclusion/Exclusion: It positions employers and the 'market' as the key stakeholders of education whose needs must be met. It excludes students, faculty, and society at large who may have different, non-instrumental goals for education.
Individualizing Resistance as 'Thoughtful Selectivity'
Quote: "This balance between informed engagement and thoughtful selectivity will be essential for academic libraries to remain both relevant to their communities and true to their professional principles."
Lexical Feature Type: Euphemism/Stance markers
Alternative Framings:
- Phrasing: "collective refusal and non-compliance"
- Worldview Centered: Labor organizers/Activists
- Makes Visible: The possibility of organized, oppositional power against the imposition of harmful technologies.
- Phrasing: "demands for democratic governance over technology"
- Worldview Centered: Community advocates/Socialists
- Makes Visible: The idea that workers and communities should have direct control over the technologies that shape their lives.
- Phrasing: "strategic obstruction and sabotage"
- Worldview Centered: Radical Luddites
- Makes Visible: The long history of direct action by workers to resist technologies that threaten their autonomy and livelihoods.
Ideological Work: The phrase 'thoughtful selectivity' domesticates and professionalizes acts of resistance. It reframes a potentially political act of refusal into a quiet, individual, intellectual exercise. It makes radical opposition seem unprofessional and hysterical, while making careful, case-by-case adoption seem like the only rational choice.
Inclusion/Exclusion: It includes the pragmatic, managerial professional who works within the system as the ideal actor. It excludes and delegitimizes the activist, the organizer, or the radical critic who might advocate for broad-based refusal or systemic change.
Framing Exploitation Gently: The Euphemism of 'Unseen Labor'
Quote: "Be aware of the unseen labor involved in developing and supporting AI systems."
Lexical Feature Type: Euphemism/Semantic prosody
Alternative Framings:
- Phrasing: "the exploited labor of global south data annotators"
- Worldview Centered: Postcolonial/Marxist scholars
- Makes Visible: The specific class and geopolitical dynamics of the labor, highlighting exploitation rather than invisibility.
- Phrasing: "the stolen labor of creators whose work was used for training data"
- Worldview Centered: Artists/Writers/Creators
- Makes Visible: The concept of uncompensated, non-consensual appropriation of intellectual property as a form of theft.
- Phrasing: "the superexploited clickworkers generating training data"
- Worldview Centered: Labor sociologists
- Makes Visible: The specific mechanisms of exploitation in the gig economy that create a precarious, low-wage global workforce.
Ideological Work: Calling the labor 'unseen' suggests the problem is one of perception or awareness, which can be solved by 'being aware.' This neutral framing obscures the active, structural exploitation and vast power imbalances involved. It avoids politically charged terms like 'exploitation,' 'theft,' or 'superexploitation' that would demand a political, rather than merely an ethical, response.
Inclusion/Exclusion: It positions the library worker as an ethical observer who needs to 'be aware.' It excludes the voices and perspectives of the exploited workers themselves, whose reality is one of material hardship and powerlessness, not just invisibility.
Task 3: Positioning and Solidarity Audit
This task audits how the text uses language to position the speaker, the audience, and third parties in relationships of power. It examines how pronouns, formality, and voice representation are used to build trust, create distance, establish authority, and construct "us" (the in-group) and "them" (the out-group).
Establishing Authority Through an Inclusive-Yet-Directive 'We'
Quote: "This document expands on Lo's (2025) broad definition of AI literacy...tailoring it into a comprehensive, library-specific set of competencies applicable to academic library workers."
- Positioning Mechanism: Pronoun strategies (implied authorial 'we') and register (formal, academic).
- Relationship Constructed: A hierarchy is established between the expert Task Force (the authorial 'we' who 'tailor' and 'expand') and the target audience ('academic library workers') who are the recipients of this expertise. The relationship is one of authority and guidance.
- Whose Reality Wins: The Task Force's reality wins. Their framing of the problem (a need for competencies) and their proposed solution (this document) are positioned as objective, authoritative, and necessary.
- Power Consequences: This empowers the Task Force and the professional association (ACRL) to define the terms of the debate about AI in libraries. It positions individual library workers as subjects to be trained, rather than as agents who might develop their own, potentially more radical, frameworks from the ground up.
Manufacturing a Professional Imperative Through Modal Verbs
Quote: "...academic library workers must critically engage with...[and] need to develop AI literacy..."
- Positioning Mechanism: Use of strong modal verbs of obligation ('must,' 'need to').
- Relationship Constructed: This constructs a relationship of professional duty and obligation. The library worker is positioned not as someone with a choice, but as a professional with a non-negotiable responsibility to engage with AI in the manner prescribed by the document. To not do so would be a professional failing.
- Whose Reality Wins: The document's reality, in which engagement with AI is an unavoidable and essential professional task, is presented as objective fact.
- Power Consequences: This positioning discourages non-engagement or outright refusal as legitimate professional stances. It creates pressure on workers to conform to this new agenda, potentially against their own judgment or political commitments, in order to be seen as competent professionals.
Neutralizing Political Conflict by Framing it as Expert Disagreement
Quote: "Be aware that experts disagree on many AI-related topics."
- Positioning Mechanism: Presupposition (that all sides of the 'disagreement' are equally valid 'expert' opinions).
- Relationship Constructed: It constructs a 'view from nowhere,' positioning the reader as a neutral observer of a balanced debate between credible 'experts.' It creates a false equivalence between, for example, a corporate CEO promoting AI and a Marxist critic warning of mass exploitation.
- Whose Reality Wins: The reality of the liberal center wins. This reality presupposes that radical critiques and corporate boosterism are merely two commensurable points on a spectrum of opinion, rather than expressions of a fundamental class antagonism.
- Power Consequences: This strategy depoliticizes the debate. It disempowers radical critique by watering it down to just another 'disagreement.' It encourages a passive 'awareness' of different views rather than taking a firm political stance against what may be a technologically-driven assault on labor and public life.
Positioning Workers as Adapters-in-Training
Quote: "Readers are encouraged to reflect on and adapt these mindsets and use them as a foundation for responsible, human-centered engagement with AI technologies."
- Positioning Mechanism: Register and formality (instructive, gentle command).
- Relationship Constructed: This creates a teacher-student dynamic. The Task Force provides the 'mindsets,' and the 'readers' (workers) are 'encouraged', a polite directive, to 'adapt' them. The worker's role is to internalize and apply the framework handed down to them.
- Whose Reality Wins: The Task Force's reality, where the correct response to AI is the cultivation of specific internal dispositions ('mindsets') and individual skills, is naturalized as the only 'responsible' path.
- Power Consequences: This disempowers workers from defining their own terms of engagement or resistance. It channels their energy into individual self-improvement and 'reflection' rather than collective analysis and action. It subtly enforces conformity under the guise of gentle encouragement.
Constructing the Pragmatic Professional via a False Dichotomy
Quote: "This balance between informed engagement and thoughtful selectivity will be essential for academic libraries to remain both relevant to their communities and true to their professional principles."
- Positioning Mechanism: Presupposition (that 'relevance' and 'principles' are in tension and must be 'balanced').
- Relationship Constructed: This positions the ideal library worker as a pragmatic centrist, navigating a delicate 'balance'. They are neither a Luddite refuser nor a naive techno-optimist. They are judicious, reasonable, and managerial.
- Whose Reality Wins: The institutional-managerial reality wins. This worldview prioritizes institutional survival ('relevance') and seeks to manage technological disruption with minimal friction, co-opting and domesticating critique as mere 'selectivity'.
- Power Consequences: This positioning marginalizes more confrontational stances. It makes collective refusal seem 'irrelevant' and unprincipled, while making techno-solutionism seem naive. It empowers those who are willing to accommodate and adapt to the new technological regime, reinforcing the institutional status quo.
Discourse Strategies - The Architecture of Ideology
This task identifies overarching strategies that reveal the text's systematic ideological work. It analyzes how the micro-linguistic choices from the previous tasks combine into coherent projects of reality construction, all in the service of a particular interest.
Manufacturing Inevitability: Naturalizing Corporate Strategy as Technological Destiny
- Cited Instances:
- Personifying Technology as an Autonomous Historical Actor
- Erasing Corporate and Administrative Agency Through Passive Voice
- Naturalizing Neoliberal Values as Universal Goods: 'Innovation and Efficiency'
- Linguistic Patterns: This strategy combines the personification of AI as an agent that 'transforms higher education' (as seen in 'Personifying Technology as an Autonomous Historical Actor') with the systematic erasure of human actors via passive voice, such as AI 'being integrated' into platforms (detailed in 'Erasing Corporate and Administrative Agency Through Passive Voice'). This is reinforced by lexical choices like 'innovation and efficiency' ('Naturalizing Neoliberal Values as Universal Goods') which frame the goals of this technological force as inherently positive and beyond question.
- Ideological Function: This strategy constructs a techno-determinist reality where AI is an autonomous, agentive force driving historical change. It depoliticizes the rollout of AI by presenting it as a natural evolution rather than a set of deliberate business strategies. This serves the interests of tech corporations and institutional managers by making adaptation seem like the only rational response and resistance seem futile and irrational.
- Material Consequences: This discourse can lead to institutions investing heavily in AI technologies without critical assessment of their necessity or consequences, reallocating budgets away from human labor. It fosters a sense of resignation among workers, making it harder to organize opposition to technologies that may deskill them, increase surveillance, or eliminate their jobs.
- Counter-Discourse: A counter-discourse would consistently name the human and corporate actors behind the technology ('Instead of AI transforming education, we should ask how Google and OpenAI's business models are forcing schools to change'). It would replace the language of inevitability ('is being integrated') with the language of choice and power ('administrators are choosing to buy').
Individualizing the Crisis: Reframing a Structural Conflict as a Professional Skills Gap
- Cited Instances:
- Framing a Structural Problem as a Skills Deficit: The Ideology of 'AI Literacy'
- Manufacturing a Professional Imperative Through Modal Verbs
- Positioning Workers as Adapters-in-Training
- Linguistic Patterns: This strategy's cornerstone is the lexical framing of the problem as one of 'AI literacy' ('Framing a Structural Problem as a Skills Deficit'). This is reinforced by positioning workers as subjects who 'must' and 'need to' develop these skills ('Manufacturing a Professional Imperative Through Modal Verbs'), and who are 'encouraged to adapt' the mindsets provided by an expert body ('Positioning Workers as Adapters-in-Training'). The problem is located inside the individual worker, not in the external political economy.
- Ideological Function: This strategy shifts the locus of the problem from the political economy (unregulated corporations, capital's drive for automation) to the individual worker's knowledge and mindset. It transforms a political problem requiring collective action into a technical/pedagogical problem requiring individual training. This serves power by diverting energy away from organizing and towards 'professional development,' thereby maintaining the status quo.
- Material Consequences: This leads to universities spending money on training programs instead of, for example, funding union efforts to bargain over technological change. It can create anxiety and self-blame among workers who feel they can't keep up, and it provides a justification for management to devalue or dismiss workers who lack these new 'competencies.'
- Counter-Discourse: A counter-discourse would reject the language of individual 'literacy' and 'competencies' in favor of collective power. It would talk about 'worker-led technology committees,' 'collective bargaining rights over automation,' and 'building solidarity to resist deskilling.' It would frame the issue as one of power, not politeness or personal skill.
Domesticating Dissent: Channeling Resistance into Professional Practice
- Cited Instances:
- Individualizing Resistance as 'Thoughtful Selectivity'
- Neutralizing Political Conflict by Framing it as Expert Disagreement
- Constructing the Pragmatic Professional via a False Dichotomy
- Linguistic Patterns: This strategy operates by taking the potential for resistance and reframing it in manageable, professional terms. Radical refusal becomes 'thoughtful selectivity' ('Individualizing Resistance as 'Thoughtful Selectivity''). Fundamental political antagonism is softened into a 'disagreement' among 'experts' ('Neutralizing Political Conflict...'). This culminates in the construction of an ideal professional who finds a 'balance' between relevance and principles ('Constructing the Pragmatic Professional...'), positioning radical opposition as unprofessional.
- Ideological Function: This strategy functions to co-opt and neutralize critique. It creates a 'safe' space for dissent that does not threaten the fundamental institutional or corporate power structures. It allows the institution to appear critical and responsive while ensuring that no substantive challenge to its operations or the broader political economy can gain traction. It maintains institutional stability by channeling oppositional energy into sanctioned, individual acts of 'critical evaluation.'
- Material Consequences: This discourages the formation of collective movements within the profession. It isolates critical individuals by making them feel their only legitimate outlet is 'selectivity' in their own work. This can prevent the development of broad-based campaigns for regulation, divestment from certain tech vendors, or union action, thus allowing the large-scale integration of potentially harmful AI to proceed with only minor, localized resistance.
- Counter-Discourse: A counter-discourse would reclaim the language of refusal, solidarity, and opposition. It would reject the notion of 'balance' with harmful systems and instead talk about 'drawing a line.' It would frame the choice not as one of individual 'selectivity' but of collective 'non-compliance' and organized 'resistance.'
Structural Relations Audit
This task audits how the text naturalizes changeable social structures into inevitable "things." It is divided into three parts: Reification Analysis, which identifies where social relations are presented as natural forces; Social Amnesia Analysis, which finds passages that erase historical alternatives; and False Individual/Society Separation, which examines how structural problems are reduced to individual failings.
Reification Analysis
Technology as Autonomous Historical Force
Quote: "Artificial intelligence...is beginning to transform higher education..."
- Reification Mechanism: A complex set of social, economic, and political relations (corporate R&D, venture capital investment, marketing, university procurement policies) is reified into a singular, personified noun, 'Artificial Intelligence,' which is then granted the agency to act upon society.
- What's Obscured: This obscures the human decisions, profit motives, and specific class interests of the investors, executives, and engineers who build and deploy these systems. 'AI' doesn't 'transform' things; people using AI as a tool for specific ends do.
- Material Relations: The relationship between capital (tech companies, investors) and institutions (universities) that purchase or implement their products, as well as the impact on the labor (faculty, staff) within those institutions.
- Structural Function: This reification makes corporate strategy appear as inevitable technological destiny, thereby discouraging opposition and promoting a fatalistic acceptance of whatever changes these corporations impose.
The Market as Agentic Process
Quote: "...how employers' adoption of AI is changing the job market."
- Reification Mechanism: The 'job market' is presented as an object that is being acted upon by another reified process ('adoption'). This turns a site of social struggle between labor and capital into a quasi-natural environment that simply 'changes'.
- What's Obscured: It hides the fact that the 'job market' is not a thing, but a set of power relations. 'Changes' are not neutral; they are wins or losses for specific classes. This phrasing obscures the active, strategic decisions by employers to reduce wages, increase precarity, and deskill labor for the purpose of maximizing profit.
- Material Relations: The antagonistic relationship between employers (capital) who seek to minimize labor costs and workers (labor) who seek to maximize wages and job security.
- Structural Function: This reification depoliticizes class conflict, making the consequences of capital's decisions (e.g., wage stagnation, layoffs) seem like the unfortunate but unavoidable outcomes of an impersonal market process.
Harms as Inherent Technological Properties
Quote: "While AI offers opportunities... it also presents significant ethical, social, and environmental challenges..."
- Reification Mechanism: The consequences of specific design, training, and deployment choices (e.g., using scraped data, energy-intensive models, biased algorithms) are reified as inherent 'challenges' presented by the technology itself. The 'challenges' become properties of the object 'AI'.
- What's Obscured: The agency and accountability of the corporations that made those choices. Bias is not a property of 'AI'; it is the result of using biased data and having biased design teams. Environmental damage is not a property of 'AI'; it is the result of using energy-inefficient hardware for profit.
- Material Relations: The relationship between technology producers and the rest of society, which is forced to bear the negative externalities (social, ethical, environmental) of their production processes.
- Structural Function: This serves to shield corporations from liability by framing harms as unavoidable side effects of technology, rather than as direct consequences of their specific business practices.
Integration as an Agentless Flow
Quote: "AI is being integrated into library software and discovery platforms..."
- Reification Mechanism: The active, top-down decision-making process by specific individuals (vendors, administrators) is transformed into an impersonal, agentless process of 'integration.' It becomes a technical flow rather than a political choice.
- What's Obscured: The power of administrators to make decisions that affect workers' daily lives without their consent. The sales tactics of vendors. The financial incentives driving these decisions. The specific people who could be held accountable are rendered invisible.
- Material Relations: The power relationship between management, which controls procurement and workplace technology, and labor, which must use the tools management imposes.
- Structural Function: By making a managerial decision seem like a natural technological process, it preempts labor resistance and normalizes a lack of workplace democracy.
Social Amnesia Analysis
Erasing the History of Labor Resistance to Technology
Quote: "This document contains two sections: mindsets (guiding orientations or dispositions) and competencies (skills, knowledge, behaviors, and abilities)."
- What's Forgotten: The entire history of collective labor struggle as the primary means through which workers have contested and shaped the implementation of new technologies in the workplace. This includes strikes, slowdowns, collective bargaining over new machinery, and Luddite traditions of direct action.
- Mechanism of Forgetting: Presentism. The document frames the problem of AI as a novel challenge that requires a new, individualistic response (developing 'mindsets' and 'competencies'), completely ignoring the long historical precedent of collective action.
- Function of Amnesia: Forgetting the history of collective struggle makes the individualizing 'competency' framework seem like the only possible option. It disarms workers by erasing the memory of their most powerful tools for shaping their working conditions.
- Counter-Memory: The memory of unions bargaining for contract language that gives workers a say in technological change, or the history of printers' unions fighting to control the introduction of computerized typesetting.
Presenting Old Power Dynamics as New Technical Problems
Quote: "AI-generated content presents many of the same challenges that information professionals have long addressed: bias, inaccuracies, and questions of authority and credibility."
- What's Forgotten: While acknowledging a parallel, the framing subtly erases the political history of librarianship's struggles with these issues. It forgets the fight against biased classification systems (like Dewey's racism and sexism), the role of libraries as battlegrounds during McCarthyism, and the critique of information neutrality as a myth that serves power.
- Mechanism of Forgetting: Teleological framing. It suggests librarianship has always been on a path to address these issues, and AI is just the next step. This smooths over the contentious, political, and often-defeated struggles to make libraries truly critical institutions.
- Function of Amnesia: This amnesia allows the problem of AI bias to be treated as a technical issue of 'evaluation' rather than a political issue of power and ideology, which has a long and contested history in the field. It allows the profession to feel it is on familiar ground, rather than facing a new front in a very old political war.
- Counter-Memory: The memory of the Social Responsibilities Round Table (SRRT) of the ALA challenging the profession's complicity in the Vietnam War and institutional racism, demonstrating that 'information' has always been a political battleground.
Forgetting the History of Collective Governance
Quote: "individuals, institutions, and others who use this framework are encouraged to adapt it to specific job functions, responsibilities, or organizational contexts."
- What's Forgotten: The history of democratic and cooperative models of governance for infrastructure and knowledge, such as public utilities, library cooperatives, or academic senates with real power.
- Mechanism of Forgetting: Explicit dismissal/replacement. By positing the solution as individuals and institutions 'adapting' a top-down framework, it implicitly dismisses the possibility of bottom-up, democratic creation of policy for new technologies. The locus of control is assumed to be individual or institutional-managerial.
- Function of Amnesia: Forgetting that infrastructure can be governed democratically makes the current model—where private corporations build tools and public institutions merely 'adapt' to them—seem natural and inevitable. It closes off the possibility of demanding public or cooperative ownership and control over AI infrastructure.
- Counter-Memory: The history of the rural electrification cooperatives, which brought electricity to vast parts of America under democratic, non-profit community control, as a model for how essential infrastructure could be managed.
False Separation Analysis
Privatizing Structural Transformation as Individual 'Competency'
Quote: "Given the diversity of roles and job duties among academic library workers, it is not possible to create a set of competencies that apply uniformly to everyone. Therefore, individuals, institutions, and others who use this framework are encouraged to adapt it..."
- False Separation: The document separates the structural threat of AI (mass deskilling, job replacement, increased surveillance) from the individual worker's response. The structural problem is treated as a given, an unchangeable context. The solution is located entirely in the realm of the individual's 'competencies' and 'mindsets'.
- What's Actually Structural: The implementation of AI is a structural change driven by capital's desire to reduce labor costs and increase control. It is a class-wide attack on the autonomy, security, and power of professional and academic labor.
- Ideological Function: This false separation prevents workers from recognizing their shared structural position. By encouraging them to focus on 'adapting' competencies to their 'specific job functions,' it divides them and discourages them from organizing collectively around their shared interests as workers facing a common threat.
- Dialectical Insight: The 'private' feeling of anxiety or inadequacy about AI is not a personal failing but the internalization of a structural condition: the precarity and powerlessness of labor under capitalism. The demand to develop 'competencies' is the external social pressure of the market penetrating the individual's sense of professional self.
Psychologizing Political Resistance as a 'Mindset'
Quote: "Guiding Mindsets...Skepticism: Approach AI critically, questioning results, and representations."
- False Separation: It separates the political act of opposition from the individual's internal psychology. Resistance to a harmful system is reduced to a 'mindset' of 'skepticism'—a cognitive orientation. This frames a political stance as a personal character trait.
- What's Actually Structural: The unchecked power of technology corporations to impose their products on society, the lack of democratic governance over technology, and the use of technology as a tool for labor discipline are all structural issues.
- Ideological Function: Psychologizing resistance defangs it. A 'skeptical' individual is not a threat to power; an organized union or social movement is. This serves the status quo by channeling oppositional energy into harmless, private intellectual activity ('questioning results') rather than disruptive collective action.
- Dialectical Insight: The 'personal mindset' of skepticism is itself socially produced by the objective unreliability and harmfulness of the AI systems being imposed. The text tries to capture this nascent critique and contain it within the safe bounds of individual psychology, preventing it from maturing into a collective political consciousness.
Synthesis
This document masterfully weaves reification, social amnesia, and false individualization into a coherent ideological project that mystifies the current social order. By reifying corporate strategy into an autonomous technological force ('AI is transforming...'), it creates a seemingly natural, agentless event. Social amnesia then erases the history of collective labor struggle, the primary tool for contesting such transformations, making organized resistance appear unthinkable. Finally, a false separation is constructed: this reified, ahistorical force must be confronted not by a collective, but by atomized individuals who are tasked with cultivating personal 'competencies' and 'mindsets.' The totality concealed is that of capitalism itself—a system of social relations where technology is not a neutral force but a weapon in the ongoing struggle between capital and labor. The discourse prevents readers from seeing that their private anxieties about AI are a shared, structural experience, and that the solution is not individual upskilling but collective organizing. It forecloses solidarity by prescribing a therapeutic, individualized response to a political and economic assault, ensuring that librarians remain 'responsible' professionals rather than becoming militant organizers.
Critical Observations: The Big Picture
This section synthesizes the findings from the previous tasks to examine the text's systematic ideological project. It looks at how patterns of agency, language, and structural distortion combine to build a coherent, power-serving worldview.
Distribution of Agency and Accountability:
Agency is consistently granted to abstract entities: 'Artificial Intelligence' transforms, presents challenges, and impacts workforces. The 'adoption of AI' changes job markets. In contrast, library workers are rendered passive; they must 'engage with,' 'develop,' and 'adapt to' these forces. Human agency for corporations, vendors, and administrators who actually make decisions is systematically erased through passive voice ('is being integrated'). Blame for failures is implicitly shifted to workers who may lack the proper 'competencies.' This distribution perfectly aligns with a power structure where capital and management act, and labor must react.
Naturalized Assumptions (The Invisible Ideology):
The document naturalizes a neoliberal, techno-determinist worldview. It assumes technological change is an inevitable, linear progress to which we must adapt. Values like 'innovation' and 'efficiency' are treated as universal, self-evident goods, not as class-specific goals that often mean deskilling and precarity for workers. The very concept of 'workforce preparation' as the goal of education is presented as common sense. The text makes it nearly impossible to think that refusing a technology, demanding its democratic control, or organizing for its abolition could be valid professional responses. The reification of AI as a natural force makes the current order seem unchangeable.
Silences, Absences, and the Unspeakable:
The text is profoundly silent on concepts of power, capital, and labor. The words 'union,' 'collective bargaining,' 'solidarity,' 'capitalism,' and 'regulation' are completely absent. While it mentions 'unseen labor,' it avoids the more politically charged 'exploited labor.' It erases the long history of labor movements, Luddism, and union struggles over the terms of technological implementation in the workplace. What's missing is the entire language and historical memory of collective power, leaving only the impoverished vocabulary of individual adaptation and professional responsibility.
False Separations (The Dialectical Illusion):
The document's entire architecture is built on a false separation between the structural and the individual. It isolates the 'problem' of AI from the political economy of surveillance capitalism that produces it. It presents structural threats—deskilling, automation, erosion of professional autonomy—as challenges to be met with individual solutions: 'mindsets,' 'competencies,' 'literacy,' and 'ethical consideration.' The text systematically prevents workers from connecting their 'private' anxieties about job security or ethical compromises to their shared, 'public' position as workers in a system that is imposing this technology upon them from above.
Coherence of Ideology (The Architecture of Power):
All these linguistic patterns work in concert to construct a specific kind of subject: the ideal neoliberal professional. This subject is adaptable, resilient, individually responsible, and forever upskilling. They see systemic problems as personal challenges. They practice critique as a private, cognitive 'mindset' ('skepticism') rather than a collective, political act. The discourse of reification (AI as a natural force), amnesia (no history of resistance), and false separation (your problem to solve) mutually reinforce one another to produce a docile, managerial professional who will smooth the integration of capital-led automation into the public sector without organized opposition.
Conclusion: Toward Structural Counter-Discourse
This concluding section synthesizes the entire analysis. It names the ideology the text constructs, connects it to the material power structures it serves, and explores the real-world consequences. Finally, it recovers the historical alternatives the text erases and imagines a "counter-discourse" capable of challenging its version of reality.
Names the Ideology and Its Material Base:
The core ideology constructed by this text is professional-managerial neoliberalism. It serves a political project of aligning library labor with the interests of institutional management and the broader tech-capitalist economy. The discourse naturalizes the inevitability of corporate AI, while individualizing the responsibility for managing its negative consequences. This ideology mystifies the material base of modern capitalism, where technology is a primary tool for increasing labor productivity, reducing labor costs through automation and deskilling, and expanding markets. The reification of 'AI' and the social amnesia regarding labor history work to conceal the antagonistic relationship between capital (which profits from AI) and labor (whose security, autonomy, and wages are threatened by it).
Traces Material Consequences:
This discourse translates directly into material consequences. It justifies the allocation of university budgets toward AI vendor contracts and individual worker training, rather than toward hiring more librarians or increasing wages. It creates a metric ('competencies') by which workers can be evaluated, disciplined, or deemed obsolete. By framing problems as individual failings, it fosters burnout and anxiety while discouraging the formation of unions or other collective bodies that could demand concessions from management, such as a say in technology procurement or job protections against automation. Ultimately, it facilitates a smoother, less contested restructuring of library labor in the interest of institutional 'efficiency' and austerity.
Recovers Historical Alternatives:
The text's social amnesia conceals a rich history of alternatives. It forgets the Luddite movement, which was not a blind hatred of machines, but a targeted, political struggle by workers to destroy machinery that was being used to destroy their communities and livelihoods. It forgets the powerful printers' unions of the 20th century, which used collective bargaining to negotiate the terms of automation, ensuring that new technologies led to shared prosperity and shorter work weeks, not just mass layoffs. Remembering these struggles reveals that technology is not destiny; it is a site of political contestation. It reminds us that workers have, in the past, successfully organized to control, refuse, and shape technology to serve their own interests, not just those of capital.
Imagines Counter-Discourse:
A counter-discourse would de-reify social relations by consistently naming the agents and their interests. It would recover historical memory by referencing past labor struggles as a source of strategy. It would collapse the false individual/society separation by framing the issue in terms of collective power. It would speak not of 'competencies' but of 'demands,' not of 'mindsets' but of 'solidarity,' not of 'selectivity' but of 'struggle.'
Counter-Discourse Examples:
- Original: "Given these realities, academic library workers need to develop AI literacy..."
- Reframed: Facing a concerted effort by tech corporations and administrators to automate professional work, academic library workers need to build collective power to defend their jobs and professional autonomy.
- Stakes Shift: This shifts the problem from individual ignorance ('literacy') to a political power struggle, and the solution from individual training to collective organizing.
- Original: "Skepticism: Approach AI critically, questioning results, and representations."
- Reframed: Solidarity: Organize with fellow workers to collectively investigate, and if necessary, refuse to use technologies that harm our communities, surveil patrons, or deskill our labor.
- Stakes Shift: This reframes critique from a private, internal 'mindset' into a public, collective action of organizing and refusal.
- Original: "Evaluate benefits and risks associated with using AI technologies in teaching and research."
- Reframed: Establish worker-led committees with the power to veto the procurement and implementation of any new technology that has not been proven safe and beneficial by independent, non-corporate auditors.
- Stakes Shift: This shifts the act of 'evaluation' from a powerless, individual assessment into a demand for democratic governance and material control over the workplace.
Reflects on the Stakes:
This matters because language is a crucial battlefield. If this discourse goes unchallenged, it will succeed in producing a generation of professionals who see a structural assault on their labor as a personal opportunity for self-improvement. It will pacify a workforce that should be on the front lines of resisting the corporate capture of information infrastructure. The risk is the quiet, frictionless erosion of professional jobs, autonomy, and the very humanistic values libraries claim to uphold. If we learn to see through the reification of AI, resist the amnesia of past struggles, and think structurally, what becomes possible is solidarity. We could move beyond a lonely 'skepticism' to build powerful unions and alliances that can make real demands: for democratic control over technology, for investment in people over automation, and for a future where technology serves human communities, not corporate profit. The mystification prevents a powerful workforce from recognizing its own collective strength; dismantling it is the first step toward using that strength.
Raw JSON: 2025-11-11-ai-competencies-for-academic-library-workers-cda-spicy-889a89.json
Analysis Framework: cda-spicy v3.0
Generated: 2025-11-11T09:58:47+00:00
Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0
Extended Processing Summary
Gemini refers to the text below as “thought summaries.” Let’s just call it a model-generated process trace. Treat this as just another rhetorical artifact—a way of making the model’s processing legible. The first-person framing of these “thought summaries” is a presentation choice for the user-facing output, not a window into “real” thoughts. These are computational artifacts, not cognitive reports from a quirky, curious, or conflicted mind.
Analyzing the Text
I'm currently immersed in a comprehensive Critical Discourse Analysis of the "ACRL AI Competencies" document. My immediate focus is establishing the framework with those five tasks. I'm aiming to identify key discourse elements and how they are structured.
Exploring Initial Findings
I've begun the CDA process, targeting the ACRL AI document. I'm focusing on Agency and Accountability, searching for instances where agency might be obscured or shifted. Already, I've noted an early inclination to categorize and examine the text based on its assumptions, as my initial impressions now demand I look for a more concrete answer. The text's professional tone presents a particular challenge to analysis, requiring deep focus.
Identifying Patterns in AI Discourse
I'm now deep into the CDA process, particularly with the ACRL AI document. Focusing on the five tasks, I'm identifying patterns in the discourse strategies. My analysis has highlighted instances of obscured agency in the context of AI's integration into libraries and higher education, noting the passive voice used in descriptions of AI's impact and the potential blurring of accountability for associated challenges. I am noticing recurring themes of technological determinism. I'm actively pinpointing reification and other structures that shape meaning.
Deconstructing Ideological Frameworks
I'm now fully immersed in the CDA of the ACRL document, actively deconstructing its ideological frameworks. I've pinpointed instances where 'AI literacy' subtly shifts responsibility onto individuals, while 'innovation and efficiency' are presented as inherently positive, potentially obscuring potential negative consequences. I'm noting how the text implicitly prioritizes workforce readiness over critical thinking or civic engagement, and I'm also examining how the concept of "thoughtful selectivity" subtly frames resistance as individual, rather than a collective, political act. The overarching goal is to expose the implicit biases that these phrases carry, to reveal the assumptions shaping the text.
Examining Textual Structures
I've just finished a preliminary examination of the ACRL AI document. The main takeaways reveal passive voice and nominalizations that obscure agency. I've begun an audit for ideological undercurrents as well, finding that phrases like "AI literacy" and "innovation" aren't as value-neutral as they seem. I'm also now analyzing the text through the lens of positioning and solidarity, noticing an implicit hierarchy in the document's structure, where library workers are positioned as "followers" of expert recommendations.
Developing a Critical Framework
I'm developing a structured CDA, focusing on the ACRL AI document. I'm actively analyzing the text, looking at agency, ideology, and positioning, and forming discourse strategies. My current thoughts are on the document's structure and the implied power dynamics, especially regarding library workers. I am identifying textual examples that support these developing positions.
I'm synthesizing findings into discourse strategies and focusing on structural relations. I'm highlighting textual examples of nominalization and passive voice to identify obscured agency. The goal is to articulate the inherent power dynamics, the ideological assumptions, and the positioning created within the document.
Developing Structural Insights
I'm now focusing on structural relations and reification within the ACRL document. I'm actively analyzing how the text solidifies the status quo of AI as an inevitable force, obscuring the political decisions that shape it. I'm also finding that the language subtly portrays workers as passive recipients of this technological shift. I'm focusing on the synthesis of these findings.
Refining Ideological Analysis
I've just focused on identifying ideological undercurrents and am synthesizing these into actionable discourse strategies. The document seems to promote a passive acceptance of AI as unavoidable, using phrases like "AI literacy," which I've marked as a concern. I'm focusing on ways to highlight the inherent power dynamics, the ideological assumptions, and the positioning of actors within the text.
Developing CDA Strategies
My analysis is progressing well. I'm focusing on the ACRL document, formulating discourse strategies to understand how it approaches AI. I'm developing strategies that relate to techno-determinism, the skills deficit, and a framework to build a pragmatic, adaptable professional. I'm also examining the implications of individualizing what is, in reality, a structural challenge.
Formulating CDA Framework
I'm now fully immersed in developing my CDA. My focus is on the ACRL AI document, and my efforts are going toward identifying structural relationships such as reification and false separations. I'm also synthesizing findings into discourse strategies, and I'm actively highlighting the textual examples of nominalization and passive voice that obscure agency. I'm aiming to articulate the power dynamics and ideological assumptions that shape the document.
Formulating a Synthesis
I've crafted a comprehensive CDA framework for the ACRL document, analyzing the text's structures and ideologies, and am now formulating a conclusion. Key insights reveal reification of AI and a focus on individual responsibility, not structural change. I'm focusing on crafting a succinct synthesis.
Formulating a Synthesis
I'm now integrating insights from the ACRL document, examining agency, ideology, and structural relations. Key findings include the reification of AI and a focus on individual responsibility instead of structural changes. I'm focusing on crafting a succinct synthesis that captures the complex power dynamics and inherent biases within the text.
Synthesizing CDA Framework
I've just finished a comprehensive CDA framework for the ACRL document, which involves an examination of agency, ideology, and structural relations within the ACRL document. I am identifying structural relations like reification and false separations. Currently, I am formulating a synthesis that reflects the complex power dynamics and the biases inherent in the text.