
Metaphor - Key Sources

Metaphor Theory and Source–Target Mapping


This work in cognitive linguistics provided the foundational concept: metaphors operate by mapping structure from a familiar source domain (e.g., human cognition) onto a less familiar target domain (e.g., algorithmic processes). The mapping isn't arbitrary: it imports specific inferences and hides others.

Key insight for prompt design: To analyze metaphor, I needed to instruct the model to identify both domains, describe the structural mapping between them, and articulate what the mapping conceals.
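The three analytic steps above can be sketched as a prompt template. This is an illustrative reconstruction, not the project's actual system instructions; the wording, constant name, and function are assumptions.

```python
# Sketch of a prompt template operationalizing the three analytic steps:
# identify both domains, describe the structural mapping, and articulate
# what the mapping conceals. Wording is illustrative, not the actual
# system instructions used in this project.

METAPHOR_ANALYSIS_TEMPLATE = """\
Analyze the following passage for conceptual metaphor.

1. Identify the SOURCE domain (the familiar domain supplying structure)
   and the TARGET domain (the less familiar domain being described).
2. Describe the structural mapping: which entities, relations, and
   inferences carry over from source to target?
3. Articulate what the mapping CONCEALS: which features of the target
   does this framing hide or distort?

Passage: {passage}
"""

def build_metaphor_prompt(passage: str) -> str:
    """Fill the template with the passage to be analyzed."""
    return METAPHOR_ANALYSIS_TEMPLATE.format(passage=passage)

prompt = build_metaphor_prompt('The model "hallucinates" false citations.')
```

The point of the three-part structure is that each step constrains the next: the model cannot describe a mapping before naming both domains, and cannot assess concealment before stating the mapping.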

Adam, A. (1995). Artificial intelligence and women’s knowledge: What can feminist epistemologies tell us? Women’s Studies International Forum, 18(4), 407–415.

Agre, P. (1997). Computation and Human Experience. Cambridge University Press.

Agüera y Arcas, B. (2025). What Is Intelligence? Lessons from AI about Evolution, Computing, and Minds. Cambridge, MA: MIT Press.

Alexander, P. A., Schallert, D. L., & Reynolds, R. E. (2009). What Is Learning Anyway? A Topographical Perspective Considered. Educational Psychologist, 44(3), 176–192. https://doi.org/10.1080/00461520903029006

Archer, K. (n.d.). The Origins of Artificial Intelligence in Natural Intelligence. https://static1.squarespace.com/static/60cd29c07fefa22428a53ac2/t/69cd394ebc6e536a63b08c74/1775057231635/The+Origins+of+Artificial+Intelligence+in+Natural+Intelligence.pdf

Ariso, J. M., & Bannister, P. (2025). ‘AI lost the prompt!’ Replacing ‘AI hallucination’ to distinguish between mere errors and irregularities. AI & Society. https://doi.org/10.1007/s00146-025-02757-1

Beer, D. (2026). What ought AI do? Journal of Classical Sociology, 1468795X261433543.

Bender, E. M., & Hanna, A. (2025). The AI con: How to fight big tech’s hype and create the future we want. Harper.

Bergstrom, C. T., & Ogbundu, C. B. (2023). ChatGPT isn’t “hallucinating.” It’s bullshitting. Undark. https://undark.org/2023/04/06/chatgpt-isnt-hallucinating-its-bullshitting/

Boisseau, É. (2026). Expertise, opacity, and trust in AI systems. Synthese, 207(3), 104. https://doi.org/10.1007/s11229-026-05484-2

Bones, H., Ford, S., Hendery, R., Richards, K., & Swist, T. (2021). In the frame: The language of AI. Philosophy and Technology, 34(1), 23–44.

Bose, S. (2023). Bug vs error: Key differences. BrowserStack. https://browserstack.wpengine.com/guide/difference-between-bugs-and-errors/

Buckner, C. J. (2024). From deep learning to rational machines: What the history of philosophy can teach us about the future of artificial intelligence. Oxford University Press.

Buckner, C. (2018). Empiricism without magic: transformational abstraction in deep convolutional neural networks. Synthese, 195(12), 5339–5372. http://www.jstor.org/stable/26750679

Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1).

Butlin, P. et al. (2025). Identifying indicators of consciousness in AI systems. Trends in Cognitive Sciences, 0(0). https://doi.org/10.1016/j.tics.2025.10.011

Byrne, T. (2024). The phenomenology of ChatGPT: A semiotics. Journal of Consciousness Studies, 31(3), 6–27. https://doi.org/10.53765/20512201.31.3.006

Chalmers, D. J. (2024). Could a Large Language Model be Conscious? (arXiv:2303.07103). arXiv. https://doi.org/10.48550/arXiv.2303.07103

Charteris-Black, J. (2004). Corpus approaches to critical metaphor analysis. Palgrave Macmillan.

Chirimuuta, M. (2021). Prediction versus understanding in computationally enhanced neuroscience. Synthese, 199(1/2), 767–790.

Chown, E., & Nascimento, F. (2023). Meaningful Technologies: How Digital Metaphors Change the Way We Think and Live. Lever Press. https://doi.org/10.3998/mpub.12668201

Clark, K. M. (2024). Embodied Imagination: Lakoff and Johnson’s Experientialist View of Conceptual Understanding. Review of General Psychology, 28(2), 166–183. https://doi.org/10.1177/10892680231224400

Coghlan, S. (2024). Anthropomorphizing machines: Reality or popular myth? Minds and Machines, 34(3), 25. https://doi.org/10.1007/s11023-024-09686-w

Colburn, T. R., & Shute, G. M. (2008). Metaphor in computer science. Journal of Applied Logic, 6(4), 526–533.

Creel, K. A. (2020). Transparency in Complex Computational Systems. Philosophy of Science, 87(4), 568–589. doi:10.1086/709729

D’Amato, K. (2025). ChatGPT: Towards AI subjectivity. AI & Society, 40(3), 1627–1641. https://doi.org/10.1007/s00146-024-01898-z

Delétang, G., Ruoss, A., Duquenne, P.-A., Catt, E., Genewein, T., Mattern, C., Grau-Moya, J., Wenliang, L. K., Aitchison, M., Orseau, L., Hutter, M., & Veness, J. (2024). Language Modeling Is Compression (arXiv:2309.10668). arXiv. https://doi.org/10.48550/arXiv.2309.10668

Deroy, O. (2023). The ethics of terminology: Can we use human terms to describe AI? Topoi, 42(3), 881–889. https://doi.org/10.1007/s11245-023-09934-1

De Toffoli, S., & Mancosu, P. (2026). The Philosophy of Mathematical Practice. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Summer 2026). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2026/entries/mathematical-practice/

Durán, J. M., & Formanek, N. (2018). Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism. Minds and Machines, 28(4), 645–666. https://doi.org/10.1007/s11023-018-9481-6

Durt, C., Froese, T., & Fuchs, T. (2023). Against AI understanding and sentience: large language models, meaning, and the patterns of human language use. Preprint.

Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864

Fernandez-Duque, D., & Johnson, M. L. (1999). Attention metaphors: How metaphors guide the cognitive psychology of attention. Cognitive Science, 23(1), 83–116.

Fisher, S. A. (2024). Large language models and their big bullshit potential. Ethics and Information Technology, 26(4), 67. https://doi.org/10.1007/s10676-024-09802-5

Floridi, L., & Nobre, A. C. (2024). Anthropomorphising machines and computerising minds: The crosswiring of languages between artificial intelligence and brain & cognitive sciences. Minds and Machines, 34(1), 5. https://doi.org/10.1007/s11023-024-09670-4

Gerber, Y., & Sander, E. (2025). Promoting a shift in perspective in argumentative thinking: Metaphorical framing for orienting attention. Journal of Applied Research in Memory and Cognition. https://doi.org/10.1037/mac0000226

Gibbs, R. W., Jr. (2017). Metaphor Wars: Conceptual Metaphors in Human Life. Cambridge University Press. https://doi.org/10.1017/9781107762350

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). MIT Press.

Gouveia, S. S., & Morujão, C. (2024). Phenomenology and artificial intelligence: Introductory notes. Phenomenology and the Cognitive Sciences, 23(5), 1009–1015. https://doi.org/10.1007/s11097-024-10040-9

Gouveia, S. S. (2026). Guest Editorial: Introduction for the Special Issue JAIC. Journal of Artificial Intelligence and Consciousness, 1–4. https://www.worldscientific.com/doi/abs/10.1142/S2705078526020017

Graham, D. (2021). Metaphors for the brain. In An Internet in Your Head: A New Paradigm for How the Brain Works (pp. 26–64). Columbia University Press.

Pragglejaz Group. (2007). MIP: A Method for Identifying Metaphorically Used Words in Discourse. Metaphor and Symbol, 22(1), 1–39. https://doi.org/10.1080/10926480709336752

Gunkel, D., & Coghlan, S. (2025). Cut the crap: A critical response to “ChatGPT is bullshit”. Ethics and Information Technology, 27(2), 23. https://doi.org/10.1007/s10676-025-09828-3

Hardwig, J. (1991). The Role of Trust in Knowledge. The Journal of Philosophy, 88(12), 693–708. https://doi.org/10.2307/2027007

Haugeland, John. (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press

Heersmink, R., de Rooij, B., Clavel Vázquez, M. J., & Colombo, M. (2024). A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness. Ethics and Information Technology, 26(3), 41. https://doi.org/10.1007/s10676-024-09777-3

Hicke, R. M. M., & Kristensen-McLachlan, R. D. (2024). Science is Exploration: Computational Frontiers for Conceptual Metaphor Theory. https://doi.org/10.48550/arxiv.2410.08991

Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38. https://doi.org/10.1007/s10676-024-09775-5

Huang, L., et al. (2025). A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. ACM Trans. Inf. Syst., 43(2), 42:1-42:55. https://doi.org/10.1145/3703155

Humphreys, P. (2009). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626. https://doi.org/10.1007/s11229-008-9435-2

IBM. (2024). What are AI hallucinations? https://www.ibm.com/topics/ai-hallucinations

Johnson, D. G., & Verdicchio, M. (2017). Reframing AI discourse. Minds & Machines, 27, 575–590.

Johnson, M. (2022). Embodied mind, meaning, and reason: How our bodies give rise to understanding. University of Chicago Press.

Kadavath, S., et al. (2022). Language Models (Mostly) Know What They Know (arXiv:2207.05221). arXiv. https://doi.org/10.48550/arXiv.2207.05221

Kaufmann, T., Weng, P., Bengs, V., & Hüllermeier, E. (2025). A Survey of Reinforcement Learning from Human Feedback (arXiv:2312.14925). arXiv. https://doi.org/10.48550/arXiv.2312.14925

Konigsberg, A. (2026). Beyond Behavior: Why AI Evaluation Needs a Cognitive Revolution (arXiv:2604.05631). arXiv. https://doi.org/10.48550/arXiv.2604.05631

Kosinski, M. (2024). Evaluating large language models in theory of mind tasks. Proceedings of the National Academy of Sciences, 121(45), e2405460121. https://doi.org/10.1073/pnas.2405460121

Kövecses, Z. (2002). Metaphor: A practical introduction. New York: Oxford University Press.

Lakoff, G., & Johnson, M. (2003). Metaphors we live by. The University of Chicago Press.

Lakoff, G., & Núñez, R. (2000). Where mathematics comes from. Basic Books.

Lin, S., Hilton, J., & Evans, O. (2022). TruthfulQA: Measuring how models mimic human falsehoods. In S. Muresan, P. Nakov, & A. Villavicencio (Eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 3214–3252). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.acl-long.229

Littlemore, J. (2019). Metaphors in the mind: Sources of variation in embodied metaphor. Cambridge University Press.

Long, D., & Magerko, B. (2020, April). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–16).

López-Rubio, E. (2021). Throwing light on black boxes: emergence of visual categories from deep learning. Synthese, 198(10), 10021–10041. https://www.jstor.org/stable/48692584

Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI hallucinations: A misnomer worth clarifying. https://arxiv.org/abs/2401.06796

Mattson, G. (2020). Weaponization: Ubiquity and Metaphorical Meaningfulness. Metaphor and Symbol, 35(4), 250–265. https://doi.org/10.1080/10926488.2020.1810577

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

McGlone, M. S. (2011). Hyperbole, Homunculi, and hindsight bias: An alternative evaluation of conceptual metaphor theory. Discourse Processes, 48(8), 563–574. https://doi.org/10.1080/0163853X.2011.606104

Melzer, A. M., Weinberger, J., & Zinman, M. R. (Eds.). (1993). Technology in the Western Political Tradition. Cornell University Press. https://www.jstor.org/stable/10.7591/j.ctvr7fb7q

Merlo, P., Jiang, C., Samo, G., & Nastase, V. (2026). Blackbird Language Matrices: A Framework to Investigate the Linguistic Competence of Language Models. arXiv preprint https://arxiv.org/abs/2602.20966.

Meyer, L. S., & Tsaknaki, V. (2026, April). Exploring and Probing the Algorithmic Gaze on Bodies and Well-being. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (pp. 1-18). https://dl.acm.org/doi/10.1145/3772318.3791795

Mitchell, M. (2019). Artificial Intelligence: A Guide for Thinking Humans. New York: Farrar, Straus and Giroux.

Mitchell, M. (2021). Why AI is Harder Than We Think (arXiv:2104.12871). arXiv. https://doi.org/10.48550/arXiv.2104.12871

Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI’s large language models. Proceedings of the National Academy of Sciences, 120(13), e2215907120. https://doi.org/10.1073/pnas.2215907120

Mulligan, K., & Correia, F. (2021). Facts. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2021). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2021/entries/facts/

Musolff, A. (2006). Metaphor Scenarios in Public Discourse. Metaphor and Symbol, 21(1), 23–38. https://doi.org/10.1207/s15327868ms2101_2

Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, 83(4), 435–450. https://doi.org/10.2307/2183914

Noë, A. (2024, October 25). Rage against the machine: For all the promise and dangers of AI, computers plainly can’t think. To think is to resist – something no machine does. Aeon. https://aeon.co/essays/can-computers-think-no-they-cant-actually-do-anything

Northoff, G., & Gouveia, S. S. (2024). Does artificial intelligence exhibit basic fundamental subjectivity? A neurophilosophical argument. Phenomenology and the Cognitive Sciences, 23(5), 1097–1118. https://doi.org/10.1007/s11097-024-09971-0

O'Gieblyn, M. (2022). God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning. Knopf Doubleday Publishing Group.

Østergaard, S. D., & Nielbo, K. L. (2023). False responses from Artificial Intelligence models are not hallucinations. Schizophrenia Bulletin, 49(5), 1105–1107. https://doi.org/10.1093/schbul/sbad068

Ouyang, L., et al. (2022). Training language models to follow instructions with human feedback (arXiv:2203.02155). arXiv. https://doi.org/10.48550/arXiv.2203.02155

Pantsar, M., & Fabry, R. E. (2026). Why mental metaphors do not help us understand chatbot mistakes. Synthese, 207(4), 167. https://doi.org/10.1007/s11229-026-05551-8

Pickering, M. J., & Garrod, S. (2013). An integrated theory of language production and comprehension. Behavioral and Brain Sciences, 36(4), 329–347. https://doi.org/10.1017/S0140525X12001495

Pierce, A. E., & Garrison, S. T. (2011). The metaphorical horizon: Between facts and fictions. The International Journal of Interdisciplinary Social Sciences, 5(9), 95–105. https://doi.org/10.18848/1833-1882/cgp/v05i09/59308

Pope, R. (1994). Textual intervention: Critical and creative strategies for literary studies. Routledge.

Putnam, H. (1967). Psychological predicates. Art, Mind, and Religion, 1, 37–48.

Quattrociocchi, W., Capraro, V., & Perc, M. (2025). Epistemological Fault Lines Between Human and Artificial Intelligence. arXiv preprint arXiv:2512.19466.

Ramsay, S. (2011). Reading machines: Toward an algorithmic criticism (1st ed.). University of Illinois Press.

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people. Cambridge University Press.

Rehak, R. (2021). The language labyrinth: Constructive critique on the terminology used in the AI discourse. In P. Verdegem (Ed.), AI for everyone? London: University of Westminster Press.

Rudko, I., & Bashirpour Bonab, A. (2025). ChatGPT is incredible (at being average). Ethics and Information Technology, 27(3), 36. https://doi.org/10.1007/s10676-025-09845-2

Gouveia, S. S., & Wang, Y. (2026). Can generative artificial intelligence be considered a cognitive subject? An analytic analysis. AI & Society. https://doi.org/10.1007/s00146-026-02924-y

Sahebi, S., & Formosa, P. (2025). The AI-mediated communication dilemma: Epistemic trust, social media, and the challenge of generative artificial intelligence. Synthese, 205(3), 128. https://doi.org/10.1007/s11229-025-04963-2

Samuels, L., & McGann, J. J. (1999). Deformance and interpretation. New Literary History, 30(1), 25–56.

Searle, J. R. (1992). The rediscovery of mind. Cambridge, MA: MIT Press.

Sheremeta, O. (2023, June 25). Bug vs. defect: Difference with definition examples within software testing. Testomat.io. https://testomat.io/blog/bug-vs-defect-difference-with-definition-examples-within-software-testing/

Shojaee, P., Mirzadeh, I., Alizadeh, K., Horton, M., Bengio, S., & Farajtabar, M. (2025). The illusion of thinking: Understanding the strengths and limitations of reasoning models via the lens of problem complexity. https://arxiv.org/abs/2506.06941v3

Smith, A. L., Greaves, F., & Panch, T. (2023). Hallucination or confabulation? Neuroanatomy as metaphor in large language models. PLoS Digital Health, 2(11), e0000388. https://doi.org/10.1371/journal.pdig.0000388

Stinson, C. (2020). From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence. Philosophy of Science, 87(4), 590–611.

Thibodeau, P. H., Matlock, T., & Flusberg, S. J. (2019). The role of metaphor in communication and thought. Language and Linguistics Compass, 13(5), Article e12327. https://doi.org/10.1111/lnc3.12327

Tigard, D. W. (2025). On bullshit, large language models, and the need to curb your enthusiasm. AI and Ethics, 5(5), 4863–4873. https://doi.org/10.1007/s43681-025-00743-3

Vallor, S. (2009). The fantasy of third-person science: Phenomenology, ontology and evidence. Phenomenology and the Cognitive Sciences, 8(1), 1–15. https://doi.org/10.1007/s11097-008-9092-4

Weizenbaum, J. (1966). ELIZA-a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.

Wilkinson, C., Yawney, J., & Gadsden, S. A. (2026). Explaining explainability: A comprehensive survey on explainable artificial intelligence and relevant industry applications. Intelligent Systems with Applications. https://doi.org/10.1016/j.iswa.2026.200647

Wilkinson, S., Green, H., Hare, S., Houlders, J., Humpston, C., & Alderson-Day, B. (2022). Thinking about hallucinations: Why philosophy matters. Cognitive Neuropsychiatry, 27(2–3), 219–235. https://doi.org/10.1080/13546805.2021.2007067

Winston, P. H., & Horn, B. (1975). The psychology of computer vision (Vol. 67). New York: McGraw-Hill.

Wyatt, S. (2021). Metaphors in critical Internet and digital media studies. New Media & Society, 23(2), 406–416. https://doi.org/10.1177/1461444820929324

Zednik, C. (2021). Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy & Technology, 34(2), 265–288. https://doi.org/10.1007/s13347-019-00382-7

Typologies of Explanation

  • Brown, R. (1963). Explanation and Experience in Social Science. Routledge.

Robert Brown's classic work distinguishes between different modes of explanation: genetic (how it came to be), functional (how it works), intentional (why it "wants" something), dispositional (why it "tends" to act), and so on.

The system instructions include examples using the following table:

| Type | Definition | Lens |
| --- | --- | --- |
| Genetic | Traces development or origin. | How it came to be. |
| Functional | Describes purpose within a system. | How it works (as a mechanism). |
| Empirical | Cites patterns or statistical norms. | How it typically behaves. |
| Theoretical | Embeds behavior in a larger framework. | How it's structured to work. |
| Intentional | Explains actions by referring to goals/desires. | Why it "wants" something. |
| Dispositional | Attributes tendencies or habits. | Why it "tends" to act a certain way. |
| Reason-Based | Explains using rationales or justifications. | Why it "chose" an action. |
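The typology above can be encoded as data and rendered into a system-instruction block. The labels and wording follow the table; the data structure, function name, and rendering format are assumptions for illustration, not the project's actual prompt.

```python
# Sketch: Brown's explanation typology encoded as data that could be
# rendered into system instructions. Wording follows the table above;
# the rendering format itself is an assumption.

EXPLANATION_TYPES = {
    "Genetic": ("Traces development or origin.", "How it came to be."),
    "Functional": ("Describes purpose within a system.", "How it works (as a mechanism)."),
    "Empirical": ("Cites patterns or statistical norms.", "How it typically behaves."),
    "Theoretical": ("Embeds behavior in a larger framework.", "How it's structured to work."),
    "Intentional": ("Explains actions by referring to goals/desires.", 'Why it "wants" something.'),
    "Dispositional": ("Attributes tendencies or habits.", 'Why it "tends" to act a certain way.'),
    "Reason-Based": ("Explains using rationales or justifications.", 'Why it "chose" an action.'),
}

def render_typology() -> str:
    """Render the typology as bullet lines for a system-instruction block."""
    return "\n".join(
        f"- {name}: {definition} Lens: {lens}"
        for name, (definition, lens) in EXPLANATION_TYPES.items()
    )
```

Keeping the typology as structured data rather than prose makes it easy to ask the model to tag its own explanations by type, or to prompt for one mode of explanation at a time.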

License

License: Discourse Depot © 2025 by TD is licensed under CC BY-NC-SA 4.0