Artificial Intelligence Does Not Do Human Things
Artificial Intelligence is just a calculating machine (i.e., a “computer”). Nothing it does is “thinking”; it is calculating in response to a human who has programmed it to carry out a command (e.g., by giving it a prompt). AI can make future calculations based on past calculations (which is not “learning”); it can make novel calculations based on data it has been fed (which is not “guessing”); it can calculate pixels to generate images based on image data it has been fed (which is not “art”). Even Moltbook does not show us AIs engaging in “conversation,” but rather AIs calculating what a conversation between AIs should look like (because humans gave them a command to do this). The phrase “AI Agent” is an oxymoron, because an AI has no agency (see the first sentence of this post).
AI’s human creators have fallen victim to magical thinking: believing, as in a fairy tale, that their machines will one day “think” for themselves and achieve “general intelligence,” proving to be “smarter” than humans. But human intelligence has nothing to do with how smart you are. The fact that an AI can win at chess and pass college entrance exams is not a demonstration of intelligence. It is a demonstration of calculating power and nothing more. AI’s creators are working to make it simulate human intelligence, but this is just monkey see, monkey do.
Human intelligence is embodied; it is the sum total of experience and feeling gained through bodily senses, including emotion, and AI possesses neither body nor emotion. True intelligence has more to do with wisdom than with smartness, and AI will never be wise, because AI has no body through which to experience life and gain wisdom. AIs do not accumulate experiences; they only accumulate data, with which they make calculations. Those who think AI is doing anything more than this are simply not very clever.
The naming of the thing—“artificial intelligence”—produces in humans a mistaken perception of the thing. The linguist Benjamin Lee Whorf gave us a perfect example of this kind of mistaken perception. Whorf was a protégé of the well-known anthropological linguist Edward Sapir, but before that he worked for a fire insurance company, where his job was to analyze the causes of fires in workplaces. Investigating an explosion at a storage facility for spent gasoline drums, which are in fact filled with flammable vapors, he realized that because the drums were labeled “empty,” the workers believed so completely in their emptiness that they smoked among them and carelessly tossed their cigarette butts around the room—with disastrous results.1
The naming of the thing—“empty gasoline drum”—produced in the minds of these workers an idea that had no basis in reality. Similarly, the makers of AI gave it the name “intelligence,” and after they named it, they came to believe that the thing is actually intelligent; they came to believe in it. And now they are behaving accordingly, i.e., stupidly, in relation to it—just like Whorf’s recklessly smoking workers.
An AI is not a human, is not equivalent to a human, is not a replacement or a substitute for a human, and giving it the name “intelligence” does not mean it will ever have the same kind of potential for intelligence that a human is born with. The realm of things that an AI can do that approximate what a human can do is very narrow, limited to anything that can be accomplished by a calculating machine. A calculating machine cannot autonomously create anything, cannot laugh or cry, cannot feel emotion, cannot eat or piss or take a shit, cannot generate new humans out of its own body.
Those who measure AI against humans have a kind of tunnel vision that doesn’t expand how they think about AI, but rather narrows how they think about humans, reducing humans to mere calculating machines (as Christian Wiman wrote in the December 2025 issue of Harper’s: “What is AI but the culmination of the notion that the brain is a machine?”). Sure, there is a part of our brain that calculates in a machine-like manner. But do you see that as the sum total of your being?
We need a more robust philosophy of artificial intelligence. I don’t look for it among the people who are currently developing AI; their ideas of being are impoverished.
See you IRL,
Patty A. Gray

