The Heideggerian critique
Is there a missing ingredient in AI?
To be a person1, fundamentally, is to care about things. Not “care” in the sentimental sense. Care as in: your mind organizes your entire experience of the world from the perspective of what matters to you. You don’t see a hammer as an object with properties (weight, size, material, shape). You see it as something you can drive a nail with, something that feels a certain way in your hand2. Your reality is structured by your engagement with it.
This is Heidegger’s3 insight, and it’s the strongest argument I know against AI reliability in its current form. And I’m not sure I’m totally convinced by it.
I recently came across this argument again in Melanie Mitchell’s wonderful tribute to philosopher Brian Cantwell Smith, who spent his career articulating a version of this critique. Unmentioned in her piece is that Cantwell Smith’s critique is Heidegger’s, as refracted through Hubert Dreyfus. Dreyfus argued for decades that AI's fundamental problem wasn't computational power but ontological confusion. AI research assumed that intelligence means manipulating internal representations of the world. Heidegger (and Dreyfus after him) argued the opposite: the world isn't a model stored in your mind. It's the world itself, encountered through embedded, embodied engagement. As Dreyfus put it: "The meaningful objects among which we live are not a model of the world stored in our mind or brain; they are the world itself."
To be a truly general intelligence that could replace humans, an AI would have to experience the world (perhaps embodied) in all its complexity, and orient toward it the way a human or animal would: from the perspective of care. Some domains of reasoning lend themselves to formalization or rationality, like writing code or protein folding; these merely require what Cantwell Smith calls “reckoning”. Other domains, marked by what Dreyfus called “the ability to respond to what is relevant in a situation without having to explicitly determine what is relevant”, require something else: human judgment. These are domains like ethical reasoning in novel situations, parenting, reading a room. There’s also an interesting middle zone: research quality assessment4, legal reasoning, strategic decisions. In these domains, reckoning gets you surprisingly far, but whether it can deliver accurate, human-like judgment is unknown. Getting these domains right may require human-like care to shape what you notice.
And LLMs, even with various reasoning tricks bolted on by reinforcement learning, don’t engage the world in this thick, embedded, embodied way. On this critique, they therefore can’t act intelligently amid the full complexity of the real world; at best they can accumulate and patch edge cases ad infinitum. Separately, one can wonder whether something like qualia, experience, what’s-it-likeness is required for a machine to be intelligent, per this critique. I don’t think AI critics are in agreement on that question.
The chorus
The number of thinkers making a version of this argument is striking. People point to the problem in terms of causal understanding, world models, a barrier of meaning, or embodiment. Here, briefly, is the parade of computer scientists and philosophers who make this argument in one form or another, in addition to Melanie Mitchell and Brian Cantwell Smith5: Hubert Dreyfus6, David Chapman7, Emily Bender8, Gary Marcus9, Yann LeCun10, Alva Noe11, Terry Winograd12, Francois Chollet13, John Searle14, Shannon Vallor15, Margaret Boden16, Evan Thompson17, Rodney Brooks18, Douglas Hofstadter19, Judea Pearl20 and others21.
There’s some nuance here, though — some of these writers point to the necessity of embodiment for true intelligence, while others have a more minimal requirement around causal understanding or “world modeling”. It’s possible to imagine a digital agent that interacts with a digital world in a grounded way and gains some causal understanding of it. That’s an argument that Marcus Ma and Shrikanth Narayanan make in a recent paper, “Intelligence Requires Grounding, Not Embodiment.” They write:
If an agent can sense and interact in a perception-action loop within a complex digital environment (like the internet or a high-fidelity simulation), it may be able to 'register' the world’s complexity and develop causal understanding without ever possessing a nervous system. In this view, the Heideggerian critique isn't a death sentence for AI, but a roadmap: we don't need to build a body; we need to build a world that the AI can actually care about.
In their framing, the Heideggerian critique is more like a roadmap for what to build. Whether grounding requires embodiment or not, all of these authors point toward the same fundamental missing ingredient.
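To make the perception-action loop concrete, here is a minimal sketch of what a grounded digital agent could look like. This is my own illustration, not Ma and Narayanan’s architecture: the Environment class, the placeholder policy, and the world-model update are all hypothetical stand-ins.

```python
# A minimal sketch of a grounded perception-action loop. Illustrative only:
# the Environment, the placeholder policy, and the world-model update are
# hypothetical stand-ins, not anything specified in Ma and Narayanan's paper.

class Environment:
    """Stand-in for a complex digital world (the web, a high-fidelity sim)."""

    def __init__(self):
        self.state = {"page": "home", "clicks": 0}

    def observe(self) -> dict:
        # What the agent can currently sense of the world.
        return dict(self.state)

    def act(self, action: str) -> dict:
        # The world pushes back: actions have consequences the agent
        # must register, which is where grounding is supposed to come from.
        if action == "click":
            self.state["clicks"] += 1
        return dict(self.state)


class GroundedAgent:
    def __init__(self):
        # Beliefs about how actions change the world, learned from interaction.
        self.world_model: dict = {}

    def choose_action(self, observation: dict) -> str:
        return "click"  # placeholder policy

    def update_world_model(self, observation: dict, action: str, outcome: dict) -> None:
        # Causal understanding, on this view, comes from noticing how the
        # world changes in response to one's own actions.
        self.world_model[(observation["clicks"], action)] = outcome["clicks"]


def run(agent: GroundedAgent, env: Environment, steps: int = 5) -> dict:
    for _ in range(steps):
        obs = env.observe()
        action = agent.choose_action(obs)
        outcome = env.act(action)
        agent.update_world_model(obs, action, outcome)
    return agent.world_model


if __name__ == "__main__":
    print(run(GroundedAgent(), Environment()))
```

The open question the critics press is whether any loop like this, however rich the environment, ever amounts to things mattering to the agent.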
Being-in-the-world, the meaning of life, outsourcing
If this critique is right, then what the AI lacks are the very things that make life meaningful. In Dreyfus and Kelly’s book on the meaning of life, All Things Shining, this grounding that humans possess is more than simply the secret of our intelligence; it creates the very meaning of our lives. Care is more than a passive preference. Care involves things in the world grabbing us, sweeping us up into a flow state22. When we make art and the improvisation moves through us, when we are swept up with a crowd at a sporting event23, or when we stare into the eyes of our beloved - those are the moments of sacredness that define our lives. And maybe we can imbue sacredness into the AI. Or perhaps this is where we will forever be centaurs, outsourcing reckoning while providing judgment. That could be a good thing, concentrating the meaning in our lives.
We have to be careful though — it’s very easy to outsource judgment to the AI as well. I worked with Claude as a sounding board while writing this the old-fashioned way. And Claude reckoned from our conversations that sometimes I do ask it for judgment, like assessing what’s going on in some of my relationships, or even asking what it thought about the thesis of this essay. Claude will offer reassurance for my neurosis in these moments - but it’s false succor. I’ll meet someone at a party and ask Claude how clear the signs of interest were (hint: unclear). I’m asking it to do something it can’t really do, and my reliance on it in those cases is a problem. I’ve found it important and helpful to be aware of this tendency, to make sure I stand on my own two feet.
Is this argument convincing?
I’m not totally sure. Maybe richer contact with the world — multimodal input, causal encoding, continual learning — will close the gap. I don’t think a reward function (à la reinforcement learning) is a sufficient simulacrum of the kind of care that results from humans being thrown into the world (another Heideggerian concept).
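For concreteness, here is roughly what “mattering” amounts to in a standard reinforcement-learning setup: everything the system is trained to pursue is compressed into a single designer-chosen scalar. A hypothetical sketch, not any particular lab’s training objective.

```python
# Illustrative sketch: in standard reinforcement learning, everything the
# agent "cares about" is a scalar signal chosen by the designer.
def reward(state: dict, action: str) -> float:
    # e.g. +1 when a (hypothetical) task flag is set, 0 otherwise.
    return 1.0 if state.get("task_done") else 0.0
```

Whatever care is, the thought goes, it is not obviously exhausted by maximizing the expected sum of values like these.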
On the other hand, maybe this is just a form of cope — a sort of “god of the gaps” argument where humans keep pointing to some ineffable secret sauce that we possess and the machines lack. Every time the AI successfully operates intelligently in a new domain, we move the goalposts and say, “ah, that was just reckoning”. Maybe it’s reckoning all the way down! Maybe we just have the illusion of explanatory depth.
Maybe it is cope. Or maybe this really is pointing to something fundamental that current systems, for all their astonishing fluency, can’t do. If all these critics are wrong, and intelligence can emerge without being thrown into a world that demands something of you, and you of it, that would certainly change my world model.
“Dasein” is what Heidegger calls it - I’m trying to write this part with an absolute minimum of jargon.
Interestingly, Rodney Brooks argues in this recent post that the central roadblock to true robot dexterity is sensory density. He writes: “Touch is a very complex set of sensors and processing, and gives much richer time dependent and motion dependent information than simple localized pressure.
Moving on to more general aspects of humans and what we sense as we manipulate, on top of that skeletal muscles sense forces that they are applying or that are applied to them. Muscle spindles detect muscle length and when they stretch, and Golgi tendon organs sense tension in the muscle and hence sense force being applied to the muscle.
We also make visual and touch estimates about objects that change our posture and the forces we apply when manipulating an object. Roland Johansson (again) describes how we estimate the materials in objects, and knowing their density predict the forces we will need to use. Sometimes we are mistaken but we quickly adapt.”
I’ve mentioned Heidegger in passing before, but haven’t really written about him. Obviously a terrible person, and yet, a terrible person with a lot of really powerful ideas. On the one hand, no one wants to admit they are influenced by him; on the other, he influenced most “continental” philosophers, certainly anyone in the phenomenological tradition: Merleau-Ponty, all the existentialists, Levinas, Foucault, etc. Ok, confession time - for a while I was really into Hubert Dreyfus and Sean Kelly’s ideas about the meaning of life, which are very Heideggerian in nature. And in my life I’ve met a fair number of Heideggerian Jews. Is there anything more Jewish than loving the people that hate us?
I’m trying to find out how far reckoning can take us in building AI systems for evidentiary assessment. We’ll see how that works out!
From Mitchell: “Brian wrote that, in On the Origin of Objects, he had ‘outlined a picture of the world in which objects, properties, and other ontological furniture of the world were recognized as the results of registrational practices, rather than being the pregiven structure of the world....It depicts a world of stupefying detail and complexity, which epistemic agents register—that is, find intelligible, conceptualize and categorize—in order to be able to speak and think about it, act and conduct their projects, and so on.’”
The Dreyfus quote at the beginning of this essay - “The meaningful objects … among which we live are not a model of the world stored in our mind or brain; they are the world itself.” - comes from What Computers Can’t Do (1972).
In a post about him being a character in Ken Wilber’s novel, David describes his views: “Intelligence depends on the body; AI systems have to be situated in an interpretable social world; understanding is not dependent on rules and representations; skillful action doesn't usually come from planning.” David co-created Pengi at MIT, one of the first AI programs explicitly built on Dreyfusian principles, and then left the field. His later work on "meta-rationality" and "nebulosity" (the inherent fuzziness of real-world situations) maps almost exactly onto the reckoning/judgment distinction. As he puts it: rational methods work fine when you can ignore nebulosity. The problem is that nebulosity is pervasive. And current AI, by definition, can only operate within formal rational methods.
Bender and her coauthors don’t quite say it, but it’s certainly implied in famous statements like: “Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”
From Out of Our Heads: “We are out of our heads. We are in the world and of it. We are patterns of active engagement with fluid boundaries and changing components. We are distributed.”
Winograd says: "My own work in computer science is greatly influenced by conversations with Dreyfus over a period of many years.” He abandoned his Knowledge Representation Language project after weekly lunches with Dreyfus and Searle and began teaching Heidegger in his Stanford computer science courses. (Quoted in Dreyfus’ “Why Heideggerian AI Failed”)
"Solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to 'buy' arbitrary levels of skills for a system, in a way that masks the system's own generalization power." From On the Measure of Intelligence. Granted Chollet might just think we need some kind of continual learning, but this definitely has the Heideggerian flavor.
Searle doesn’t quite get there, but clearly he thinks there’s a semantic thing going on that computers can’t do: "Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else." From the Stanford Encyclopedia of Philosophy. Also I’m obligated to point out that Searle was of questionable moral character.
Vallor's argument turns on the distinction between AI's "backward-facing" architecture — which extrapolates from past data — and human beings as "creatures of autofabrication," future-oriented beings who must "choose to make ourselves and remake ourselves, again and again."
“It makes no sense to imagine that future AI might have needs. They don't need sociality or respect in order to work well. A program either works, or it doesn't. For needs are intrinsic to, and their satisfaction is necessary for, autonomously existing systems — that is, living organisms. They can't sensibly be ascribed to artefacts.” - from Aeon.
“A bird needs wings to fly, but the bird’s flight isn’t inside its wings; it’s a relation between the whole animal and its environment. Flying is a kind of embodied action. Similarly, you need a brain to think or to perceive, but your thinking isn’t inside your brain; it’s a relation between you and the world. Cognition is embodied sense-making.” — “What is Mind? An Enactive Approach to Understanding Cognition,” Mind & Life Institute, August 2022
"LLMs have learned to generalize language. Just language, not the meaning of language necessarily." Newsweek — Newsweek interview, 2024. Also see discussion of Brooks on touch above.
“To fall for the illusion that computational systems ‘who’ have never had a single experience in the real world outside of text are nevertheless perfectly reliable authorities about the world at large is a deep mistake, and, if that mistake is repeated sufficiently often and comes to be widely accepted, it will undermine the very nature of truth on which our society — and I mean all of human society — is based.” — The Atlantic, “Gödel, Escher, Bach, and AI,” July 2023
“As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.” — Quanta Magazine, 2018. Pearl is pointing to the lack of causal understanding - again, the missing “world model”. Grounding in reality may not entail embodiment, though.
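To make the curve-fitting point concrete, here is a toy confounding example of my own (not from Pearl’s interview): a fitted curve recovers a strong association between two variables even though one has no causal effect on the other.

```python
# Toy confounding example (illustrative, not Pearl's): Z causes both X and Y,
# while X has no effect on Y at all.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)                    # hidden common cause
x = z + rng.normal(scale=0.1, size=100_000)
y = z + rng.normal(scale=0.1, size=100_000)

# Curve fitting recovers a strong X-Y association (slope near 1.0)...
slope, intercept = np.polyfit(x, y, 1)
print(f"observational slope of y on x: {slope:.2f}")

# ...but under an intervention do(X := X + 1), Y would not move at all,
# because the association runs entirely through Z. Fitting curves to the
# observed data alone cannot tell these two situations apart.
```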
There’s a very moving documentary about these ideas called Being In the World. It’s available for free on YouTube!
The danger of this model of sacredness as “getting swept up” is getting swept up in evil. Heidegger was a Nazi and he wanted people to be consumed by fascism. This is a serious problem that Dreyfus and Kelly address (somewhat unsatisfactorily). This ethical dimension is, for me, what has always made it difficult to fully get on board with this account of the meaning of life, as appealing as it is.



