We're far from AGI

LLMs are interesting and useful. But hype and expectations are high, while discussions of LLMs and AGI ignore many areas of human cognition.

(I'll start with a huge thank you to Joe Paxton at Google, for discussions about the topics here. Not that Joe agrees about everything, but I enjoy the conversation!)


Although I've worked mostly in Tech, I'm a psychologist and philosopher. I have a PhD in clinical + personality psychology, an MA and incomplete PhD in modern philosophy, a BA in psychology and religion ... and two decades in Tech research.

In this long post, I list a few topics in philosophy and psychology that are missing from most discussions about LLMs — especially as to whether LLMs or other AI systems display AGI (artificial general intelligence).

First of all, I'll stipulate that it all depends on what you mean by "general intelligence." If you assume that general intelligence means "plausibly passing a Turing test, under some conditions" then we're there, case closed. (Maybe we've been there since 1966 with Eliza, which you can try here.)

Yet "general intelligence" is surely much broader than that. There are many aspects of human intelligence and cognition that are not amenable to a Turing test. When we broaden this to consider animals, the set of cognitive capabilities is even larger. We should consider this broader range of intellectual capabilities before drawing conclusions about general intelligence.

My goal here is to give an admittedly incomplete list of several aspects of cognition that are related to intelligence and yet are missing from discussions of LLMs and AGI. I don't argue that all of these are required for general intelligence, but rather that their existence means their role in general intelligence should be considered.

I can also head off one potential misunderstanding: I'm not saying that LLMs are bad or useless because of these limitations. I'm simply arguing a longer form of the following: (1) we don't understand much of human cognition, (2) many of the things we don't understand are missing from LLMs, so (3) it is premature to say that LLMs have anything like AGI. I have separate thoughts about how and when LLMs may be useful, but that is another discussion (e.g., this post looks at using LLMs to learn coding).

FWIW, I take a quite different tack from work such as Gardner's "multiple intelligences" (Frames of Mind, 1983). I'm not interested in "kinds" of intelligence as much as in structural and philosophical aspects, such as intentionality. Also, I don't mean "intelligence" in the sense of what an IQ test measures; the discussion here is much broader. However, I have written separately about IQ tests for AI vs. humans.

Here goes! Following is a non-exhaustive list of areas that are mostly missing in discussions of LLMs and AGI. I mention a few classic texts, too, not because those reflect the most recent or best work, but because they are works one should know.


Group 1: Philosophical Issues

These are classic areas of philosophical theory that bear on how we use language and perform intentional acts.

  • Intentionality (expressed, perceived; assumed). When we say or do something, the act includes an intention to refer to something outside the mind, plus a shared social context of understanding that intention. Suppose I say, "Watch out!" The intention varies dramatically according to whether I am hiking with a partner; observing a nearby aircraft while sitting in a copilot seat; arguing with my spouse; or confronting someone while I'm holding a baseball bat. The other person will (usually) recognize the intentional object, and this shared intentionality forms a core part of the linguistic exchange. Examples are everywhere and obvious; in philosophy, Searle's Speech Acts goes into such issues in detail. LLMs have no intentionality because they do not refer to things, or have states of mind, but respond only to other words — and yet humans project intentionality onto them.

  • Truth orientation. This aspect is discussed for LLMs but bears repeating, and it is similar to the previous point about intentionality. A general assumption of human linguistic interaction is that statements have a relationship to truth (to express the truth, or to lie, or to evade, or to mislead without lying, and so forth). LLMs do not have any such orientation, by design. LLM statements are probabilistic utterances with no relationship to truth. (LLMs may produce true statements, but truth — and a relationship to true states of affairs — is not coded, understood, or intended by them. A toy sketch after this list illustrates the point.)

  • Ethics. One might dispute whether LLMs are moral agents, yet LLMs have no ethical concepts that assist them with moral behavior. Such concepts are not part of their design, training, or capacity. They might do things that are right or wrong, but they do not have concepts to explain why those things are right or wrong. Yet ethics — our principles, reflections, motivations, and general thinking — is a core part of human intelligent reasoning. (It is unknown and perhaps unknowable whether there is anything like ethics in cognitively complex animal species.) We cannot say definitively whether or how ethics relates to intelligence, but LLMs do not operate with conceptual ethical capacities.
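To make the "truth orientation" point concrete, here is a deliberately tiny sketch of likelihood-driven text generation: a toy bigram model in Python. The corpus, counts, and generate function are invented purely for this illustration and bear no resemblance to a real LLM's scale or architecture; the analogy is only that generation samples the next token from a learned probability distribution, and nothing in the loop represents, checks, or intends truth.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "language model" built from a tiny made-up corpus.
corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the moon is made of cheese .").split()

# "Training": count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation word by word from the learned frequencies.
    Nothing here checks whether the output is true; it only follows
    the probabilities observed in the training text."""
    word, out = start, [start]
    for _ in range(length):
        options = counts.get(word)
        if not options:
            break
        words, freqs = zip(*options.items())
        word = random.choices(words, weights=freqs)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the moon is made of cheese ." -- fluent, not fact-checked
```

Real LLMs replace the frequency table with a neural network trained on enormous corpora, but the generation step is still "pick the next token according to learned probabilities," which is why fluency and truth can come apart.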


Group 2: Philosophical + Cognitive Questions

The following areas raise both psychological questions about how we experience these phenomena and philosophical questions about their importance for human knowledge.

  • Embodiment. Much of human cognition depends directly on bodily experience. In the most direct sense, one's bodily state directly influences cognition (and vice versa). Less directly, it seems certain that human concepts depend in deep ways on the experience of being embodied; and it is extremely unclear what cognitive changes would appear in a differently-bodied or non-bodied entity. One approach is taken by Varela, Thompson, and Rosch, The Embodied Mind. As systems without bodies, LLMs have no embodied cognition; it is unclear how important that is.

  • Time. Humans perceive time, and might be said to be "time beings" (as the Zen philosopher Dōgen put it). However, much about time — as both a physical phenomenon and a psychological phenomenon — is unclear (see Carlo Rovelli, The Order of Time). We do not know how animals perceive time or the extent to which the perception of time is significant for intelligence. Ted Chiang's beautiful short story, "Story of Your Life," explores this; read it and then see the superb movie version, Arrival. LLMs have no cognitive existence within time and thus are very different from humans.

  • Concern. A recurring theme in existential philosophy is concern (also expressed as care, or that things matter, or as existential anxiety, or left untranslated as the German Sorge). Concern is related to intentionality but is much broader in expressing that human existence involves questions of being about something and of manifesting concern about ourselves, others, and the world. LLMs have no concern, and we do not know the degree to which this limits or influences general cognition.

  • Emotion. Human emotions are intricately intertwined with cognition — so much so that there is a journal dedicated to investigating the relationship (Cognition and Emotion). Emotions are arguably a part of general intelligence, and there are many unanswered questions about how, when, and to what extent emotions are important in general cognition. An early title in this area is R. Picard, Affective Computing (1997). Current AI and LLMs do not have emotional systems, and it is unclear how this limits their general abilities.

  • Social embeddedness. Human intelligence (as well as animal intelligence) exists in social contexts that structure the expression of intelligence in obvious and not-so-obvious ways. For instance, intentionality and ethics (discussed separately here) only make sense with respect to presupposed social arrangements. We also know that people vary widely in how they engage and relate to social structures. It is unclear how social arrangements might apply to AGI, and it is also unclear to what extent human-like social structures are necessary for human-like intelligence. LLMs and other AIs are not socially embedded, and this poses unknown limits on their cognitive processing and abilities.


Group 3: Cognitive Processing

  • Reflective cognition. Humans not only know things, but know that they know, and can reflect on what they know and don't know (as well as many other things besides knowledge as such). It is unclear to what extent such reflective cognition is a prerequisite for general intelligence, and it is unclear to what extent animals have similar, quite different, or no reflective capabilities. (This is related to consciousness but is more clearly definable.) LLMs and other AIs do not have reflective cognition, and we don't know how that limits their capacities.

  • Higher-order cognitive operations. We not only learn but also learn to learn, and intelligence is manifest in higher-order skills where we change, adapt, reflect, exemplify, teach, and so forth. LLMs are trained, but training is different from learning how to learn; and when we consider training, LLMs are better viewed as an extension of human cognition rather than as independent cognition.

  • Unconscious processing. A large proportion of human cognition is unconscious; sometimes that is accessible to consciousness (e.g., ideas "pop into" consciousness) but sometimes it is not (e.g., how our brains construct a visual gestalt from piecemeal saccades of retinal impressions). We have all had the experience of consciously setting aside a problem only to have a solution or insight appear all at once later. Is consciousness required for general intelligence? It is unclear how to think about the importance of consciousness and various types of unconscious processing, and whether they are crucial or epiphenomenal (although it is dated and I have many disagreements with its premises, Dennett's Consciousness Explained provides great examples and an excellent overview of this topic). The dichotomy of conscious/unconscious does not apply to LLMs, and it is unclear what that means for general cognition.

  • Innate cognition (or, if you prefer, unconscious knowledge). Sometimes we know things ... and yet we don't know that we know them. Classic discussions involve language acquisition, but here's a simpler example. For two years, our dog would not jump into the back seat of our car; we tried but couldn't teach him to do it. However, one day he saw a deer jump over a fence. When we returned to the car, he jumped in! He had innate knowledge of how to jump, but needed to see a four-legged example to unlock it. Cognition is structured by such innate abilities — genetic, epigenetic, and/or culturally acquired — but there is not a clear understanding of how such factors work at a general level. We do not know what kinds of innate cognition might be important for LLM general intelligence.


Group 4: Other Kinds of Knowledge

  • Conceptual and metaphorical knowledge. Humans use concepts, metaphors, and the like to generalize knowledge and apply it in new situations where there may be no pre-existing examples. The associations involved in such knowledge may use fuzzy relationships, puns, etc. Wittgenstein posed many questions related to this in his later work. (A thought-provoking episode of Star Trek: The Next Generation is recommended: "Darmok," the source of the phrase "Darmok and Jalad at Tanagra." A more recent and linguistically more sophisticated work is Chiang's "Story of Your Life," mentioned above.) LLMs do not distinguish or work with conceptual and metaphorical knowledge; this poses unknown limitations on their abilities.

  • Transcendental knowledge (as opposed to experiential knowledge). There is controversy about this among philosophers, but IMO it is clear that humans have access to knowledge that does not depend on direct experience. I'm not talking about metaphysical or supernatural events (see the next point), but rather about "transcendental" in the technical philosophical sense: knowing things about the limits of reason in itself (see Kant), the truths of mathematics (see Gödel), and other such aspects of the world and the mind. These are interesting for AGI in two ways: (1) whether the underlying intellectual capabilities (e.g., to make transcendental inferences) are important for general intelligence, and (2) what such knowledge says about the limits of AGI (e.g., Kant's limits of reason; Gödel's view that we can perceive truths despite the formal incompleteness of symbolic systems). LLMs do not have either experiential or transcendental knowledge, as those are usually conceived, but only probabilistic relationships among training items. The cognitive implications are unclear.

  • Transcendental knowledge (metaphysical). This is the realm occupied by human experiences described by terms such as religious experience, spiritual awakening, enlightenment, encountering the absolute, and so forth. These vary from momentary experiences of awe to life-changing episodes of insight and conversion. For purposes here, we do not have to believe or decide that such experiences are true — that is an enormously complex set of questions — but rather simply observe a fact: for many people, such experiences are important, sometimes the most important things in their lives. This poses many questions about psychological mechanisms, and about whether some set of cognitive abilities (or mistakes, if you prefer) is intrinsic or valuable to general intelligence. (The classic text is William James, The Varieties of Religious Experience.) It is technically unknowable whether LLMs have transcendental experiences — but I can't think of any way they could, nor any way we might verify that they do (apart from naive textual literalism). More to the point, we don't know how important transcendental experiences are with regard to general cognition.


Finally

  • Human differences and disabilities. People show differences in each of the areas mentioned above. Each cognitive area takes many forms, and in any person, any sort of cognition may be quite different from what I describe (or might be "missing"). These areas are more like Wittgenstein's "family resemblance" across human cognition and are not a precise set or a normative description. AGI assumes a reduced and yet normative description of cognition. Writings about "AGI" do not take into account how differences and diversity broaden both the actual expression and the theoretical understanding of general cognition.

  • There are other areas we don't know about: non-human areas of general intelligence. In the list above, a few categories apply to non-human animals, but — as far as we know — most of them do not. That poses questions that include: Are there analogs in animal intelligence that align with such "missing" categories but are not perceptible to us? Are there other categories of such non-linguistic experiences that are unique to various animal species? What do those similarities and differences say about general intelligence? What differences might apply or not apply to computerized intelligence? Discussions of LLMs and AGI seem to assume that only human-like linguistic (and similarly symbolic) output is significant to our conception of general cognition.


Conclusion

Are you thinking, "that's a long list"?

That's the point. There are many areas of cognition that we do not understand well. These are either obviously or arguably related to general intelligence, and yet they are missing from discussions of LLMs and AGI.

Anyone who makes claims about AGI — while ignoring intentionality, time, concern, embodiment, or many of the other items in the list — is unprepared to talk about general intelligence. The issues on the list above are real and reflect important questions and concepts for intelligence in humans and (sometimes) in animals. One should not dismiss them based on naive presumptions about language and cognition. That doesn't mean that one must take any particular stance on them, but rather that a stance of ignorance is a poor place to begin.

Now, LLMs may be useful even if they have nothing to do with AGI. Nothing in the concerns above argues otherwise.

The next time you hear claims that LLMs are approaching "general intelligence", or that "cognition is solved" ... stop and ask, "Does that include ___?" Maybe this list will help. And if you see claims that LLMs are "very intelligent", you should be very skeptical (for AI performance on IQ tests in particular, see this post).

Do you disagree with the list? No problem. It is incomplete and includes controversial areas. My definitions are given at a level that I hope is approachable by non-specialists; I do not intend to define every term or observation with academic rigor. In short, it's an overview to spur thinking and discussion!

I hope you enjoyed this theoretical departure from the usual quant topics!

A Few Readings

If you're interested in the themes here, mostly related to AI and AGI as opposed to LLMs as such, here are a few things I recommend — starting with two classics:

D. Hofstadter (1979), Gödel, Escher, Bach. If you are interested in AI and/or cognition and haven't read this, you have a treat in store.

H. Dreyfus (1992). What Computers Still Can't Do. A philosophical take, with great examples, on the limits of artificial reason and capabilities.

A. Tchaikovsky (2015). [fiction] Children of Time (and sequels, Children of Ruin, Children of Memory). Outstanding science fiction series with fascinating depictions of both AI and animal intelligence.

M. Mitchell (2019). Artificial Intelligence: A Guide for Thinking Humans. A computer scientist's reflections and explanations that de-hype AI. (I'm waiting for the second edition!)

S. Wolfram (2023). What Is ChatGPT Doing ... and Why Does It Work? A basic but detailed explanation of how LLMs work — and, indirectly, why they don't have higher-order concepts, etc. They are large and complex, but not mysterious in principle.