
One criticism of current LLMs is that, out of the box, they can’t reliably solve the kinds of sophisticated math and logic problems that humans seem to be capable of. In essence, while humans are Turing-complete, today’s LLMs are not.

But wait… are humans even Turing-complete?

We like to think that humans are great at complex reasoning. But put a person in a blank room with nothing to write on, and they’re getting nowhere fast.

Modern human cognition depends on our abilities to write, diagram, or otherwise externalize thought so that we can communicate ideas to others, or simply extend our own working memory.

Disclosure: Some of the links below are affiliate links. This means that, at zero cost to you, I will earn an affiliate commission if you click through the link and finalize a purchase. I wrote this article without affiliate links in mind.

Turing Machines

A Turing Machine has the following pieces (sketched in code just below the list):

  • a tape
  • a read/write head for the tape
  • a finite state machine that controls the read/write head
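
To make those pieces concrete, here is a minimal sketch of the model in Python. It is only an illustration: the binary-increment rule table and the names (RULES, run) are mine, not anything from Turing’s paper. The comments note which human activity each piece will turn out to formalize.

```python
# A minimal Turing Machine, using Turing's own mapping:
#   tape                 = the paper (one-dimensional, divided into squares)
#   read/write head      = the eyes and the hand holding the pencil
#   finite state machine = the finite "states of mind" of the human computer

from collections import defaultdict

BLANK = "_"

# Rule table for a machine that adds 1 to a binary number (my own example).
# (state, symbol read) -> (symbol to write, move 'L' or 'R', next state)
RULES = {
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", BLANK): (BLANK, "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "done"),
    ("carry", BLANK): ("1", "L", "done"),
}

def run(tape_str, state="right", halt_state="done"):
    tape = defaultdict(lambda: BLANK, enumerate(tape_str))  # the paper
    head = 0                                                # eyes + pencil
    while state != halt_state:                              # states of mind
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1)).strip(BLANK)

print(run("1011"))  # 11 + 1 = 12, i.e. "1100"
```

Notice that all of the interesting memory lives on the tape; the controller itself only ever occupies one of three states.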

But have you ever wondered where this model came from? I had never read Turing’s original paper until I came across some commentary about it in the conclusion to Edwin Hutchins’s book about distributed cognition: “Cognition in the Wild.”

This is the key takeaway: Humans are not Turing Machines. Humans paired with pencil and paper are.

Here’s what Turing has to say about his original formalism [Turing 1936]. He is attempting to formalize the process by which human computers calculate numbers.

Tape = Paper

Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child’s arithmetic book. In elementary arithmetic the two-dimensional character of the paper is sometimes used. But such a use is always avoidable, and I think that it will be agreed that the two-dimensional character of paper is no essential of computation. I assume then that the computation is carried out on one-dimensional paper, i.e. on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite. If we were to allow an infinity of symbols, then there would be symbols differing to an arbitrarily small extent.

So the tape in a Turing Machine formalizes the role of paper.

Read/Write Head = Eyes & Hand+Pencil

The behaviour of the computer [the person computing!] at any moment is determined by the symbols which he is observing, and his “state of mind” at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations.

So the read/write head formalizes the eyes and hand+pencil of the person computing numbers.

Finite State Machine = Human

We will also suppose that the number of states of mind which need be taken into account is finite. The reasons for this are of the same character as those which restrict the number of symbols. If we admitted an infinity of states of mind, some of them will be “arbitrarily close” and will be confused. Again, the restriction is not one which seriously affects computation, since the use of more complicated states of mind can be avoided by writing more symbols on the tape.

So it is only the finite state machine that formalizes the human themselves.

Daniel C. Dennett summarizes the process Turing seems to have gone through when developing the Turing Machine in “Consciousness Explained:”

He was thinking, self-consciously and introspectively, about just how he, a mathematician, went about solving mathematical problems or performing computations, and he took the important step of trying to break down the sequence of his mental acts into their primitive components. “What do I do” he must have asked himself, “when I perform a computation? Well, first I ask myself which rule applies, and then I apply the rule, and then write down the result, and then I look at the result, and then I ask myself what to do next, and…” Turing was an extraordinarily well-organized thinker, but his stream of consciousness, like yours or mine or James Joyce’s, was no doubt a variegated jumble of images, decisions, hunches, reminders, and so forth, out of which he managed to distill the mathematical essence: the bare-bones, minimal sequence of operations that could accomplish the goals he accomplished in the florid and meandering activities of his conscious mind.

How Did We Get Here?

Why do we tend to equate humans and Turing Machines? Here’s Hutchins’s account of it. I don’t fully agree, but it is a decent summary.

Now, here is what I think happened. It was discovered that it is possible to build machines that can manipulate symbols. The computer is nothing more than an automated symbol manipulator. And through symbol manipulation one can not only do things we think of as intelligent, like solving logical proofs or playing chess; we know for a fact that through symbol manipulation of a certain type it is possible to compute any function that can be explicitly specified. So, in principle, the computer could be an intelligent system. The mechanical computers conceived by Charles Babbage to solve the problem of unreliability in human compilers of mathematical and navigational tables were seen by his admirers to have replaced the brain: “The wondrous pulp and fibre of the brain had been substituted by brass and iron; [Babbage] had taught wheelwork to think” (H. W. Buxton, cited in Swade 1993). Of course, a century later it would be vacuum tubes that created the “electronic brain.”

But something got lost in this move. The origin myths of cognitive science place the seminal insights of Alan Turing in his observations of his own actions….

Originally, the model cognitive system was a person actually doing the manipulation of the symbols with his or her hands and eyes. The mathematician or logician was visually and manually interacting with a material world. A person is interacting with the symbols and that interaction does something computational. This is a case of manual manipulation of symbols.

Notice that when the symbols are in the environment of the human, and the human is manipulating the symbols, the cognitive properties of the human are not the same as the properties of the system that is made up of the human in interaction with these symbols. The properties of the human in interaction with the symbols produce some kind of computation. But that does not mean that that computation is happening inside the person’s head.

LangChain, Bing Chat, and the Near Future of LLMs

Humans plus pencil and paper are Turing Machines.

So why do we expect LLMs, like GPT, both to behave like Turing Machines and to approximate human intelligence? Because they run on computers? Fair, but (at least right now) we can only grant GPT the power of Turing-completeness by giving it access to external representations like web search, a Python REPL, or a database – the modern equivalents of Turing’s pencil and paper.

This is why I’m bullish in the near-term on solutions like Bing chat and LangChain that provide such external representations.
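
To see why the tools matter, here is a toy version of the loop such systems run. This is a sketch of the idea, not LangChain’s actual API: the agent_loop function, the CALL/FINAL protocol, and the llm callable are hypothetical stand-ins. The point is that the growing transcript is the tape – the model reads it, writes its next move onto it, and tool results get written back onto it.

```python
# Toy version of an LLM-plus-tools loop (hypothetical, not a real framework's API).
# The transcript plays the role of the tape: the model reads the whole thing,
# appends its next move, and tool observations get written back onto it.

def calculator(expression: str) -> str:
    # Illustration only: never eval untrusted input in real code.
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def agent_loop(question: str, llm, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = llm(transcript)            # read the tape, decide the next move
        transcript += reply + "\n"         # write the move onto the tape
        if reply.startswith("FINAL:"):     # hypothetical answer marker
            return reply.removeprefix("FINAL:").strip()
        if reply.startswith("CALL "):      # e.g. "CALL calculator: 37 * 41"
            name, _, arg = reply.removeprefix("CALL ").partition(":")
            observation = TOOLS[name.strip()](arg.strip())
            transcript += f"OBSERVATION: {observation}\n"
    return "(no final answer within the step budget)"
```

Real frameworks add prompt templates, output parsing, and many more tools, but the shape is the same: the external representation does the remembering, and the model only has to choose the next move.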

No doubt, LLMs will get better: more accurate, and capable of processing longer and longer conversations. Yet no matter how much better they get, the question of correctness will always linger. How can I trust that this piece of code will run? Is this argument consistent? Are these facts correct?

I believe we won’t be able to answer such questions without external representations. Sure, the transformer architecture (or at least variations of it) is Turing-complete [Pérez et al. 2019]. However, today’s networks don’t typically use their outputs as Turing tapes unless explicitly guided to do so (e.g. by prompting them to “think step by step”). And today’s networks cannot search the internet or emulate a Python REPL in reasonable time and space.
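
The “think step by step” point is worth making concrete. An autoregressive decoder re-reads its own output at every step, so the output stream can serve as a tape; the prompt just has to persuade the model to put useful intermediate state on it. The generate function and the step callable below are hypothetical stand-ins for a real decoding loop.

```python
# An autoregressive decoding loop already re-reads the model's own output at
# every step; "think step by step" just nudges the model to write useful
# intermediate state onto that visible tape. step() is a hypothetical
# next-token function standing in for a real model.

def generate(prompt: str, step, max_tokens: int = 256) -> str:
    tape = prompt                      # everything visible to the model so far
    for _ in range(max_tokens):
        token = step(tape)             # read the whole tape, pick the next symbol
        if token == "<eos>":
            break
        tape += token                  # write the symbol back onto the tape
    return tape[len(prompt):]

# e.g. generate("Q: What is 437 * 289? Let's think step by step.\nA:", step=my_model)
```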

With formal, external representations we can better understand the “variegated jumble” of an LLM’s “mind” and build much more powerful tools that we can interrogate and maybe even verify. As Turing showed us back in 1936, it’s just like giving a human some pencil and paper and letting them write. ❏


Thanks to Geoffrey Litt for reviewing this post. If you enjoyed it, you might also like my friend Ajay’s account of our adventure trying to get Bing chat to recommend Dune.
