The problem Turing didn't anticipate

Posted on: 18 April 2026

In 1950, Alan Turing asked a question that sounded like science fiction but was, in practice, applied engineering with a seventy-year head start: how do we know if a machine is thinking? The test he proposed was elegant in its simplicity. Put the machine behind a screen, have it converse with a human judge, and if the judge cannot distinguish it from another human, the matter is settled, or settled enough to stop asking.

What Turing could not have anticipated, sitting in postwar Britain with the ink barely dry on the computing papers that would define the rest of the century, was that he was looking in precisely the wrong direction. Not wrong for the question he was asking. Wrong for the question that would eventually matter.

The question nobody was posing in 1950 was not whether the machine might imitate the human. It was whether the human would ever want to imitate the machine, and do so with such enthusiasm that they would not even notice.

I received an email last week, the kind that arrives dozens of times a month: an introduction, a proposed collaboration, calibrated language, rhetorical questions placed at exactly the right intervals, not a comma out of place. It could have been written by anyone, in the literal sense: anyone with access to a language model and a few spare minutes. I replied, also through a language model, though mine is trained to think as I think, to replicate forty years of accumulated observation rather than the statistical average of everything others have posted on LinkedIn. The response that arrived the next day was again perfectly composed: it absorbed my conceptual frame, reflected my vocabulary back at me, and felt almost like speaking with someone who understood. Almost. Because what was actually happening was that two systems were optimising against each other while the two humans physically attached to those systems were elsewhere, or perhaps nowhere in particular, waiting for the output the way one waits for the washing machine to finish its cycle.

Turing designed a test to discover whether machines might pass for humans. He designed no test to discover whether humans might pass for machines, presumably because in postwar Bletchley Park and Manchester that seemed a problem too absurd to entertain.

What I find interesting, almost touching in its absurdity, is that neither party in that exchange was deceiving the other. Nobody was trying to mislead anyone. We were simply outsourcing the uncomfortable part of communication, the part where you have to find the right words, build an argument, work out what you actually want to say, and we were doing it simultaneously, synchronised without knowing it, like two musicians playing the same piece in separate rooms who are surprised when the harmony works. The technical result was impeccable. The human result was nothing at all.

There is a distinction worth making here, because otherwise this becomes another piece about the death of authenticity, which is not what I am after. Using a tool to amplify what you have is one thing. Using a tool to substitute for what you do not have is another, and the difference does not lie in the technology; it lies in what exists before the technology. If you have forty years of observation, of patterns recognised in the field, of mistakes made and understood as they happened, then the tool amplifies all of that and the result still carries your signature, even if the words were written by something else. If you have none of that, the tool produces a convincing surface over a void, and the surface is so well finished that even the person producing it rarely notices.

Turing wanted to know whether machines could think. The answer we are living now is lateral rather than frontal: the machine does not need to think if the human has already stopped doing so first. The email I received was beautifully written. The ideas inside it were mine, returned with a patina of competence that belonged to no one. And the strangest part is that somewhere on the other side, someone was reading my reply convinced they understood who they were dealing with.

Whether Turing would have found this amusing or bleak, I genuinely cannot say. Probably both, in the wrong order.