Here's to the great John Searle (14.10.2025)
- Tricia Voute
- Apr 13

I don’t know if you have heard of John Searle, but he was a towering figure in the philosophy of mind and a central voice in debates about AI and cognitive science. He died last month, and I thought I’d write this article in tribute to him.
I first encountered his work when I taught ‘Minds, Brains and Science’, his 1984 BBC Reith Lectures, which have gone down as among the most significant ever given. I was blown away by his ability to communicate complex ideas to a non-philosophical audience while, at the same time, presenting one of the most influential thought experiments ever devised, the Chinese Room. I’ll share it with you.
It begins with a question. Would you say that a computer which responds to questions in Chinese so fluently that it is indistinguishable from a native speaker truly understands the language?
To answer this, he tells a story.
Imagine you're locked in a room with no knowledge of Chinese. You're handed a rulebook—a kind of code manual—that tells you how to match squiggles on a page (Chinese characters) with other squiggles. Papers come covered in characters; you consult the rulebook and write characters in response, posting them back out. To those reading your response, you appear fluent, but you aren’t. You don’t understand anything at all — you’re just manipulating symbols.
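To make the mechanics of the room concrete, here is a minimal sketch in Python. This is my own illustration, not anything from Searle, and the rulebook entries and phrases are invented for the purpose: the rulebook becomes a lookup table, and the person in the room becomes a function that matches symbols to symbols.

```python
# A toy version of the Chinese Room: the "room" answers questions by
# pure symbol lookup. Nothing in this program represents meaning; it
# only matches shapes to shapes.

# Hypothetical rulebook mapping input symbol strings to output symbol strings.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def chinese_room(slip: str) -> str:
    """Return whatever response the rulebook dictates for an incoming slip.

    The function 'speaks' exactly as well as its lookup table allows;
    it has no access to what any symbol means.
    """
    return RULEBOOK.get(slip, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding
```

However fluent the replies look, nothing in the program stands for anything. That gap between shuffling symbols and grasping meaning is exactly the gap Searle is pointing at.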
What is his point? Well, it is an attack on Turing. Alan Turing had offered a thought experiment of his own, the famous imitation game, in which he argued that if you could not tell the difference between communicating with a computer and with a human, you should grant the computer intelligence.
Searle is saying, ‘no way!’
He argues that passing the Turing Test only shows that a system can imitate understanding—not that it actually possesses it. Intelligence, in the Turing sense, is surface-level. For Searle, real understanding—and consciousness—requires more than rule-following. It requires meaning, something no computational system can generate from syntax alone.
Naturally, this is controversial, and people have spent a lot of time attacking Searle and formulating opposing thought experiments to the Chinese Room. But I’m a fan (for what it’s worth), and so is Anil Seth.
Seth is an important British neuroscientist whose work straddles neuroscience, philosophy, and cognitive science. Owing to his interest in how consciousness emerges from physical processes in the brain, he acknowledges the influence of Searle’s work on his own thinking, in particular Searle’s claim that consciousness is a biological process and not a mechanical one. Seth is pushing science to recognise that consciousness has both subjective and objective aspects. By subjective, he means your inner voice, which no one else can share; by objective, he means the physical workings of your brain, which others can observe and measure.
What both Seth and Searle have identified (which Turing failed to do) is the difference between intelligence and consciousness. We might grant computers intelligence, but we shouldn’t be so quick to grant them consciousness. After all, what is the value of intelligence if you don’t understand (subjectively) what is being communicated? Its only use is for us, who receive the information and then work with it. This, of course, is the great hope of AI for our civilisation.
But – and this is my concern – we’re casually assigning consciousness to machines through the language we use every day. We talk about robots ‘seeing’ and ‘hearing’; worse, we say they are ‘confused’ or ‘hallucinating’. In fact, we are using the language of consciousness all the time, and this human propensity to anthropomorphise things is only going to get worse. Think of China’s first World Humanoid Robot Games, held in Beijing this August. Admittedly, the robots weren’t very good at their events, but that is just the start: they will get better and better, and the ethical implications are going to be enormous. How do we adapt to this integration of robots into our daily lives? Soon they will be part of our healthcare, care of the elderly, entertainment, even education. Will we say the robot looking after our infirm parent ‘cares’ for them, or is ‘worried’ about them? Will the teacher be ‘pleased’ or ‘disappointed’?
We need to think about this very carefully. Right now, our concerns focus on job losses, and this is important, but no one is monitoring how we speak. What we say about the world structures how we understand it. If we talk about these robots as if they had consciousness, we will soon be granting it to them. Once we grant them consciousness through this lazy use of language, we may start to feel moral obligations towards them, and this will blur the moral boundary between living beings and simulations. Where that leads us is anyone’s guess.