Can AI Think? Searle’s Chinese Room Thought Experiment

Through his famous “Chinese Room argument,” the philosopher John Searle argues that AI can only simulate cognition, never truly think.

Feb 16, 2023 | By Andres Felipe Barrero, MA Philosophy, MSc Philosophy, Ph.D. Candidate


 

The use of Artificial Intelligence for a vast range of purposes has become increasingly popular: from driving cars and creating award-winning pictures to analyzing billions of tweets and (ironically) writing entire articles. No one can deny that the growing production of digital data and the advancements in computational power have changed our daily lives and the way we think about intelligence. Some scholars, like Nicholas Christakis (a Greek-American sociologist), have referred to AI as one of the radical technologies that will forever change human interactions.

 

Still, there is a long way to go, and the noted philosopher John Searle has argued that AI can never really think as humans do. To demonstrate this, he constructed his famous Chinese Room argument, which became highly influential in contemporary philosophy.

 

Setting Up the Chinese Room Argument: Two Types of AI

A Reading Robot, created by the author using an Artificial Intelligence (Midjourney)

 

A distinction between Artificial Intelligence (AI) and Artificial General Intelligence (AGI) is helpful at this point. Most of us already interact with some type of Artificial Intelligence. In the first case (AI), computers and machines mimic human intelligence only in a narrow sense: an AI can be good at filtering and analyzing billions of words on Twitter but terrible at understanding a joke, or great at driving or at playing chess, but not both. The limitation of AI is thus a lack of flexibility and an absence of continuous learning; models need to be trained before they can be deployed.

 

Jeff Hawkins, an entrepreneur and neuroscientist, explains that, contrary to AI, AGI is all about creating “machines that can rapidly learn new tasks, see analogies between different tasks, and flexibly solve new problems” (2021, p. 119). And here one stumbles upon a philosophical discussion: is such an AGI even possible? We are not interested in whether this intelligence would be morally good or bad, or whether it would bring about utopias or dystopias; rather, we are asking whether such intelligence is possible at all.


 

Searle’s Chinese Room Argument

The Chinese Room, via Open University.

 

The philosopher John Searle, influenced by Wittgenstein’s later philosophy, tackled this problem in his book Minds, Brains and Science (1984). He argued that programs can imitate human mental processes, but only formally; that is, they do not understand what they are doing. Put differently, such an intelligence is just following a set of rules (algorithms) without assigning any meaning to them. To better illustrate his point, he devised a thought experiment: the Chinese Room.

 

Searle asks you to imagine yourself locked in a room containing various baskets full of Chinese symbols. In that room, you also find a rule book, written in your native language, with instructions on how to manipulate the Chinese characters. The rule book only provides rules of the form “if you see symbol [X], answer with symbol [Y],” and so on. You therefore never get to know the meaning of those Chinese symbols.

 

Assume, furthermore, that some Chinese characters are slipped under the door by someone outside the room. You can respond to those messages by taking characters from the baskets, ordering them according to the rule book, and slipping them under the door. Let us suppose that the instructions are so clear and detailed that very soon your answers are indistinguishable from those of a native speaker. The person outside the room now thinks you understand and speak Mandarin.

 

Searle then asks: can we conclude that you know Mandarin? It seems that you cannot. By analogy, this is exactly what happens with Artificial Intelligence: AI does not understand as humans do. Searle writes: “understanding a language (…) involves more than just having a bunch of formal symbols. It involves having an interpretation, or a meaning attached to those symbols” (2003, p. 31). It appears, then, that the road to Artificial General Intelligence is blocked by inherent limitations.

 

Technological Advancement and Imitation Games

Alan Turing speaking with an AI, created by the author using an Artificial Intelligence (Midjourney)

 

You may ask: what about technological advancement? Can it eventually overcome these limitations? Machine learning models are becoming more complex, and the amount of information on the internet used to train them is expanding exponentially. Simply put, it seems only a matter of time before an AI can understand language rather than merely reproduce it. The question is not how, but when.

 

Against this line of thinking, Luciano Floridi, a professor at the University of Oxford, agrees with Searle: regardless of technological advancements, he says, the inherent limitation of AI will remain. It is like multiplying numbers by zero: no matter how big the number, the result will always be zero. Going back to Searle’s Chinese Room thought experiment, even if the instruction manual gets thicker and more complex, the person inside the room will never understand Mandarin.

 

One could push in another direction: in Searle’s Chinese Room, the people outside are convinced that you are fluent in Mandarin. Isn’t that the whole point? Wouldn’t that be sufficient? For Alan Turing, the father of Artificial Intelligence, if someone cannot distinguish between a fellow human and a machine, the program has succeeded! Could simulation be enough?

 

Image from Ex Machina (2015)

 

We do not need to speculate; examples are easily found. Google’s virtual assistant can make phone calls and arrange appointments without people realizing they are speaking to an AI; OpenAI’s model GPT-3 has been interviewed by YouTubers; and you have probably interacted with a chatbot when asking for help with your bank account or with a food order you placed. As Turing put it, it is an imitation game.

 

The imitation game, nevertheless, is not sufficient. As mentioned earlier, single algorithms can outperform human beings at some tasks, but that does not mean that they are thinking, or that they are learning continuously. Specialized systems can play chess (IBM’s Deep Blue) or Go (AlphaGo), or even win at Jeopardy! (IBM’s Watson), but none of them knows it is playing a game.

 

Besides, we tend to forget that during the “confrontation,” dozens of engineers, mathematicians, and programmers (plus cables, laptops, and so on) are behind the AI making everything work; they are indeed great puppeteers! Intelligence is more than having the right answers or calculating the right move. As Jeff Hawkins writes: “We are intelligent not because we can do one thing particularly well, but because we can learn to do practically anything” (2021, p. 134).

 

Does the Key Lie in Our Brains? 

Kasparov vs Deep Blue, via Chessbase.com

 

Given the above, Searle’s Chinese Room argument stands. So, if simulations fall short of the ideal of Artificial General Intelligence and of the images we have from science fiction, e.g., I, Robot (2004) or Ex Machina (2015), what is the future of AI? Perhaps a different approach is needed.

 

When IBM’s Deep Blue won against Garry Kasparov in 1997, the chess grandmaster said: “Anything we can do (…) machines will do it better (…) If we can codify it, and pass it to computers, they will do it better” (in Epstein, 2019, p. 22). There is a clue in Kasparov’s words: if we can codify what we do. The thing is that we are still trying to understand how it is that human beings are intelligent and how they, for example, develop language skills. Many aspects of our cognition remain mysterious; we have yet to codify the process. Could it be that the road to Artificial General Intelligence is closed by our lack of clarity about how the brain works?

 

Taylor, John William (lecturer on phrenology etc. at Morecambe, Lancashire), active 19th/20th century. Via the Wellcome Collection.

 

This is the attitude taken by Jeff Hawkins in his book A Thousand Brains (2021). He believes that an AGI needs to work as our brain does, which includes being able to navigate the world, that is, having a body. Embodiment is crucial because it is through the body that the brain works: we learn by touching, moving, seeing, hearing, exploring, tasting, wondering, and so on.

 

An AGI would similarly need sensors and mechanisms for movement. The body does not need to be human-like; what is essential is the capacity to explore and navigate the world. That is why neuroscientists, robotics researchers, and AI developers need to work together. The philosophical consequence of this intuition goes against the old dualism of René Descartes: we cannot think without a body.

 

Returning to Searle’s Chinese Room, any AGI would need access to further contextual information: when are these symbols being used? How do people act when sending them? Stretching the thought experiment, the room would need windows and sensors. As you may have noticed, these modifications concern the characteristics of the room rather than the person inside. In technical terms, what matters is not only how a CPU (central processing unit) works (illustrated by the person inside), but its interaction with its context and the navigational abilities that could be attached to it, as the sketch below suggests.

 

Inadvertently, the Chinese Room has reproduced a Cartesian depiction of intelligence. If these modifications were made, there would be no reason why the room, taken as a whole, could not come to understand Mandarin.

 

The Consequences of the Chinese Room Argument: Luddites or Digital Utopians?

Robot Helping an Old Lady, created by the author using an Artificial Intelligence (Midjourney)

 

I think that Searle would agree with Hawkins: if the mysteries of the brain were disclosed, an AGI would be feasible. Hawkins is of the opinion that such developments will come in the next two to three decades (2021, p. 145). Everything hinges on first figuring out how humans learn and think, and on the cognitive interaction between our bodies and the context surrounding us.

 

What would happen next? What are the consequences of having an AGI? According to Max Tegmark, on one side stand the Luddites, who believe the implications will be negative for humanity; on the other side are the digital utopians, who believe the arrival of such technologies marks the beginning of a better time for all. Regardless of your position, one thing is certain: our ability to think and learn should not be taken for granted; while we wait for an AI that can think, we should continue to explore our own capabilities as human beings.

 

Literature

 

Epstein, D. J. (2019). Range (Kindle ed.). Penguin Publishing Group.

Hawkins, J. (2021). A Thousand Brains: A New Theory of Intelligence (Kindle ed.). Basic Books.

Searle, J. (2003). Minds, Brains and Science. Harvard University Press.

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Alfred A. Knopf.




By Andres Felipe Barrero, MA Philosophy, MSc Philosophy, Ph.D. Candidate. Andrés has a background in philosophy from Universidad de la Salle in Bogotá, Colombia, where he finished his undergraduate and master's studies. He completed a second master's at Universität Hamburg, Germany, where he wrote about philosophical theories of Modernity and Secularization. Currently, he is a Ph.D. Candidate at Universität Bremen. His fields of interest include the Philosophy of Language, Philosophy of Religion, Philosophy of Science, Social Theory, Discourse Studies, Corpus Linguistics, and Natural Language Processing.