In 1980, John Searle published “Minds, Brains and Programs,” presenting his famous Chinese Room thought experiment, which purports to show that “Strong AI” is impossible. He defined Strong AI as the claim that “the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition” (Searle, 1980, p. 417), and later clarified his definition: “the appropriately programmed digital computer with the right inputs and outputs would thereby have a mind in exactly the sense that human beings have minds.”

Searle invites us to imagine him alone in a room, following a computer program for responding to Chinese characters slipped under the door. He understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room.

The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding; hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings but have no grasp of meaning or semantics. The broader conclusion is that the theory that human minds are computer-like computational or information-processing systems is thereby refuted.

In Intuition Pumps and Other Tools for Thinking, Daniel Dennett counters by appealing to levels of description. The central processing unit in your laptop doesn’t know anything about chess, but when it is running a chess program, it can beat you at chess, and so on for all the magnificent competences of your laptop. What Searle describes as an ideology is at the very heart of computer science, and its soundness is demonstrated in every walk of life.

Once we turn the knob on Searle’s intuition pump that controls the level of description of the program being followed (there are always many levels), the picture changes. At the highest level, the comprehending powers of the system are no longer unimaginable; we even get insight into just how the system comes to understand what it does. The systems reply (that it is the whole system, not the man in the room, that understands Chinese) no longer looks embarrassing; it looks obviously correct.

That doesn’t mean that AI of the sort Searle was criticizing actually achieves a level of competence worth calling understanding, nor that those methods, extended in the ways then imagined by AI researchers, would likely have led to such high competences. It means only that Searle’s thought experiment doesn’t succeed in what it claims to accomplish: demonstrating the flat-out impossibility of Strong AI.
Philosopher Daniel Dennett’s book, “Intuition Pumps and Other Tools for Thinking”