A critical look at artificial intelligence and whether simulating intelligence is the same as possessing it.
If a computer could text you so convincingly that you fell in love with it, would it matter if the computer didn't actually 'feel' anything at all?
In 1950, Alan Turing proposed a way to answer the question 'Can machines think?' He called it the Imitation Game. Instead of defining 'thinking'—which is philosophically messy—Turing focused on behavior. The test involves three players: a human judge, a human participant, and a machine. The judge communicates with both via text. If the judge cannot reliably tell which is the human and which is the machine, the machine is said to have passed the test. Turing’s core claim was that if a machine acts indistinguishably from a conscious being, we have no logical reason to deny that it is 'thinking.' This perspective is known as Functionalism: the mind is what the brain does, regardless of what it is made of.
Consider a modern LLM (Large Language Model) like GPT-4.
1. You ask the AI: 'How does it feel to see a sunset?'
2. The AI generates a poetic response about golden hues and tranquility.
3. Because the output is indistinguishable from a human's poetic reflection, it passes a 'mini' Turing Test for that specific interaction.
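The text-only setup of the Imitation Game can be sketched in a few lines. Everything below is a hypothetical stand-in, not a real test harness: the judge, the canned replies, and the single question are invented for illustration.

```python
import random

def imitation_game(judge, responder_a, responder_b, questions):
    """Run a text-only imitation game: the judge sees answers, never faces."""
    transcript = {"A": [], "B": []}
    for q in questions:
        transcript["A"].append((q, responder_a(q)))
        transcript["B"].append((q, responder_b(q)))
    # The judge's verdict rests on text alone -- the behavioral criterion.
    return judge(transcript)

# Hypothetical players: identical canned replies stand in for a human
# and a machine whose outputs cannot be told apart.
human = lambda q: "Golden hues, a quiet sense of calm."
machine = lambda q: "Golden hues, a quiet sense of calm."
judge = lambda transcript: random.choice(["A", "B"])  # forced to guess

verdict = imitation_game(judge, human, machine,
                         ["How does it feel to see a sunset?"])
# With indistinguishable outputs the judge can do no better than chance,
# which is exactly what 'passing' the test means.
```

Note that nothing inside `imitation_game` ever inspects what the players are made of; only their text output reaches the judge, which is Turing's whole point.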
Quick Check
Does the Turing Test require a machine to actually possess a soul or consciousness to pass?
Answer
No; it only requires the machine's outward behavior to be indistinguishable from a human's.
Philosopher John Searle disagreed with Turing. In 1980, he proposed the Chinese Room thought experiment. Imagine you are locked in a room with a massive rulebook. People slide slips of paper with Chinese characters under the door. You don't know Chinese, but the rulebook tells you: 'If you see this symbol, write that symbol.' You follow the rules and slide the response back. To the people outside, you seem to speak Chinese perfectly. However, you are just performing syntax (manipulating symbols) without any semantics (understanding meaning). Searle argues that computers are exactly like the person in the room: they follow programs (formal symbol-manipulation rules) but have no 'intentionality' or actual grasp of what the symbols represent.
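A toy version of the room fits in a few lines: a lookup table plays the role of the rulebook. The Chinese phrases and replies below are invented for illustration; the point is that the function matches symbol shapes, never meanings.

```python
# Hypothetical rulebook: input symbols mapped to output symbols.
rulebook = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "I'm fine, thanks."
    "今天天气好吗?": "是的, 很好.",  # "Nice weather today?" -> "Yes, very."
}

def person_in_room(slip: str) -> str:
    """Follow the rulebook mechanically: pure syntax, zero understanding."""
    return rulebook.get(slip, "请再说一遍.")  # fallback: "Please repeat that."

reply = person_in_room("你好吗?")  # looks fluent from outside the door
```

From outside, the room appears to converse; inside, `person_in_room` is just a dictionary lookup, which is Searle's picture of what a program does.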
Imagine a translation app processing the phrase 'The spirit is willing, but the flesh is weak.'
1. The machine uses statistical probabilities to find the most likely translation in another language.
2. It outputs the result perfectly.
3. However, the machine does not know what a 'spirit' is or what 'weakness' feels like; it only knows that one symbol often follows another in its dataset.
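The statistical step can be caricatured with word-alignment counts. The tiny English-German parallel corpus below is invented, and real systems are far more sophisticated, but the principle is the same: frequencies in, symbols out, meaning nowhere.

```python
from collections import Counter, defaultdict

# Invented toy parallel corpus, aligned word by word.
corpus_pairs = [
    ("the spirit is willing", "der Geist ist willig"),
    ("the flesh is weak", "das Fleisch ist schwach"),
]

# Count which target word co-occurs with each source word.
alignment = defaultdict(Counter)
for src, tgt in corpus_pairs:
    for s, t in zip(src.split(), tgt.split()):
        alignment[s][t] += 1

def translate(sentence: str) -> str:
    """Emit the most frequently aligned symbol per word: statistics, not semantics."""
    return " ".join(alignment[w].most_common(1)[0][0] for w in sentence.split())

print(translate("spirit is weak"))  # -> Geist ist schwach
```

The program never consults what 'spirit' refers to; it only consults co-occurrence counts, which is all Searle claims syntax can ever give you.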
Quick Check
In Searle's thought experiment, what represents the 'computer program'?
Answer
The rulebook that tells the person which symbols to output.
The heart of the debate lies in the distinction between Syntax and Semantics. Syntax refers to the formal properties of symbols—their shapes and the rules for combining them (like the grammar of a sentence or the 0s and 1s of binary code). Semantics refers to the meaning or the 'aboutness' of those symbols. Searle’s famous syllogism is:
1. Programs are purely formal (syntactic).
2. Minds have mental contents (semantic).
3. Syntax by itself is neither constitutive of nor sufficient for semantics.
Therefore, programs are not minds. This challenges the idea of Strong AI—the belief that an appropriately programmed computer actually is a mind, rather than just a simulation of one.
Consider the equation for a circle: x² + y² = r².
1. A computer can calculate every point on that circle using the formula (Syntax).
2. A human can visualize the 'roundness' and relate it to a wedding ring or a planet (Semantics).
3. The challenge: Can a machine ever bridge the gap from the formula to the 'experience' of the shape?
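The syntactic side of that contrast is easy to demonstrate: a short routine that generates points satisfying x² + y² = r² exactly, while 'roundness' appears nowhere in the code.

```python
import math

def circle_points(r: float, n: int = 8):
    """Return n evenly spaced (x, y) points on a circle of radius r."""
    return [(r * math.cos(2 * math.pi * k / n),
             r * math.sin(2 * math.pi * k / n)) for k in range(n)]

# Flawless symbol manipulation: every point satisfies x^2 + y^2 = r^2
# (up to floating-point error), yet nothing here 'sees' a circle.
for x, y in circle_points(1.0):
    assert abs(x**2 + y**2 - 1.0) < 1e-9
```

The machine verifies the formula perfectly; whether that could ever amount to experiencing the shape is exactly the gap the section asks about.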
Which philosophical position suggests that the mind is defined by its 'inputs and outputs' rather than its internal makeup?
According to Searle, why can't a computer truly understand language?
In the Turing Test, the judge is allowed to see the physical appearance of the participants.
Review Tomorrow
In 24 hours, try to explain the 'Chinese Room' scenario to someone else. Can you remember why the person inside doesn't actually 'know' Chinese?
Practice Activity
Research the 'Total Turing Test.' How does adding a requirement for physical interaction (robotics) change the argument for machine intelligence?