When IBM’s Watson supercomputer triumphed over two top Jeopardy champions in February 2011, the media buzzed with talk of artificial intelligence (AI), just as it had fourteen years earlier when Watson’s predecessor, IBM’s Deep Blue, won a match against world chess champion Garry Kasparov. Bloggers, journalists, and radio hosts were asking a question as old as the field of computer science itself: When will computing machines surpass human intelligence?
But both Deep Blue and Watson exemplify a “disembodied” approach to intelligence that has been strongly challenged by cognitive scientists, especially roboticists, in recent years. In my last article for Footnote, I explored the evolving recognition that human and animal cognition is an embodied affair, involving not only our brains but our entire bodies and even the surrounding environment. This new understanding of cognition has enormous implications for the design of intelligent machines. Truly intelligent machines must also possess an embodied intelligence that goes beyond the skills of Deep Blue and Watson and enables machines to perceive and interact with their environments.1
Machines That Can Think
After its predecessor Deep Thought lost to Kasparov in 1989, Deep Blue fared somewhat better in its first match against him in 1996, winning one game. In their 1997 rematch, Deep Blue returned to beat Kasparov using software that Kasparov denounced as tailored specifically to his style of play. Kasparov told Reuters that his “alien opponent” showed “signs of intelligence” – although of a kind that is not exactly human. Indeed, Deep Blue determined its next move using methods that no human player could match, evaluating as many as 200 million board positions per second.
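Deep Blue’s actual program was vastly more elaborate, but the underlying technique, game-tree search, is easy to sketch. Below is a minimal, purely illustrative Python version run on a made-up tree of position scores; nothing here is IBM’s code, and a real engine would add alpha-beta pruning, opening books, and hand-tuned evaluation functions.

```python
# Illustrative only: minimax search over a toy game tree, the core idea
# behind chess engines like Deep Blue. The tree and scores are invented.

def minimax(node, maximizing=True):
    """node is either a numeric leaf score or a list of child nodes."""
    if isinstance(node, (int, float)):   # leaf: a static board evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A two-ply toy tree: three candidate moves, each with two opponent replies.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree))  # -> 3: choose the move whose worst-case reply is best
```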
Watson beat its competition in similarly alien ways. The primary new skill it displayed lay in deciphering the often cryptic Jeopardy clues in order to pursue a relatively straightforward search for factual information in a huge database. Although Watson did not have direct access to the Internet during the competition, engineers had built much of its database by searching the World Wide Web and storing a local copy of the information for later use.(a)
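Sidenote (a) describes the decision rule Watson applied at the buzzer: estimate a probability for each top candidate answer and buzz in only if the best one clears a threshold. Here is a minimal sketch of that logic; the threshold value, candidate answers, and function name are hypothetical stand-ins, not IBM’s actual values or code.

```python
# A minimal sketch of the buzz-in rule described in sidenote (a).
# The cutoff and candidate scores below are hypothetical.

BUZZ_THRESHOLD = 0.5  # hypothetical confidence cutoff

def should_buzz(candidates):
    """candidates: (answer, estimated probability) pairs for one clue."""
    best_answer, confidence = max(candidates, key=lambda c: c[1])
    if confidence >= BUZZ_THRESHOLD:
        return best_answer        # confident enough to press the buzzer
    return None                   # stay silent and avoid the penalty

print(should_buzz([("Toronto", 0.14), ("Chicago", 0.72), ("Boston", 0.09)]))
# -> Chicago
```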
What Watson, Deep Blue, and many other attempts to build intelligent computers have in common is that they represent a disembodied approach to intelligence. Chess, with its closed set of rules and limited space of movements on a board that can easily be represented on a computer, and Jeopardy, with its ritual formula of answers and questions, provided engineers with programming challenges that did not require any coordination of complex physical movements. The only truly mechanical part of either design was Watson’s ability to press the buzzer.
This disembodied approach to intelligence has been favored by scientists since the early days of computer development. In 1950, Alan Turing proposed his now-famous “Turing Test” of human-machine equivalence: if an astute judge cannot tell a machine and a human apart after a series of open-ended written exchanges, the machine is judged to possess human-level intelligence.(b) The machine participants in Turing’s “Imitation Game” were disembodied by design in order to remove what he regarded as irrelevant cues. Turing believed that limiting the interaction to typewritten text had “the advantage of drawing a fairly sharp line between the physical and intellectual capacities of a man.”2
But designing a computer that can, for example, carry on an ordinary conversation has turned out to be a particularly tough nut to crack. Much of what we do in conversation reflects deep knowledge of how we are situated in physical space, as well as exquisite sensitivity to the nuances of spoken language. Watson lacked live speech recognition; it received the Jeopardy clues as digitally encoded text while the human contestants heard them read aloud by host Alex Trebek. Apple’s Siri has voice recognition capability, but is notoriously prone to contextual misinterpretation (although its programmers also seem to have a good sense of humor). These purely verbal approaches to machine cognition seem destined to produce machines with an uncannily alien intelligence that does not resemble human cognition.
The Missing Biology of Machines
In the early days of AI, it was assumed that the hardest jobs for robots would be things like playing chess and responding to questions – quintessentially human activities that no dog or ostrich can achieve. It turned out, however, that walking and finding food were just as difficult, if not harder.(c) Roboticists learned that what had taken evolution so long to accomplish was not going to be achieved within just a few years of hardware and software engineering. Careful observation of animals revealed that nature had a lot of tricks at its disposal and that both human and animal intelligence involve our entire bodies, not just our brains.
This is why the kinds of artificial intelligence found in Deep Blue, Watson, and the iPhone’s Siri are so “alien”. Although two of them talk (albeit a bit strangely), none of them walk. Nor are they capable of seeking out the basic materials and energy they need to sustain themselves. They cannot interpret sensory information beyond narrow types of input, such as human speech, that they are programmed to recognize. In other words, these systems lack some of the most fundamental properties of intelligent life.
Existing artificial systems lack not only these sensory capacities, but also some of the most rudimentary forms of short-term memory, with very little ability to flexibly link what they are currently doing to what they did ten seconds, ten minutes, or ten hours ago. Without sensory perception or the demands of moving to find food and mates and avoid danger, Deep Blue, Watson, and Siri are far less human than dogs or even ostriches.
Embodying Machines
Fortunately, robotics has changed since Turing’s time. Our growing understanding of the physical embodiment and environmental embeddedness of human and animal cognitive systems is providing scientists with inspiration for the design of artificially intelligent machines. Applied robotics has made steady progress on embodied cognition, pursuing goals that are arguably more practical than producing a machine that can hold a convincing conversation or triumph in a television game show.
Autonomous robots are designed to be freestanding, mobile, and capable of operating in a variety of environments. These are not the industrial robots that perform repetitive tasks in fixed factory situations. Whether it is iRobot’s Roomba vacuum or Google’s self-driving car, the autonomous robot has to sense physical events in its environment and react appropriately to changes that cannot be predicted more than a few moments in advance.
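This sense-react requirement can be made concrete with a toy control loop in the reactive style pioneered by Rodney Brooks (cited in endnote 1) and embodied in robots like the Roomba. Everything below, sensor names included, is a hypothetical sketch rather than any vendor’s actual firmware.

```python
# A toy reactive controller: map current sensor readings directly to an
# action, with no internal world model. Sensor and action names are
# hypothetical; real robots like the Roomba use richer behavior sets.

import random

def control_step(cliff_detected, bumper_pressed):
    if cliff_detected:                      # highest priority: back away
        return ("reverse", 0.0)
    if bumper_pressed:                      # obstacle: turn a random amount
        return ("turn", random.uniform(90.0, 180.0))
    return ("forward", 0.0)                 # default: keep moving

# One pass through the loop: the robot has just bumped into a chair leg.
print(control_step(cliff_detected=False, bumper_pressed=True))
```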
The embodied approach to intelligence has also spawned robots that can glean information about objects through sight and touch and even rearrange them to visualize and solve problems. Examples include the Massachusetts Institute of Technology’s Ripley robot, designed for social interactions so that it can better learn language, and the European robot ARMAR-III, created by scientists at Germany’s Karlsruhe Institute of Technology to interact with humans in common household situations. By interacting with objects in their environment, Ripley and ARMAR-III learn the physics of everyday actions and are able to obey commands.
[Video: ARMAR-III hard at work in the kitchen.]
The groundwork for robots with real-time reactivity to their surroundings was originally laid in the field of cybernetics, centered on the analysis of feedback loops in the control of animal behavior.(d) Cyberneticists developed the idea that a balance of positive and negative feedback loops is necessary to maintain a system within a functional range. Feedback loops allow the results of previous actions to influence future actions; for example, shivering raises body temperature, which can stop the shivering.
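The shivering example can be turned into a minimal simulation of a negative feedback loop. The numbers below are arbitrary illustrations chosen to make the loop visible, not physiological constants.

```python
# Negative feedback in miniature: shivering switches on when core
# temperature falls below a set point and switches off once it recovers.

SET_POINT = 37.0  # target core temperature (degrees C)

def step(temp, heat_loss=0.3, shiver_gain=0.5):
    shivering = temp < SET_POINT        # sense the deviation...
    if shivering:
        temp += shiver_gain             # ...and act to correct it
    return temp - heat_loss, shivering  # the environment keeps cooling us

temp = 35.0
for t in range(12):
    temp, shivering = step(temp)
    print(f"t={t:2d}  temp={temp:.1f}  shivering={shivering}")
```

Run it and the temperature climbs to the set point, then hovers there as shivering switches on and off – exactly the kind of self-stabilizing loop cyberneticists studied.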
A key idea from cybernetics is that cognition involves the precise coordination of sensorimotor feedback loops in constant interaction with the environment. These loops exploit the temporal dynamics not only of nervous systems, but also of the physical bodies and environments in which they are embedded. As an example of the importance of modifications to the environment in such feedback loops, seeing food diminish in front of you may be a stronger cue to stop eating than an internal signal of being full. People tricked with a bottomless soup bowl eat on average two-thirds more but don’t report being any more full.
From Technology to Humanity
Looking at human cognitive capacities from an embodied and embedded perspective changes both our understanding of the human mind and our view of how to replicate human-like intelligence in machines. In Moral Machines, a book I coauthored with Wendell Wallach, we suggest that our everyday moral capacities depend not so much on abstract calculations of utility or conformity to rules as on the ability of embodied and embedded agents to detect and respond to signs of discomfort and distress caused by their own actions, and perhaps even to have some version of emotion.3 The development of machines with such capabilities awaits progress in our understanding of how biological intelligence is embodied. As these developments continue, the machines we build may come to seem less alien and perhaps become at least as comprehensible to us as we are to one another.
This article is a companion to Professor Allen’s first article on embodied cognition in humans and other animals. This article is also part of a series on robots and their impact on society.
Endnotes
1. Rolf Pfeifer, Max Lungarella, and Fumiya Iida (2007) “Self-Organization, Embodiment, and Biologically Inspired Robotics,” Science, 318(5853): 1088-1093; Rodney A. Brooks (1991) “New Approaches to Robotics,” Science, 253: 1227-1232.
2. Alan M. Turing (1950) “Computing Machinery and Intelligence,” Mind: A Quarterly Review of Psychology and Philosophy, 59(236): 433-460.
3. Wendell Wallach and Colin Allen (2008) Moral Machines: Teaching Robots Right from Wrong, Oxford University Press.
Sidenotes
- (a) Press coverage following Watson’s Jeopardy triumph focused on Watson’s confidence indicator. Because Jeopardy penalizes players heavily for wrong answers, an over-confident player can quickly end up with a negative score. So too can a panicking player, but emotions are not part of Watson’s program – another “inhuman” advantage possessed by the machine. Watson’s algorithms for pressing the buzzer included a calculation of the probabilities of the three best answers presented by the search algorithms. Only if the best of the three exceeded a specific threshold did Watson attempt to buzz in.
- (b) The Loebner Prize is awarded annually to the technology that performs best on the Turing Test. The closest a program has come to passing was in 2008, when Artificial Solutions’ robot Elbot fooled three of the twelve judges into thinking it was human. Had it deceived one more judge, it would have cleared the 30 percent of judges that Turing predicted machines would be able to fool.
- (c) The Koroibot project was announced in October 2013, bringing together several European universities to create a robot with a human gait and the ability to adjust to changes in terrain. Google’s December 2013 purchase of Boston Dynamics immediately positioned it as a leader in biomorphic locomotion (i.e., physical movement similar to that of living beings), thanks to the BigDog and PETMAN robots that had been under development for several years.
- (d) The Oxford dictionary defines cybernetics as “the science of communications and automatic control systems in both machines and living things.” According to Google Ngrams, the term “cybernetics” hit its peak usage in 1969, but many of the central ideas have recently found their way into robotics and cognitive science.