This article was also published on Robohub and in UC Berkeley’s Greater Good Magazine.
From the sci-fi classic “Blade Runner” to the recent films “Her” and “Ex Machina,” pop culture is filled with stories demonstrating our simultaneous fascination with and fear of artificial intelligence (AI).
This interest is rooted in questions about where the line between human and artificial intelligence will be, and whether that line might one day disappear. Will robots eventually be able to not only think but also feel and behave like us? Could a robot ever be fully human?
A new multidisciplinary field called developmental robotics is paving the way to some answers.(a) Rather than writing programs that try to mimic specific human behaviors like love, developmental roboticists build machines that learn and develop the way humans do as they grow from newborn infants to adults. The goal is to model human learning and then create machines that can learn in similar ways.
My research at Kyoto University focused on building robots with human-like emotional architecture that learn emotional behavior from the people they interact with, particularly their human caregivers. This work offers insights into how we might one day create machines with a full range of emotions comparable to our own.
How Do Humans Develop Emotion?
For a developmental roboticist, the first step in tackling the problem of robot emotion is understanding how humans develop the capacity for emotion. Though this process is still a bit of a mystery, the field of developmental psychology is beginning to unlock some of its secrets.
Around the age of two, when toddlers start to speak, they begin to learn the emotional names for their internal states. The word “sad”, for instance, refers to a certain set of physiological and psychological feelings, along with associated expressions of these feelings through tone of voice, facial appearance, and body movement.(b) Sadness is often linked to slower-paced speech, a frowning mouth, and sluggish body movement. Anger, on the other hand, is generally associated with intense, abrupt speech; downturned eyebrows; and quick, aggressive movements.
As we get older, we use these behaviors to express our internal states and to recognize emotion in others. We even see emotion in non-human objects, such as a sad piece of music or an excited pet. We may also inspect our own behavior to deduce our emotions – for example, someone may notice her voice rising and recognize that she is feeling frustrated. All of this emotional expression and perception happens quickly, involuntarily, and subconsciously, conveying a great deal of information in a concise way.
How do we develop these forms of emotional expression? Are they learned or innate (or some combination of both)? For a long time, the prevailing view was that human emotional expressions are biologically determined, particularly when it comes to basic emotions like happiness, sadness, anger, fear, disgust, and surprise.(c) However, new research suggests that how humans express emotion may, at least in part, depend on how they are taught to do so by their caregivers and peers.(d)
Cross-cultural studies suggest that cultural environment plays a role in the development of emotion. According to research by Stanford psychologist Jeanne Tsai, emotional expression and ideals tend to differ across Eastern and Western cultures. Individuals in Western cultures identify “feeling good” as a high-arousal positive (HAP) affect, whereas Eastern cultures prefer a low-arousal positive (LAP) affect.1 In other words, Western cultures favor high-arousal emotions such as excited joy and elation, whereas Eastern cultures favor low-arousal emotions such as calm joy and bliss.
To illustrate, one study found that Asian Canadians prefer smiles between 20 and 60% intensity, whereas European Canadians prefer smiles from 80 to 100% intensity.2 Research has also demonstrated that people have a harder time identifying the emotions connected to facial expressions and vocal cues of people from other cultures than those from their own culture.3
Interactions with caregivers at a young age may play a particularly important role in the development of emotion. Research shows that when a rhesus monkey is separated early in life from its mother, gene expression changes in the brain regions controlling socio-emotional behaviors. This primate study suggests that early parental care – or the absence of it – can profoundly change an infant’s future emotional behavior, even at the genetic level.4
Though studies of development in humans are rarer due to ethical issues, observations of children raised in emotionally deprived institutional environments show that early life experiences can have lasting effects on emotional intelligence. For example, individuals who grew up in Eastern European orphanages with little social interaction or attention from caregivers had difficulty later in life matching appropriate faces to happy, sad, and fearful scenarios (though they were able to match angry faces).5
Building Emotional Robots
How can we use our knowledge of how human emotion develops to build robots with the capacity for emotion? The idea behind developmental robotics is to create robots that learn behaviors the same way human children do. Typically, a software model is programmed to represent a part of the robot’s “brain.” Then, the robot is exposed to an environment to stimulate the training of that model, for example, through interactions with a human caregiver. In my research, I tested the idea that caregivers can play a role in helping robots develop emotion, just as they play a role in emotional development for human infants.
First we must ask: what would it mean for a robot to have emotion, and how would we know if it did? Neuroscientist Antonio Damasio6 defines emotion as “the expression of human flourishing or human distress, as they occur in the mind and body.”(e) I have proposed that we define flourishing for a robot as a state of “all-systems-go” or homeostasis, where the battery, motors, and other parts are in working order and the core temperature is normal. We can imagine this as similar to a human infant being well fed, rested, and in good health. Distress is when something is wrong, which could result from a hot motor or CPU, low battery, or the saturation of microphone sensors with loud noises or vision sensors with extremely bright light. This parallels a newborn feeling distress from hunger, a wet diaper, or a loud sound.
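The homeostasis idea above can be pictured as a simple internal signal that slides from flourishing toward distress as systems leave their normal ranges. The sketch below is purely illustrative – all sensor names and thresholds are my assumptions, not the actual architecture used in the research:

```python
# Hypothetical sketch of a robot's flourishing-vs-distress signal.
# Thresholds and sensor names are illustrative assumptions only.

def wellbeing(battery_pct, core_temp_c, mic_level, light_level):
    """Return a score in [0, 1]: 1.0 = flourishing, 0.0 = acute distress."""
    penalties = 0.0
    if battery_pct < 20:          # low battery, like an infant's hunger
        penalties += (20 - battery_pct) / 20
    if core_temp_c > 70:          # overheating motor or CPU
        penalties += (core_temp_c - 70) / 30
    if mic_level > 0.9:           # saturated microphone, a painfully loud noise
        penalties += mic_level - 0.9
    if light_level > 0.9:         # saturated camera, blinding light
        penalties += light_level - 0.9
    return max(0.0, 1.0 - penalties)

print(wellbeing(80, 45, 0.3, 0.4))  # all systems go: 1.0
print(wellbeing(5, 85, 0.95, 0.5))  # multiple stressors: 0.0
```

The point of a scalar like this is that the robot’s “feeling” is grounded in its own body, just as an infant’s distress is grounded in hunger or discomfort.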
In my research, I had human caregivers interact with robots in a variety of ways, expressing emotions such as happiness, sadness, and anger, while the robots were in both flourishing and distressed states. The caregiver behaviors parallel ways in which developmental psychologists have observed parents interacting with human infants.7 For example, when the robot is in a flourishing state, the caregiver plays with the robot in a joyous way, modeling happiness. When the robot is in a physically distressed state, the caregiver may display empathy, showing sadness while comforting the robot.(f)
The result? The robot learned to express its internal states based on the models its caregivers taught it. Changing how the caregivers behave affects how the robots later express their internal states – in other words, how they show emotion. If the caregiver spoke to the robot in an empathetic way when it showed distress, for instance saying “poor robot” in a slow and sorrowful voice, the robot would learn to express a distressed state as something similar to sadness, using a slow voice and movements. If the caregiver scolded the robot when it was in distress, expressing frustration or anger, the robot would later express a distressed state using the aggressive, intense patterns we typically associate with anger.(g)
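A toy illustration of this learning dynamic – my own simplification, not the published model – is a robot that records the expressive features a caregiver displays while it is in a given internal state, then reproduces an average of those features when it later expresses that state:

```python
# Toy sketch of caregiver-shaped expression learning (an illustrative
# simplification, not the actual architecture from the research).
from collections import defaultdict

class ExpressionLearner:
    def __init__(self):
        # internal state -> list of observed caregiver feature dicts
        self.observations = defaultdict(list)

    def observe(self, robot_state, caregiver_features):
        """Record caregiver behavior seen while the robot is in robot_state."""
        self.observations[robot_state].append(caregiver_features)

    def express(self, robot_state):
        """Express a state by averaging the caregiver behavior seen with it."""
        seen = self.observations[robot_state]
        return {k: sum(f[k] for f in seen) / len(seen) for k in seen[0]}

# An empathetic caregiver responds to distress slowly and gently...
learner = ExpressionLearner()
learner.observe("distress", {"speech_rate": 0.2, "movement_speed": 0.3})
learner.observe("distress", {"speech_rate": 0.4, "movement_speed": 0.1})
print(learner.express("distress"))  # slow, sad-looking expression

# ...whereas the same distress would come out looking "angry" if the
# caregiver had scolded with fast speech and sharp movements instead.
```

The key property is that nothing about “sadness” or “anger” is hard-coded: the mapping from internal state to outward expression is entirely a product of the caregiver’s behavior.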
We could conduct similar experiments with various types of positive emotions. If a caregiver expresses calm and peaceful happiness to a robot in a flourishing state, this might lead the robot to express flourishing in the same relaxed, calm way. A caregiver who expresses more energetic, boisterous joy could produce a robot that expresses flourishing in a more intense, high-energy manner. Looking at the world around us, we can see families, households, and even cultures that demonstrate how human emotional expression can vary in similar ways.
Can A Robot Love?
In an article entitled “Can Robots Fall in Love, and Why Would They?,” leading AI philosopher Daniel Dennett described two possibilities for creating robots with emotions. The first is that an AI could be programmed to act like it was in love and, on the surface, appear to have emotions. Essentially, “a robot could fake love.”(h)
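Dennett’s first route can be sketched in a few lines: a fixed lookup table of if-then rules mapping events to emotion labels, in the spirit of OCC-style appraisal rules. The events and rules below are invented examples:

```python
# Minimal sketch of the rule-based ("fake it") route: pre-programmed
# if-then rules mapping events to emotions. All entries are invented examples.
EMOTION_RULES = {
    "owner_arrives_home": "happy",
    "cut_off_in_traffic": "angry",
    "battery_low": "sad",
}

def appraise(event):
    """Look up the pre-programmed emotional response to an event."""
    return EMOTION_RULES.get(event, "neutral")

print(appraise("owner_arrives_home"))  # -> happy
print(appraise("unknown_event"))       # -> neutral
```

However exhaustive the rule table, the emotion here is only a surface label attached to an event, which is exactly why Dennett calls it faking.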
The second and less obvious route is to create an architecture less like current computers and more like the human brain. This system would not be a hierarchy controlled from the top down; instead, behaviors would emerge “democratically” from low-level, competing elements, much like they do in biological nervous systems. With this structure, Dennett writes, you could potentially create a computer that truly loved, though doing so would be “beyond hard.”
While still in its early stages, my research offers an approach to building emotional robots that follows Dennett’s “emergent” model. Rather than hard-coding emotions into a robot using fixed rules, we might be able to create a robot with an emotional architecture similar to a human’s, wherein first-hand experiences with emotions like happiness and love teach the robot how to express these emotions in the future.
Emotions color every human interaction and are the foundation for living in a social world. As robots become a more integral part of our daily lives, we will benefit if they can understand and respond to our emotional states. Emotional robots may be able to communicate with us in ways we intuitively understand, for example showing a sluggish walk when their battery needs recharging, instead of a confusing panel of lights and beeps. The ultimate goal is not necessarily to create robots that can fall in love or fulfill all our human emotional needs, but to build machines that can interact with us in a more human way, rather than requiring us to behave more like machines.
- Tsai, J. L. (2007). “Ideal Affect: Cultural Causes and Behavioral Consequences.” Perspectives on Psychological Science, 2(3): 242-259.
- Hess, U., M.G. Beaupré, and N. Cheung. (2002). “Who to whom and why – cultural differences and similarities in the function of smiles.” In M. Abel, An empirical reflection on the smile (pp. 187-216). New York: The Edwin Mellen Press.
- Elfenbein, H. A., and N. Ambady. (2003). “Universals and cultural differences in recognizing emotions.” Current Directions in Psychological Science, 12: 159-164. Juslin, P. N., and P. Laukka. (2003). “Communication of emotions in vocal expression and music performance: Different channels, same code?” Psychological Bulletin, 129(5): 770.
- Sabatini, M.J., P. Ebert, D.A. Lewis, P. Levitt, J.L. Cameron, and K. Mirnics. (2007). “Amygdala gene expression correlates of social behavior in monkeys experiencing maternal separation.” Journal of Neuroscience, 27(12): 3295-3304.
- Fries, A.B., and S.D. Pollak. (2004). “Emotion understanding in postinstitutionalized Eastern European children.” Development and Psychopathology, 16(2): 355-369.
- Damasio, A. (1994). Descartes’ error: Emotion, reason, and the human brain. New York: Grossett/Putnam.
- Fernald, A. (1989). “Intonation and communicative intent in mothers’ speech to infants: Is the melody the message?” Child Development, 1497-1510.
- For a survey of the field of developmental robotics, see: Max Lungarella, Giorgio Metta, Rolf Pfeifer, and Giulio Sandini. (2003). “Developmental robotics: a survey.” Connection Science, 15(4): 151-190. Minoru Asada, Koh Hosoda, Yasuo Kuniyoshi, and Hiroshi Ishiguro. (2009). “Cognitive developmental robotics: A survey.” IEEE Transactions on Autonomous Mental Development, 1(1): 12-34.
- Juslin, P.N., and J.A. Sloboda. (2010). Handbook of music and emotion: theory, research, applications. Oxford, UK: Oxford University Press.
- Cornelius, Randolph R. (2000). Theoretical approaches to emotion. Presented at ISCA Tutorial and Research Workshop (ITRW) on Speech and Emotion. Newcastle, UK: Sept. 5-7.
- Kitamura, C., and C. Lam. (2009). “Age-specific preferences for infant-directed affective intent.” Infancy, 14(1): 77-100.
- Calkins, S. D. (2002). “Does aversive behavior during toddlerhood matter? The effects of difficult temperament on maternal perceptions and behavior.” Infant Mental Health Journal, 381-394.
- Ortony, A., G. Clore, and A. Collins. (1988). The cognitive structure of emotions. Cambridge, UK: Cambridge University Press.
- (a) Developmental robotics sits at the intersection of life sciences like developmental psychology, neuroscience, and evolutionary biology and engineering sciences such as computer science and robotics. It is a relatively new field that started in the 1990s.8
- (b) Expression is just one of the components of emotion. Many researchers define emotion as a reaction to an event consisting of the following simultaneous physical and mental components: expression, physiological state, subjective feeling, cognitive processing, and action. For example, if I injure myself, I may yelp in pain (expression), my heart pumping faster in distress (physiological state), and I may feel afraid (subjective feeling), yet tell myself not to panic (cognitive processing) as I grab the phone to call for help (action). All of these elements together constitute my emotional reaction to the experience of being injured.9
- (c) Paul Ekman, for example, advocates a biological view of emotion called Darwinian theory.10 Based on field research with an isolated tribe in Papua New Guinea, he developed the idea that there are six basic emotions: happiness, sadness, anger, fear, disgust, and surprise. Despite being cut off from other cultures, the villagers he studied could match photos of standard facial expressions (like sadness) to the situation that would cause the associated emotion (such as death of a loved one). However, the idea of universal emotions is still hotly debated.
- (d) The idea that emotion is learned is called the social constructivist theory of emotion. It is the youngest of four major theories explaining the development of human emotion.10
- (e) Our experience tells us, of course, that we can feel mental distress even if our physical bodies are running perfectly fine. Damasio’s definition allows for a mind-body distinction: the mental representation of the body (the brain thinks we’re hot) may or may not correspond to the physical reality of the body (we are actually hot), but either way, we may be in distress.6
- (f) Developmental psychologists have documented how parental treatment of infants varies over time.11 Children up to three months old are exposed mostly to comforting faces and voices – when they show their distress by crying, their caregivers soothe them with vocal patterns similar to sadness. At six months, parents start to express praise, which looks and sounds like happiness, as children learn new skills. At nine months, when infants typically start to crawl, caregivers produce more prohibitive voices and faces, the patterns of which resemble anger.
- (g) Studies of humans have shown that caregiver responses to infants lead them to express emotion in different ways. In one short-term longitudinal study, researchers found that infants who experienced negative parenting showed higher anger and frustration levels than those who experienced positive parenting.12
- (h) One of the most famous examples of this approach to robot emotion is the “OCC Model” proposed by Andrew Ortony, Gerald L. Clore, and Allan Collins in their seminal book The Cognitive Structure of Emotions.13 This logic-based approach aims to create a complex set of rules telling a machine how it should “feel” in any given scenario. For example, rules might include, “If I am cut off by another car while driving, I will be angry,” or, “If my owner arrives home from work, I will be happy.”