
Part 2: AI's Creative Future [A Series]

Robotic Emotions: Will AI Ever Understand Emotions the Way We Do?

In the first installment of our AI series, we defined what AI is and explored why so many people are fearful of its rapid development. (You can read Part 1 here.) Now we'll explore AI's emotional capability.

Artificial intelligence can do a lot of impressive things that were formerly reserved for humans. It drives cars and trucks, transcribes our speech, translates foreign languages, recognizes our faces, and navigates our trips, among many other functions. It's quickly automating several areas of our lives, at least those that are straightforward and require some level of logic or strategy. These activities can be translated to code; their problems can be solved through data analysis. But what about emotions?

Emotion is "the affective aspect of consciousness": a mental reaction to specific stimuli that is unique to the individual and that produces physical and mental changes in the person experiencing it. Can AI experience emotion if it doesn't have consciousness?

For example, when you ask Siri or Alexa a question on your mobile or home device, she isn't listening to determine whether you're in a good or bad mood from the sound of your voice. She's listening for cues (your inquiry) that indicate a problem, coupled with context such as your GPS location. She then interprets that problem against her data and produces an answer. But what if she could read your mood and then tailor her response to it? What if AI-enabled devices could conduct sentiment analysis (i.e., recognize and respond to our emotions)?
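As a rough illustration of the "read the mood" half of that idea, here is a minimal sketch using the open-source VADER sentiment model from NLTK. It's an assumption chosen for simplicity, not the technology behind Siri, Alexa, or any product named in this article, and it works on text rather than voice.

```python
# Minimal text sentiment-analysis sketch using NLTK's VADER model.
# Illustrative only: consumer assistants use far more sophisticated,
# proprietary pipelines than this.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()

for utterance in [
    "I love this, thank you so much!",
    "This is the third time it has failed. I'm furious.",
]:
    scores = analyzer.polarity_scores(utterance)
    # 'compound' ranges from -1 (very negative) to +1 (very positive)
    if scores["compound"] > 0.05:
        mood = "positive"
    elif scores["compound"] < -0.05:
        mood = "negative"
    else:
        mood = "neutral"
    print(f"{mood:>8}: {utterance}")
```

A device that could run something like this on your request, and then adjust its answer accordingly, would be taking the first step toward the tailored responses described above.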

IBM is currently using its Watson Tone Analyzer to try to make this a reality. The Tone Analyzer scans writing, tweets, reviews, and voice tones to determine a customer's mood. Based on that mood, it suggests a specific set of customer service responses that an agent can use to navigate a conversation. Companies can use the Tone Analyzer to improve call center experiences, ensuring agents correctly identify customer cues and respond appropriately, and to develop best practices for future calls. Beyond Verbal, a self-described "emotions analytics" company, is working in this same space.

Microsoft's Project Oxford is working toward the same goal, but through facial recognition software and an emotion-detecting API.

However, despite these groundbreaking projects, The Independent special contributor Leandro Minku thinks we're still decades away from speech and facial recognition closing that gap. He argues that AI can scan and read emotion, but it can't yet successfully interact with it; understanding and context are the biggest problems to solve. He goes on to cite an example in which AI studied human emotion to draw conclusions.

In the example, AI studied students' emotions to determine which ones needed help. Through facial analysis, machines could detect frustration and engagement levels and thus determine whether a student found the work too easy, too difficult, or just right. With this information, researchers could adjust teaching and learning techniques and customize each student's experience.

In this instance, AI successfully read emotions, but no interaction took place. As with IBM's Watson Tone Analyzer, emotions were detected and recommendations were made, but there was no direct interaction between the machine and the subject (e.g., the machine automatically adjusting the learning experience without human intervention, or conducting a dialogue with the student based on its findings). To Minku's point, solving this interaction problem is what's necessary to move forward. A toy sketch of what that missing step might look like follows.
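Here is a deliberately simplified sketch of "closing the loop": detection followed by an automated adjustment. The detect_frustration function is a hypothetical placeholder for a real facial-analysis model (none of the systems discussed above are being reproduced here); only the response logic is the point.

```python
# Toy sketch of detection plus automated response ("closing the loop").
# detect_frustration() is a hypothetical stand-in for a trained
# facial-expression model; the examples in this article stop at
# detection, and this sketch only illustrates the missing interaction step.
from dataclasses import dataclass


@dataclass
class LessonState:
    difficulty: int = 3  # 1 (easiest) to 5 (hardest)


def detect_frustration(video_frame) -> float:
    """Placeholder: a real system would run a facial-analysis model here
    and return a frustration score between 0 and 1."""
    return 0.8  # hard-coded for illustration


def adjust_lesson(state: LessonState, frustration: float) -> LessonState:
    # The interaction step: act on the detected emotion automatically.
    if frustration > 0.7 and state.difficulty > 1:
        state.difficulty -= 1   # student struggling: ease off
    elif frustration < 0.2 and state.difficulty < 5:
        state.difficulty += 1   # student coasting: push harder
    return state


state = adjust_lesson(LessonState(), detect_frustration(video_frame=None))
print(f"New difficulty level: {state.difficulty}")
```

Even this trivial rule goes one step beyond the studies described above, because the machine acts on what it detects instead of handing a recommendation to a person.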

The 2015 film Ex Machina explored this topic through surrealism and horror. In the movie, an advanced female robot used human emotion to manipulate and attack humans, eventually gaining her freedom. It was an extreme example of what could happen should AI ever gain the ability to interact with human emotion in a "human" way, but the movie also reflected some of the fears people have about giving robots feelings.

A March study reported by Science magazine revealed that people felt distress when they suspected robots could mimic their emotions.

However, perhaps it's impossible for AI to possess and use emotions as we know them. For us, emotions are psychological. They cause behavioral and physiological responses. They're woven into the fiber of our being, and they're produced through incredibly complex processes in our brains. AI can study data and mimic what we do, but given how much we're still learning about our own brains, it seems unlikely that AI could accurately reproduce and respond to our emotions with the depth and breadth of even a teenager or young adult.

If AI's "emotional response" is merely the result of studying data, does it qualify as actual feeling? And furthermore, if a particular AI is developed by engineers who represent a narrow group, will the AI inadvertently inherit their biases? Are the people creating AI representative of a broad swath of people (across race, religion, sexual orientation, gender, etc.) and experiences? How will AI's "emotional response" work for all people if all people haven't been considered or included on the front end of its development?

Human emotion is complex, and there are many factors to consider when attempting to mimic it in robots. As IBM and several studies have shown, AI can recognize emotions (with modest accuracy) and suggest ways for humans to continue the interaction effectively. But to produce true emotion without consciousness? To conduct an emotional interaction with a human on its own, without a person leading the way? That may not be possible for a long time. Maybe not ever.

Read Part 3.

