A basis of the hypothesis was that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". This was the ambition of the Dartmouth Summer Research Project on Artificial Intelligence held in 1956. The project was conducted by John McCarthy and Marvin Minsky, founding fathers of the Artificial Intelligence (AI) concept. At that time, AI was about transposing into a machine a huge amount of human intelligence, including language, vision and reasoning. AI was about designing thinking machines, but emotions were not up for consideration. However, some fields, such as robotics, demanded some consideration of emotions in their integration and communication with their human users.
Sixty-two years after the Dartmouth summer project, where are emotions in today's artificial intelligence?
The question of whether AI machines can have their own emotions is open to debate. AI and neuroscience researchers agree that current forms of AI cannot have their own emotions: they have no body, no hormones, no memory of their interactions with the world, and have not gone through the process of learning about life. They have no emotional memory equivalent to that of humans, built up from childhood onward and extended through the learning of life in adolescence and adulthood.
Artificial General Intelligence (AGI) is the most recent AI concept, described as an artificial intelligence capable of carrying out many different activities, as humans do.
However, as of 2017, there is still no operational AGI, and it will take many years to reach this level of AI. Current studies around AGI do not extend to emotional capacity. Start-ups working on AGI aim to create expert systems capable of solving very complex problems while maintaining rational reasoning; ultimately, the point is precisely not to show any emotion.
On the other hand, huge breakthroughs have been achieved in the AI field in designing machines that can mechanically interpret our emotions without having their own, or that interact with humans by simulating empathy. This communication nevertheless remains unbalanced. The detection of human emotion is a fairly mature field. It relies on various sensors: cameras, microphones and biometric devices that capture our signals. The recognition of emotions in front of a camera is a long-established technique.
It is standardized by the Facial Action Coding System (FACS), created in 1978 by the American psychologists Paul Ekman and Wallace Friesen.
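To make this concrete, here is a minimal, illustrative sketch of how FACS Action Units (AUs) can be mapped to a few basic emotions. The AU combinations follow commonly cited Ekman-style prototypes (for example, AU6 plus AU12 for a genuine smile); the fixed lookup table and function names are simplifications for illustration, since real systems train classifiers on AU intensities rather than using hard-coded rules.

```python
# Illustrative sketch: mapping detected FACS Action Units (AUs) to basic emotions.
# The AU combinations below are commonly cited Ekman/Friesen-style prototypes;
# a production system would use a trained classifier, not a fixed lookup table.

# Small subset of Action Units: number -> facial muscle movement
ACTION_UNITS = {
    1: "inner brow raiser",
    4: "brow lowerer",
    6: "cheek raiser",
    12: "lip corner puller",
    15: "lip corner depressor",
}

# Prototypical AU combinations for a few basic emotions
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
}

def guess_emotion(detected_aus):
    """Return the first emotion whose prototype is contained in the detected AUs."""
    for emotion, prototype in EMOTION_PROTOTYPES.items():
        if prototype <= detected_aus:
            return emotion
    return "neutral / unknown"

print(guess_emotion({6, 12, 25}))  # -> "happiness"
```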
The French company Datakalab uses technology that simultaneously analyzes several faces, such as the spectators of an event or a conference. This solution can determine the interest of an audience in a presentation, or compare the interest generated by two speakers, as in the May 2017 French presidential election debate between Emmanuel Macron and Marine Le Pen. The model can also assess the level of customer stress. Datakalab uses not only video but also information from voices and, optionally, biometric wristbands.
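As an illustration of this kind of multi-face analysis, the following sketch detects the faces in a frame with OpenCV and aggregates a per-face score into an audience-level "interest" measure. The face detector is a standard Haar cascade shipped with OpenCV; estimate_interest is a hypothetical placeholder for a trained expression model, and the averaging rule is an assumption for the example, not Datakalab's actual method.

```python
# Sketch of multi-face audience analysis (assumes the opencv-python package).
# `estimate_interest` is a hypothetical stand-in for a trained expression model.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def estimate_interest(face_image):
    """Placeholder: a real system would score attention/valence with a trained CNN."""
    return 0.5

def audience_interest(frame):
    """Average per-face interest scores over all faces detected in one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0.0
    scores = [estimate_interest(gray[y:y + h, x:x + w]) for (x, y, w, h) in faces]
    return sum(scores) / len(scores)

# Usage: compare two speakers by averaging audience_interest over each speaking turn.
frame = cv2.imread("audience.jpg")  # hypothetical input image
if frame is not None:
    print("Audience interest:", audience_interest(frame))
```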
The French company XXII is working on a bio-inspired artificial intelligence platform for retail, security and autonomous vehicles. It exploits algorithms for the recognition of emotions and micro-expressions, and for the recognition and identification of gestures or behaviors, in order to identify assaults, falls and other dangers.
It is now also possible to capture the emotions of users by analysing their written production. This is the work of startups that analyse sentiment on social networks or the quality of CVs. The French startups Natural Talk and Cognitive Matchbox each designed an optimized call-routing model for call centres that analyses the personality and emotions of customers from their written messages in order to direct them to the best-suited agent. They take advantage of IBM Watson's natural language processing capabilities such as Personality Insights, Natural Language Understanding, Tone Analyzer, Document Conversion, Twitter Insight and Natural Language Classifier.
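As a rough illustration of how such emotion-aware routing can work, here is a hedged sketch in which analyze_tone stands in for a call to a tone or personality service (such as Watson's Tone Analyzer); the agent profiles, tone labels and scores are invented for the example and are not the startups' actual logic.

```python
# Hypothetical sketch of emotion-aware call routing.
# `analyze_tone` is a placeholder for an NLP tone/personality service.

AGENT_PROFILES = {
    "alice": {"good_with": {"anger", "frustration"}},   # de-escalation specialist
    "bob":   {"good_with": {"sadness", "confusion"}},   # patient explainer
    "carol": {"good_with": {"joy", "neutral"}},         # upsell-oriented agent
}

def analyze_tone(message):
    """Placeholder: returns tone -> confidence score for the customer message."""
    return {"anger": 0.72, "sadness": 0.10, "neutral": 0.18}

def route_message(message):
    """Send the message to the first agent whose profile covers the dominant tone."""
    tones = analyze_tone(message)
    dominant = max(tones, key=tones.get)
    for agent, profile in AGENT_PROFILES.items():
        if dominant in profile["good_with"]:
            return agent
    return "carol"  # default queue when no profile matches

print(route_message("My order is three weeks late and nobody answers!"))  # -> "alice"
```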
Once emotions have been detected, it is possible to interpret them. Emotions depend on context and culture, just like language (voice and intonation) and gestures. AI-based tools can analyse the correlation between emotions and the events that generate them. Many applications could stem from this, such as evaluating the impact of content in advertising or fiction. These techniques are most often based on machine learning. For the most complex cases, as in the unsupervised training of chatbots, training can rely on neural networks. These are able to adjust the nature of the answers to questions according to the emotional context of the dialogue between the chatbot and its users. Interpretation is also involved in evaluating the emotions generated by content such as music or other creative formats. In the case of creative content generated by AI-based tools, this creates a feedback loop between AI-based creativity and users, in order to determine the generated content with the best emotional quotient.
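The sketch below illustrates, in a deliberately simplified way, a chatbot adjusting its answer to the emotional context of the dialogue. The keyword-based sentiment score is a toy heuristic standing in for the trained (often neural) emotion models mentioned above; the word list and threshold are invented for the example.

```python
# Toy sketch: a chatbot adapts its wording to the detected emotional context.
NEGATIVE_WORDS = {"angry", "late", "broken", "refund", "unacceptable"}

def sentiment_score(utterance):
    """Crude polarity estimate: negative fraction of words flagged as hostile."""
    words = utterance.lower().split()
    return -sum(w.strip(".,!?") in NEGATIVE_WORDS for w in words) / max(len(words), 1)

def reply(utterance):
    base_answer = "Your package is scheduled for delivery tomorrow."
    if sentiment_score(utterance) < -0.1:
        # Negative emotional context: prepend an empathetic acknowledgement
        return "I am sorry for the inconvenience. " + base_answer
    return base_answer

print(reply("My parcel is late and this is unacceptable!"))
print(reply("Hi, when will my parcel arrive?"))
```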
Current AI can even react according to human emotions. Such models build on the interpretation of detected emotions. Interactive tools can also help adjust the emotional level of our own productions. For example, Google's DeepBreath tool advises users when writing responses to emails: it warns the user of an inappropriate level of aggression.
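By way of illustration, a very crude "aggression check" on an outgoing draft could look like the sketch below; the scoring rule, word list and threshold are invented for the example and bear no relation to Google's actual method.

```python
# Toy sketch of an aggression check on an outgoing email draft.
import re

AGGRESSIVE_TERMS = {"ridiculous", "incompetent", "immediately", "last time"}

def aggression_score(draft):
    """Combine shouting (all-caps words), exclamation density and hostile terms."""
    words = re.findall(r"[A-Za-z']+", draft)
    if not words:
        return 0.0
    shouting = sum(w.isupper() and len(w) > 2 for w in words) / len(words)
    exclamations = draft.count("!") / max(len(draft) / 100, 1)
    hostile = sum(term in draft.lower() for term in AGGRESSIVE_TERMS)
    return shouting + 0.1 * exclamations + 0.2 * hostile

def check_before_send(draft):
    if aggression_score(draft) > 0.3:
        print("Warning: this message may come across as aggressive.")
    else:
        print("Tone looks acceptable.")

check_before_send("This is RIDICULOUS, fix it IMMEDIATELY!!!")
```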
Finally, AI can display emotions by simulating them.
These are ways to anthropomorphize interactions with users by using their emotional codes. Synthetic speech is the most common way to convey verbal emotions. Even if progress is being made (such as with Tacotron 2, designed by Google), we are still a long way from a complete solution. Even the most advanced solutions, like Lyrebird, remain technically limited: their famous video featuring a synthetic Barack Obama, in both video and audio, left the former US President looking far too much "in control" of his emotions.
- John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4):12, 2006.
- Ekman, P., and Friesen, W. V. (1978). The facial action coding system (FACS): A technique for the measurement of facial action. Palo Alto, CA: Consulting Psychologists Press.
- CES 2018 – https://www.youtube.com/watch?v=wznM951LmjI