Researchers at the Department of Artificial Intelligence (DIA) of the Universidad Politécnica de Madrid's School of Computing (FIUPM), working with Madrid's Universidad Rey Juan Carlos, have developed an algorithm that processes 30 images per second to recognize a person's facial expression in real time and categorize it as one of six prototype expressions: anger, disgust, fear, happiness, sadness and surprise.
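As a rough illustration of that real-time loop, the Python sketch below reads frames from a camera and assigns each one of the six labels. It uses OpenCV purely for illustration (the article does not say which tools the team used), and the classifier is a hypothetical placeholder, since the actual model is not disclosed.

import cv2

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def classify_expression(frame):
    """Hypothetical stand-in for the unpublished UPM/URJC classifier."""
    return "happiness"  # fixed placeholder label, purely for illustration

cap = cv2.VideoCapture(0)  # default camera, typically around 30 frames per second
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    label = classify_expression(frame)  # one of the six prototype expressions
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("expression", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()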

Built around this facial expression recognition algorithm, the prototype processes a sequence of frontal images of a moving face and recognizes the person's facial expression. The software can be applied to video sequences captured in realistic situations, identifying the facial expression of a person seated in front of a computer screen. Although still only a prototype, it runs on a desktop computer or even a laptop.

The system analyses the face of a person sitting in front of a camera connected to a computer running the prototype, processing up to 30 images per second. It does so through several boxes, each "attached" to, or focusing on, a part of the user's face. These boxes track the user's facial movements until the expression can be determined by comparison with the expressions captured from different people in the Cohn-Kanade database (333 sequences).
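The exact layout of those boxes is not published, but the sketch below shows one plausible reading of the idea, assuming OpenCV's stock Haar cascades for face and eye detection; the actual regions and the matching method used by the prototype may well differ.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def facial_boxes(gray):
    """Return one box per tracked facial region: the whole face plus the eyes."""
    boxes = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        boxes.append(("face", (x, y, w, h)))
        roi = gray[y:y + h, x:x + w]  # search for eyes only inside the face box
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            boxes.append(("eye", (x + ex, y + ey, ew, eh)))
    return boxes

Each box could then be compared, frame by frame, against the corresponding regions of the stored Cohn-Kanade expression sequences.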

The system's success rate on the Cohn-Kanade database is 89%. It also works under adverse conditions, where changes in ambient lighting, frontal facial movements or camera displacement produce major changes in facial appearance.
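As a back-of-the-envelope check of that figure (the exact evaluation protocol is an assumption here), an 89% success rate over the 333 Cohn-Kanade sequences corresponds to roughly 296 correctly classified sequences:

def success_rate(predictions, ground_truth):
    """Fraction of sequences whose predicted expression matches the label."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# 296 correct out of 333 sequences gives 296 / 333 ≈ 0.889, i.e. about 89%.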

This software has a range of applications: advanced human-computer interfaces, improved interaction with e-commerce consumers, and metaverse avatars with an unprecedented ability to reflect the person they represent.

The software can enrich advanced human-computer interfaces by enabling the construction of avatars that genuinely simulate a person's facial expression, an exciting prospect for sectors like the video games industry.

Electronic commerce could also benefit from this technology. During the buying process, the computer would be able to identify a potential buyer's gestures, determine whether or not they intend to make a purchase, and even gauge how satisfied they are with a product or service, helping to reduce the ambiguities of spoken or written language.

Applied to metaverses like Second Life, the software would also enable the avatars representing system users to act out the users' feelings as captured through their facial expressions.

Although there are some facial analysis products on the market, none specifically targets the analysis of users' facial expressions. While most similar systems developed by other researchers address only part of the expression recognition task, the developed prototype does the whole job (a sketch of how the stages might fit together follows the list):

1) it locates and tracks the face in the image, using an algorithm that works despite changes in illumination or user movement;

2) it classifies the user's facial expression; and

3) it incorporates an original algorithm that calculates the likely evolution of the analysed user's facial expressions.
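The three stage names below come from the list above, but every class, method and data structure is hypothetical, since the article does not describe the implementation; stage 3 in particular is reduced to a naive stand-in.

from dataclasses import dataclass, field

@dataclass
class ExpressionPipeline:
    history: list = field(default_factory=list)  # recent expression labels

    def locate_face(self, frame):
        """Stage 1: find and track the face despite lighting changes or movement."""
        return frame  # placeholder: a real system would return the tracked face region

    def classify(self, face):
        """Stage 2: map the tracked face to one of the six prototype expressions."""
        return "surprise"  # placeholder label, purely for illustration

    def predict_next(self):
        """Stage 3: estimate the likely evolution of the expression."""
        # Naive stand-in: assume the most frequent recent label will persist.
        if not self.history:
            return None
        return max(set(self.history), key=self.history.count)

    def step(self, frame):
        face = self.locate_face(frame)
        label = self.classify(face)
        self.history.append(label)
        return label, self.predict_next()

One step would run per camera frame; the returned pair is the current expression and the predicted next one.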