Maria Antony Kodiyan, Nikitha Benny, Oshin Maria George, Tojo Joseph, Jisa David
Human communication has two main aspects: verbal (auditory) and non-verbal (visual). Facial expressions, body movements, and physiological reactions are the basic units of non-verbal communication. Facial expression recognition has attracted increasing attention in the computer vision, pattern recognition, and human-computer interaction research communities, and expression recognition is increasingly used in the mobile and robotics industries. This project deals with the recognition of certain facial expressions in real time, which involves overcoming several obstacles: correctly identifying a face, tracking key points on the face, and mapping those points to the right expression. Through the cascaded use of feature detection techniques, adaptive histogram equalization, contour mapping, and filtering, this application locates 8 key points on the face and compares them to a calibrated neutral-face model to extract the current expression. The application can be used to indicate an emotion to people who cannot interpret expressions, or to react to the detected expression, for example by playing music based on the user's current mood, detecting whether the user is lying, or allowing robots to interact with the user in a human-like way.
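The comparison of tracked key points against a calibrated neutral-face model, as described above, could be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the point ordering, displacement features, thresholds, and decision rules below are all assumptions introduced for clarity.

```python
import numpy as np

# Hypothetical calibrated neutral-face model: 8 key points as (x, y)
# pixel coordinates. The ordering (brows, eyes, mouth corners, lip
# midpoints) is an assumption for this sketch.
NEUTRAL = np.array([
    [30, 40], [70, 40],   # left/right eyebrow
    [30, 55], [70, 55],   # left/right eye
    [40, 80], [60, 80],   # left/right mouth corner
    [50, 75], [50, 88],   # upper/lower lip midpoints
], dtype=float)

def classify(points):
    """Map 8 tracked (x, y) key points to a coarse expression label
    by comparing their displacement from the calibrated neutral face."""
    d = np.asarray(points, dtype=float) - NEUTRAL
    brow_raise = -d[:2, 1].mean()      # eyebrows moving up (y decreases)
    mouth_open = d[7, 1] - d[6, 1]     # lip midpoints moving apart
    corner_lift = -d[4:6, 1].mean()    # mouth corners moving up
    # Illustrative thresholds; a real system would calibrate these.
    if brow_raise > 5 and mouth_open > 5:
        return "surprise"
    if corner_lift > 3:
        return "happy"
    if corner_lift < -3:
        return "sad"
    return "neutral"
```

For example, feeding the neutral positions back in yields "neutral", while raising the mouth corners by a few pixels yields "happy". In the real application the key points would come from the feature detection and contour mapping stages after adaptive histogram equalization, rather than being supplied by hand.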