Abstract: Stress is the body's natural reaction to external and internal stimuli. Although natural, prolonged exposure to stressors can contribute to serious health problems. These reactions manifest not only physiologically but also psychologically, translating into emotions and facial expressions. Given this relationship between the experience of stressful situations and the emotions expressed in response, a system was developed to classify facial expressions and thereby act as a stress detector. The proposed solution consists of two main blocks: a convolutional neural network that classifies facial expressions, and an application that uses this model to classify real-time images of the user's face and determine whether it shows signs of stress. The application captures real-time images from the webcam, extracts the user's face, classifies the facial expression shown, and uses these classifications to assess whether the user exhibits signs of stress over a given time interval. As soon as the application detects signs of stress, it notifies the user. The classification model was built with transfer learning combined with fine-tuning, leveraging the pre-trained networks VGG16, VGG19, and Inception-ResNet V2 to solve the problem at hand. Two classifier architectures were also tried in the transfer learning process. After several experiments, VGG16 combined with a classifier consisting of a convolutional layer proved to be the best-performing candidate for classifying stress-related emotions, achieving an MCC of 0.8969 on the test images of the KDEF dataset, 0.5551 on the Net Images dataset, and 0.4250 on CK+.
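The abstract reports model quality as MCC (the Matthews correlation coefficient), which is well suited to the imbalanced classes common in facial-expression datasets. As a minimal illustration of how that metric is computed for a multiclass problem, here is a pure-Python sketch of its multiclass generalisation (the R_K statistic); the function name and the toy label sequences are illustrative assumptions, not taken from the source.

```python
import math


def multiclass_mcc(y_true, y_pred, num_classes):
    """Multiclass Matthews correlation coefficient (R_K statistic),
    computed from the confusion matrix. Returns a value in [-1, 1];
    1.0 means perfect agreement, 0.0 is no better than chance."""
    # Build the confusion matrix: rows = true class, columns = predicted class.
    C = [[0] * num_classes for _ in range(num_classes)]
    for t_lbl, p_lbl in zip(y_true, y_pred):
        C[t_lbl][p_lbl] += 1

    s = len(y_true)                                   # total number of samples
    c = sum(C[k][k] for k in range(num_classes))      # correctly classified samples
    t = [sum(C[k]) for k in range(num_classes)]       # times each class truly occurred
    p = [sum(C[k][j] for k in range(num_classes))     # times each class was predicted
         for j in range(num_classes)]

    numerator = c * s - sum(pk * tk for pk, tk in zip(p, t))
    denominator = math.sqrt((s * s - sum(pk * pk for pk in p)) *
                            (s * s - sum(tk * tk for tk in t)))
    # By convention, a degenerate denominator (e.g. all predictions in one
    # class) yields an MCC of 0.
    return numerator / denominator if denominator else 0.0
```

For two classes this reduces to the familiar binary MCC; scikit-learn's `matthews_corrcoef` implements the same statistic and could be used instead in practice.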