EBookClubs

Read Books & Download eBooks Full Online


Book Human Emotional State Recognition Using 3d Facial Expression Features

Download or read book Human Emotional State Recognition Using 3d Facial Expression Features written by Yun Tie and published by LAP Lambert Academic Publishing. This book was released on 2011-06 with total page 148 pages. Available in PDF, EPUB and Kindle. Book excerpt: In recent years there has been growing interest in improving all aspects of the interaction between humans and computers. Emotion recognition is a new technique, based on affective computing, that is expected to significantly improve the quality of human-computer interaction systems and communications. Most existing works address this problem using 2D features, but these are sensitive to head pose, clutter, and variations in lighting conditions. In light of such problems, two 3D visual-feature-based approaches are presented: 3D Gabor features and 3D elastic body spline (EBS) features extracted from video sequences. The most significant contributions of this work are the automatic detection and tracking of fiducial points in video sequences to construct a generic 3D face model, and the introduction of EBS deformation features for emotion recognition. These methods open a new research direction for human-computer communication, with applications to security systems, intelligent homes, learning environments, and educational software.
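The 3D Gabor features mentioned above build on the standard Gabor filter, a Gaussian-windowed sinusoid. As a rough illustration only (not the book's implementation), a minimal 2D Gabor kernel and a small orientation-indexed feature vector can be sketched in NumPy; the kernel size and parameters here are arbitrary, and a 3D variant would add a temporal axis:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor kernel: a Gaussian-windowed sinusoid.
    A 3D variant (as in the book) would add a temporal axis."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

# Filtering a patch at several orientations yields a Gabor feature vector
patch = np.random.rand(31, 31)
feats = [np.sum(patch * gabor_kernel(31, 8.0, t, 4.0))
         for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

In a full pipeline such responses would be computed around each tracked fiducial point rather than over a random patch.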

Book Human Emotion Recognition from Face Images

Download or read book Human Emotion Recognition from Face Images written by Paramartha Dutta and published by Springer Nature. This book was released on 2020-03-26 with total page 276 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book discusses human emotion recognition from face images using different modalities, highlighting key topics in facial expression recognition, such as grid formation, distance signatures, shape signatures, texture signatures, feature selection, classifier design, and the combination of signatures to improve emotion recognition. The book explains how six basic human emotions can be recognized in various face images of the same person, as well as those available from benchmark face image databases like CK+, JAFFE, MMI, and MUG. The authors present the concept of signatures for different characteristics such as distance, shape, and texture, and describe the use of associated stability indices as features, supplementing the feature set with statistical parameters such as range, skewness, kurtosis, and entropy. In addition, they demonstrate that experiments with such feature choices offer impressive results, and that performance can be further improved by combining the signatures rather than using them individually. There is an increasing demand for emotion recognition in diverse fields, including psychotherapy, biomedicine, and security in government, public and private agencies. This book offers a valuable resource for researchers working in these areas.
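The statistical parameters named above (range, skewness, kurtosis, entropy) can all be computed directly from a 1-D signature vector. A minimal NumPy sketch using the standard moment-based definitions (the book's exact variants and histogram binning may differ):

```python
import numpy as np

def signature_stats(sig):
    """Range, skewness, kurtosis and entropy of a 1-D signature vector,
    used as supplementary statistical features. Formulas are the usual
    moment-based definitions; binning for entropy is illustrative."""
    sig = np.asarray(sig, dtype=float)
    mu, sd = sig.mean(), sig.std()
    rng = sig.max() - sig.min()
    skew = np.mean((sig - mu)**3) / sd**3
    kurt = np.mean((sig - mu)**4) / sd**4
    # Shannon entropy of a normalized histogram of the signature values
    hist, _ = np.histogram(sig, bins=10)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return np.array([rng, skew, kurt, entropy])

stats = signature_stats(np.array([1.0, 2.0, 2.5, 3.0, 10.0]))
```

Concatenating such statistics across the distance, shape and texture signatures would give one combined feature vector per face image.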

Book Affective Computing

    Book Details:
  • Author : Jimmy Or
  • Publisher : IntechOpen
  • Release : 2008-05-01
  • ISBN : 9783902613233
  • Pages : 452 pages

Download or read book Affective Computing written by Jimmy Or and published by IntechOpen. This book was released on 2008-05-01 with total page 452 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, in the second section (Chapters 8 to 11) we present research on the perception and generation of emotional expressions using full-body motions. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. In the last section of the book (Chapters 17 to 22) we present applications related to affective computing.

Book MPEG 4 Facial Animation

Download or read book MPEG 4 Facial Animation written by Igor S. Pandzic and published by John Wiley & Sons. This book was released on 2003-01-31 with total page 328 pages. Available in PDF, EPUB and Kindle. Book excerpt: Provides several examples of applications using the MPEG-4 Facial Animation standard, including video and speech analysis. Covers the implementation of the standard on both the encoding and decoding side. Contributors include individuals instrumental in the standardization process.

Book Visual Affect Recognition

Download or read book Visual Affect Recognition written by Ioanna-Ourania Stathopoulou and published by IOS Press. This book was released on 2010 with total page 268 pages. Available in PDF, EPUB and Kindle. Book excerpt: It is generally known that human faces, as well as body motions and gestures, provide a wealth of information about a person, such as age, race, sex and emotional state. This monograph primarily studies the perception of facial expressions of emotion, and secondarily of motion and gestures, with the purpose of developing a fully automated visual affect recognition system for use in modes of human/computer interaction. The book begins with a survey of the literature on emotion perception, followed by a description of empirical studies conducted with human participants and the construction of a face image database. On the basis of this work, a visual affect recognition system was developed, consisting of two modules: a face detection subsystem and a facial expression recognition subsystem. Details of this system are demonstrated and analyzed, and extensive performance evaluations and test results are provided. Finally, current research avenues leading to visual affect recognition via analysis of body motion and gestures are also discussed.

Book 3D Facial Expressions Recognition Using Shape Analysis and Machine Learning

Download or read book 3D Facial Expressions Recognition Using Shape Analysis and Machine Learning written by Ahmed Maalej and published by . This book was released on 2012 with total page 127 pages. Available in PDF, EPUB and Kindle. Book excerpt: Facial expression recognition is a challenging task which has received growing interest within the research community, impacting important applications in fields related to human-machine interaction (HMI). Toward building human-like, emotionally intelligent HMI devices, scientists are trying to include the essence of the human emotional state in such systems. The recent development of 3D acquisition sensors has made 3D data more available, and this kind of data alleviates the problems inherent in 2D data, such as illumination, pose and scale variations as well as low resolution. Several 3D facial databases are publicly available for researchers in the field of face and facial expression recognition to validate and evaluate their approaches. This thesis deals with the facial expression recognition (FER) problem and proposes an approach based on shape analysis to handle both static and dynamic FER tasks. Our approach includes the following steps: first, a curve-based representation of the 3D face model is proposed to describe facial features. Then, once these curves are extracted, their shape information is quantified using a Riemannian framework. We end up with similarity scores between different local facial shapes, constituting feature vectors associated with each facial surface. Afterwards, these features are used as inputs to machine learning and classification algorithms to recognize expressions. Exhaustive experiments are conducted to validate our approach, and results are presented and compared with the achievements of related work.
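The shape analysis referenced above compares facial curves independently of their position and scale. As a greatly simplified stand-in (the thesis uses a proper Riemannian elastic metric, not the plain Euclidean residual used here), one can normalize two equally sampled 3D curves and score their difference:

```python
import numpy as np

def normalize_curve(c):
    """Remove translation and scale from an (N, 3) sampled curve so that
    only shape remains - a simplified stand-in for the thesis' Riemannian
    framework, which additionally handles elastic re-parameterization."""
    c = c - c.mean(axis=0)
    norm = np.linalg.norm(c)
    return c / norm if norm > 0 else c

def shape_score(c1, c2):
    """Dissimilarity between two equally sampled facial curves (0 = same shape)."""
    return float(np.linalg.norm(normalize_curve(c1) - normalize_curve(c2)))

t = np.linspace(0.0, 1.0, 50)
curve_a = np.stack([t, t**2, np.zeros_like(t)], axis=1)
curve_b = 2.0 * curve_a + 5.0   # same shape, different scale and translation
```

Under this normalization `curve_a` and `curve_b` score as identical shapes, which is the invariance property the curve-based representation relies on.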

Book Emotion Recognition

Download or read book Emotion Recognition written by Amit Konar and published by John Wiley & Sons. This book was released on 2015-01-27 with total page 580 pages. Available in PDF, EPUB and Kindle. Book excerpt: A timely book containing foundations and current research directions on emotion recognition by facial expression, voice, gesture and biopotential signals. This book provides a comprehensive examination of the research methodology of different modalities of emotion recognition. Key topics of discussion include facial expression, voice and biopotential signal-based emotion recognition. Special emphasis is given to feature selection, feature reduction, classifier design and multi-modal fusion to improve the performance of emotion classifiers. Written by several experts, the book covers many tools and techniques, including dynamic Bayesian networks, neural nets, hidden Markov models, rough sets, type-2 fuzzy sets, support vector machines and their applications in emotion recognition by different modalities. The book ends with a discussion of emotion recognition in the automotive field to determine the stress and anger of drivers, which are responsible for degradation of their performance and driving ability. There is an increasing demand for emotion recognition in diverse fields, including psychotherapy, biomedicine and security in government, public and private agencies. The importance of emotion recognition has been given priority by industries including Hewlett Packard in the design and development of next-generation human-computer interface (HCI) systems.
Emotion Recognition: A Pattern Analysis Approach will be of great interest to researchers, graduate students and practitioners, as the book:
  • Offers both foundations and advances on emotion recognition in a single volume
  • Provides a thorough and insightful introduction to the subject by utilizing computational tools from diverse domains
  • Inspires young researchers to prepare for their own research
  • Demonstrates directions for future research through new technologies, such as the Microsoft Kinect and EEG systems

Book Facial Analytics for Emotional State Recognition

Download or read book Facial Analytics for Emotional State Recognition written by Konstantinos Papazachariou and published by . This book was released on 2017 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: For more than 75 years, social scientists have studied human emotions. While numerous theories have been developed about the provenance and number of basic emotions, most agree that they can be grouped into six categories: anger, disgust, fear, joy, sadness and surprise. To evaluate emotions, psychologists have focused their research on facial expression analysis. In recent years, progress in the field of digital technologies has steered researchers in psychology, computer science, linguistics, neuroscience, and related disciplines towards computer systems that analyze and detect human emotions. These algorithms are usually referred to in the literature as facial emotion recognition (FER) systems. In this thesis, two different approaches are described and evaluated for recognizing the six basic emotions automatically from still images. An effective face detection scheme, based on color techniques and the well-known Viola and Jones (VJ) algorithm, is proposed for localizing the face and facial characteristics within an image. A novel algorithm which exploits the coordinates of the eyes' centers is applied to align the detected face. To reduce the effects of illumination, homomorphic filtering is applied to the face area. Three regions (mouth, eyes and glabella) are localized and further processed for texture analysis. Although many methods have been proposed in the literature to recognize emotion from the human face, they are not designed to handle partial occlusions and multiple faces. Therefore, a novel algorithm that extracts information through texture analysis from each region of interest is evaluated.
Two popular techniques (histograms of oriented gradients and local binary patterns) are utilized to perform texture analysis on the abovementioned facial patches. By evaluating several combinations of their principal parameters and two classification techniques (support vector machines and linear discriminant analysis), three classifiers are proposed. These three models are enabled depending on the regions' availability. Although both classification approaches showed impressive results, LDA proved to be slightly better, especially regarding the amount of data to be managed. Therefore, the final models, which were used for comparison purposes, were trained using LDA classification. Experiments using the Cohn-Kanade plus (CK+) and Amsterdam Dynamic Facial Expression Set (ADFES) datasets demonstrate that the presented FER algorithm surpasses other significant FER systems in terms of processing time and accuracy. The evaluation of the system involved three experiments: an intra-testing experiment (training and testing with the same dataset), a train/test process between CK+ and ADFES, and finally the development of a new database based on selfie photos, which was tested on the pre-trained models. The last two experiments provide strong evidence that the Emotion Recognition System (ERS) can operate under various pose and lighting conditions.
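Local binary patterns, one of the two texture descriptors named above, can be sketched compactly: each pixel is encoded by thresholding its eight neighbours against it, and a region is summarized by a histogram of the resulting codes. The neighbourhood and bit ordering below are illustrative defaults, not necessarily the thesis' settings:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern histogram of a grayscale
    patch. Each interior pixel gets an 8-bit code from comparing its
    neighbours to it; the patch is described by the code histogram."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                      # interior (center) pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    # 8 neighbours, clockwise from top-left; each contributes one bit
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

h = lbp_histogram(np.random.rand(16, 16))
```

In the thesis' pipeline one such histogram per facial patch (mouth, eyes, glabella) would be concatenated and passed to the SVM or LDA classifier.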

Book Handbook of Face Recognition

Download or read book Handbook of Face Recognition written by Stan Z. Li and published by Springer Science & Business Media. This book was released on 2011-08-22 with total page 694 pages. Available in PDF, EPUB and Kindle. Book excerpt: This highly anticipated new edition provides a comprehensive account of face recognition research and technology, spanning the full range of topics needed for designing operational face recognition systems. After a thorough introductory chapter, each of the following chapters focuses on a specific topic, reviewing background information, up-to-date techniques, and recent results, as well as offering challenges and future directions. Features: fully updated, revised and expanded, covering the entire spectrum of concepts, methods, and algorithms for automated face detection and recognition systems; provides comprehensive coverage of face detection, tracking, alignment, feature extraction, and recognition technologies, and issues in evaluation, systems, security, and applications; contains numerous step-by-step algorithms; describes a broad range of applications; presents contributions from an international selection of experts; integrates numerous supporting graphs, tables, charts, and performance data.

Book Verification of Emotion Recognition from Facial Expression

Download or read book Verification of Emotion Recognition from Facial Expression written by Yanjia Sun and published by . This book was released on 2016 with total page 110 pages. Available in PDF, EPUB and Kindle. Book excerpt: Analysis of facial expressions is an active topic of research with many potential applications, since the human face plays a significant role in conveying a person's mental state. Due to its practical value, scientists and researchers from different fields such as psychology, finance, marketing, and engineering have developed significant interest in this area. Hence, there is more need than ever for intelligent tools to be employed in the emotional Human-Computer Interface (HCI), analyzing facial expressions as a better alternative to traditional devices such as the keyboard and mouse. The face is a window into the human mind, and the examination of mental states explores a person's internal cognitive states. A facial emotion recognition system has the potential to read people's minds and interpret their emotional thoughts to the world. High recognition accuracy for facial emotions by intelligent machines has been achieved in existing efforts, based on benchmark databases containing posed facial emotions; however, such systems are not qualified to interpret humans' true feelings even when these are recognized. The difference between posed and spontaneous facial emotions has been identified and studied in the literature. One of the most interesting challenges in the field of HCI is to make computers more human-like for more intelligent user interfaces. In this dissertation, a Regional Hidden Markov Model (RHMM) based facial emotion recognition system is proposed. In this system, facial features are extracted from three face regions: the eyebrows, eyes and mouth. These regions convey relevant information regarding facial emotions.
As a marked departure from prior work, RHMMs are trained for the states of these three distinct face regions instead of the entire face for each facial emotion type. In the recognition step, regional features are extracted from test video sequences. These features are processed according to the corresponding RHMMs to learn the probabilities for the states of the three face regions, and the combination of states is used to identify the estimated emotion type of a given frame in a video sequence. An experimental framework is established to validate the results of such a system. RHMM as a new classifier emphasizes the states of the three facial regions rather than the entire face. The dissertation proposes a method of forming observation sequences that represent the changes of states of facial regions, for training RHMMs and for recognition. The proposed method is applicable to various forms of video clips, including real-time videos. The proposed system shows a human-like capability to infer people's mental states from the moderate levels of spontaneous facial emotion conveyed in daily life, in contrast to posed facial emotions. Moreover, the research associated with the proposed facial emotion recognition system is extended into the domains of finance and biomedical engineering. A CEO's fearful facial emotion was found to be a strong and positive predictor for forecasting the firm's stock price in the market. In addition, the experimental results demonstrated the similarity between spontaneous facial reactions to stimuli and inner affective states reflected in brain activity. The results revealed the effectiveness of facial features combined with features extracted from brain-activity signals for multi-signal correlation analysis and affective state classification.
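The per-region probabilities that the RHMM approach combines come from the standard HMM forward algorithm. A toy sketch for a single region's model with discrete observations follows; the state and emission matrices here are hypothetical illustrations, not parameters trained on facial data:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under one
    region's HMM, via the scaled forward algorithm.
    pi: initial state probs, A: state transitions, B: emission probs."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s                       # rescale to avoid underflow
    return loglik

# Hypothetical 2-state region model with 3 observation symbols
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
score = forward_loglik([0, 1, 2], pi, A, B)
```

In an RHMM-style system, one such score per region and per emotion model would be combined, and the emotion whose regional models jointly score highest would be reported for the frame.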

Book The Mechanism of Human Facial Expression

Download or read book The Mechanism of Human Facial Expression written by G. -B. Duchenne de Boulogne and published by Cambridge University Press. This book was released on 2006-11-02 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: In Mecanisme de la Physionomie Humaine, the great nineteenth-century French neurologist Duchenne de Boulogne combined his intimate knowledge of facial anatomy with his skill in photography and expertise in using electricity to stimulate individual facial muscles to produce a fascinating interpretation of the ways in which the human face portrays emotions. This book was pivotal in the development of psychology and physiology as it marked the first time that photography had been used to illustrate, and therefore "prove," a series of experiments. Duchenne's book, which contained over 100 original photographic prints pasted into an accompanying Album, was rare, even when it first appeared in 1862. Duchenne was a superb clinical neurologist and in this study he applied his enormous experience in neurological research to the question of the mechanism of human facial expression. Duchenne has been little cited and little known in this century; his book has been virtually unobtainable, and copies are available in only a few libraries in the United States and Europe.

Book Multimodal Emotion Recognition Using 3D Facial Landmarks  Action Units  and Physiological Data

Download or read book Multimodal Emotion Recognition Using 3D Facial Landmarks Action Units and Physiological Data written by Diego Fabiano and published by . This book was released on 2019 with total page 24 pages. Available in PDF, EPUB and Kindle. Book excerpt: To fully understand the complexities of human emotion, the integration of multiple physical features from different modalities can be advantageous. Considering this, this thesis presents an approach to emotion recognition using handcrafted features consisting of 3D facial data, action units, and physiological data. Each modality independently, as well as their combination, was analyzed for recognizing human emotion. This analysis includes the use of principal component analysis to determine which dimensions of the feature vector are most important for emotion recognition. The proposed features are shown to accurately recognize emotion, and the proposed approach outperforms the current state of the art on the BP4D+ dataset across multiple modalities.
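The principal component analysis step described above projects the fused multimodal feature vectors onto their most informative directions. A minimal SVD-based sketch, with synthetic stand-ins for the landmark, action-unit and physiological features (the real features and dimensions come from BP4D+, not from random data):

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components. The right singular vectors of the centered data matrix
    are its principal axes."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Hypothetical fused vectors: 3D landmarks + action units + physiology
rng = np.random.default_rng(0)
fused = np.hstack([rng.normal(size=(40, 12)),   # landmark features
                   rng.normal(size=(40, 6)),    # action-unit features
                   rng.normal(size=(40, 4))])   # physiological features
reduced = pca_reduce(fused, 5)
```

Examining the singular values (or the loadings in `Vt`) is what lets one judge which feature dimensions carry the most variance, as the thesis does.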

Book Engineered and Learned Features for Face and Facial Expression Recognition

Download or read book Engineered and Learned Features for Face and Facial Expression Recognition written by Said Moh'd Said Elaiwat and published by . This book was released on 2015 with total page pages. Available in PDF, EPUB and Kindle. Book excerpt: Face and facial expression recognition play a crucial role in many applications such as biometrics, human-computer interaction and non-verbal communication. The human face can provide important clues/cues to identify people and determine their emotional state, even without their explicit cooperation. However, variations in illumination conditions, facial pose, occlusion and facial expression (for face recognition) can dramatically degrade the performance of face and facial expression recognition systems. To address these challenges, this thesis presents novel feature extraction methods based on hand-engineered global and local features geared towards the problem of face recognition in still images. Novel feature learning methods are also proposed for the task of video-based face and facial expression recognition. The proposed methods are capable of providing robust and distinctive facial features in the presence of variations in illumination, occlusion, pose and image resolution. The thesis starts by investigating the ability of the Curvelet transform to extract robust global features for the task of 3D face recognition under different facial expressions. The benefits of fusing 3D and 2D Curvelet features are also investigated to achieve multimodal face identification. While the approach above extracts robust features from semi-rigid regions, it is often hard to automatically detect such regions across different datasets. Thus, a novel Curvelet local feature approach is proposed to extract local features rather than global features. The proposed approach relies on a novel multimodal keypoint detector capable of repeatably identifying keypoints on textured 3D face surfaces.
Unique local surface descriptors are then constructed around each detected keypoint by integrating curvelet elements of different orientations. Unlike previously reported curvelet-based face recognition algorithms, which extract global features from textured faces only, our algorithm extracts both texture and 3D local features. The thesis also addresses the problem of face recognition from low-resolution videos (e.g., security cameras). This problem introduces new challenges, requiring a method capable of exploiting the temporal information and/or appearance variations within image sequences (videos) during feature extraction. To address these issues, a novel RBM-based feature learning model is proposed to automatically extract the best features for representing the semantic knowledge within videos (image sets). The structure of the proposed model involves two hidden sets used to encode the dominant appearances (facial features) and the temporal information within videos (image sets). To learn the proposed model, an extension of the standard Contrastive Divergence algorithm is proposed to facilitate the encoding of two different feature types (i.e., facial features and temporal information). For video-based facial expression recognition, the thesis also proposes a novel RBM-based feature learning model to effectively learn the relationships (or transformations) between image pairs associated with different facial expressions. The proposed model has the ability to disentangle these transformations (e.g., pose variations and facial expressions) by encoding them into two different hidden sets. The first hidden set is used to encode facial-expression morphlets, while the second is used to encode non-facial-expression morphlets. This is achieved using an algorithm dubbed Quadripartite Contrastive Divergence.
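The standard Contrastive Divergence algorithm that the thesis extends can be sketched for an ordinary binary RBM; the two-hidden-set "Quadripartite" variant is the author's contribution and is not reproduced here. All sizes and the random data below are illustrative:

```python
import numpy as np

def cd1_step(v0, W, b, c, lr=0.1, rng=None):
    """One step of standard Contrastive Divergence (CD-1) for a binary
    RBM. v0: batch of visible vectors; W: weights; b, c: visible and
    hidden biases. Updates parameters in place and returns them."""
    rng = rng or np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # Positive phase: hidden probabilities given the data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to the visible units and up again
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

rng = np.random.default_rng(1)
W = 0.01 * rng.normal(size=(8, 4))
b, c = np.zeros(8), np.zeros(4)
data = (rng.random((16, 8)) < 0.5).astype(float)
W, b, c = cd1_step(data, W, b, c, rng=rng)
```

The thesis' extension would, roughly speaking, run such updates with the hidden layer partitioned into two sets encoding different feature types.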

Book MultiMedia Modeling

    Book Details:
  • Author : Yong Man Ro
  • Publisher : Springer Nature
  • Release : 2019-12-27
  • ISBN : 3030377318
  • Pages : 860 pages

Download or read book MultiMedia Modeling written by Yong Man Ro and published by Springer Nature. This book was released on 2019-12-27 with total page 860 pages. Available in PDF, EPUB and Kindle. Book excerpt: The two-volume set LNCS 11961 and 11962 constitutes the thoroughly refereed proceedings of the 25th International Conference on MultiMedia Modeling, MMM 2020, held in Daejeon, South Korea, in January 2020. Of the 171 submitted full research papers, 40 papers were selected for oral presentation and 46 for poster presentation; 28 special session papers were selected for oral presentation and 8 for poster presentation; in addition, 9 demonstration papers and 6 papers for the Video Browser Showdown 2020 were accepted. The papers of LNCS 11961 are organized in the following topical sections: audio and signal processing; coding and HVS; color processing and art; detection and classification; face; image processing; learning and knowledge representation; video processing; poster papers; the papers of LNCS 11962 are organized in the following topical sections: poster papers; AI-powered 3D vision; multimedia analytics: perspectives, tools and applications; multimedia datasets for repeatable experimentation; multi-modal affective computing of large-scale multimedia data; multimedia and multimodal analytics in the medical domain and pervasive environments; intelligent multimedia security; demo papers; and VBS papers.

Book 3D Face Analysis

    Book Details:
  • Author : Zhao, Xi
  • Publisher :
  • Release : 2010
  • ISBN :
  • Pages : 185 pages

Download or read book 3D Face Analysis written by Zhao, Xi and published by . This book was released on 2010 with total page 185 pages. Available in PDF, EPUB and Kindle. Book excerpt: This Ph.D thesis work is dedicated to automatic facial analysis in 3D, including facial landmarking and facial expression recognition. Indeed, facial expression plays an important role both in verbal and non-verbal communication, and in expressing emotions. Thus, automatic facial expression recognition has various purposes and applications and is, in particular, at the heart of "intelligent" human-centered human/computer(robot) interfaces. Meanwhile, automatic landmarking provides a priori knowledge of the location of face landmarks, which is required by many face analysis methods such as face segmentation and the feature extraction used, for instance, for expression recognition. The purpose of this thesis is thus to elaborate 3D landmarking and facial expression recognition approaches, finally proposing an automatic facial activity (facial expression and action unit) recognition solution. In this work, we have proposed a Bayesian Belief Network (BBN) for recognizing facial activities, such as facial expressions and facial action units. A Statistical Facial feAture Model (SFAM) has also been designed to first automatically locate face landmarks, so that a fully automatic facial expression recognition system can be formed by combining the SFAM and the BBN. The key contributions are the following. First, we have proposed to build a morphable partial face model, named SFAM, based on Principal Component Analysis. This model makes it possible to learn both the global variations in face landmark configuration and the local ones in terms of texture and local geometry around each landmark. Various partial face instances can be generated from SFAM by varying model parameters.
Secondly, we have developed a landmarking algorithm based on the minimization of an objective function describing the correlation between model instances and query faces. Thirdly, we have designed a Bayesian Belief Network with a structure describing the causal relationships among subjects, expressions and facial features. Facial expressions or action units are modelled as the states of the expression node and are recognized by identifying the maximum of the beliefs of all states. We have also proposed a novel method for BBN parameter inference using a statistical feature model that can be considered an extension of SFAM. Finally, in order to enrich the information used for 3D face analysis, and particularly 3D facial expression recognition, we have also elaborated a 3D face feature, named SGAND, to characterize the geometric property of a point on a 3D face mesh using its surrounding points. The effectiveness of all these methods has been evaluated on the FRGC, BU3DFE and Bosphorus datasets for facial landmarking, as well as the BU3DFE and Bosphorus datasets for facial activity (expression and action unit) recognition.

Book Facial Expression Recognition Using Optical Flow and 3D HMM and Human Action Recognition Using Cuboid and Topic Models

Download or read book Facial Expression Recognition Using Optical Flow and 3D HMM and Human Action Recognition Using Cuboid and Topic Models written by and published by . This book was released on 2016 with total page 266 pages. Available in PDF, EPUB and Kindle. Book excerpt: The objective of this research is to provide insight into, and an advanced approach to, the recognition of motion by computer, ranging from human facial expressions to larger-scale simple human actions. Histograms of Optical Flow were used as descriptors for extracting and describing facial motion features, and a three-dimensional spatio-temporal HMM was used for learning and classifying human emotions: happiness, sadness, anger, fear, disgust and surprise. For analyzing human actions (walking, jogging, running, boxing, hand waving and hand clapping), a spatio-temporal cuboid model was used for feature extraction and description, and a topic model was used for learning and classifying motions. Topic models were originally developed for discovering the abstract topics that occur in a collection of documents. This attempts to illustrate what Alan Baddeley claimed in Essentials of Human Memory, that semantic concepts stored in our memory could be in various forms, including images: "The most plausible assumption is probably that concepts are stored in some abstract code which may be translated into a verbal or linguistic form or into an image when the need arises, just as information stored in a computer may, given the appropriate commands and peripheral equipment, be output as an image on the screen, as hard copy on a printer, as codes on a disk, or as sounds over the telephone. In each case the information stored is the same, but the mode of display is different." My belief is that humans recognize things by first learning, abstracting, conceptualizing, generalizing, grouping, internalizing and storing them in memory. Afterwards, they can recall and recognize the same or similar things through interpolation and integration capabilities.
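The Histogram of Optical Flow descriptor used above bins each flow vector's orientation, weighted by its magnitude. A minimal NumPy sketch, assuming the dense flow field itself has already been computed upstream by some optical-flow routine (the bin count is an illustrative choice):

```python
import numpy as np

def hof_descriptor(flow, n_bins=8):
    """Histogram of Optical Flow: bin each flow vector's orientation,
    weighting each vector by its magnitude, then normalize.
    flow: (H, W, 2) array of per-pixel (dx, dy) displacements."""
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx)                               # in [-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, weights=mag, minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Synthetic flow field: every pixel moves one pixel to the right
flow = np.zeros((10, 10, 2))
flow[..., 0] = 1.0
h = hof_descriptor(flow)
```

A sequence of such per-frame (or per-region) histograms is the kind of observation stream a spatio-temporal HMM could then be trained on.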

Book Facial Emotion Recognition Using Asymmetric Pyramidal Networks with Gradient Centralization and Learnable Preprocessors

Download or read book Facial Emotion Recognition Using Asymmetric Pyramidal Networks with Gradient Centralization and Learnable Preprocessors written by Huanyu Zang and published by . This book was released on 2022 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: Facial expression recognition (FER) is a promising but challenging area of computer vision (CV). Many researchers have devoted significant resources to exploring FER in recent years, but an impediment remains: classifiers perform well on fine-resolution images but have difficulty recognizing in-the-wild human emotional states. To address this issue, we introduced some novel designs and implemented them in neural networks. More specifically, we utilized an asymmetric pyramidal network (APNet) and employed multi-scale kernels instead of identical-size kernels. In addition, square kernels were replaced by square, horizontal, and vertical convolutions. This structure can increase the descriptive ability of convolutional neural networks (CNNs) and transfer multi-scale features between different layers. Grouped convolution was adopted as well; it reduces training time and improves the model's performance. Additionally, when training the CNN, we applied gradient centralization to stochastic gradient descent with momentum (SGDMGC), which centralizes gradients to have zero mean and makes the training process more efficient and stable. Furthermore, a learnable preprocessor is placed before the initial images are fed into the classification network. This structure generates images that yield better recognition performance than visually high-quality ones. To verify the effectiveness of the proposed design, we used four of the most popular in-the-wild emotion datasets, FER-2013, FER+, RAF-DB, and SFEW, for our experiments.
The results of our experiments and comparisons with state-of-the-art (SOTA) designs from others demonstrate that APNet with SGDMGC outperforms most methods with a single model and even has comparable performance to methods with multiple models. By conducting an ablation study on our model, we found that removing any of the proposed techniques would lead to performance degradation. The results thus prove that each design has boosted the performance of the proposed model on FER tasks. Finally, simulation results from APNet with SGDMGC combined with the learnable preprocessors are presented as well; this combination has competitive performance compared with other SOTA model-fusion methods.
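The gradient centralization used in SGDMGC above simply shifts each weight vector's gradient to zero mean before the usual momentum step. A hedged NumPy sketch of one update for a fully connected layer's weight matrix (hyperparameters and shapes are illustrative, not the thesis' settings):

```python
import numpy as np

def sgdm_gc_update(W, grad, velocity, lr=0.01, momentum=0.9):
    """SGD-with-momentum update with Gradient Centralization: the
    gradient of each output neuron's weight vector (each row of W,
    shaped out x in) is shifted to zero mean before the momentum step."""
    # Centralize: remove the per-row mean of the gradient
    grad = grad - grad.mean(axis=1, keepdims=True)
    velocity = momentum * velocity + grad
    return W - lr * velocity, velocity

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
v = np.zeros_like(W)
g = rng.normal(size=(4, 6))
W, v = sgdm_gc_update(W, g, v)
```

For convolutional kernels the mean would instead be taken over all axes except the output-channel axis; the idea of removing the gradient's mean component is the same.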