# I. Introduction

The difference between a machine and a human is not only that a human being is intelligent but also that he or she is emotional (Marinez-Miranda et al., 2005). Emotions enable humans to interact intelligently and effectively with other humans. The same concept can be extended to human-computer interaction (HCI), which deals with the various procedures and methods through which humans interact with computers. According to Foley (1996), HCI is a socio-technological discipline whose goal is to bring computing and communication systems to society and its people in such a way that both become accessible and hence useful in their working, learning, communicating and recreational lives. The study of HCI draws on supporting knowledge from both the machine and the human side. On the machine side, computer graphics, operating systems and programming languages are relevant, while on the human side communication, the social sciences, cognitive psychology and human performance are relevant. As computers become more pervasive in culture, researchers are increasingly looking for new and innovative ways to make these interfaces more interactive and efficient. By embedding emotions in the interaction of humans with machines, a machine would be in a position to sense the mood of the user and change its interaction accordingly. Such a system would be friendlier to the user, and its responses would be more similar to human behaviour.

Motivations for emotional computing are manifold. From a scientific point of view, emotions play an essential role in decision making, as well as in perception and learning. Emotions influence various cognitive processes (Lisetti & Nasoz, 2005), including perception and organization of memory (Bower, 1981), categorization and preference (Zajonc, 1984), goal generation, evaluation and decision-making (Damasio, 1994), strategic planning (Ledoux, 1992), focus and attention (Derryberry & Tucker, 1992), motivation and performance (Colquitt et al., 2000), intention (Frijda, 1986), communication (Birdwhistle, 1970; Ekman & Friesen, 1975; Chovil, 1991), and learning (Goleman, 1995). A common everyday task is driving, and research suggests that people emote while driving and that their driving is affected by their emotions (James & Nahl, 2000); the inability to control one's emotions while driving is often identified as one of the major causes of accidents. By knowing the user's emotions, computer agents can also become more effective in tutoring: a computer agent can learn the student's preferences and offer better interactions. Surveillance is another application domain in which the reading of emotions may lead to better performance in predicting the future actions of subjects; in this way, emotion-driven technology can enhance existing systems for the identification and prevention of terrorist attacks in public places. Certainly, not all computers need to pay attention to emotions or have emotional abilities; some machines are useful as rigid tools, and it is fine to keep them that way.

The paper begins by identifying the challenges in the problem domain of emotion recognition. A complete framework for emotion recognition using a rule based approach, independent of any particular modality (such as speech or facial expressions), is then introduced. The core of this approach is feature analysis, which is explored using 'emotion profiling'. The whole approach of rule based emotion recognition is then implemented using a running case study of facial expressions.
Finally, the performance in recognizing the target emotions is reported. We conclude the paper by summarizing the results and considering some of the challenges facing researchers in this area.

# II. Emotion Recognition IS Challenging!

Research on emotion recognition is hard because understanding emotion itself is difficult. To address the problem of emotion recognition, various modalities like speech (Khanna & Kumar, 2011), facial expressions, gesture and keyboard interaction (Khanna & Kumar, 2010) have been explored. Some of the major challenges in this domain are as follows.

# a) Choice of Features
The number of features used in the process of recognizing emotions from different modalities varies and depends on the application. Having a large number of features increases the complexity of the system, normally results in longer system training time and demands a rich set of training data. Hence the selection of features is a critical task.

# b) Choice of Machine Learning Techniques
Depending on the context and the type of data, the classification algorithms used for emotion recognition have been constantly evolving over time, and various recognition methods have been used in the literature. One major dimension of variability among algorithms is the nature of the knowledge representation they use.

# c) Emotional Database Issues
Data is of utmost importance. Having an appropriate database, collected with a particular application and target user profile in mind, can be expected to minimize the confusion that occurs while organizing and labeling the emotional database. Issues like the emotion elicitation method (i.e., whether the elicited emotion displays are posed or spontaneous), size (the number of subjects), modality (audio, visual, etc.), emotion description (category or dimension) and labeling scheme make this a tedious job. Hence the choice of database is another concern.

# d) Choice of Emotions
This requires identification of the emotional states which have a bearing on HCI. In principle, it is not necessary to track every variant of emotion. The literature defines various subsets of emotions based on the desired granularity and other parameters. Researchers still do not agree on what an emotion is, and many do not agree on a specific subset of emotions as the 'basic set'. Hence defining and identifying the emotional states is a challenge.

# e) Choice of Modalities and Fusion
Studies point to multiple modalities as sources of emotional information, such as the face, voice and gesture. The accuracy of recognition from the different sources may vary with conditions; facial expression recognition, for example, works best under good illumination. In reality, one uses a combination of all of these, and they do not exist independently. Indeed, at times the signals from the different sources may conflict with each other, indicating different emotional states. Most of the time, however, the different sources provide additional information that reinforces the estimates made using a single source, and thus helps in determining the emotional state with better confidence. Given the difficulties in mapping emotional states to recognizable characteristics in the individual modalities, it becomes important to use multiple sources together. Picard (1997) observes that affect recognition is most accurate when it combines multiple modalities with information about the user's context, situation, goal and preferences. But too much information from different modalities presented simultaneously seems to be confusing for human judges (Picard, 1997).
Whether this also holds in HCI needs to be addressed. Hence, with multiple modalities, problems related to data fusion are common. Humans simultaneously employ the modalities of sight and sound. Does this tight coupling persist when the modalities are used for human behavior analysis, as suggested by some researchers, or not, as suggested by others? Does this depend on the machine learning techniques employed? In the literature, attempts such as De Silva and Ng (2000), Sebe et al. (2006) and Zeng et al. (2007) have considered the integration of information from facial expressions and speech. Kim and André (2006) concentrated on the integration of physiological signals and speech signals for emotion recognition based on short-term observation. In general, there are two broad approaches to combining the inputs from different sources: feature based fusion and decision based fusion. Feature based fusion involves simply merging the features of each modality into a single feature vector. Decision based fusion combines the decisions from each modality: the input coming from each modality is processed independently, and the results are combined at the end. Several works (Corradini et al., 2003; Liao, 2002; Kettebekov & Sharma, 2000; Sharma et al., 1998) discuss the many issues and techniques of multimodal fusion. Finding an optimal fusion type for a particular combination of modalities is not straightforward. Hybrid fusion attempts to combine the benefits of both feature level and decision level fusion, and may be a good choice for some multimodal fusion problems. However, based on existing knowledge and methods, how to combine the information coming from different modalities for the target set of emotions is still an open problem. In this paper we propose a rule based approach to recognize the target emotions. This approach remains independent of the modality (speech, facial expressions or others).

# III. Rule based Approach

A rule based system, in general, consists of if-then rules, a set of facts, and an interpreter controlling the application of the rules. One of the major strengths of rule based representation is its ability to represent various kinds of uncertainty. Uncertainty is inherently part of most human decision making, and can arise from various sources, such as incomplete data or unreliable domain knowledge. An if-then rule is often represented as 'if A, B, C then D, with certainty X', where X represents the degree of belief or confidence in the rule (Kumar et al., 2007). To handle uncertainty, there are two broad approaches: those representing uncertainty using numerical quantities and those using symbolic methods. For example, Bayesian reasoning (Shortliffe & Buchanan, 1975), evidence theory (Gordon & Shortliffe, 1984) and fuzzy set approaches (Negoita, 1985) are numerical models. On the other hand, symbolic characterization of uncertainty is mostly aimed at handling incomplete information, for example assumption based reasoning (Doyle, 1979), default reasoning (Reiter, 1980) and non-monotonic logic (McDermott & Doyle, 1980). In our domain, the basic problem is that there are hardly any features or feature combinations which can infer any emotion with complete certainty. We therefore concentrate on numerical approaches for handling the uncertainty, and have adopted the 'confirmation theory' used in the MYCIN approach (Shortliffe & Buchanan, 1975), which works well with a rule based representation of domain knowledge.
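As a concrete illustration of this representation, the sketch below shows one possible encoding of an if-then rule of the form 'if A, B, C then D, with certainty X'. The class and field names (and the example feature f16) are our own illustrative choices, not part of any cited system.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Facts = Dict[str, float]  # observed feature values, e.g. {"f16": 33.0}

@dataclass
class Rule:
    """'If A, B, C then D, with certainty X': a conjunction of antecedent
    tests, a concluded emotion D, and the certainty factor X of the rule."""
    antecedents: List[Callable[[Facts], bool]]
    conclusion: str
    cf: float

# Illustrative rule over a hypothetical normalized lip-distance feature f16.
rule = Rule(antecedents=[lambda f: f["f16"] > 27], conclusion="happy", cf=0.3)
facts = {"f16": 33.0}
applicable = all(test(facts) for test in rule.antecedents)  # True: rule may fire
```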
Shortliffe and Buchanan (1975) developed the Certainty Factor (CF) model in the mid-1970s for MYCIN, an expert system for the diagnosis and treatment of blood infections. Since then, the CF model has been widely adopted for uncertainty management in many rule based systems. Each rule is assigned a CF by domain experts; a higher CF indicates that the conclusion can be asserted with higher confidence when the conditions are true. The CF denotes the change in belief in a hypothesis given some evidence: a value of +1.0 indicates absolute belief and -1.0 absolute disbelief. The method generally used to propagate the uncertainty in the antecedents, together with the uncertainty attached to the rule, to the conclusions being derived is briefly explained below. This propagation is done in two steps (Kumar et al., 2007).

- The different antecedents in a rule, in general, have different values of uncertainty attached to them. As a first step, we aggregate these values into a single CF, taking the strength of the weakest link in a chain as the strength of the chain:

CF_antecedents = minimum of the CFs of all antecedents (1)

- This measure of uncertainty for the set of antecedents is then combined with the uncertainty attached to the rule to give a measure of uncertainty for the conclusion of the rule:

CF of the conclusion from rule R1 = (CF associated with rule R1) * CF_antecedents, provided CF_antecedents >= threshold (2)

It can be seen that the CF obtained for a conclusion from a particular rule will always be less than or equal to the CF of the rule. This is consistent with the interpretation of the CF used by MYCIN, that is, the CF of a rule is the CF to be associated with the conclusion if all the antecedents are known to be true with full certainty. In a typical rule based system, there may be more than one rule in the rule base applicable for deriving a specific conclusion. Some of them will not contribute any belief to the conclusion, because the CF of their antecedents is less than the threshold; the contributions from all the other rules for the same conclusion have to be combined. In the MYCIN model, the CF of a conclusion is initially taken to be 0.0 (i.e., there is no evidence in favour or against), and as the different rules for the conclusion fire, the CF gets updated. MYCIN uses a method that incrementally updates the CF of the conclusion as more evidence for and against it is obtained. Let CF_old be the CF of the conclusion so far, say after rules R_1, R_2, ..., R_m have fired, and let CF_in be the CF obtained from the firing of another rule R_n. The new CF of the conclusion (from rules R_1, R_2, ..., R_m and R_n), CF_new, is obtained using the formulae given below:

CF_new = CF_old + CF_in * (1 - CF_old), when CF_old, CF_in > 0 (3)
CF_new = CF_old + CF_in * (1 + CF_old), when CF_old, CF_in < 0 (4)
CF_new = (CF_old + CF_in) / (1 - min(|CF_old|, |CF_in|)), otherwise (5)

We adopt this calculus in our model; it is illustrated later with a running example in Section VI. Before that, the concept of emotion profiling and the complete framework of the emotion recognition system are introduced.
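Before moving on, a minimal sketch of this calculus is given below. The combination functions follow equations (1) to (5); the function names and the antecedent threshold value are illustrative assumptions (MYCIN itself used a cut-off of 0.2).

```python
def cf_antecedents(cfs):
    """Eq. (1): aggregate antecedent CFs, taking the weakest link as the
    strength of the whole chain."""
    return min(cfs)

def cf_from_rule(rule_cf, antecedent_cfs, threshold=0.2):
    """Eq. (2): scale the rule's CF by the aggregated antecedent CF,
    provided the antecedents are believed at least to the threshold."""
    agg = cf_antecedents(antecedent_cfs)
    return rule_cf * agg if agg >= threshold else 0.0

def combine_cf(cf_old, cf_in):
    """Eqs. (3)-(5): incrementally update the CF of a conclusion as further
    rules for (or against) it fire."""
    if cf_old > 0 and cf_in > 0:
        return cf_old + cf_in * (1 - cf_old)
    if cf_old < 0 and cf_in < 0:
        return cf_old + cf_in * (1 + cf_old)
    return (cf_old + cf_in) / (1 - min(abs(cf_old), abs(cf_in)))

# Two rules supporting the same emotion, each contributing CF 0.3:
cf = combine_cf(0.0, 0.3)   # first rule fires  -> 0.30
cf = combine_cf(cf, 0.3)    # second rule fires -> 0.30 + 0.3 * 0.7 = 0.51
```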
# IV. Concept of Emotion Profiling

The emotion profile (EP) is introduced for understanding the variation of each feature across the different emotional states, and it is the core ingredient of the rule based system used to classify the target emotions. We define the emotion profile as the degree to which a given feature can reasonably differentiate among the target emotions. If E denotes the set of emotions and 2^E the set of subsets of emotions, then the EP of a feature F_i is defined as

EP(F_i) = {X_i | X_i ∈ 2^E; i = 1, 2, ..., N}

such that every element of E occurs once and only once in the emotion profile. Two extreme scenarios, referred to below as the 'worst scenario' and the 'best scenario', are possible.

EP(F_i) = {{E_1, E_2, E_3, E_4, E_5, E_6, ..., E_N}} represents the 'worst scenario', because the feature F_i is not able to differentiate between any of the target emotions. This is normally due to the variation in the feature value being independent of the emotional state, and generally means the feature is not a useful one for this purpose.

EP(F_i) = {{E_1}, {E_2}, {E_3}, {E_4}, {E_5}, ..., {E_N}} represents the 'best scenario', as the feature F_i is strong enough to differentiate between every individual emotion. For example, if a feature f_1 (distance between nose and lip) is observed to differentiate the emotional states 'disgust' and 'happy', but is not able to differentiate 'fear' from 'sad' or 'anger' from 'neutral' (their ranges of values being very close), then we represent the emotion profile of the feature f_1 as

EP(f_1) = {{H}, {D}, {F, S}, {A, N}}

where D, H, F, S, A and N stand for 'disgust', 'happy', 'fear', 'sad', 'anger' and 'neutral', respectively. This is further validated by certain rules (illustrated in Section VI). As all the features considered in our emotion recognition problem are numeric in nature, we take the average value of each feature per emotion as its final value to define the range and hence to understand the partition between emotions. This process is very useful in finding the useful set of features, and the relevant set of features acts as an ingredient for the emotion classification problem. The next section illustrates the complete process with a concrete example of emotion recognition from facial expressions.

# V. General Framework for Emotion Recognition

The conceptual framework for emotion recognition includes preprocessing, feature extraction, feature analysis, selection of features, formulation of rules, and measurement of the performance in classifying the target emotional states. This is explained using facial expressions as input in the next section.

# a) Preprocessing and Feature Extraction
The objective of preprocessing is to bring the input data into a standard format suitable for extracting the desired features. Feature extraction involves identifying relevant features and formulating algorithms to extract these features from their respective input data.

# b) Feature Analysis and Emotion Profiling
Once the basic feature set is ready, the next step is the analysis of these features. The question to be answered here is how each of these features varies with emotion. Each feature is analyzed carefully by examining its emotion profile. Usually, not all features contribute to the same extent to recognizing the different emotional states.

# c) Formulation of Rules Using Features
Influential and useful features can be used to define rules, as follows:

- An emotion profile is created for each feature to analyze its ability to distinguish among the target emotional states, and useful features are shortlisted accordingly (see the sketch following this list).
- Rules are formed using each of these features for the different target emotional states. A feature may yield one or more rules. Generally these rules have the form: if feature F_1 has a value greater than T_1 and less than T_2, then conclude emotion = e_1. For each rule, the cut-off points T_1 and T_2 for a given emotion class are taken to be the approximate averages of the value of that emotion with its immediate emotion neighbours.
- To each rule, we associate CF values for each emotional class. These values of CF are decided as per the guidelines mentioned in Table 1.
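The sketch below shows one way this profiling step could be carried out automatically: the per-emotion average values of a feature are sorted, and neighbouring emotions whose averages lie very close together, relative to the feature's overall spread, are merged into one subset. The helper name, dictionary layout and the exact closeness threshold are illustrative assumptions; the paper's own grouping criterion (neighbours closer than roughly 5-6% of the spread, see the next section) motivates the default value.

```python
def emotion_profile(means, closeness=0.06):
    """Build EP(F) as an ordered partition of the emotions from their average
    feature values: neighbouring emotions whose averages differ by less than
    `closeness` * (overall spread) are grouped into one (non-singleton) subset."""
    ordered = sorted(means, key=means.get)            # emotions by mean value
    spread = means[ordered[-1]] - means[ordered[0]]   # overall range of the feature
    profile = [[ordered[0]]]
    for prev, cur in zip(ordered, ordered[1:]):
        if means[cur] - means[prev] < closeness * spread:
            profile[-1].append(cur)                   # too close: same subset
        else:
            profile.append([cur])                     # clear separation: new subset
    return profile

# Hypothetical per-emotion averages of a feature f1:
f1_means = {"H": 36.0, "D": -6.0, "F": 18.0, "S": 17.0, "A": -1.0, "N": 0.0}
print(emotion_profile(f1_means))   # [['D'], ['A', 'N'], ['S', 'F'], ['H']]
```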
There may be multiple rules associated with each feature. Multiple rules fired simultaneously (based on the values of different features) may saturate the CF values associated with them. To minimize this possibility, we have chosen a relatively low range of CF values. Given our observation that most features do not provide a high degree of discrimination for any of the target emotions, a high value did not appear justified for any individual feature. The chosen range also allows the CF value to climb steadily to a high value when there are many features supporting an emotion. The rules may point to a specific emotional state or to a set of emotional states: if the distance of an emotion from its neighbouring emotion is found to be less than 5% to 6% of the entire spread of that feature's value (the overall range, i.e. the difference between the upper and lower values), then these emotions are grouped as a subset. Allocation of the CF values to the target classes is done based on three interclass rules (IR-1, IR-2 and IR-3), derived from the analysis of the emotion profile.

# IR-1 (High Interclass Distance)
If the interclass distance of an emotional class (either singleton or non-singleton) from its neighbours (on the left as well as the right side) is more than 15% of the entire spread for that feature, then the chance of confusion with the neighbouring class is low, and the CF value associated with this class for that feature is taken to be 0.3.

# IR-2 (Medium Interclass Distance)
If the interclass distance of an emotional class (either singleton or non-singleton) from its neighbours (on the left as well as the right side) is between 6% and 15% of the entire spread for that feature, then the CF value associated with this class is taken to be 0.2.

# IR-3 (Low Interclass Distance)
If the interclass distance of an emotional class (either singleton or non-singleton) from its neighbours (on the left as well as the right side) is less than 6% of the entire spread for that feature, then the CF value associated with this class is taken to be 0.1.

# d) Recognizing Emotions using Rules
The overall performance of the system in recognizing emotions is measured using the final value of CF corresponding to each of the emotional states, for all images in the test set. The emotion with the highest final CF is taken as the recognized one.

# VI. Case Study for Facial Expression
The standard Cohn-Kanade (CK) database of static images (Kanade et al., 2000) has been used, where individuals are constrained to look straight at the camera and are photographed against a single-coloured background, with illumination conditions that do not vary drastically. Preprocessing issues are therefore not a concern here. A total of 184 images from 57 subjects (32 female and 25 male) have been selected for the emotional states of neutral, anger, happy, fear, sad and disgust.

# a) Feature Extraction
The frontal-view face model (Pantic & Rothkrantz, 2000b) is composed of elements such as the mouth, nose, eyes and brows (Figure 1 and Table 2).
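Most of the features in Table 2 are inter-point distances between labelled landmarks of this face model. A minimal sketch of their computation is given below; the coordinate values and dictionary layout are hypothetical, and only the point labels follow Figure 1 and Table 2.

```python
import math

def dist(points, a, b):
    """Euclidean distance between two labelled facial landmarks."""
    (xa, ya), (xb, yb) = points[a], points[b]
    return math.hypot(xa - xb, ya - yb)

# Hypothetical landmark coordinates (in pixels) for one face image.
landmarks = {"I": (120, 210), "J": (180, 212), "K": (150, 195), "L": (150, 230)}

f16 = dist(landmarks, "I", "J")   # horizontal lip distance (Table 2: distance IJ)
f17 = dist(landmarks, "K", "L")   # vertical lip distance  (Table 2: distance KL)
```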
Using a set of 18 points in the frontal-view image, a total of 21 features (f3, f4, f5, f6, f7, f8, f9, f10, f11, f12, f13, f14, f15, f16, f17, f19, f20, f21, f22, f23 and f24, as shown in Figure 1), mostly in the form of inter-point distances, have been extracted. For example, the feature f3 is the distance AE between the points A and E of the face model (Table 2). Each of these points is extracted from the image, and the distances are compiled and used for further analysis. All these distances were obtained for the different emotions, including the neutral state, for all subjects. Facial expressions are often characterized by the variation of a feature from its value in the neutral state, rather than by its absolute value in a given state. Therefore, these features were standardized with respect to their neutral values and normalized in the following manner:

Normalized Value = (Measured Value - Neutral State Value) / Neutral State Value (6)

Henceforth, in the remainder of the paper, these normalized values are used as the feature values.

# b) Feature Analysis and Emotion Profiling
As discussed earlier, not all features may be useful in forming rules, so each of them has to be analyzed individually. For example, the lip distances (horizontal distance f16 and vertical distance f17) can be seen to vary with emotion (Figure 2 and Figure 3), and their emotion profiles can be read off these plots. The lip movements (horizontal lip distance f16 and vertical lip distance f17) provide good separation between 'happy', 'sad' and 'fear' with respect to the 'neutral' state. The emotions 'anger' and 'disgust' appear to be very close to each other for f16. The feature f17 is able to discriminate 'anger', 'sad', 'fear' and 'happy', but 'disgust' is found to be in the vicinity of 'neutral'.

Symmetrical pairs of features (like left eye vertical distance f9 and right eye vertical distance f10) do not always have the same emotion profile. For example, f9 clearly differentiates between 'disgust' and 'fear', but does not show a reasonable separation between other pairs of emotions, e.g. {'anger', 'happy'} and {'neutral', 'fear'}. The feature f10 differentiates reasonably well between all the target emotions. The same is true of the feature f12 (distance between left lip and left eye), whereas the feature f13 (distance between right lip and right eye) only differentiates the clusters of emotions {'happy', 'disgust'} and {'sad', 'fear'}. The symmetrical features f12 and f13 show the same result only for 'neutral' and 'anger'. It is observed that, out of all twenty-one features, a total of eleven (f3, f4, f9, f10, f11, f12, f13, f14, f15, f16 and f17) show significant variation across the target emotional states. These 11 features are therefore the most relevant and useful for designing the rules for recognizing emotions.

The feature f17 also varies across emotions (Figure 3). It is observed that 'neutral' together with 'disgust' forms a non-singleton class, while the rest of the emotions act as singleton classes, and that for the 'sad' emotion the cut-off points (T1 and T2) to be considered are -30 and -3. Depending on the distances between these classes, CFs have been allocated and rules have been formed. We found a total of five conditions each for the features f16 and f17 (Example Rules 1 and 2).

# c) Formulation of Rules
From the trend of the feature f16 (Figure 2), it is seen that the emotions 'neutral', 'sad', 'fear' and 'happy' are individually distinguishable, whereas the emotions 'disgust' and 'anger' are found to be close together (the distance between them being in the range of 5% to 6% of the entire spread).
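The allocation step that follows uses the Table 1 guidelines, i.e. the interclass rules IR-1 to IR-3 of the previous section, to map an interclass distance to a CF value. A minimal sketch is given below; the function name and argument layout are illustrative assumptions.

```python
def cf_for_class(distance_to_nearest_neighbour, spread):
    """Assign the CF for an emotion class on one feature, based on how far the
    class lies from its nearest neighbouring class, expressed as a fraction of
    the feature's entire spread (IR-1, IR-2, IR-3)."""
    ratio = distance_to_nearest_neighbour / spread
    if ratio > 0.15:     # IR-1: high interclass distance, little confusion
        return 0.3
    if ratio >= 0.06:    # IR-2: medium interclass distance
        return 0.2
    return 0.1           # IR-3: low interclass distance, easily confused

# For f16, 'sad' lies roughly 8 pixels from its nearest neighbour ('fear') on a
# spread of about 42 pixels, a ratio of roughly 0.19, so it receives CF 0.3.
print(cf_for_class(8, 42))   # 0.3
```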
Depending on the interclass distances between these classes, CFs have been allocated (as per Table 1) and rules have been formed. For each if-then rule, the cut-off points (upper limit T2 and lower limit T1) belonging to an emotion class are taken to be the averages of the value of that class with its immediate neighbouring emotion classes. From Figure 2, it is clear that for the 'sad' emotion the cut-off points T1 and T2 to be considered are 5 and 14; it forms a singleton class, and due to the high interclass distance its CF value is taken to be 0.3 (see Table 1). Such an exercise is done for each of the selected features. Symmetrical pairs of features like (f3, f4), (f9, f10), (f12, f13) and (f14, f15) do not vary in the same way across the different emotions, and hence the resulting rules may differ. A total of 11 rules have been formed for emotion identification from static facial images.

# d) Recognizing Emotions using Rules
All these rules have been tested on the database, and the final value of CF has been computed for each of the six emotional states. The emotion with the highest final CF is taken as the recognized one and counted against the expected emotion class for each image and subject. For example, Table 3 shows the computed CF values corresponding to the six emotions: sad (S), neutral (N), anger (A), happy (H), fear (F) and disgust (D). A row in this table corresponds to an input image of an individual subject (s1) in a particular emotional state, and the outcome is given by the CF values under the six columns labelled CFSad to CFDisgust. For example, row 3 corresponds to subject 1 (s1) in the 'angry' state; the table shows the maximum CF value under the emotion class 'anger' (0.91), indicating correct identification. Similarly, the maximum CF value for subject 1 in row 6 is 0.87, for the target emotion 'disgust'. Although the value for 'anger' comes close, we take the highest CF value to identify the target emotion associated with the input image. Hence the computed emotion matches the expected emotion, which is 'disgust' in this case and 'anger' in the previous one. The computed CF values have been analyzed in the same way for each of the emotions.

Several facial expression recognition systems have been reported in the literature (e.g., Kulkarni et al., 2009). The average expression recognition rate of these systems is around 82% (in the range of 64% to 100%), and some of them have used limited data for training and testing. In comparison, the overall correctness of recognizing emotions from facial expressions using our rule based approach is found to be 86.43%. The recognition rates are 80% and 88.89% for female and male subjects respectively. The recognition rates for 'anger' and 'fear' are higher for male subjects than for female subjects; for example, the recognition rate for 'anger' is 100% for male and 69% for female subjects. This rule based approach can easily be extended to other modalities, as it is based on a set of rules which can be extracted from different modalities (e.g., facial expressions, speech or others). The overall process remains the same: to design the rules, all the relevant features need to be studied in detail in a similar fashion. The emotion profile of each feature acts as an important ingredient, as it maps the relevant feature set to the target emotional states.
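Putting the pieces together, the sketch below evaluates the two example rules for f16 and f17 (Example Rules 1 and 2, reproduced further below alongside Table 2) on one image's lip-distance features, combines the per-emotion contributions with the MYCIN update of equation (3), and reports the emotion with the highest final CF. The data-structure layout is an illustrative assumption; the thresholds and CF values are taken from the two example rules, with interval boundaries handled uniformly as (lower, upper] for simplicity.

```python
def combine_cf(cf_old, cf_in):
    """Eq. (3): MYCIN update for two positive pieces of evidence."""
    return cf_old + cf_in * (1 - cf_old)

# Example Rules 1 and 2, written as (lower, upper, {emotion: CF}) conditions.
RULE_F16 = [(None, -3, {"D": 0.2, "A": 0.2}), (-3, 5, {"N": 0.3}),
            (5, 14, {"S": 0.3}), (14, 27, {"F": 0.3}), (27, None, {"H": 0.3})]
RULE_F17 = [(None, -30, {"A": 0.3}), (-30, -3, {"S": 0.2}),
            (-3, 27, {"N": 0.3, "D": 0.3}), (27, 58, {"F": 0.3}), (58, None, {"H": 0.3})]

def fire(rule, value, totals):
    """Fire whichever condition of the rule matches and update the running CFs."""
    for lower, upper, contributions in rule:
        if (lower is None or value > lower) and (upper is None or value <= upper):
            for emotion, cf in contributions.items():
                totals[emotion] = combine_cf(totals.get(emotion, 0.0), cf)
            return

# Hypothetical lip-distance features for one 'happy' test image.
features = {"f16": 33.0, "f17": 62.0}
totals = {}
fire(RULE_F16, features["f16"], totals)
fire(RULE_F17, features["f17"], totals)
recognised = max(totals, key=totals.get)
print(recognised, round(totals[recognised], 2))   # H 0.51
```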
Influential and useful features were selected for defining the rules. The performance of the system can be improved further by modifying, adding or deleting rules.

Figure 2: Variation of f16 (horizontal lip distance, measured in pixels) across emotions: anger -6.28, disgust -5.60, neutral 0, sad 10.53, fear 18.28, happy 36.29.

Figure 3: Variation of f17 (vertical lip distance, measured in pixels) across emotions: anger -52.56, sad -7.98, neutral 0, disgust 5.67, fear 49.42, happy 67.16.

Table 2: Features of the frontal-view face model (Pantic & Rothkrantz, 2000b).

| Feature | Description |
|---------|-------------|
| f3 | Distance AE |
| f4 | Distance A1E1 |
| f5 | Distance 3F, where 3 is the centre of AB (see Figure 1) |
| f6 | Distance 4F1, where 4 is the centre of A1B1 (see Figure 1) |
| f7 | Distance 3G |
| f8 | Distance 4G1 |
| f9 | Distance FG |
| f10 | Distance F1G1 |
| f11 | Distance CK, where C is 0.5HH1 |
| f12 | Distance IB |
| f13 | Distance JB1 |
| f14 | Distance CI |
| f15 | Distance CJ |
| f16 | Distance IJ |
| f17 | Distance KL |
| f19 | Image intensity in the circle (r(0.5BB1), C(2)) above the line (D, D1) |
| f20 | Image intensity in the circle (r(0.5BB1), C(…)) |

Example Rule 1: Using dist_horizontal_lip (f16) for emotion identification
(i) if (dist_horizontal_lip <= -3) CFDis = 0.2; CFAng = 0.2;
(ii) if ((dist_horizontal_lip > -3) && (dist_horizontal_lip <= 5)) CFNeu = 0.3;
(iii) if ((dist_horizontal_lip > 5) && (dist_horizontal_lip <= 14)) CFSad = 0.3;
(iv) if ((dist_horizontal_lip > 14) && (dist_horizontal_lip <= 27)) CFFear = 0.3;
(v) if (dist_horizontal_lip > 27) CFHap = 0.3;

Example Rule 2: Using dist_vertical_lip (f17) for emotion identification
(i) if (dist_vertical_lip < -30) CFAng = 0.3;
(ii) if ((dist_vertical_lip < -3) && (dist_vertical_lip >= -30)) CFSad = 0.2;
(iii) if ((dist_vertical_lip < 27) && (dist_vertical_lip > -3)) CFNeu = 0.3; CFDis = 0.3;
(iv) if ((dist_vertical_lip >= 27) && (dist_vertical_lip < 58)) CFFear = 0.3;
(v) if (dist_vertical_lip >= 58) CFHap = 0.3;

# Conclusion and Future Work

Emotion is assuming increasing importance in HCI, with the growing recognition that emotion is central to human communication and intelligence. While various aspects of this problem have been addressed in the literature, the full problem has not received much attention so far. The primary concern in emotion recognition is inaccurate knowledge and data: there are hardly any features or feature combinations which can infer any emotion with complete certainty. In general, there are no features that are universally effective for recognizing all emotions, but there are some features which provide reasonable discrimination among various subsets of emotions. Hence the concept of the 'emotion profile' is useful for extensive analysis and evaluation of individual features. We used confirmation theory, as used in the MYCIN system, where the CF values are allocated to the emotional classes based on interclass distances derived from the analysis of the emotion profiles of the individual features. Rule based systems have certain advantages. Because of the uniform syntax, each rule can be easily analyzed; the syntax is usually quite simple, so it is easy to understand the rules without an explicit translation.
Rules can be considered as independent pieces of knowledge about the domain, and this independence leads to a high degree of modularity. The performance of the system can be improved by modifying, adding or deleting rules. This rule based system is applicable to any modality, such as speech, gesture or facial expressions, provided a set of features is available. To validate this further, a study was also done on the speech and keyboard-usage modalities using the above-mentioned rule based system.

Given the vast scope of the work needed to build a reliable emotion recognition system and use it to enhance HCI, and the unavailability of, and difficulty in collecting, reliable datasets for emotion recognition, this work covers only a part of the journey. A number of aspects require further investigation and refinement. One limitation of certainty factors is that they have no sound theoretical basis, though they often work well in practice. We allocated the CF values to the emotional classes based on heuristic rules as defined in Section III; these have been derived from the analysis of the individual features across different emotions. In this work, we have ignored the possibility of more than one emotional state being present at a time. The investigation of alternative uncertainty models like the Dempster-Shafer theory is also still open; Dempster-Shafer theory provides more flexibility in assigning belief to various subsets of emotions. The databases used for expression analysis are all based on subjects who "performed" a series of different expressions, and there is a significant difference between expressions of a spontaneous and of a deliberate nature. Without a database of spontaneous expressions, an expression analysis system cannot be robust enough. This database issue is common to all modalities, be it speech, facial expressions or others. Multimodal data fusion for emotion recognition also remains an open challenge, as several problems persist related to finding optimal features, integration and recognition. A completely automated multimodal emotion recognition system is still at a preliminary stage, shows very limited performance and is mostly restricted to the lab environment.

# References

- Marinez-Miranda, J., & Aldea, A. (2005). Emotions in human and artificial intelligence. Computers in Human Behaviour, 21.
- Foley, J. D. (1996). JTEC Panel Report on Human Computer Interaction Technologies in Japan, March 1996 (accessed January 2013).
- Lisetti, C. L., & Nasoz, F. (2005). Affective intelligent car interfaces with emotion recognition. In Proceedings of the 11th International Conference on Human Computer Interaction (HCI), Las Vegas, USA, July 22-27, 2005.
- Bower, G. (1981). Mood and memory. American Psychologist, 36(2).
- Zajonc, R. (1984). On the primacy of affect. American Psychologist, 39.
- Damasio, A. (1994). Descartes' Error. New York, NY: Avon Books.
- Ledoux, J. (1992). Brain mechanisms of emotion and emotional learning. Current Opinion in Neurobiology, 2.
- Derryberry, D., & Tucker, D. (1992). Neural mechanisms of emotion. Journal of Consulting and Clinical Psychology, 60(3).
- Colquitt, J. A., LePine, J. A., & Noe, R. A. (2000). Toward an integrative theory of training motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied Psychology, 85.
- Frijda, N. H. (1986). The Emotions. New York: Cambridge University Press.
- Birdwhistle, R. L. (1970). Kinesics and Context: Essays on Body Motion and Communication. University of Pennsylvania Press.
- Ekman, P., & Friesen, W. V. (1975). Unmasking the Face: A Guide to Recognizing Emotions from Facial Expressions. New Jersey: Prentice Hall.
- Chovil, N. (1991). Discourse-oriented facial displays in conversation. Research on Language and Social Interaction, 25.
- Goleman, D. (1995). Emotional Intelligence. New York: Bantam Books.
- James, L., & Nahl, D. (2000). Road Rage and Aggressive Driving: Steering Clear of Highway Warfare. Amherst, NY: Prometheus Books.
- Khanna, P., & Kumar, S. (2010). Recognizing emotions from human speech. In ThinkQuest: International Conference on Contours of Computing Technology, Mumbai, India. Springer.
- Khanna, P., & Kumar, S. (2011). Application of vector quantization in emotion recognition from human speech. In ICISTM 2011, Springer Communications in Computer and Information Science (CCIS).
- Khanna, P., & Kumar, S. (2010). Recognizing emotions from keyboard stroke pattern. International Journal of Computer Applications, 9(11).
- Picard, R. W. (1997). Affective Computing. Cambridge, MA: The MIT Press.
- De Silva, L. C., & Ng, P. C. (2000). Bimodal emotion recognition. In IEEE International Conference on Automatic Face and Gesture Recognition.
- Sebe, N., Cohen, I., Gevers, T., & Huang, T. S. (2006). Emotion recognition based on joint visual and audio cues. In International Conference on Pattern Recognition, Vol. 1.
- Zeng, Z., Tu, J., Liu, M., Huang, T. S., Pianfetti, B., Roth, D., & Levinson, S. (2007). Audio-visual affect recognition. IEEE Transactions on Multimedia, 9(2).
- Kim, J., & André, E. (2006). Emotion recognition using physiological and speech signal in short-term observation. In Perception and Interactive Technologies, LNAI 4201. Berlin/Heidelberg: Springer-Verlag.
- Corradini, A., Mehta, M., Bernsen, N., & Martin, J. C. (2003). Multimodal input fusion in human computer interaction on the example of the on-going NICE project. In Proceedings of the NATO-ASI Conference on Data Fusion for Situation Monitoring, Incident Detection, Alert and Response Management, Yerevan, Armenia.
- Liao, H. (2002). Multimodal Fusion. Master's thesis, University of Cambridge.
- Kettebekov, S., & Sharma, R. (2000). Understanding gestures in multimodal human computer interaction. International Journal on Artificial Intelligence Tools, 9(2).
- Sharma, R., Pavlovic, V., & Huang, T. (1998). Toward multimodal human computer interface. Proceedings of the IEEE, 86.
- Kumar, S., Ramani, S., Raman, S. M., Anjaneyulu, K. S. R., & Chandrasekar, R. (2007). Rule Based Expert Systems: A Practical Introduction. Narosa Publishers.
- Shortliffe, E. H., & Buchanan, B. G. (1975). A model of inexact reasoning in medicine. Mathematical Biosciences, 23.
- Gordon, J., & Shortliffe, E. H. (1984). The Dempster-Shafer theory of evidence. In Buchanan & Shortliffe (Eds.), Rule-Based Expert Systems, 272-292.
- Negoita, C. V. (1985). Expert Systems and Fuzzy Systems. Benjamin/Cummings.
- Doyle, J. (1979). A truth maintenance system. Artificial Intelligence, 12.
- Reiter, R. (1980). A logic for default reasoning. Artificial Intelligence, 13.
- McDermott, D., & Doyle, J. (1980). Non-monotonic logic I. Artificial Intelligence, 13.
- Kanade, T., Cohn, J., & Tian, Y. (2000). Comprehensive database for facial expression analysis. In Proceedings of the International Conference on Automatic Face and Gesture Recognition.
- Azcarate, A., Hageloh, F., van de Sande, K., & Valenti, R. (2005). Automatic facial emotion recognition. Technical report, University of Amsterdam.
- Zhao, J., & Kearney, G. (1996). Classifying facial emotions by backpropagation neural networks with fuzzy inputs. In Proceedings of the Conference on Neural Information Processing.
- Fasel, B., & Luettin, J. (2003). Automatic facial analysis: A survey. Pattern Recognition, 36.
- Pantic, M., & Rothkrantz, L. (2000a). Automatic analysis of facial expressions: The state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22.
- Sebe, N., Lew, M. S., Sun, Y., Cohen, I., Gevers, T., & Huang, T. S. (2007). Authentic facial expression analysis. Image and Vision Computing, 25.
- Pantic, M., & Rothkrantz, L. J. M. (2000b). Expert system for automatic analysis of facial expression. Image and Vision Computing, 18.
- Kobayashi, H., & Hara, F. (1992). Recognition of six basic facial expressions and their strength by neural network. In Proceedings of the International Workshop on Robot and Human Communication.
- Edwards, G. J., Cootes, T. F., & Taylor, C. J. (1998). Face recognition using active appearance models. In Proceedings of the European Conference on Computer Vision, Vol. 2.
- Lyons, M. J., Budynek, J., & Akamatsu, S. (1999). Automatic classification of single facial images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21.
- Huang, C. L., & Huang, Y. M. (1997). Facial expression recognition using model based feature extraction and action parameters classification. Journal of Visual Communication and Image Representation, 8.
- Hong, H., Neven, H., & von der Malsburg, C. (1998). Online facial expression recognition based on personalized galleries. In Proceedings of the International Conference on Automatic Face and Gesture Recognition.
- Kulkarni, S. S., Reddy, N. P., & Hariharan, S. I. (2009). Facial expression (mood) recognition from facial images using committee neural networks. BioMedical Engineering.