Affective computing
- Affective Computing is also the title of a textbook on the subject by Rosalind Picard.
Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. While the origins of the field may be traced as far back as early philosophical enquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing. A motivation for the research is the ability to simulate empathy: the machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response to those emotions.
Detecting and recognizing emotional information
Detecting emotional information begins with passive sensors which capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. For example, a video camera might capture facial expressions, body posture and gestures, while a microphone might capture speech. Other sensors detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance.
Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done using machine learning techniques that process different modalities, such as speech recognition, natural language processing, or facial expression detection, and produce either labels (e.g. 'confused') or coordinates in a valence-arousal space. Several literature reviews provide comprehensive coverage of the state of the art.
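As a rough illustration, the sketch below maps a feature vector extracted from any modality to either a categorical label or a point in valence-arousal space; the data, feature count and model choices are placeholder assumptions rather than a specific published system.

```python
# Minimal sketch of the recognition step: map feature vectors extracted from
# any modality (speech, text, face) to either a categorical label or a point
# in valence-arousal space. Features and data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                                     # 200 samples, 12 placeholder features
labels = rng.choice(["confused", "calm", "frustrated"], size=200)  # categorical targets
va = rng.uniform(-1.0, 1.0, size=(200, 2))                         # (valence, arousal) targets

label_model = SVC().fit(X, labels)                   # discrete-label route
va_model = MultiOutputRegressor(Ridge()).fit(X, va)  # dimensional (valence-arousal) route

x_new = rng.normal(size=(1, 12))
print(label_model.predict(x_new))   # e.g. ['confused']
print(va_model.predict(x_new))      # e.g. [[0.12, -0.34]]
```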
Emotion in machines
Another area within affective computing is the design of computational devices proposed to exhibit either innate emotional capabilities or to be capable of convincingly simulating emotions. A more practical approach, based on current technological capabilities, is the simulation of emotions in conversational agents in order to enrich and facilitate interactivity between human and machine. While human emotions are often associated with surges in hormones and other neuropeptides, emotions in machines might be associated with abstract states associated with progress (or lack of progress) in autonomous learning systems. In this view, affective emotional states correspond to time-derivatives (perturbations) in the learning curve of an arbitrary learning system.
Marvin Minsky, one of the pioneering computer scientists in artificial intelligence, relates emotions to the broader issues of machine intelligence, stating in The Emotion Machine that emotion is "not especially different from the processes that we call 'thinking.'"
Emotional speech
One can take advantage of the fact that changes in the autonomic nervous system indirectly alter speech, and use this information to produce systems capable of recognizing affect based on extracted features of speech. For example, speech produced in a state of fear, anger or joy becomes faster, louder and more precisely enunciated, with a higher and wider pitch range, while emotions such as tiredness, boredom or sadness lead to slower, lower-pitched and slurred speech. Emotional speech processing recognizes the user's emotional state by analyzing speech patterns: vocal parameters and prosodic features such as pitch variables and speech rate are analyzed through pattern recognition.
Recognizing affect from speech has an average reported success rate of around 63%. This result is fairly satisfying when compared with humans’ success rate at identifying emotions, but somewhat lower than other forms of emotion recognition (such as those which employ physiological signals or facial processing). Furthermore, many speech characteristics are independent of semantics or culture, which makes this a very promising technique.
Algorithms
The process of speech affect detection requires the creation of a reliable database, broad enough to fit every need of its application, as well as the selection of a successful classifier which allows for quick and accurate emotion identification. Currently, the most frequently used classifiers are linear discriminant classifiers (LDC), k-nearest neighbour (k-NN), Gaussian mixture models (GMM), support vector machines (SVM), decision tree algorithms and hidden Markov models (HMMs). Various studies have shown that choosing an appropriate classifier can significantly enhance the overall performance of the system. The list below gives a brief description of each algorithm (a minimal training sketch follows the list):
- LDC – classification happens based on the value obtained from a linear combination of the feature values, which are usually provided in the form of a feature vector.
- k-NN – classification happens by locating the object in the feature space and comparing it with the k nearest neighbours (training examples); a majority vote decides the classification.
- GMM – a probabilistic model used for representing the existence of sub-populations within the overall population. Each sub-population is described by a mixture distribution, which allows observations to be classified into the sub-populations.
- SVM – a type of (usually binary) linear classifier which decides into which of the two (or more) possible classes each input may fall.
- Decision tree algorithms – work by following a decision tree in which leaves represent the classification outcome and branches represent the conjunctions of features that lead to that outcome.
- HMMs – statistical Markov models in which the states and state transitions are not directly observable; instead, a series of outputs dependent on the states is visible. In the case of affect recognition, the outputs represent the sequence of speech feature vectors, which allow the deduction of the sequence of states through which the model progressed. The states can consist of various intermediate steps in the expression of an emotion, and each of them has a probability distribution over the possible output vectors. The state sequences allow the prediction of the affective state being classified, and this is one of the most commonly used techniques within the area of speech affect detection.
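The sketch below illustrates one of the classifiers listed above (an SVM) being trained on prosodic feature vectors; the feature names, corpus and labels are placeholders, not a particular published database or result.

```python
# Hedged sketch: train an SVM on placeholder prosodic features and estimate
# accuracy with cross-validation. A real system would replace X and y with
# features extracted from an emotional speech corpus.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))   # each row: [mean pitch, pitch range, speech rate, energy] (placeholders)
y = rng.choice(["anger", "joy", "sadness", "neutral"], size=300)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)            # 5-fold cross-validation
print("mean cross-validated accuracy:", scores.mean())
```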
Databases
The vast majority of present systems are data-dependent. This creates one of the biggest challenges in detecting emotions based on speech, as it implies choosing an appropriate database used to train the classifier. Most of the currently available data was obtained from actors and is thus a representation of archetypal emotions. Those so-called acted databases are usually based on the Basic Emotions theory (by Paul Ekman), which assumes the existence of six basic emotions (anger, fear, disgust, surprise, joy, sadness), the others simply being a mix of the former. Nevertheless, acted databases still offer high audio quality and balanced classes (although often too few), which contributes to high success rates in recognizing emotions.
However, for real-life application, naturalistic data is preferred. A naturalistic database can be produced by observation and analysis of subjects in their natural context. Ultimately, such a database should allow the system to recognize emotions based on their context as well as work out the goals and outcomes of the interaction. The nature of this type of data allows for authentic real-life implementation, because it describes states naturally occurring during human-computer interaction (HCI).
Despite the numerous advantages which naturalistic data has over acted data, it is difficult to obtain and usually has low emotional intensity. Moreover, data obtained in a natural context has lower signal quality, due to surrounding noise and the distance of the subjects from the microphone. The first attempt to produce such a database was the FAU Aibo Emotion Corpus for CEICES (Combining Efforts for Improving Automatic Classification of Emotional User States), which was developed based on a realistic context of children (age 10-13) playing with Sony’s Aibo robot pet. More generally, producing one standard database for all emotion research would provide a method of evaluating and comparing different affect recognition systems.
Speech Descriptors
The complexity of the affect recognition process increases with the number of classes (affects) and speech descriptors used within the classifier. It is therefore crucial to select only the most relevant features in order to ensure the model's ability to successfully identify emotions, as well as to increase performance, which is particularly significant for real-time detection. The range of possible choices is vast, with some studies mentioning the use of over 200 distinct features. It is crucial to identify those that are redundant and undesirable in order to optimize the system and increase the success rate of correct emotion detection. The most commonly used speech characteristics are categorized into the following groups (a minimal feature-extraction sketch follows the list):
- Frequency characteristics:
- Accent shape – affected by the rate of change of the fundamental frequency.
- Average pitch – description of how high/low the speaker speaks relative to normal speech.
- Contour slope – describes the tendency of the frequency change over time; it can be rising, falling or level.
- Final lowering – the amount by which the frequency falls at the end of an utterance.
- Pitch range – measures the spread between maximum and minimum frequency of an utterance.
- Time-related features:
- Speech rate – describes the rate of words or syllables uttered over a unit of time
- Stress frequency – measures the rate of occurrences of pitch accented utterances
- Voice quality parameters and energy descriptors:
- Breathiness – measures the aspiration noise in speech
- Brilliance – describes the dominance of high or low frequencies in the speech
- Loudness – measures the amplitude of the speech waveform, translates to the energy of an utterance
- Pause Discontinuity – describes the transitions between sound and silence
- Pitch Discontinuity – describes the transitions of fundamental frequency.
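A minimal sketch of extracting a few of the descriptors above (average pitch, pitch range and loudness) is given below, assuming the librosa library and a placeholder audio file; the frequency bounds and sampling rate are illustrative choices.

```python
# Hedged sketch: extract average pitch, pitch range and a loudness proxy
# from a WAV file. "speech.wav" is a placeholder path.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=16000)

# Fundamental frequency track (frequency characteristics)
f0, _, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
f0 = f0[~np.isnan(f0)]                        # keep voiced frames only
average_pitch = float(np.mean(f0))            # "average pitch"
pitch_range = float(np.max(f0) - np.min(f0))  # "pitch range"

# Energy descriptor ("loudness")
rms = librosa.feature.rms(y=y)[0]
loudness = float(np.mean(rms))

print(average_pitch, pitch_range, loudness)
```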
Facial affect detection
The detection and processing of facial expression are achieved through various methods such as optical flow, hidden Markov models, neural network processing or active appearance models. More than one modality can be combined or fused (multimodal recognition, e.g. facial expressions and speech prosody, or facial expressions and hand gestures) to provide a more robust estimation of the subject's emotional state.
Emotion classification
By doing cross-cultural research on the Fore tribesmen of Papua New Guinea at the end of the 1960s, Paul Ekman proposed the idea that facial expressions of emotion are not culturally determined, but universal. Thus, he suggested that they are biological in origin and can therefore be safely and correctly categorised.
In 1972, he therefore put forth six basic emotions:
- Anger
- Disgust
- Fear
- Happiness
- Sadness
- Surprise
However, in the 1990s Ekman expanded his list of basic emotions to include a range of positive and negative emotions, not all of which are encoded in facial muscles. The newly included emotions are:
- Amusement
- Contempt
- Contentment
- Embarrassment
- Excitement
- Guilt
- Pride in achievement
- Relief
- Satisfaction
- Sensory pleasure
- Shame
Facial Action Coding System
Defining expressions in terms of muscle actions
A system has been conceived to formally categorise the physical expression of emotions. The central concept of the Facial Action Coding System (FACS), created by Paul Ekman and Wallace V. Friesen in 1978, is the Action Unit (AU).
An Action Unit is, basically, a contraction or relaxation of one or more muscles. As simple as this concept may seem, it is enough to form the basis of a complex, interpretation-free emotional identification system.
By identifying different facial cues, scientists are able to map them to their corresponding Action Unit codes. Consequently, they have proposed the following classification of the six basic emotions, according to their Action Units (“+” here means “and”):
Emotion | Action Units |
---|---|
Happiness | 6+12 |
Sadness | 1+4+15 |
Surprise | 1+2+5B+26 |
Fear | 1+2+4+5+20+26 |
Anger | 4+5+7+23 |
Disgust | 9+15+16 |
Contempt | R12A+R14A |
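As an illustration, the sketch below looks up an emotion from a set of detected Action Units using the table above; detecting the AUs themselves is assumed to happen elsewhere, and intensity grades and unilateral codes (5B, R12A, R14A) are simplified to plain AU numbers.

```python
# Hedged sketch: pick the emotion whose AU set best matches the detected AUs.
EMOTION_AUS = {
    "Happiness": {6, 12},
    "Sadness": {1, 4, 15},
    "Surprise": {1, 2, 5, 26},      # 5B simplified to AU 5
    "Fear": {1, 2, 4, 5, 20, 26},
    "Anger": {4, 5, 7, 23},
    "Disgust": {9, 15, 16},
    "Contempt": {12, 14},           # unilateral R12A+R14A simplified
}

def classify_aus(detected: set[int]) -> str:
    """Return the emotion whose AU set best matches the detected AUs."""
    def score(aus: set[int]) -> float:
        return len(aus & detected) / len(aus | detected)   # Jaccard similarity
    return max(EMOTION_AUS, key=lambda e: score(EMOTION_AUS[e]))

print(classify_aus({6, 12}))          # -> Happiness
print(classify_aus({1, 4, 15, 17}))   # -> Sadness
```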
Challenges in facial detection
As with every computational practice, in affect detection by facial processing some obstacles need to be overcome in order to fully unlock the hidden potential of the overall algorithm or method employed. The accuracy of modelling and tracking has been an issue, especially in the early stages of affective computing. As hardware evolves, new discoveries are made and new practices are introduced, this lack of accuracy fades, leaving behind noise issues. Methods for noise removal exist, including neighbourhood averaging, linear Gaussian smoothing and median filtering, as well as newer methods such as the Bacterial Foraging Optimization Algorithm.
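For illustration, the sketch below applies two of the noise-removal methods mentioned above (linear Gaussian smoothing and median filtering) to a face image using OpenCV; the file name and kernel sizes are placeholder assumptions.

```python
# Hedged sketch: smooth a face image before feature extraction.
# "face.png" is a placeholder path; kernel sizes are illustrative.
import cv2

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)

smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)   # linear Gaussian smoothing
denoised = cv2.medianBlur(img, 5)                      # median filtering

cv2.imwrite("face_gaussian.png", smoothed)
cv2.imwrite("face_median.png", denoised)
```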
It is generally known that the accuracy of facial recognition (not affective state recognition) has not been brought to a level high enough to permit widespread efficient use across the world (there have been many attempts, especially by law enforcement, which failed at successfully identifying criminals). Without improvements in the accuracy of the hardware and software used to scan faces, progress is considerably slowed.
Other challenges include:
- The fact that posed expressions, as used by most subjects of the various studies, are not natural, and therefore not 100% accurate.
- The lack of rotational freedom. Affect detection works very well with frontal views, but when the head is rotated more than 20 degrees, problems have been reported.
Body gesture
Gestures can be used efficiently as a means of detecting a particular emotional state of the user, especially when used in conjunction with speech and face recognition. Depending on the specific action, gestures can be simple reflexive responses, like lifting the shoulders when one does not know the answer to a question, or they can be complex and meaningful, as when communicating with sign language. Without making use of any object or surrounding environment, we can wave our hands, clap or beckon. On the other hand, when using objects, we can point at them, move, touch or handle them. A computer should be able to recognize these gestures, analyze the context and respond in a meaningful way in order to be used efficiently for human-computer interaction.
There are many proposed methods for detecting body gesture. Some literature differentiates two approaches to gesture recognition: a 3D-model-based one and an appearance-based one. The former makes use of 3D information about key elements of the body parts in order to obtain several important parameters, such as palm position or joint angles (a minimal joint-angle sketch is shown below). Appearance-based systems, on the other hand, use images or videos for direct interpretation. Hand gestures have been a common focus of body gesture detection, with appearance-based methods and 3D-modelling methods traditionally being used.
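The sketch below illustrates the 3D-model-based route mentioned above: given 3D key points of the arm (assumed to come from a depth sensor or pose estimator), it computes a joint angle that a downstream gesture or affect classifier could consume; the coordinates are placeholder values.

```python
# Hedged sketch: compute a joint angle from 3D key points (placeholder values).
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (in degrees) formed by points a-b-c in 3D space."""
    v1 = a - b
    v2 = c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

shoulder = np.array([0.0, 1.4, 0.0])
elbow = np.array([0.0, 1.1, 0.1])
wrist = np.array([0.2, 1.2, 0.3])
print(joint_angle(shoulder, elbow, wrist))   # elbow flexion angle in degrees
```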
Physiological monitoring
Physiological monitoring could be used to detect a user’s emotional state by monitoring and analysing physiological signs. These signs range from pulse and heart rate to the minute contractions of the facial muscles. This area of research is still in relative infancy, as there seems to be more of a drive towards affect recognition through facial inputs. Nevertheless, the area is gaining momentum and real products which implement the techniques are now appearing. The three main physiological signs that can be analysed are blood volume pulse, galvanic skin response and facial electromyography.
Overview
A subject’s blood volume pulse (BVP) can be measured by a process called photoplethysmography, which produces a graph indicating blood flow through the extremities. The peaks of the waves indicate a cardiac cycle in which the heart has pumped blood to the extremities. If the subject experiences fear or is startled, their heart usually ‘jumps’ and beats quickly for some time, causing the amplitude of the cardiac cycle to increase; this can be seen on a photoplethysmograph as the distance between the trough and the peak of the wave increases. As the subject calms down and more blood flows back to the extremities, the cycle returns to normal.
Methodology
Infra-red light is shone on the skin by special sensor hardware, and the amount of light reflected is measured. The amount of reflected and transmitted light correlates with the BVP, as light is absorbed by haemoglobin, which is found richly in the bloodstream.
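The sketch below shows how a raw photoplethysmography signal could be turned into two of the quantities discussed above, heart rate (from peak spacing) and trough-to-peak amplitude; the signal here is synthetic and the sampling rate is an assumption.

```python
# Hedged sketch: heart rate and pulse amplitude from a synthetic PPG trace.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)            # synthetic ~72 bpm pulse wave

peaks, _ = find_peaks(ppg, distance=fs * 0.4)      # enforce max ~150 bpm
troughs, _ = find_peaks(-ppg, distance=fs * 0.4)

ibi = np.diff(peaks) / fs                    # inter-beat intervals in seconds
heart_rate = 60.0 / ibi.mean()               # beats per minute
amplitude = ppg[peaks].mean() - ppg[troughs].mean()   # mean trough-to-peak amplitude

print(round(heart_rate, 1), round(amplitude, 2))
```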
Disadvantages
It can be cumbersome to ensure that the sensor shining infra-red light and monitoring the reflected light is always pointing at the same extremity, especially since subjects often stretch and readjust their position while using a computer.
Other factors can also affect the blood volume pulse. Because it is a measure of blood flow through the extremities, if the subject feels hot or particularly cold, the body may allow more or less blood to flow to the extremities, regardless of the subject’s emotional state.
Facial Electromyography
Facial electromyography is a technique used to measure the electrical activity of the facial muscles by amplifying the tiny electrical impulses that are generated by muscle fibers when they contract. The face expresses a great deal of emotion; however, there are two main facial muscle groups that are usually studied to detect emotion (a minimal readout sketch follows below):
The corrugator supercilii muscle, also known as the ‘frowning’ muscle, draws the brow down into a frown, and therefore is the best test for negative, unpleasant emotional response.
The zygomaticus major muscle is responsible for pulling the corners of the mouth back when you smile, and therefore is the muscle used to test for positive emotional response.
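A minimal sketch of the readout just described is shown below: the mean rectified activity of the corrugator and zygomaticus channels is compared to obtain a crude positive/negative valence estimate; the signals and amplitudes are synthetic placeholders.

```python
# Hedged sketch: crude valence estimate from two synthetic facial-EMG channels.
import numpy as np

rng = np.random.default_rng(2)
corrugator = rng.normal(0.0, 5e-6, size=2000)    # simulated "frown" EMG in volts
zygomaticus = rng.normal(0.0, 12e-6, size=2000)  # simulated "smile" EMG, stronger here

corr_level = np.mean(np.abs(corrugator))     # mean rectified amplitude
zygo_level = np.mean(np.abs(zygomaticus))

valence = "positive" if zygo_level > corr_level else "negative"
print(valence)
```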
Galvanic Skin Response
Galvanic skin response (GSR) is a measure of skin conductivity, which depends on how moist the skin is. Because the sweat glands produce this moisture and the glands are controlled by the body’s nervous system, there is a correlation between GSR and the arousal state of the body: the more aroused a subject is, the greater the skin conductivity and GSR reading. It can be measured using two small silver chloride electrodes placed on the skin, with a small voltage applied between them; the conductance is measured by a sensor. To maximize comfort and reduce irritation, the electrodes can be placed on the feet, which leaves the hands fully free to interface with the keyboard and mouse.
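The arithmetic involved is simple; the sketch below converts a measured current under an applied voltage into skin conductance and flags elevated arousal relative to a resting baseline, where the voltage, current, baseline and threshold are illustrative assumptions.

```python
# Hedged sketch: skin conductance from voltage and current, with an
# illustrative arousal threshold. All numbers are assumptions.
applied_voltage = 0.5          # volts across the two electrodes
measured_current = 2.6e-6      # amperes read by the sensor

conductance_uS = (measured_current / applied_voltage) * 1e6   # microsiemens
baseline_uS = 4.0              # resting skin conductance level (assumed)

if conductance_uS > baseline_uS * 1.2:
    print("elevated arousal:", round(conductance_uS, 2), "µS")
else:
    print("near baseline:", round(conductance_uS, 2), "µS")
```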
Visual aesthetics
Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities is a highly subjective task. Computer scientists at Penn State treat the challenge of automatically inferring the aesthetic quality of pictures from their visual content as a machine learning problem, with a peer-rated online photo-sharing website as a data source. They extract certain visual features based on the intuition that these features can discriminate between aesthetically pleasing and displeasing images.
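A rough sketch of this approach is given below: a few global visual features are extracted from each photograph and a classifier is trained on peer ratings; the file names, features and labels are placeholders rather than the Penn State feature set.

```python
# Hedged sketch: simple global visual features plus a classifier trained on
# peer ratings. Paths and labels are placeholders.
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier

def visual_features(path: str) -> list[float]:
    img = np.asarray(Image.open(path).convert("RGB").resize((128, 128)), dtype=float) / 255.0
    brightness = img.mean()
    saturation = (img.max(axis=2) - img.min(axis=2)).mean()   # crude colorfulness proxy
    contrast = img.std()
    return [brightness, saturation, contrast]

paths = ["photo1.jpg", "photo2.jpg", "photo3.jpg", "photo4.jpg"]   # placeholder files
labels = [1, 0, 1, 0]                                              # 1 = rated aesthetically pleasing

X = np.array([visual_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(clf.predict([visual_features("new_photo.jpg")]))
```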
Potential applications
In e-learning applications, affective computing can be used to adjust the presentation style of a computerized tutor when a learner is bored, interested, frustrated, or pleased. Psychological health services, i.e. counseling, benefit from affective computing applications when determining a client's emotional state. Affective computing can also send a message via color or sound to express an emotional state to others.
Robotic systems capable of processing affective information exhibit higher flexibility when working in uncertain or complex environments. Companion devices, such as digital pets, use affective computing abilities to enhance realism and provide a higher degree of autonomy.
Other potential applications are centered around social monitoring. For example, a car can monitor the emotion of all occupants and engage in additional safety measures, such as alerting other vehicles if it detects the driver to be angry. Affective computing has potential applications in human computer interaction, such as affective mirrors allowing the user to see how he or she performs; emotion monitoring agents sending a warning before one sends an angry email; or even music players selecting tracks based on mood.
One idea, put forth by the Romanian researcher Dr. Nicu Sebe in an interview, is the analysis of a person’s face while they are using a certain product (he mentioned ice cream as an example). Companies would then be able to use such analysis to infer whether their product will or will not be well received by the respective market.
One could also use affective state recognition to judge the impact of a TV advertisement through a real-time video recording of the viewer and the subsequent study of his or her facial expression. Averaging the results obtained on a large group of subjects, one can tell whether that commercial (or movie) has the desired effect and which elements interest the watcher most.
Affective computing is also being applied to the development of communicative technologies for use by people with autism.
Application examples
- Wearable computer applications make use of affective technologies, such as detection of biosignals
- Human–computer interaction
- Kismet (robot)
- Educational technology
See also
- Affect control theory
- Affective design
- Emotion
- Emotion Markup Language (EmotionML)
- Chatbot
- CyberEmotions
- Sentic computing
- Sentiment analysis
External links
- Affective Computing Research Group at the MIT Media Laboratory
- Computational Emotion Group at USC
- Emotive Computing Group at the University of Memphis
- 2011 International Conference on Affective Computing and Intelligent Interaction
- Brain, Body and Bytes: Psychophysiological User Interaction CHI 2010 Workshop (10-15 April 2010)
- International Journal of Synthetic Emotions
- IEEE Transactions on Affective Computing (TAC)