Inferential Emotion Tracking

From Chen & Whitney (2019). Emotion recognition is an essential human ability critical for social functioning. It is widely assumed that identifying facial expressions is the key to this ability, and models of emotion recognition have mainly focused on facial and bodily features in static, unnatural conditions. We developed a method called affective tracking to reveal and quantify the enormous contribution of visual context to affect (valence and arousal) perception. When characters’ faces and bodies were masked in silent videos, viewers inferred the affect of the invisible characters successfully and with high agreement based solely on visual context. We further show that context is not only sufficient but also necessary to accurately perceive human affect over time, as it provides a substantial and unique contribution beyond the information available from the face and body. Our method (which we have made publicly available) reveals that emotion recognition is, at its heart, as much a matter of context as of faces.
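The unique-contribution claim lends itself to a hierarchical regression on continuous affect ratings: fit the face/body ratings alone, then add the context ratings and look at the gain in explained variance. The sketch below only illustrates that idea and is not the authors' analysis code; the rating arrays, frame count, and noise levels are hypothetical, and scikit-learn simply stands in for whatever statistics package was actually used.

```python
# Minimal sketch (not the authors' code): estimating the unique contribution
# of context to affect ratings via hierarchical regression.
# All data below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_frames = 600  # e.g., a 20 s clip rated continuously at 30 Hz (assumed)

# Hypothetical continuous valence ratings, one value per frame, in [-1, 1]:
full_info = rng.uniform(-1, 1, n_frames)               # face + body + context visible
face_body = full_info + rng.normal(0, 0.3, n_frames)   # face/body-only condition
context   = full_info + rng.normal(0, 0.3, n_frames)   # context-only condition

def r2(X, y):
    """Proportion of variance in y explained by the predictors in X."""
    return LinearRegression().fit(X, y).score(X, y)

# Step 1: variance in fully informed ratings explained by face/body alone.
r2_facebody = r2(face_body.reshape(-1, 1), full_info)

# Step 2: variance explained once context ratings are added as a predictor.
r2_both = r2(np.column_stack([face_body, context]), full_info)

# The increment is the unique contribution of context beyond face and body.
print(f"Unique contribution of context (delta R^2): {r2_both - r2_facebody:.3f}")
```

A positive ΔR² in an analysis of this shape would correspond to the paper's claim that context carries affect information that face and body alone do not provide.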

From Chen & Whitney (2020). The ability to recognize others’ emotions is critical for social interactions. It is widely assumed that recognizing facial expressions predominantly determines perceived categorical emotion, and that contextual information only coarsely modulates or disambiguates interpreted faces. Using a novel method, inferential emotion tracking, we isolated and quantified the contribution of visual context versus face and body information in dynamic emotion recognition. Even when faces and bodies were blurred out in muted videos, observers inferred the emotion of invisible characters accurately and with high agreement based solely on visual context. Our results further show that the presence of visual context can override emotion categories interpreted from face and body information. Strikingly, we find that visual context determines perceived emotion nearly as much and as often as face and body information does. Visual context is an indispensable element of emotion recognition: Without context, observers can misperceive a person’s emotion over time.
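The "as often" comparison can be made concrete with a simple agreement count between partial-information and full-information judgments. The toy sketch below is not the published analysis; the emotion labels, number of clips, and match rates are all invented for illustration.

```python
# Toy sketch (not the authors' analysis): how often do context-only and
# face/body-only categorical judgments agree with full-information judgments?
# All labels and rates below are invented.
import numpy as np

rng = np.random.default_rng(1)
emotions = np.array(["happy", "sad", "angry", "fearful", "neutral"])
n_clips = 200

full_info = rng.choice(emotions, n_clips)  # judgments with all cues visible

# Hypothetical partial-information judgments that match the full-information
# judgment on some made-up proportion of clips and are random otherwise.
context_only   = np.where(rng.random(n_clips) < 0.70,
                          full_info, rng.choice(emotions, n_clips))
face_body_only = np.where(rng.random(n_clips) < 0.75,
                          full_info, rng.choice(emotions, n_clips))

acc_context  = np.mean(context_only == full_info)
acc_facebody = np.mean(face_body_only == full_info)
print(f"Agreement with full-information labels: "
      f"context {acc_context:.2f}, face/body {acc_facebody:.2f}")
```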

From Chen & Whitney (2020). Understanding the emotional states of others is important for social functioning. Recent studies show that context plays an essential role in emotion recognition. However, it remains unclear whether emotion inference from visual scene context is as efficient as emotion recognition from faces. Here, we measured the speed of context-based emotion perception, using Inferential Affective Tracking (IAT) with naturalistic and dynamic videos. Using cross-correlation analyses, we found that inferring affect based on visual context alone is just as fast as tracking affect with all available information including face and body. We further demonstrated that this approach has high precision and sensitivity to sub-second lags. Our results suggest that emotion recognition from dynamic contextual information might be automatic and immediate. Seemingly complex context-based emotion perception is far more efficient than previously assumed.
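A cross-correlation lag analysis of this kind can be sketched in a few lines: z-score the two rating time series, cross-correlate them, and read off the lag at the peak. The code below is illustrative only; the rating series, the 30 Hz sampling rate, and the artificial 200 ms shift are invented to show that the approach resolves sub-second lags, and it is not the authors' pipeline.

```python
# Illustrative sketch (not the published analysis): estimating the lag between
# context-only and full-information affect ratings by cross-correlation.
import numpy as np
from scipy import signal

fs = 30.0                             # assumed rating sample rate (Hz)
rng = np.random.default_rng(2)
full_info = rng.normal(size=900)      # hypothetical ratings with face, body, and context
context_only = np.roll(full_info, 6)  # same series shifted by 6 samples (~200 ms) for the demo

# Z-score both series, then cross-correlate and locate the peak.
a = (full_info - full_info.mean()) / full_info.std()
b = (context_only - context_only.mean()) / context_only.std()
xcorr = signal.correlate(b, a, mode="full") / len(a)
lags = signal.correlation_lags(len(b), len(a), mode="full")

# A positive lag means the context-only ratings trail the full-information ratings.
best_lag = lags[np.argmax(xcorr)]
print(f"Estimated lag of context-only tracking: {best_lag / fs * 1000:.0f} ms")
```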