prolonged presentation (Regenbogen et al., 2012). It is of note, however, that participants were instead less empathically accurate during negative versus positive videos, pointing to the distinct impact of the valence of the videos on participants' feelings of empathy versus their levels of empathic accuracy. Hence, feeling empathy and being empathically accurate in inferring what others might feel are not the same.

A signaling function of eye gaze has been previously mentioned in the literature (Cowan et al., 2014; Kobayashi & Hashiya, 2011; Mason et al., 2005), although empirical evidence was lacking. Kobayashi and Hashiya (2011), for example, introduced the "gaze-grooming" hypothesis, stating that gaze has evolved into a contact-free, social grooming function in humans to form and maintain social bonds. Our results are in line with this "gaze-grooming" hypothesis, and the various target stories, derived from distinct targets, provide empirical evidence for the generalizability of this signaling function of gaze to a variety of social situations.

Strengths and limitations

This study uniquely examined to what extent gazing at the eye region of others contributes to participants' EA under ecologically valid circumstances. The methodological design of the EA task not only allows for a corresponding assessment of the feelings of both perceiver and target in positive and negative situations, but also incorporates the assessment of fluctuations in their affect over time. Furthermore, the novel addition of individual ratings of perceivers' affect and state empathy after each video informed us on how participants subjectively experienced the emotional target stories and gives additional insight into the validity of the task. While the richness of the dynamic stimuli, including both verbal and non-verbal information, is a major advantage of the present study, future studies could focus on the individual contributions of the verbal and visual content to EA (Zaki et al., 2009).

As perceivers were presented with videos of unknown targets, they were well aware that they were not involved in an actual bidirectional conversation. This may have lowered their motivation to be empathically accurate and may have affected our findings. Related to this, the videos do not mimic bidirectional interactions, but rather mimic listening to a monologue; these are two different types of interaction that occur under different circumstances. As the EA task more closely mimics the latter, our findings are probably most generalizable to closely resembling situations in real life, such as (mental) health settings in which practitioners listen to the personal stories of their clients. Furthermore, participants were placed in a chin rest while watching the videos to limit head motion. Although they reported low irritability during the task and EA levels were comparable to prior studies using the EA task, it is possible that they experienced the chin rest as unpleasant, which might have affected their performance. Lastly, it is of note that the participants in this study were adults aged between 35 and 64 years (Mage = 48, SD = 5.50), and the results of the present study need to be interpreted in the context of this age group.