Neural and affective responses to prolonged eye contact with one’s own adolescent child and unfamiliar others

matrix size = 80 × 80, with 38 slices, voxel size = 2.75 mm³. The task was programmed and presented electronically using E-Prime 2.0 (Psychology Software Tools, 2012) and participants viewed the task through a mirror attached to the head coil. Foam inserts were used to restrict head motion if necessary.

Data preprocessing and analyses

Affective responses

Self-report affective responses were analyzed in R version 3.6.1 (R Core Team, 2013), with the following packages: lme4 for mixed model analysis, psych for descriptive statistics, and ggplot2 for creating figures (Bates et al., 2012; Revelle, 2012; Wickham et al., 2016). Questions that were not answered by the participants within a set time of 8 s were coded as missing values and excluded from the affective response analyses, but not from the neuroimaging analyses; this resulted in 18 missing affective responses out of 3160 responses in total (0.6%). To assess both the impact of eye contact (direct versus averted gaze) and how this may vary as a function of the target identity in the video, we used a generalized linear mixed regression model with gaze direction (2 levels: direct gaze, averted gaze) and target (4 levels: own child, unfamiliar child, unfamiliar adult, self) as predictors of parents’ self-reported affective responses. We ran three separate generalized linear mixed regression models, one for each subjective rating: feelings of connectedness, feelings about the targets, and mood of parents. We tested for main effects of gaze direction and target, and their interaction.

Eye tracking data analyses

We used a customized MATLAB (MathWorks, Inc., Natick, MA, version 9.5) script to preprocess raw eye tracking data into measures of eye gaze per parent per video clip. Raw gaze data of parents’ right eye were used to calculate information on gaze position and duration.
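The mixed-model design described above (fixed effects for gaze direction, target, and their interaction, with participant as a grouping factor) can be sketched as follows. The original analyses were run in R with lme4; this is an illustrative Python analogue using statsmodels' linear mixed model, fitted here on simulated stand-in data. All variable names (`pid`, `gaze`, `target`, `rating`) and the simulated effect sizes are hypothetical, not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate stand-in data (hypothetical): one affective rating per trial,
# with gaze direction (2 levels) and target identity (4 levels) crossed
# within participants, mirroring the design described in the text.
rng = np.random.default_rng(0)
rows = []
for pid in range(30):
    subj_intercept = rng.normal(0, 0.5)  # random intercept per participant
    for gaze in ["direct", "averted"]:
        for target in ["own_child", "unfamiliar_child",
                       "unfamiliar_adult", "self"]:
            for _ in range(5):
                rating = (3.0
                          + 0.4 * (gaze == "direct")      # assumed gaze effect
                          + 0.6 * (target == "own_child")  # assumed target effect
                          + subj_intercept + rng.normal(0, 1.0))
                rows.append({"pid": pid, "gaze": gaze,
                             "target": target, "rating": rating})
data = pd.DataFrame(rows)

# Fixed effects for gaze, target, and their interaction; random intercept
# per participant -- analogous to lme4's  rating ~ gaze * target + (1 | pid).
model = smf.mixedlm("rating ~ gaze * target", data, groups=data["pid"])
result = model.fit()
print(result.summary())
```

The fixed-effect table contains 8 terms (intercept, 1 gaze contrast, 3 target contrasts, and 3 interaction contrasts), matching the 2 × 4 design; in the study, one such model was fitted per subjective rating.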
Furthermore, the validity of the gaze data was calculated as the percentage of successfully recorded eye tracking data per video clip, as an estimate of data quality. Individual trials for which validity was below 70% were excluded from further analysis, a threshold within the common range (Bojko, 2013). Visual inspection of the gaze data was performed to detect aberrations in data quality. Gaze data of 16 participants could not be collected due to technical problems with the eye tracker in the scanner, and gaze data of another 15 participants could not be collected due to an unsuccessful calibration procedure prior to the scan session, resulting in a final eye tracking sample of 48 participants. Calibration failed mainly because the eyes were difficult to track when participants wore MR-compatible glasses (if they could not perform the task without glasses) or had light-colored eyes. In addition, 22 trials of five participants were excluded (min. 1 and max. 10 trials per person) due to missing gaze data for >30% of the
RkJQdWJsaXNoZXIy MjY0ODMw