
… scanner bore. We used a customized MATLAB (MathWorks, Inc., Natick, MA, version 9.5) script to preprocess the raw eye movement data into gaze position and duration. Using an established algorithm for face and facial feature detection (Viola & Jones, 2001), we created rectangular areas of interest (AOIs) around the left and right eye of the targets in all videos, which were combined into a single eye-region AOI for further analyses. The primary gaze measure was the percentage of dwell time within the eye region per video relative to the total video duration, where dwell time is defined as the time spent looking within an AOI. The eye tracker was calibrated and validated using a nine-point calibration grid from EyeLink's calibration protocol. Gaze data collection was unsuccessful for 25 adolescents (HC: n = 17, DEP: n = 8) due to technical problems or a failed calibration procedure (e.g., because of sight deficiencies, participants wearing glasses, or participants having light-colored eyes). In addition, 19 trials from nine adolescents (HC: n = 6, DEP: n = 3) were excluded because of >30% missing gaze data. This resulted in a final gaze data set of 53 (out of 78) adolescents (HC: n = 42; DEP: n = 11), comprising 829 trials (out of 848; 2.2% missing).

fMRI data acquisition and neuroimaging analyses

MR images were acquired using a Philips 3.0T Achieva MRI scanner equipped with a 32-channel SENSE head coil. For the eye contact task, T2*-weighted echo planar imaging was used, and a structural 3D T1-weighted scan was acquired (see Supplement S4.3 for details on scan parameters). MRI data were preprocessed and analyzed using SPM12 (Wellcome Trust Centre for Neuroimaging, University College London). Functional MR images were slice-time corrected, corrected for field-strength inhomogeneities using B0 field maps, unwarped and realigned, co-registered to subject-specific structural images, normalized to MNI space using the DARTEL toolbox (Ashburner, 2007), and smoothed with an 8-mm full width at half maximum isotropic Gaussian kernel. Raw and preprocessed data were checked for quality, registration, and movement. Average head movement per adolescent did not exceed 1 voxel (i.e., 3 mm; M = 0.075 mm, SD = 0.083 mm, range: 0.038-0.252 mm). We corrected for serial autocorrelations using a first-order autoregressive model (AR(1)). Low-frequency signals were removed using a high-pass filter (cutoff = 128 s), and nuisance covariates were included to remove run effects.

First, to examine neural responses to gaze direction, target, and their interaction within the sample of HC adolescents, we constructed a general linear model with eight regressors indicating cue onset for each condition and one regressor for the onsets of the subjective ratings. Cue onset regressors were defined from the onset of each video and modeled for its duration (16-38 s). The subjective rating regressor was defined from the onset of each question and modeled for the duration the question was displayed on the screen, including the 1000 ms during which a “Too late!” screen was shown when adolescents did not answer within the set time period of 8000 ms (self-paced; M = 2946 ms; SD = 1259 ms; range = 749-9002 ms). Six motion parameters (based on the realignment parameters) were included to correct for head motion.
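To illustrate the AOI construction step described above, the following MATLAB sketch uses the Computer Vision Toolbox implementation of the Viola-Jones detector to place rectangles around the left and right eye and merge them into a single eye-region AOI. The video file name, detector models, and variable names are illustrative assumptions, not the script actually used in this study.

```matlab
% Illustrative sketch of the eye-region AOI construction (assumed inputs and names).
frame = read(VideoReader('target_video.mp4'), 1);       % first frame of a target video

leftDetector  = vision.CascadeObjectDetector('LeftEye');
rightDetector = vision.CascadeObjectDetector('RightEye');
leftBox  = step(leftDetector,  frame);  leftBox  = leftBox(1, :);   % [x y width height]
rightBox = step(rightDetector, frame);  rightBox = rightBox(1, :);

% Combine the two per-eye rectangles into a single eye-region AOI [xMin yMin xMax yMax]
xMin = min(leftBox(1), rightBox(1));
yMin = min(leftBox(2), rightBox(2));
xMax = max(leftBox(1) + leftBox(3), rightBox(1) + rightBox(3));
yMax = max(leftBox(2) + leftBox(4), rightBox(2) + rightBox(4));
eyeAOI = [xMin yMin xMax yMax];
```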
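The dwell-time measure and the >30% missing-data exclusion can then be computed per trial along the lines of the sketch below, assuming gaze samples in screen pixels (gazeX, gazeY), a fixed sample duration in ms (sampleDur), and the eyeAOI rectangle from the previous step; these names are placeholders rather than the original script.

```matlab
% Illustrative per-trial dwell-time computation (assumed inputs: gazeX, gazeY, sampleDur, eyeAOI).
valid = ~isnan(gazeX) & ~isnan(gazeY);                  % missing samples (blinks, track loss)
inAOI = valid & ...
        gazeX >= eyeAOI(1) & gazeX <= eyeAOI(3) & ...
        gazeY >= eyeAOI(2) & gazeY <= eyeAOI(4);

dwellTimeMs  = sum(inAOI) * sampleDur;                  % time spent looking within the eye region
videoDurMs   = numel(gazeX) * sampleDur;                % total video duration
pctDwellTime = 100 * dwellTimeMs / videoDurMs;          % primary gaze measure per video

% Trials with more than 30% missing gaze data were excluded
excludeTrial = 100 * sum(~valid) / numel(gazeX) > 30;
```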
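The first-level model described above (eight cue-onset regressors, one rating regressor, six motion regressors, AR(1) correction, and a 128-s high-pass filter) could be specified in SPM12 roughly as in the batch sketch below. Directories, file filters, condition labels, and onset/duration variables are placeholders for illustration only, not the authors' actual batch script.

```matlab
% Illustrative SPM12 first-level specification (placeholder paths, labels, and onsets).
condNames = {'cond1','cond2','cond3','cond4','cond5','cond6','cond7','cond8'};  % 8 gaze x target cells

matlabbatch{1}.spm.stats.fmri_spec.dir          = {'/data/sub-01/firstlevel'};
matlabbatch{1}.spm.stats.fmri_spec.timing.units = 'secs';
matlabbatch{1}.spm.stats.fmri_spec.timing.RT    = 2;     % repetition time in s (assumed)
matlabbatch{1}.spm.stats.fmri_spec.sess.scans   = cellstr(spm_select('ExtFPList', ...
    '/data/sub-01/func', '^swu.*\.nii$', Inf));          % preprocessed functional images (placeholder filter)

for c = 1:numel(condNames)                                % cue-onset regressors, one per condition
    matlabbatch{1}.spm.stats.fmri_spec.sess.cond(c).name     = condNames{c};
    matlabbatch{1}.spm.stats.fmri_spec.sess.cond(c).onset    = cueOnsets{c};     % video onsets (s)
    matlabbatch{1}.spm.stats.fmri_spec.sess.cond(c).duration = cueDurations{c};  % full video duration (16-38 s)
end
matlabbatch{1}.spm.stats.fmri_spec.sess.cond(9).name     = 'rating';             % subjective rating regressor
matlabbatch{1}.spm.stats.fmri_spec.sess.cond(9).onset    = ratingOnsets;
matlabbatch{1}.spm.stats.fmri_spec.sess.cond(9).duration = ratingDurations;

matlabbatch{1}.spm.stats.fmri_spec.sess.multi_reg = {'/data/sub-01/func/rp_run1.txt'};  % 6 motion parameters
matlabbatch{1}.spm.stats.fmri_spec.sess.hpf       = 128;      % high-pass filter cutoff (s)
matlabbatch{1}.spm.stats.fmri_spec.cvi            = 'AR(1)';  % serial autocorrelation model

spm_jobman('run', matlabbatch);
```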
