
Methods

Candidate experiments, in which researchers randomize attributes of hypothetical political candidates, have flourished since conjoint experiments were popularized (Hainmueller et al., 2014). A few candidate experiments were conducted in the 20th century, but after the turn of the century the design did not pick up again until 2012 (Weaver, 2012). To compile the selection of studies for this meta-analysis, we systematically searched the scholarly databases Scopus and Web of Science in December 2021 using the search string (conjoint OR factorial OR vignette OR vignet) AND (candidate OR politician OR officeholder) AND (political OR politics OR election). We complemented this search with looser keyword searches in other sources, such as Google Scholar and journal websites. Initial screening of titles and abstracts yielded 106 unique references, which we assessed for eligibility based on the full text. Our inclusion criteria are a) independent manipulation of race/ethnicity, b) a dependent variable that is some form of candidate evaluation, c) citizens/voters as the target population, d) the study is published, and e) it was published between 2012 and 2022. In total, 43 studies meet these criteria (see Table 1 for an overview and Appendix 2 for a full list of references). See Appendix 1 for the PRISMA flow diagram (Moher et al., 2015) of the search process.

For three quarters of the articles, we were able to retrieve the dataset from the authors' personal websites, online appendices, the Harvard Dataverse, or by contacting the authors by e-mail. To generate comparable effect sizes, we ran linear models without any of the control variables or conditions that vary from study to study. When the datasets contained sufficient information, we weighted the observations and clustered the standard errors. When no dataset was available, we retrieved the results from the article, rescaling the outcomes of the least-specified OLS regression models.

We ran the subsequent analyses over all the results using the R packages robumeta (Fisher, 2017) and metafor (Viechtbauer, 2019). Because researchers draw upon different populations, we used a random-effects model with a restricted maximum likelihood estimator (Quintana, 2015). We present our results in forest plots that include the name of each study, the effect size with majority politicians as the reference category, and 95 percent confidence intervals (see replication code).

Pooling available datasets enables us to include outcomes that are not necessarily statistically significant or reported in the articles. In doing so, we go beyond what literature studies or systematic reviews can offer (Smith and Hunter, 2014). We produced funnel plots and conducted rank correlation tests (Quintana, 2015); the outcomes suggest no problems with publication bias (see Appendix 4). Finally, we ran diagnostics through Baujat plots to ascertain which studies are the most influential in the calculation of the summary estimate (see Appendix 5).
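To make the effect-size step concrete, below is a minimal sketch in R of how one unadjusted, weighted, cluster-robust estimate per study could be computed. The variable names (rating, minority, weight, resp_id) and the use of lm_robust() from the estimatr package are illustrative assumptions, not the replication code itself.

```r
# Sketch of per-study effect-size extraction (hypothetical variable names).
# Each dataset is reduced to a bivariate model: the candidate evaluation
# regressed on a minority-candidate indicator, with no further controls.
library(estimatr)

extract_effect <- function(d) {
  # Weighted OLS with standard errors clustered on respondents, where the
  # dataset contains the information needed for weighting and clustering.
  fit <- lm_robust(rating ~ minority, data = d,
                   weights = weight, clusters = resp_id)
  data.frame(est = coef(fit)["minority"],
             se  = fit$std.error["minority"])
}
```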
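The pooling step could then look as follows, assuming the per-study estimates are collected in a data frame called effects with columns est, se, and study (again hypothetical names). rma() from metafor fits the random-effects model with the REML estimator described above, and forest() draws the corresponding plot.

```r
library(metafor)
library(robumeta)

# Random-effects model estimated with restricted maximum likelihood (REML),
# allowing the true effect to vary across the studies' different populations.
res <- rma(yi = est, sei = se, slab = study, method = "REML", data = effects)
summary(res)

# Forest plot: one row per study with its effect size and 95 percent CI
# (majority politicians as reference), plus the pooled estimate at the bottom.
forest(res)

# Complementary robust variance estimation with robumeta, treating each
# article as a cluster of potentially dependent effect sizes.
effects$var <- effects$se^2
rob <- robu(est ~ 1, data = effects, studynum = study, var.eff.size = var)
print(rob)
```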

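Finally, the publication-bias and influence diagnostics reported in Appendices 4 and 5 can be reproduced along these lines, reusing the fitted metafor model res from the sketch above; funnel(), ranktest(), and baujat() are metafor's functions for funnel plots, the rank correlation test, and Baujat plots.

```r
# Funnel plot of effect sizes against their standard errors; marked
# asymmetry would point to possible publication bias.
funnel(res)

# Rank correlation test (Begg and Mazumdar) for funnel-plot asymmetry;
# a non-significant result is consistent with the absence of publication bias.
ranktest(res)

# Baujat plot: each study's contribution to overall heterogeneity (x-axis)
# against its influence on the pooled summary estimate (y-axis).
baujat(res)
```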