Human vs. AI Recruitment: A Vignette Survey Experiment on Self-Presentation Strategies in the US and UK Labor Markets
Presented by: Huyen Nguyen
Do people across demographics present themselves differently when facing automated (AI) recruitment? Using a vignette survey experiment with a large sample of working adults in the US and the UK (N = 2,952) across industries, this research investigates self-presentation strategies and self-confidence beliefs when facing human versus automated recruiters, accounting for self-evaluation beliefs about chances of success and perceptions of discrimination in the labor market. We found that participants’ self-evaluation beliefs about their performance on the interview questions were significantly lower in the AI treatment than in the human recruiter treatment. Linguistically, interview answers with a higher proportion of complex words and higher scores on analytic style were more likely to come from the AI treatment than from the human recruiter treatment. When considering education levels, while interview texts from the AI treatment were more likely to contain complex words for both higher- and lower-educated participants, texts from lower-educated participants were significantly more likely to show an authentic style. Psychometrically, interview answer texts from US participants showed no significant difference between the AI and human recruiter treatments. In contrast, interview answer texts from UK participants in the AI treatment were more likely to contain “can-do” words than those in the human recruiter treatment. Our findings offer novel insights for practitioners and job seekers into the behavioral side of the labor market’s supply side, especially for organizations that are using, or considering adopting, AI recruitment in their hiring processes.