Algorithmic Racial Bias in Automated Video Interviews

rss.shrm.org | Louis Hickman, Louis Tay, Sang Eun Woo, and Sidney D’Mello
This is the third in a series of five articles funded by the Society for Industrial/Organizational Psychology Foundation (SIOP Foundation) to study anti-racism efforts in workplaces.
Policymakers and consumer advocacy groups have raised concerns that automatically scored interviews may be biased against racial minorities, including Black and African American individuals (i.e., the scores may be less accurate for them or systematically lower), when the interviews use facial recognition software to measure nonverbal behaviors (e.g., smiles, eyebrow raises). Facial recognition software tends to be less accurate for, and thus biased against, Black and African American faces. Such bias could carry over into automatic interview scores, creating systematic disadvantages for racial minorities if organizations adopt these algorithms.
We investigated the accuracy and bias (i.e., whether accuracy differed across groups) of facial recognition software for measuring nonverbal behaviors. To do so, we used grant funds from the SIOP Foundation to pay a team of human raters to rate a key set of nonverbal behaviors, and we compared the agreement between the computer and human scores across Black and white interviewees.
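To make the agreement comparison concrete, here is a minimal sketch (not the authors' actual analysis code) of how agreement between facial recognition scores and human ratings might be computed separately for each group. The file name, column names, and the use of Pearson correlation are illustrative assumptions.

```python
# Hypothetical sketch: compare human-computer agreement for one nonverbal
# behavior (e.g., smiling) across racial groups. The data file, column
# names, and choice of Pearson correlation are illustrative assumptions,
# not the study's actual pipeline.
import pandas as pd

df = pd.read_csv("interview_ratings.csv")  # hypothetical file
# Assumed columns: "race" ("Black" or "white"), "human_smile" (mean rating
# from trained human raters), "fr_smile" (facial recognition software score).

for group, sub in df.groupby("race"):
    # Agreement = correlation between human and automated measurements;
    # a notably lower value for one group would indicate measurement bias.
    r = sub["human_smile"].corr(sub["fr_smile"])  # Pearson by default
    print(f"{group}: human-computer agreement r = {r:.2f} (n = {len(sub)})")
```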
Additionally, we examined how using nonverbal behaviors measured by facial recognition software as predictors in machine learning (ML) models that score interviewees affected the ML scores’ psychometric properties, including Black-white group score differences.
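As a rough illustration of one such psychometric property, the sketch below computes a standardized Black-white difference (Cohen's d) in ML interview scores. The pooled-standard-deviation formula is standard, but the variable names and example numbers are assumptions for illustration, not the study's reported analysis.

```python
# Hypothetical sketch: standardized Black-white difference in ML interview
# scores (Cohen's d with a pooled standard deviation). Variable names and
# example values are illustrative assumptions.
import numpy as np

def cohens_d(scores_a: np.ndarray, scores_b: np.ndarray) -> float:
    """Return (mean_a - mean_b) divided by the pooled standard deviation."""
    n_a, n_b = len(scores_a), len(scores_b)
    pooled_var = (
        (n_a - 1) * scores_a.var(ddof=1) + (n_b - 1) * scores_b.var(ddof=1)
    ) / (n_a + n_b - 2)
    return (scores_a.mean() - scores_b.mean()) / np.sqrt(pooled_var)

# Made-up ML interview scores for two groups of interviewees.
white_scores = np.array([3.8, 4.1, 3.5, 4.4, 3.9])
black_scores = np.array([3.6, 3.9, 3.4, 4.2, 3.7])
d = cohens_d(white_scores, black_scores)  # positive d = higher white mean
print(f"White-Black standardized score difference d = {d:.2f}")
```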