Why You Can’t Trust AI to Make Unbiased Hiring Decisions

Source: http://motto.time.com | By Sara Wachter-Boettcher

There’s no shortage of research showing that women and people of color get worse treatment than their white male peers in the job market. In one well-known study, science faculty at academic institutions across the country rated a résumé as more qualified for a lab manager position, and suggested a higher starting salary, when the name at the top read “John” instead of “Jennifer.” Just last month, job platform Hired compared salary data across tech-industry workers and found a similar result: 63% of the time, their study reported, women were offered a lower starting salary than men for the same position at the same company. And a new meta-analysis of two dozen field studies of race and hiring conducted since 1989 found that white applicants receive 36% more callbacks than equally qualified black applicants. Even more horrifying, that gap hasn’t narrowed meaningfully in 25 years.

Now, a whole host of tech companies are cropping up, promising to remove these types of biases from hiring with the help of artificial intelligence. There’s Koru, which uses surveys to identify current employees’ strengths and weaknesses, and then looks for those same traits in applicants. There’s Pymetrics, which uses “gamified neuroscience and A.I.” to predict success, and then find applicants who fit the same profile. And there’s Ideal, which uses AI to screen résumés and cherry-pick candidates. All these products promise to help companies diversify or eliminate bias, and more are coming.

AI-enabled hiring software may be a booming market, but I won’t be trusting it to level the playing field or eliminate the wage gap anytime soon. Because for all their seemingly scientific methods, algorithms aren’t neutral at all. They’re just as fallible as the humans who made them — and they can easily reinforce all those biases we say we’re trying to get rid of. In fact, if you train AI to be biased, it can actually get worse over time, not better — optimizing for those same biases over and over.
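To see how that feedback loop plays out, here is a minimal sketch, using entirely synthetic data and an invented two-group setup (nothing here comes from the article or any real hiring product): a screening model is trained on biased historical decisions, and each subsequent model is trained on the previous model’s own accept/reject outputs.

```python
# Minimal, hypothetical sketch of a bias feedback loop (all data synthetic).
# A screening model learns biased historical labels; retraining on its own
# hard decisions turns a soft historical penalty into a fixed cutoff.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N = 5000

def sample_pool():
    """Applicants: identical skill distribution in both groups."""
    group = rng.integers(0, 2, N)        # 0 = group A, 1 = group B
    skill = rng.normal(0.0, 1.0, N)      # true qualification
    return np.column_stack([skill, group]), group, skill

X, group, skill = sample_pool()

# Biased history: equal skill, but group B faces a penalty in hiring odds.
p_hire = 1.0 / (1.0 + np.exp(-(skill - 0.8 * group)))
y = (rng.random(N) < p_hire).astype(int)
print(f"history: hire rate A={y[group == 0].mean():.2f}, "
      f"B={y[group == 1].mean():.2f}")

model = LogisticRegression(max_iter=1000).fit(X, y)

for t in range(3):
    X, group, skill = sample_pool()
    decisions = model.predict(X)         # hard accept/reject
    print(f"round {t}:  hire rate A={decisions[group == 0].mean():.2f}, "
          f"B={decisions[group == 1].mean():.2f}")
    # Feedback loop: the next model trains on this model's own decisions.
    model = LogisticRegression(max_iter=1000).fit(X, decisions)
```

In this toy setup, group A’s hire rate stays near 50% while group B’s drops from roughly a third in the historical data to about a fifth once the model’s thresholded decisions become the training signal, and no later round recovers the difference. Real systems usually encode group membership through proxies rather than an explicit column, which makes the same dynamic harder to spot.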

The concept of algorithmic bias affecting employment isn’t new, either. Back in the summer of 2015, researchers from Carnegie Mellon and the International Computer Science Institute wanted to learn more about how Google’s ad-targeting algorithms worked. So they built a piece of software called AdFisher, which simulates web-browsing activities, and set it to work gathering data about the ads shown to fake users with a range of profiles and browsing behaviors. The results were startling: the profiles Google had pegged as male were much more likely to be shown ads for high-paying executive jobs than those Google had identified as female — even though the simulated users were otherwise equivalent.
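AdFisher itself is a real research tool, but its code isn’t shown here; the sketch below is only a toy illustration of the audit idea, with invented functions and serving rates. It creates simulated profiles that are identical except for the gender signal, records how often each is served a particular ad, and runs a two-proportion z-test to check whether the gap is larger than chance.

```python
# Toy illustration of an ad-targeting audit (not AdFisher's actual code).
# serve_exec_job_ad is an invented stand-in for the opaque system under test.
import random
from statistics import NormalDist

random.seed(1)

def serve_exec_job_ad(profile):
    """Hypothetical targeting system: returns True if the ad is shown."""
    rate = 0.18 if profile["gender"] == "male" else 0.05
    return random.random() < rate

def audit(n_per_group=1000):
    shown = {"male": 0, "female": 0}
    for gender in shown:
        for _ in range(n_per_group):
            # Profiles are identical except for the gender signal.
            if serve_exec_job_ad({"gender": gender, "history": "careers"}):
                shown[gender] += 1
    p1 = shown["male"] / n_per_group
    p2 = shown["female"] / n_per_group
    # Two-proportion z-test: is the observed gap larger than chance?
    pooled = (shown["male"] + shown["female"]) / (2 * n_per_group)
    se = (2 * pooled * (1 - pooled) / n_per_group) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    print(f"shown to male: {p1:.3f}, female: {p2:.3f}, "
          f"z={z:.1f}, p={p_value:.2g}")

audit()
```

The real study’s strength was exactly this design: because the simulated users differed only in the attribute under test, a statistically significant gap in ad exposure points at the targeting pipeline rather than at user behavior.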

So how can we ensure AI is a boon for marginalized groups, rather than just a shiny new way to reify the same old problems? It all depends on what, exactly, the AI does — and how it learned to do it.
