AI and machine-learning algorithms observe data, interpret trends, form associations through pattern recognition, and then use the established patterns to solve complex problems. But AI is a product of human design teams, who inevitably imprint their individual biases upon it.
“While I am not a psychologist, I have seen enough biases in my lifetime and how they can cause problems in the software technology industry. Unconscious bias, confirmation bias, affinity bias and self-serving bias are a few common ones that I relate to our industry. These biases feed into AI algorithms, data and the intent of the AI solution being developed,” says Rakesh Kotian, Head – Dun & Bradstreet Technology and Corporate Services India.
AI biases are broadly categorized as data bias and societal bias, and Kotian believes both are equally significant concerns for AI adoption.
With technology talent in heavy demand, companies are applying AI models to find and attract candidates and gain an edge over the competition. Experimental recruitment models have been shown to favour male candidates over female candidates. While this may look like societal bias, it is data bias.
“Performance evaluation and promotion management are very sensitive matters for companies and their employees. How would a human-machine interface work that eliminates the biases generated by decisions taken in the process? As in the recruitment model example, here too we see data bias with aspects of societal bias,” Kotian added.
But how do you identify the bias?
There are several commercially available tools that can help identify and mitigate bias in AI models, but it is important to understand that most bias originates in the data itself.
Gopali Contractor, Managing Director, Lead – AI Practice, Advanced Technology Centers in India (ATCI), Accenture, feels that bias in AI models most often stems from bias in the training data set. “You want your data set to be as representative as possible. For instance, if you are training an AI model to identify signs of a heart attack, and you only train the model on men’s medical records, your AI will not perform equally well on men and women, as women often have different symptoms during a heart attack,” she says.
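Contractor's point can be made concrete with a simple check: evaluate the model separately for each demographic group rather than only in aggregate. The sketch below is illustrative Python with made-up labels, predictions and group tags; in practice these would come from your own model and a held-out test set.

```python
# Sketch: computing a model's accuracy separately per demographic group.
# All values below are invented for illustration.

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} over the indices belonging to each group."""
    totals, correct = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        if truth == pred:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Toy data where the model happens to be less accurate for one group.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["men", "men", "men", "men", "women", "women", "women", "women"]

print(accuracy_by_group(y_true, y_pred, groups))
# → {'men': 0.75, 'women': 0.5}
```

A large gap between the per-group numbers, even when overall accuracy looks fine, is exactly the symptom of an unrepresentative training set that Contractor describes.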
With AI and ML taking over traditional CRMs, biases are tough to detect. Every step in the AI modelling process carries the risk of bias creeping in. There are various ways to identify biases, and if the requisite framework, best practices and tools are in place, many of them can be caught and cleaned.
The context of data is unique to each industry, sector and organization. Data must therefore be looked at holistically, and data scientists should work closely with business users to understand potential biases in the data.
“For example, a bank may find that men are overrepresented in their historical mortgage data, which must be addressed so a loan algorithm isn’t inadvertently trained to only approve men for mortgages; in medicine, where computer vision algorithms are being trained to help detect skin cancer, the AI must be able to perform equally well on all skin tones, so it’s crucial to have racial diversity accurately reflected in the data; and in HR you may want to be sure your algorithm isn’t biased toward a particular age group,” Contractor explains.
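One practical way to catch the overrepresentation Contractor mentions is a representation audit: compare each group's share of the training data against its share of a reference population. The sketch below is hypothetical Python; the 8-to-2 split and the 50/50 reference population are assumptions for illustration only.

```python
# Sketch: auditing group representation in a training set against a
# reference population. Figures are illustrative, not real data.
from collections import Counter

def representation_gap(records, attribute, reference):
    """For each group, return (share in records) - (reference share)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in reference.items()}

# Hypothetical mortgage records where men are overrepresented.
records = [{"gender": "male"}] * 8 + [{"gender": "female"}] * 2
reference = {"male": 0.5, "female": 0.5}  # assumed applicant population

gaps = representation_gap(records, "gender", reference)
print(gaps)  # male ≈ +0.3 (over), female ≈ -0.3 (under)
```

Large positive or negative gaps flag where to rebalance, reweight or gather more data before training the loan, skin-cancer or HR model on it.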
Since training data is different and unique for every sector and industry, AI bias would mean different things to different industries.
Bias is all around us; it is in our nature, and it is often through humans that bias creeps into AI algorithms. It is therefore important to have an AI strategy in place so that issues like bias can be identified and mitigated before they cause harm. An AI strategy will also help assess which levels of governance are appropriate for which applications.
One of the most critical elements in being able to successfully scale AI is to ensure that it performs reliably and as expected, which means addressing algorithmic and data bias as part of a holistic AI strategy. Having governance in place to preemptively address bias and monitor for consistent performance can give organizations and end-users greater confidence in their AI deployments. Oftentimes organizations are looking to mitigate bias in attributes like race, gender, age, income and even marital status or geographic location.
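As part of such governance, a recurring check on decision rates across a protected attribute is a common starting point. The sketch below computes a disparate-impact ratio; the 0.8 threshold follows the widely cited "four-fifths rule", and the group labels and decisions are invented for illustration.

```python
# Sketch: a periodic disparate-impact check on model decisions, of the
# kind a governance process might run. Data here is purely illustrative.

def disparate_impact(decisions, groups, positive=1):
    """Return (ratio of lowest to highest positive-outcome rate, rates)."""
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(1 for d in outcomes if d == positive) / len(outcomes)
    return min(rates.values()) / max(rates.values()), rates

decisions = [1, 1, 1, 0, 1, 0, 0, 0]               # e.g. loan approvals
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]  # protected attribute

ratio, rates = disparate_impact(decisions, groups)
print(rates, ratio)
if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential disparate impact: investigate before deployment.")
```

Monitoring this ratio over time, per attribute (race, gender, age, income and so on), is one concrete way the governance Contractor describes can preemptively surface bias in production.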