
What It Will Take for Us to Trust AI

Source | Guru Banavar | https://hbr.org

The early days of artificial intelligence have been met with some very public hand-wringing. Well-respected technologists and business leaders have voiced their concerns over the (responsible) development of AI. And Hollywood's appetite for dystopian AI narratives appears to be bottomless.

This is not unusual, nor is it unreasonable. Change, technological or otherwise, always excites the imagination. And it often makes us a little uncomfortable.

But in my opinion, we have never known a technology with more potential to benefit society than artificial intelligence. We now have AI systems that learn from vast amounts of complex, unstructured information and turn it into actionable insight. It is not unreasonable to expect that within this growing body of digital data — 2.5 exabytes every day — lie the secrets to defeating cancer, reversing climate change, or managing the complexity of the global economy.

We also expect AI systems to pervasively support the decisions we make in our professional and personal lives in just a few years. In fact, this is already happening in many industries and governments. However, if we are ever to reap the full spectrum of societal and industrial benefits from artificial intelligence, we will first need to trust it.

Trust in AI systems will be earned over time, just as in any personal relationship. Put simply, we trust things that behave as we expect them to. But that does not mean that time alone will solve the problem of trust in AI. AI systems must be built from the get-go to operate in trust-based partnerships with people.

The most urgent work is to recognize and minimize bias. Bias could be introduced into an AI system through the training data or the algorithms. The curated data that is used to train the system could have inherent biases, e.g., towards a specific demographic, either because the data itself is skewed, or because the human curators displayed bias in their choices. The algorithms that process that information could also have biases in the code, introduced by a developer, intentionally or not. The developer community is just starting to grapple with this topic in earnest. But most experts believe that by thoroughly testing these systems, we can detect and mitigate bias before the system is deployed.
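
To make the idea of pre-deployment bias testing concrete, here is a minimal sketch of one common check: comparing a model's rate of favorable outcomes across demographic groups. The data, group labels, and the 80% threshold (a heuristic borrowed from US employment-law practice) are illustrative assumptions, not a standard the article prescribes.

```python
def positive_rate(predictions, groups, target_group):
    """Share of members of target_group that received a favorable prediction."""
    outcomes = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact(predictions, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    return positive_rate(predictions, groups, protected) / positive_rate(
        predictions, groups, reference
    )

# Toy example: 1 = favorable decision (e.g., loan approved), 0 = unfavorable.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact(preds, groups, protected="b", reference="a")
# The "80% rule" is one common heuristic threshold for flagging disparity.
if ratio < 0.8:
    print(f"Potential bias: disparate impact ratio {ratio:.2f} is below 0.8")
else:
    print(f"Disparate impact ratio {ratio:.2f} passes the 80% heuristic")
```

A real test suite would probe many more failure modes, but even a check this simple can catch skewed training data before a system ships.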

Managing bias is an element of the larger issue of algorithmic accountability. That is to say, AI systems must be able to explain how and why they arrived at a particular conclusion so that a human can evaluate the system's rationale. Many professions, such as medicine, finance, and law, already require evidence-based auditability as a normal practice for providing transparency of decision-making and managing liability. In many cases, AI systems may need to explain their rationale through a conversational interaction (rather than a report), so that a person can dig into as much detail as necessary.
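
As a hypothetical illustration of what surfacing a rationale could look like, the sketch below ranks the per-feature contributions of a simple linear scoring model and renders them as plain-language reasons. The feature names, weights, and approve/decline framing are all invented for the example; real explainability systems use far richer techniques.

```python
# Hypothetical weights for a toy loan-scoring model.
WEIGHTS = {"income": 0.6, "debt_ratio": -1.2, "years_employed": 0.3}

def explain(applicant: dict) -> str:
    """Score an applicant and return a human-readable rationale."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    # Rank features by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    reasons = [
        f"{name} {'raised' if c > 0 else 'lowered'} the score by {abs(c):.2f}"
        for name, c in ranked
    ]
    verdict = "approve" if score > 0 else "decline"
    return f"Recommendation: {verdict} (score {score:.2f}). " + "; ".join(reasons)

print(explain({"income": 1.5, "debt_ratio": 0.9, "years_employed": 4}))
```

In a conversational setting, each of those one-line reasons becomes a natural entry point for a follow-up question, which is exactly the kind of drill-down the article envisions.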

For the full article, read on…
