Artificial intelligence is changing how business—and HR—is done. Company leaders need to be aware of these changes and make sure they are not doing more harm than good.
There are already some clear benefits and applications emerging from the introduction of AI in HR, said Carina Cortez, chief people officer at Cornerstone OnDemand, a cloud-based people development software provider and learning technology company based in Santa Monica, Calif. For instance, there are many opportunities to save time and be more efficient, Cortez said. “By automating some of our work with AI, we can remain focused on big-picture strategy and future-planning.” Personally, she said, “I have recently used AI to draft a job description for a newly created role as well as to summarize themes from feedback sessions I have conducted.” In addition, she noted, “AI can help organizations identify skill gaps and provide personalized learning recommendations to help fill those gaps as well as identify gig or special projects to gain more experience.”
But along with the benefits come some potential risks—particularly ethical risks.
HR leaders need to be aware of several ethical considerations when adopting AI technologies such as ChatGPT, including algorithmic bias, algorithmic accountability, and how to ensure fairness and transparency in AI-driven HR processes.
Key Ethical Considerations
Jack Berkowitz is vice president of product development at ADP, a payroll, HR and tax services firm headquartered in Roseland, N.J. Since 2019, Berkowitz said, ADP has been focused on ensuring that its AI technology is deployed ethically for the benefit of clients and employees. Transparency and accountability are paramount, he said, adding that employees should have a right to know where and how the technology is being used, be permitted to opt out, and be able to ask questions. HR and vendors share responsibility for maintaining these ethical standards and ensuring legal compliance.
“We must carefully consider the sources from which AI retrieves information, ensuring that it is accurate, reliable and unbiased,” said Danielle Crane, chief people officer with OneStream, an intelligent finance platform based in Rochester, Mich. Since some emerging technologies may have security vulnerabilities, close collaboration with IT is essential to ensure a robust and secure infrastructure, she said.
Bias is also a concern, and HR professionals are likely aware of instances where AI has led to bias. This is particularly true for those operating in New York City, which recently enacted a law requiring employers to audit automated hiring tools for bias and to inform job candidates when such tools are used. But it’s not just the hiring process that is subject to bias, Cortez said. “For HR, key ethical considerations relate to making sure AI does not introduce bias into any of our processes—from hiring, to promoting, to pay equity and everything in between,” she said.
Greg Vickrey is director of the HCM practice at ISG, a global technology research and advisory firm headquartered in Stamford, Conn. AI solutions will pick up any bias in the data on which they are trained, Vickrey said, and that could lead to discrimination in HR-related decisions.
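One common way to spot the kind of disparity Vickrey describes is an adverse-impact audit comparing selection rates across groups—the long-standing "four-fifths rule." The sketch below is a hedged illustration only, with made-up group labels and numbers, not any vendor's or regulator's actual methodology:

```python
# Minimal sketch of a four-fifths (80%) rule check on selection rates.
# Group names and counts below are hypothetical, for illustration only.

def impact_ratios(groups):
    """Compare each group's selection rate to the highest group's rate.

    groups: dict mapping group label -> (selected, applicants).
    Returns dict mapping label -> ratio. Ratios below 0.8 are
    conventionally flagged for closer review under the four-fifths rule.
    """
    rates = {g: selected / applicants
             for g, (selected, applicants) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit data: how many candidates an AI screener advanced.
ratios = impact_ratios({
    "group_a": (48, 100),   # 48% selection rate
    "group_b": (30, 100),   # 30% selection rate
})
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios)   # group_b's ratio is 0.30 / 0.48 = 0.625
print(flagged)  # group_b falls below the 0.8 threshold
```

A check like this is only a starting point; flagged results call for human review of the tool, its training data, and the surrounding process, not an automatic verdict.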
“It’s important to remember that just because we are using an AI tool, this doesn’t preclude the use of human intervention and review,” Cortez said. “AI tools may reflect output that contains stale or limited data. Organizations should assume that a certain amount of data is inaccurate, biased and/or incomplete, which requires controls to mitigate risks.”
What Role Should HR Leaders Play?
“HR leaders should be actively involved in the evaluation and selection of AI systems, analyzing factors like algorithmic bias and its potential impact on DE&I [diversity, equity and inclusion] and fairness efforts,” Cortez said. “These leaders should implement mechanisms to monitor algorithmic bias, regularly auditing and refining AI-driven HR processes. Finally, HR leaders should advocate for ongoing research and development in AI ethics, staying informed about emerging practices and standards to continuously improve the ethical implementation of AI technologies,” she said.
Education and Collaboration
HR leaders, Vickrey said, should:
- Educate themselves and others about the ethical considerations associated with AI adoption.
- Actively participate in the selection and evaluation of AI technologies, including the evaluation of vendors for their commitment to fairness, transparency and accountability.
- Collaborate with legal and compliance teams to develop clear policies and guidelines around AI adoption, including data privacy rules and regulations.
- Train employees on AI ethics, creating channels for reporting ethical concerns and encouraging open dialogue on ethical considerations.
HR can also play a role in facilitating collaboration between key areas of the company to address and monitor AI ethical challenges.
“We are reaching an era where the division between IT departments, managers and human resources needs to blur when it comes to optimizing the value of AI,” said Tim Dasey, who has studied AI and learning science since the 1980s, including 30 years at the Massachusetts Institute of Technology as a developer and leader. He’s the author of Wisdom Factories: AI, Games, and the Education of a Modern Worker (Rejuvenate Publishing, 2023). “The development of organizationwide policies on appropriate AI use will require collaboration with business, legal, customer, employee and human resources professionals,” Dasey said. “Since AI is moving so quickly and any initial policy determinations are likely to need revision as experience is gained, there should be a standing group that accumulates expertise on the matter and adapts policies and norms on a regular basis.”
Since the impacts of AI are so pervasive, Dasey has also proposed the idea of “productivity therapy,” a process that would involve each employee being “literally counseled on how to use AI in their specific job and for their specific personality and optimal work processes.”
Policies also need to be in place to guide the use of AI. Some policy considerations, Cortez said, might include “AI tool approval, contracting, access and authentication, data inputs, outputs and tool monitoring, labeling, compliance, etc.” In addition, Crane added, “Organizations should also develop ‘safe use’ policies and provide training on AI to foster responsible and ethical utilization of these technologies.”
Berkowitz noted that HR professionals have two key constituents—C-level leaders and employees—and acknowledged that this can sometimes put them in conflict. He encouraged HR leaders to be proactive in leading conversations about AI and ethics in their organizations, acknowledging that some may be apprehensive, but noting that these discussions are important.
Training and Development
Training and development are always critical, Vickrey said, but especially when a new technology is being introduced. “Organizations should provide training programs that specifically address the ethical implications of AI adoption,” Vickrey advised. That, he said, “can help HR professionals and employees identify and mitigate ethical risks associated with AI systems.”
In addition, Vickrey suggested, it’s vital for HR professionals to understand AI concepts, capabilities and limitations. “Having a sense of AI literacy will help HR make informed decisions and effectively navigate ethical challenges.”
Given AI’s wide-ranging implications, for both positive impact and the introduction of risk, it may be impossible, and is certainly unwise, to take humans out of the processes AI touches. As Dasey noted, there really is no way around the potential for bias that AI can introduce in a wide range of processes. “AI is supposed to have biases—in a mathematical sense—because its job is to pick up patterns,” he said. “The key is to minimize unwanted biases according to human, legal and organizational sensibilities.”
Lin Grensing-Pophal is a freelance writer in Chippewa Falls, Wis.