HR Compliance in a World of GenAI: 6 Important Questions

HR technology is moving fast and furious these days, particularly with the introduction of generative AI. How can HR professionals use this new technology and remain compliant?
Kasara Weinrich, director of sales technology and AI solutions designer at ADP, interviews Helena Almeida, ADP’s vice president and managing counsel, who offers clear direction and practical actions for HR.
Kasara Weinrich: We’re witnessing a fundamental shift in what’s possible due to new technological horizons. Artificial intelligence has been around for decades, but generative AI has spurred rapid innovation. HR is now tasked with determining how best to use this technology to improve productivity, employee service, recruiting, payroll and other HCM domains.
Q1. What thoughts do you have on guiding people as they adapt to generative AI?
A: Managing change is a big piece of the AI story. It’s about incremental change and managing expectations, both from a business perspective and from the employees who have to use a new technology they may feel they don’t fully understand. People are at different points in their experience with AI, so acknowledge that, meet them where they are and map out where they need to be.
It’s also important to keep in mind that AI is part of the process, not the answer. It’s not a solution you drop in to solve the problem; it’s a way to get there. Embracing that attitude will help organizations take the first step on the AI journey.
Q2. How should organizations approach educating and training workers on AI use?
A: Building trust is key to ensuring that no one is left behind, and that everyone feels equipped and comfortable using AI.
AI isn’t going to replace the workforce. Rather, it’s a tool that the workforce will use to achieve bigger and better goals. That means employers must invest in training and development to fully leverage AI’s benefits.
First, the training needs to be specific to a particular use case. Let’s say a recruiter is using AI to generate a job description. While AI might deliver a good job description, it’s the recruiter who understands the specifics of the job and the organization. Where AI really shines is when you marry what AI can do in drafting that job description with what the human knows about the job and the workforce.
Second, it’s crucial to define appropriate AI use, both for those developing AI products and those using AI as a tool. People must understand the risks that AI presents. They must understand how to ensure information and outcomes are accurate, unbiased and transparent.
Training your employees on AI – what it is and what it isn’t – will help bring the workforce into this technological shift so that people don’t feel left behind and are ready to use it.
Q3. Use of GenAI has heightened concerns about how data is used. How should HR professionals approach data privacy?
A: Data privacy is about the responsible use of data. Consider what data is necessary for insights and whether it can be masked or anonymized to protect identities.
By focusing on data minimization and being transparent about how data is used, you build trust. That transparency earns greater permission to use data, driving insights and creating human-centered experiences.
Q4. What is the importance of data integrity to compliance?
A: There’s much discussion about the data fueling AI products – who does the data belong to? Who can use it and how? Much of current and evolving AI legislation focuses on this.
As with many things, but especially in AI, the quality of the output is directly tied to the quality of the input. The larger and more diverse the dataset, the better the data integrity. For example, ADP’s extensive, unique HCM dataset draws on data from over 40 million workers globally. If the data you draw from is accurate, current and deep, it has better integrity, which in turn helps HR stay compliant with privacy regulations.
Q5. What can organizations do to stay ahead of the shifting regulatory environment around AI?
A: When you look at the new laws coming out about the use of AI, there are common themes, such as protecting people against bias in algorithms and informing people when AI is being used. Use these new laws as a guide when building your own AI ethical principles.
Whether you’re creating, using or supporting AI applications, governance over data use is crucial. This requires a partnership across the ecosystem — developers, suppliers and end users alike.
At ADP, we established an AI & Data Ethics Council in 2019 and implemented governance tools to monitor and manage data permissions. This cross-functional team includes experts in global security, global compliance, legal and data who review our ethical principles and regulatory obligations. Compliance-by-design promotes data protection from the start, using minimal data to achieve insights. Organizations should also partner with vendors that have rigorous compliance frameworks.
Organizations should establish core AI ethics standards. If your company is just beginning this process, draw together a cross-functional team – IT, legal, security, etc. – to discuss how these laws affect your business and review the use cases in your organization.
Resources like the Future of Privacy Forum and the National Institute of Standards and Technology’s AI Risk Management Framework can guide responsible AI development and governance. By building a robust framework now, organizations will be better prepared for future regulations and changes.
Don’t forget that existing laws on discrimination, pay, overtime and FMLA eligibility, for example, still apply in the AI world, and agencies such as the U.S. Department of Labor and the EEOC are providing guidance on interpreting these laws in an AI context.
Additionally, new AI-specific regulations are emerging in places like New York City, Colorado and the European Union. These regulations emphasize protecting people from bias in algorithms and ensuring transparency around AI use.
Q6. What excites you most about AI and these emerging technologies?
A: AI has the potential to unleash human creativity by handling routine tasks, allowing people to focus on more meaningful work. It enables connections and collaboration at a human level, enhanced by technology, not replaced by it.
I’m excited about the potential for personalization at scale and the opportunities AI brings to make the workplace more human.