By now, artificial intelligence (AI) isn’t a distant dream – it’s a reality, and it’s taking the workplace by storm.
But with this great new power comes great responsibility: From managing usage to mitigating bias, company leaders must develop best practices to integrate new AI technology into existing workplace procedures.
As AI usage ramps up in the workplace, it’s important for companies to have a generative AI policy in place to protect their organization and their staff from potential mishaps.
Here’s how to build the right generative AI policy for your company.
What is generative AI?
First things first: What exactly is generative AI?
Generative AI is just one type of artificial intelligence. The difference is that generative AI is used to create new content – images, written content or even video – while traditional AI tools can go much broader and deeper, performing functions like automation or analysis. OpenAI’s ChatGPT is the best-known generative AI tool, but image generators and other AI writing assistants fall into the same category.
What are generative AI uses in the workplace?
Generative AI has endless use cases for personal and professional needs. In the workplace, generative AI can help create new content, like:
- Internal and external communications, and
- Blog posts.
Consider this real-life scenario: A busy HR pro wants to send out a company-wide memo but doesn’t want to spend the time crafting the perfect copy. If they input the necessary context and instructions into ChatGPT, it can generate a customized memo that can be easily passed along to employees.
It’s important to note that generative AI shouldn’t be the be-all-end-all of your company’s content creation. A human should always oversee the process and fact-check AI-generated content, as it can be incorrect, misleading or biased.
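The memo scenario above boils down to giving the tool two things: context and instructions. As a minimal sketch of how a team might standardize that – the template wording, function name and fields here are illustrative, not a prescribed format – a reusable prompt could be assembled like this before being pasted into ChatGPT or a similar tool:

```python
def build_memo_prompt(topic: str, audience: str, key_points: list[str],
                      tone: str = "friendly but professional") -> str:
    """Combine context and instructions into a single prompt for a generative AI tool."""
    points = "\n".join(f"- {p}" for p in key_points)
    return (
        f"Write a company-wide memo about {topic} for {audience}.\n"
        f"Tone: {tone}.\n"
        f"Cover these points:\n{points}\n"
        "Keep it under 200 words."
    )

# Example: an HR pro drafting a summer-hours memo (details are hypothetical).
prompt = build_memo_prompt(
    topic="the new summer hours policy",
    audience="all employees",
    key_points=["Office closes at 3 p.m. on Fridays",
                "Policy runs June through August"],
)
print(prompt)
```

Using a shared template like this also makes AI use easier to audit later, since every prompt records the same context fields.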
Generative AI policy considerations
With all the potential uses of generative AI, it’s easy to see how things can get murky without guidelines in place. It’s essential to mitigate potential risks with a generative AI policy.
A solid generative AI policy should include:
- What is and isn’t acceptable use
- A robust explanation of the company’s stance on data protection and information on privacy laws
- Disciplinary actions that may be taken if the policy is violated, and
- How to safeguard intellectual property.
Generative AI policy example
Now you know why it’s important to put a policy in place, but what should be included in the policy? Here are a few of the most important parts of a solid generative AI policy.
Responsible and intended use
One of the most important things to outline in a generative AI policy is what responsible, intended use of generative AI tools in the workplace looks like.
Outline what employees can and can’t use generative AI tools for. This will likely be team- or department-specific. For example, certain industries may want to allow it for internal use, but not for customer-facing use.
List best practices for responsible use of generative AI, such as:
- Confirming information for accuracy
- Double-checking generative AI content for any bias, and
- Refraining from inputting sensitive data into AI models.
Spell out the consequences if AI is not used correctly. Include disciplinary actions – up to and including termination – that employees may face if they violate the policy.
Transparency and disclosure
It’s important for employers to know which content was produced by generative AI and which was produced by an employee.
Clarify expectations for transparency about AI use – and the extent to which it is used. Provide clear guidelines on when and how employees must disclose their use of generative AI.
Exact specifications will need to be customized to your industry and workforce needs. Consider the following:
- Are employees required to let managers know if they use AI for any part of their job duties, or only in specific use cases?
- Are employees required to document their use of AI for a business outcome? If so, when, where and how often?
Data privacy and protection
Because AI models rely so heavily on data to develop and train their algorithms, data privacy concerns are rampant and can be one of the biggest risks for employers.
The more data you put into AI, the more likely it is that sensitive information will slip through the cracks. To prioritize data privacy, lay out the following guidelines in your generative AI policy:
- Limit access to sensitive information
- Require explicit permission to input company data into AI models, and
- Instruct employees with access to sensitive data to comply with the company’s outlined best practices.
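Teams that want to back these guidelines with a technical guardrail could screen text for obviously sensitive patterns before it ever reaches an AI tool. The sketch below is only illustrative – the patterns are simplistic and no substitute for a proper data-loss-prevention tool – but it shows the idea of flagging a prompt rather than sending it:

```python
import re

# Illustrative patterns only; a real deployment needs a dedicated DLP solution.
SENSITIVE_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: catch a prompt containing an SSN before it is sent to an AI model.
findings = flag_sensitive("Please summarize employee 123-45-6789's review.")
```

A prompt that returns any findings can be blocked or routed to a manager for the explicit permission the policy requires.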
Especially for departments that handle sensitive company information – like HR – data privacy needs to be a main focus for any AI usage.
It’s only a matter of time before we see AI regulation and legislation. In the meantime, it’s crucial to keep an eye on legal developments involving AI in the workplace and make policy adjustments as necessary.
The situation is evolving, and we’ve already seen one instance of an AI-discrimination lawsuit that resulted in a six-figure settlement. The company’s expensive lesson can help other employers avoid AI pitfalls.
Best practices for employees
Any other best practices – like acceptable versus unacceptable use – should also be outlined in the policy. For example, you may want to include:
- When and how employees should disclose their use of AI to managers or clients
- What is and isn’t okay to input into generative AI tools, and
- How employees can educate themselves on ethical use of AI tools and how AI tools use their data.
Evolving generative AI policies
Because generative AI is so new, it’s important to stay up to date with evolving regulations, best practices and emerging technologies. To that end, include a “subject to change” clause in your generative AI policy and update it regularly based on any relevant changes.