ChatGPT at Work: Protecting Your Business

Devin R. Bates Commentary

As artificial intelligence advances, more people are experimenting with it. ChatGPT, a free AI chatbot, is reportedly the fastest-growing consumer application in history. While these tools can increase efficiency and productivity, they also come with risks and legal implications, particularly when used in the workplace and especially when used without employer guidance.

In the Workplace

Without necessarily endorsing them, here are some of the myriad ways this technology can be used in the workplace:

Customer support: ChatGPT can be trained on frequently asked questions to provide quick, accurate responses to customers. This can reduce customer support staff time, speed up responses, and in theory improve the customer experience.

Document analysis: ChatGPT can analyze and summarize large volumes of documents. This can cut the time required to identify key information and make informed decisions.

Writing help: ChatGPT can help with writing tasks, such as drafting emails or memos, helping employees write clearly, effectively and rapidly.

Knowledge management: ChatGPT can create a knowledge base for employees, offering answers on a variety of topics. This can help new employees get up to speed on company policies and procedures and allow existing employees to access information quickly and easily.

Mitigating Risks

As an employer, you must understand the risks associated with generative AI at work. Here are some steps you can take to limit liability:

Develop a policy: Create a policy for using AI. Define what types of information can and cannot be shared through these tools, specify who is authorized to use them, and limit access only to employees who have a legitimate need for them.

Provide training: Train employees on the proper use of generative AI. This includes understanding the risks and how to avoid legal pitfalls. Provide special retraining on confidentiality and privacy, especially if your business possesses information given special protection by law (e.g., information protected by HIPAA) or any other sensitive information. Also retrain employees on handling sensitive business information and intellectual property.

Review regulations and ethics rules: If your profession is governed by regulations, or if you operate under a code of ethics, it is wise to review those controlling rules in light of this new technology, tailoring your approach to the unique needs of your business and the laws of your state.

Monitor usage: Regularly monitor — through software or regular audits — employee use of generative AI to ensure compliance with company policies and the law.

Make appropriate disclosures and get consent: To the extent that your business elects to allow employees to use generative AI, consider whether that requires disclosure to your customers, and specifically consider whether that requires their informed written consent.

Protect against the rogue employee: Have appropriate noncompete, nonsolicitation, confidentiality and/or other agreements in place to protect against any rogue employee who decides to share confidential information through generative AI, or who decides to use their newfound free time to act disloyally.

Consult with legal counsel: Finally, consider consulting with legal counsel to ensure that your policies and procedures comply with applicable laws and regulations.

Generative AI tools can be valuable for businesses looking to increase efficiency and productivity, but they come with legal risks. By developing clear policies and training, monitoring usage and consulting with legal counsel, employers can limit their liability and ensure that these tools are used appropriately.


Devin R. Bates is a member at Mitchell Williams Selig Gates & Woodyard of Little Rock. His practice includes litigation, intellectual property, education and employment.