
Getting Smart About Artificial Intelligence

2 min read

The chatter around generative artificial intelligence – technology that can create new audio, images, text, video and other content – continues to grow as more companies adopt text and video chatbots to communicate with customers, assist with healthcare consultations and tasks, and support the financial industry. And while the potential of generative AI for business seems endless, adoption should be approached thoughtfully, with full consideration of the security, privacy and bias concerns that generative AI presents.

The use of generative AI to create video chatbots is, while still a little unnerving, developing at a rapid pace. But the use of synthetic voices can lead to mischief and, more concerning, security risks. Banks and other businesses have used voice identification in lieu of passwords for account access for years, touting the security of voiceprints. That practice may no longer be as secure: one reporter recently gained access to his own bank account using a synthetic voice, showing how these security systems may grow more and more fallible as generated voices improve.

Another looming concern with chatbots and other generative AI is our growing reliance on the technology. It’s fine to like our tools, but unlike using Microsoft Word to summarize meeting notes, chatbots like ChatGPT retain the data we provide and learn from it. As a result, that data is no longer securely held by the company, and, as ChatGPT noted while I was writing this article, there’s a risk that the owner of the chatbot would own the content the chatbot generates. Thankfully, ChatGPT acknowledges the continued need for attorneys, as this is a “complex legal issue,” and recommends that businesses consult with legal professionals before adopting AI technologies.

Finally, as with all artificial intelligence, chatbots are trained on data sources and retain the biases of that data. There’s a risk that these biases will become more firmly entrenched, and practical applications of chatbots can also harm users directly. In healthcare, for example, biased training data could lead to incorrect diagnoses or treatments, potentially putting patients at risk, and a chatbot biased against certain groups of patients could perpetuate health disparities and inequities.

Ultimately, generative AI has great potential. But as with Spider-Man, all great potential comes with great responsibility.

Meredith Lowry is a patent attorney on the Wright Lindsey Jennings Tech Law team. She is obsessed with business, tech and data.