Air Force cybersecurity veteran Chris Wright knows the potential and the perils of artificial intelligence.
A partner and co-founder of Sullivan Wright Technologies, a Little Rock-based consulting firm, he sees AI as a powerful tool for businesses and marketers, as long as humans hold the reins and aren’t fooled by it.

Wright discussed large language models like ChatGPT and other AI tools recently in a conference call with Lucy Whiteside of LSW Strategic Communications and Arkansas Business.
ChatGPT and Google’s Bard AI, two of the best-known products, are machine learning models trained on vast data sets of published language, Wright said.
ChatGPT, launched a year ago by OpenAI, already has between 100 million and 200 million users. In media and marketing, AI can create articles, social media and blog posts, sales copy, ad copy and web pages.
It can suggest topics, describe products and generate promotion strategies for companies and products.
It can help businesses communicate with customers by “enabling more accurate search results and generating more complex answers to customers’ questions through chatbots,” Wright said.
But it can also spread misinformation, write robotic phrases and facilitate fraud by cybercriminals.
News outlets and advertisers are already more skeptical about AI-created content than they were a year or so ago, Wright said, and some marketing pros fear machines could take their jobs. “A lot of people are freaking out about it,” Whiteside said. “I’m telling them there’s no need to worry just yet.”
In late August, the Gannett newspaper chain halted its use of AI-generated sports stories after readers complained about repeated odd phrasing and asked if machines were writing them.
They were.
LedeAI, a content generation service, was creating the game stories.
Gannett also took a PR hit because readers called out the AI-generated stories just months after the company laid off hundreds of employees, cutting its news division by 6%.
“You can’t just use these tools and take the results at face value,” Whiteside said. “You have to be careful to do proper fact-checking on AI-generated content, or errors and misinformation will spread.”
She said she’d seen one AI-generated news release that gave the wrong acronym for Arkansas’ new public school policy, the LEARNS Act.
Wright said criminals are also using AI in phishing attacks and in deepfake voice and video manipulation. Imagine getting a call or teleconference message from your boss asking you to move money between accounts. It sounds like the boss and even looks a bit like her. But it may be a machine.
Wright urges his clients to be skeptical, applying context and critical thinking. “Be cautious of requests that seem out of the norm and set off red flags, even if they appear to be from a trusted source,” he said.
Sullivan Wright Technologies provides cybersecurity, IT and security compliance services. It uses AI “to find ways to make security more approachable and affordable even at the smallest business size,” Wright said. “Even for large companies like FIS and Walmart, it’s pretty expensive. But for small businesses, it’s fairly unobtainable.”
With AI, Wright said, his firm can implement tools to safeguard businesses “without having to have hundreds or thousands of individuals behind it.”
Sullivan Wright uses AI to detect ever-increasing phishing, social engineering and other technological attacks, Wright said.
“This is an area I’m happy to see in our tool set, because these attackers are throwing emails at my 800 to 900 individual users all the time, and changing and adapting regularly. We’re seeing that the algorithms and the filtering tools used on the back end are changing with them. Some of that is AI; some of it is human review. But these adjusting algorithms are really helping us cut down attacks that could potentially bring any of our clients down.”