Today’s Computerworld article, “GenAI compliance is an oxymoron. Ways to make the best of it,” covers the thoughts of Three Arc Advisory’s Meghan Anzelc, Ph.D., on cybersecurity, compliance, and customer challenges.

On AI vendor due diligence and third-party risk:

“Some enterprises have explored minimizing the vendor visibility issue by building their genAI programs in-house, said Meghan Anzelc, president of Three Arc Advisory, but that merely reduces the unknowns without eliminating them. That’s because even the most sophisticated enterprise IT operations are going to be leveraging some elements created by others.

‘Even in the “build in-house” scenario, they are either using packages in Python or services from AWS. There is almost always some third-party dependence,’ she said.”
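To make that dependence concrete, here is a minimal, illustrative Python sketch (not from the article; the helper name is hypothetical) that inventories the third-party distributions an “in-house” pipeline actually ships with, a natural first step in vendor due diligence:

```python
# Illustrative sketch: even a genAI stack "built in-house" rests on
# third-party code. This enumerates every installed Python distribution
# so a due-diligence review can see the external packages in play.
from importlib.metadata import distributions

def third_party_inventory() -> list[tuple[str, str]]:
    """Return (name, version) pairs for every installed distribution."""
    return sorted(
        (dist.metadata["Name"], dist.version)
        for dist in distributions()
        if dist.metadata["Name"]  # skip distributions with broken metadata
    )

if __name__ == "__main__":
    for name, version in third_party_inventory():
        print(f"{name}=={version}")
```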

On compliance and regulation:

“Another compliance consideration with genAI is going to be the need to explain far more technical details than CIOs have historically had to when talking with regulators. 

“‘The CIO needs to be prepared to share a fairly significant amount of information, such as talking through the entire workflow process,’ said Three Arc’s Anzelc. ‘“Here is what our intent was.” Listing all of the underlying information, detailing what actually happened and why it happened. Complete data lineage. Did genAI go rogue and pull data from some internet source or even make it up? What was the algorithmic construction? That’s where things get really hard.’
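One illustrative way to make that accounting possible (a minimal sketch; the class, fields, and file name are our assumptions, not anything Anzelc or Computerworld prescribe) is to record lineage at the point of each genAI call, capturing intent, inputs, sources, and output as append-only JSON lines:

```python
# Illustrative sketch of per-call lineage logging (hypothetical names).
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class GenAICallRecord:
    intent: str         # what this workflow step was trying to do
    model_id: str       # which model and version produced the output
    prompt: str         # the exact input that was sent
    sources: list[str]  # data the model drew on, for data lineage
    output: str         # what actually came back
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_call(record: GenAICallRecord, path: str = "genai_lineage.jsonl") -> None:
    """Append one call record as a JSON line for later audit or replay."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

log_call(GenAICallRecord(
    intent="summarize contract for customer email",
    model_id="example-model-v1",
    prompt="Summarize the attached contract...",
    sources=["s3://internal-bucket/contracts/123.pdf"],
    output="The contract runs for 12 months...",
))
```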

“After an incident, enterprises have to make quick fixes to avoid repeats of the problem. ‘It could require redesign or adjustment to how the tool operates or the way inputs and outputs flow. In parallel, fix any gaps in monitoring metrics that were uncovered so that any future issues are identified more swiftly,’ Anzelc said. 
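A monitoring-gap fix of that kind can be quite small. The following sketch (hypothetical names and threshold, offered only as illustration) escalates when the rate of flagged genAI outputs in a window climbs, so a repeat of the incident surfaces quickly instead of silently:

```python
# Illustrative sketch: a post-incident monitoring check.
def flag_rate_alert(flags: list[bool], threshold: float = 0.05) -> bool:
    """True if the share of flagged outputs in this window needs escalation."""
    if not flags:
        return False
    return sum(flags) / len(flags) > threshold

# Example: 3 flagged outputs out of 40 in the last window -> 7.5% > 5%, alert.
print(flag_rate_alert([True] * 3 + [False] * 37))  # True
```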

“It’s also crucial to figure out a meaningful way to calculate the impact of an incident, she added.

“‘This could be financial impact to customers, as was the case with Air Canada’s chatbot, or other compliance-related issues. Examples include the potentially defamatory statements made recently by X’s chatbot Grok or employee actions such as the University of Texas professor who failed an entire class because a generative AI tool incorrectly stated that all assignments had been generated by AI and not by human students,’ Anzelc said.

“‘Understand additional compliance implications, both from a regulatory perspective as well as the contracts and policies you have in place with customers, suppliers, and employees. You will likely need to re-estimate impact as you learn more about the root cause of the issue.’”
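As a purely illustrative sketch of that re-estimation (all figures and field names are placeholders, not data from any real incident), an impact estimate can be held as structured data and revised as root-cause facts emerge:

```python
# Illustrative sketch: a rough, revisable incident-impact estimate.
from dataclasses import dataclass

@dataclass
class ImpactEstimate:
    affected_customers: int
    cost_per_customer: float    # e.g. refunds, remediation, support time
    regulatory_exposure: float  # fines or legal reserve, revised as facts emerge

    def total(self) -> float:
        return (self.affected_customers * self.cost_per_customer
                + self.regulatory_exposure)

# First pass right after the incident...
initial = ImpactEstimate(affected_customers=120, cost_per_customer=50.0,
                         regulatory_exposure=0.0)
# ...then re-estimated once root-cause analysis shows wider exposure.
revised = ImpactEstimate(affected_customers=480, cost_per_customer=50.0,
                         regulatory_exposure=25_000.0)
print(initial.total(), revised.total())  # 6000.0 49000.0
```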

Read the full article here.