Enterprises Cautiously Assess the Risks of Adopting Generative AI
Enterprise CIOs and CTOs say they are evaluating when, not so much whether, they will adopt generative AI, according to a discussion at Citi. The banking giant's CIO/CTO Forum on Generative AI in the Enterprise, held in New York and sponsored by Citi Ventures and ON Partners, assessed enterprise interest in generative AI. Moderator Matt Carbonara, Managing Director of Citi Ventures, opened the discussion by asking the panelists to gauge the current level of generative AI adoption in the enterprise. “Despite the enormous popularity of conversational AI tools such as ChatGPT, most enterprises are either evaluating its significance to them or are in wait-and-see mode,” said Arni Raghvender, Director of Technology at AWS, adding:
- 20% of his customers have a definite sense of urgency. If they don’t do something right now, they risk getting disrupted.
- The bulk of them (other than startups) have assigned small teams to understand the use cases: What does it mean for us? How do we get going? Is it a separate effort, or part of the existing ML/AI team?
- 15-20% are in wait-and-see mode.
The other panelists confirmed Raghvender’s assessment. They included: Nimrod Barak, Global Head of Citi’s Innovation Labs; Chris Coulthrust, Microsoft Senior Cloud Solution Architect; Frank Farrall, Deloitte Principal, Cloud Analytics and AI Ecosystem Digital Transformation; and Barric Reed, Partner at BCG X. The group also highlighted multiple challenges to enterprise adoption, such as the lack of a clear understanding of the ROI of using their own data for a chatbot, unresolved legal issues around commercial use, and the risk of a confidential data breach. On the other hand, they said, companies whose business model was threatened, such as Adobe, responded quickly without waiting to work through the legal issues.
Enterprises are most likely to adopt conversational AI in automated customer service applications and as a copilot to improve the productivity of IT staff, the panelists said. For most enterprises, they said, insufficient information is available to understand the return on investment of integrating conversational AI tools with their own data. Many companies have launched investigations and proofs of concept to find the answers. “It’s essential to centralize the governance in earlier stages and avoid too many pockets of innovation before a proper risk framework is established,” Barak said. “A lot of risks are on the inadvertent disclosure of confidential data. Organizations need to have a top-down message about how they will adopt,” noted Farrall.
Risk and Governance
Moreover, “We don’t know all the risks yet,” said Barak. “We’ve had our AI Center of Excellence working on this already and will continue to put guardrails in place to mitigate the risks while also developing the right top-down thinking on risks and how best to mitigate them.” Enterprises are facing governance challenges, the panelists explained, asking questions like: Should all work with conversational AI be placed under central control? Or can individual departments and even individual developers experiment on their own? “LLMs are powerful but they lie,” said Raghvender. “How do we trust them?”
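In its simplest form, a guardrail against the inadvertent disclosure the panelists worried about could be a filter that scrubs prompts before they leave the organization. The sketch below is purely illustrative, assuming two toy confidentiality patterns; it is not any panelist's actual implementation, and a real data-loss-prevention policy would cover far more cases:

```python
import re

# Illustrative guardrail sketch: mask obvious confidential patterns
# (emails, card-like digit runs) before a prompt is sent to an
# external LLM. Pattern names and coverage are assumptions only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace each match of a confidential pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(scrub_prompt("Contact jane.doe@example.com about card 4111 1111 1111 1111"))
# → Contact [EMAIL REDACTED] about card [CARD REDACTED]
```

A central governance team could maintain the pattern list while individual departments experiment behind it, which maps onto the central-versus-local question the panel raised.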
Use of Copilot
GitHub Copilot is clearly on the minds of enterprise tech leaders. “Copilot is huge, everyone wants it,” said Coulthrust. “It helps me become a better developer. It gives me a better understanding of new code and a better understanding of business problems. In fact, Copilot will enable businesspeople to code,” he added. “Copilot helps produce a really good first draft,” Farrall said. “It helps business with the conversation with the developer.” “What about software engineers? Who would want to work for a company that doesn’t use Copilot?” noted Barak.
There are several legal challenges that must be met, the panelists explained. “Technologists need to understand the law better and lawyers need to understand technology better, as the two will be spending more time together,” said Raghvender. “How do organizations know when they make money on generative AI that someone won’t come after them? There are very different laws around the world. The EU is punitive, while Japan is permissive, for example.” Microsoft’s Coulthrust explained that Microsoft is deliberate about considering governance and understanding the legal implications of its products, including LLMs. Meanwhile, Farrall cited “An important legal point: You may not be able to copyright the answer you get from ChatGPT, but you can copyright the prompt you enter.”
The Battle for Talent: Reverse Mentoring
Regarding the people part of the issue, the panel acknowledged there’s going to be a battle for talent in the conversational AI area. Farrall went as far as to characterize the talent shift as an opportunity for younger IT staff to mentor older staff. “We will probably need a kind of ‘reverse’ mentoring program to help older staff understand how to do this. It’s an opportunity for junior members of teams to mentor senior members,” he said.
Cost of Enterprise Adoption
“People expect ChatGPT to just work in the enterprise, but it takes a lot of work,” Raghvender said. One of the top concerns is how to make it interactive and usable in the enterprise. The cost of running a model is also a concern because you need GPUs or some other accelerator, he added. Each enterprise needs to decide which model will work for its organization. Models don’t work in isolation; they need access to enterprise data, and that data must be prepared before it can be used to fine-tune an LLM, he said. “No one really knows the ROI yet — no one really knows how much they’re going to make on the investment. Maybe in Q3/Q4 after the initial POCs, they will know. It may take a few years to build a full-blown business case,” Raghvender said. “Most companies are in the early phases of developing a strategy,” said Coulthrust. “Conversational AI is forcing them to re-evaluate their current IT landscape and the art of the possible.” Meanwhile, Reed said there’s been a lot of conversation about which model is correct, but the big issue is the orchestration above the model — the cost, scale, latency for the UX layer, etc. “The analytics space will be disrupted by conversational AI,” said Barak. “People are getting more used to the chat approach of asking questions. All the dashboards, graphs, etc., may be replaced with chats.”
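Raghvender's point that models need access to enterprise data can be sketched in its most minimal form. Besides fine-tuning, one common pattern is to retrieve relevant internal text and place it in the prompt at query time. Everything below is a toy assumption: the documents, the keyword-overlap scoring (a production system would use embeddings and a vector store), and the prompt wording:

```python
# Illustrative sketch: grounding an LLM prompt in enterprise data.
# The documents and scoring are toy assumptions for demonstration.
DOCUMENTS = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: standard delivery takes 5-7 business days.",
    "Security: report suspected data breaches to the CISO immediately.",
]

def retrieve(question, docs):
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    """Assemble a grounded prompt to send to whichever model the enterprise chooses."""
    context = retrieve(question, DOCUMENTS)
    return f"Answer using only this context:\n{context}\nQuestion: {question}"

print(build_prompt("How many days do customers have to return items?"))
```

The orchestration layer Reed described sits around exactly this kind of step: choosing what context to fetch, at what cost and latency, before the model is ever called.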