When I attended the MIT Sloan CIO Symposium in May, listening to CIOs talk about the latest technology, in this case generative AI, reminded me of another edition of the same symposium around 2010, when the discussion was all about the cloud.
It’s remarkable how similar the concerns about AI were to those I heard about the nascent cloud all those years ago: companies were concerned about governance (check), security (check), and responsible use of a new technology (check).
But 2010 was right on the cusp of the consumerization of IT, when employees began expecting the same kinds of experiences at work that they had with technology at home. When IT told them no, which was the default answer at the time, they soon resorted to "shadow IT" to find those solutions on their own. It was easy enough for employees to figure it out themselves, unless things were totally locked down.
Today, CIOs recognize that if they refuse to allow generative AI, employees will likely find a way to use these tools anyway. There are many legitimate concerns about the technology, such as hallucinations and intellectual property exposure, along with the security, compliance and control, particularly over data, that large enterprises demand.
But the CIOs speaking at the conference were much more realistic than they were 15 years ago, even if they had similar concerns.
“You know, everything is available and democratized,” said Akira Bell, CIO of Mathematica, during a panel titled “Maintaining Competitive Advantage in the Age of AI.”
“I think someone else said this morning, ‘You know, we can’t control this moment.’ We can’t and we don’t want to be the ‘agents of no,’ telling everybody what they can and can’t do, but what we can do is make sure that people understand the responsibility that they have as actors and users of these tools.”
Today, instead of saying no, Bell is encouraging responsible use of technology and looking for ways to improve its customers’ experience through AI. “So it’s about governing, making sure our data is ready to be used, making sure our employees understand the best practices that exist when they’re using it.”
She said the second piece is thinking about how the company uses generative AI to enhance its core capabilities, and how it might use the technology on behalf of customers to create, amplify or modify existing service offerings.
Bell added that the security component also needs to be considered, as all of these elements are important. Her organization can offer guidance on how to use these tools in a way that is consistent with the company's values without blocking access.
Angelica Tritzo, CIO of GE Vernova, a new GE subsidiary focused on alternative energy, is taking a deliberate approach to implementing generative AI. “We have a number of pilot projects at different stages of maturity. Like many others, we probably don’t fully understand the potential, so the cost and benefits aren’t always perfectly aligned,” Tritzo told TechCrunch. “We’re figuring out how to navigate all the technology pieces, how much we need to partner with others and how much we need to do it ourselves.” But the process is helping her understand what works and what doesn’t and how to proceed while also helping employees get comfortable with the process.
Chris Bedi, chief digital officer at ServiceNow, said things will change in the coming years as employees begin to demand access to AI tools. "From a talent perspective, retention is a hot topic: as organizations look to retain talent, no matter the job title, they want their people to stay. I think it's going to be unthinkable to ask employees in your company to do their job without GenAI," Bedi told TechCrunch. He believes talent will begin to demand these tools and ask why you would want them to do the work manually.
To that end, Bedi says his company is committed to teaching its employees about AI and creating an AI-savvy workforce, because people won’t necessarily understand how to best use the technology without guidance.
"We created learning paths, so that every employee in the company could take their AI 101 course," he said. "We created that, and we were selective about who [takes levels] 201 and 301, because we know the future is AI, so we need to make sure that all of our people feel comfortable with it."
All of this suggests that while the concerns are the same as during the last wave of technological change, IT leaders may have learned some lessons from that experience. They now understand that it's not possible to lock everything down. Instead, they need to find ways to help employees use generative AI tools safely and effectively, because if they don't, employees will likely start using them anyway.