For years, CIOs have wanted to be taken more seriously by senior managers, and the rise of generative artificial intelligence (AI) might finally give them that breakthrough.
Many organisations are trying to work out their route forward with AI, which creates an opportunity for the CIO to make their mark, according to Trevor Schulze, chief digital and information officer at analytics automation tech company Alteryx.
“This is the opportunity for CIOs to really have a seat at the table. We’ve been talking about a seat at the table for years,” he said at a roundtable event in London.
“I’ve seen more CIOs being put into positions where they are the ones helping guide the company and say where they will and won’t do AI. This is very emerging and not every CIO has this position, but the board is starting to say, ‘We want someone to have this position’, and it’s not the CEO.”
AI strategy should be owned by the CIO, Schulze said, because they know the technology, the potential risks and the business processes, and they are one of the few people in the company who can connect all the dots.
“CIOs have to be AI-literate, they have to understand this because, regardless of a team that’s off looking at specialised capabilities, this is going to permeate every aspect of most businesses,” he told Computer Weekly.
There are currently three ways that AI is making its way into businesses, he said. First, companies are building their own projects. Schulze said that in his peer group of CIOs, many were running proofs of concept and pilot projects and exploring responsible AI last year.
“Most everyone has moved at least one business process into [being] AI-enabled. One doesn’t sound very impressive. Well, of those same organisations, they have five to 10 pilots in flight which they believe will go into production. So, the year of 2024 is when you will start to see more and more production-grade AI capabilities, with generative being the key here,” he said.
Recent research by Alteryx with European tech decision-makers found that, since the start of 2023, they have run an average of three AI pilot projects – 74% said these pilots have been successful and 53% stated it was easier than expected to get results.
Schulze said the early adopters were getting good results with AI, which was relatively unusual in IT. “Usually early adopters don’t get the immediate value when they take on a new technology capability. What we are finding with predictive and generative AI together is positive [return on investment] almost immediately and so this has caught the attention of the boardroom.”
The second way that AI is making its way into the enterprise is via suppliers adding AI capabilities to software or services already used by businesses. That means CIOs have to understand the potential of these features and decide whether to use them.
“Every single vendor out there has an AI thing that a CIO has to evaluate and discuss and deploy,” said Schulze.
The third vector for AI is the rise of shadow AI, where someone within the enterprise goes out and buys a service without the knowledge of the tech organisation. This can lead to corporate data privacy and security issues that CIOs need to be aware of.
“You have these patterns of AI coming into companies and regardless of how you are organised you have to address all three,” Schulze said.
CIOs have to be sceptical and pragmatic, he said, but they also should lean into AI and find those really important productivity enhancements and opportunities for their company. “They are there, across the board. Every industry is going to find that set of business processes where they go, ‘Wow, this is a game-changer’,” he said.
A lot of companies start on the road towards AI by defining a responsible AI policy. This sets out what they are and are not willing to do with generative AI technologies, where the results are not always as easy to predict or control as those of traditional IT.
“IT has been built on deterministic technology up until today, so when you have stochastic technologies, you have concerns about [whether] the outcome is non-deterministic in certain cases. You have to determine where you are willing to allow that innovation to happen,” Schulze said.
Right now, companies are focusing on lower-risk areas because they want to see how AI plays out. And the research commissioned by Alteryx found that the path to exploiting generative AI can be complex and throw up unexpected problems.
In the survey, businesses reported issues with AI-generated content infringing copyright or intellectual property rights (mentioned by 40%), as well as receiving unexpected or unintended outputs (36%). But the research found the biggest issue affecting business trust in generative AI (cited by 62%) was AI hallucinations – when AI produces incorrect predictions or nonsensical outputs. Most of the IT decision-makers surveyed had responsibility for procuring generative AI for their organisation.
A sizeable chunk – 41% of business respondents – said they worried about AI and critical decision-making. And they may have reason to worry: only 33% of leaders said their business ensures the data used to train generative AI is diverse and unbiased. Added to this, only 36% said they have ethical guidelines in place and 52% said they have data privacy and security policies for generative AI use.
Businesses said the challenges include security concerns (41%), data privacy and security (39%), and the quality and reliability of outputs (32%). A lack of AI skills continues to be a problem too: one in five businesses (20%) said they don’t have mandatory AI training in place, and 28% reported that a deficit of skilled talent is holding them back from scaling generative AI across the organisation.
One often overlooked opportunity for businesses is to use their own data to fuel AI.
“We are swimming in data,” Schulze said, noting that 80% of industrial data is never touched. “We gather it, put it somewhere, we back it up and we never do anything with it,” he said.
“This is an opportunity for the people who own the data to be able to create novel capabilities in AI. This is where the story is going – it’s not about large language models, it’s not about the general-purpose foundational models, it’s about where the unique data is and how organisations take advantage of it.”