Generative AI promises to improve business efficiency, but Gartner has found that many projects fail to progress beyond pilot roll-outs
By Cliff Saran, Managing Editor
Published: 30 Jul 2024 17:00
Research from analyst firm Gartner predicts that 30% of generative artificial intelligence (GenAI) projects will be abandoned after the proof-of-concept phase by the end of 2025, largely due to challenges such as poor data quality, escalating costs and unclear business value.
Gartner found that early adopters across industries and business processes are reporting a range of business improvements that vary by use case, job type and the skill level of the worker. According to the survey, respondents reported a 15.8% revenue increase, a 15.2% cost saving and a 22.6% productivity improvement on average.
On generating value from AI, Eyad Tachwali, a senior director at Gartner, said: “When it comes to how to think about the value that can be generated with generative AI, the first thing that we have to do is unpack the different ways that we can use AI. One part is what we call everyday AI, which is basically using AI to help you do your existing tasks better, faster, cheaper and sometimes to better quality.”
He said the value of everyday AI is measured in terms of productivity gains.
The other type of AI Gartner sees is what it calls game-changing AI. “This is where you’re using AI to create net new things,” said Tachwali. “So, if everyday AI is focused on productivity, game-changing AI is focused on creativity.”
One example is a pharmaceutical company using AI to discover a new molecule that can be developed into a drug.
With GenAI applications, he said IT leaders need to consider multiple factors when determining the cost of the investments they need to make. “There are a lot of variables,” said Tachwali. “It depends on the use cases. It depends on the industry. It depends on the risk appetite of the organisation.”
Typically, organisations may look for quick wins using off-the-shelf products such as ChatGPT or Microsoft Copilot. He said that with such products, cost calculations are relatively straightforward as they are based on the number of users and the cost of the software licence.
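For per-seat products, that calculation can be sketched in a few lines. All figures below (user count, licence price) are hypothetical illustrations, not published vendor pricing:

```python
def annual_seat_cost(users: int, licence_per_user_per_month: float) -> float:
    """Off-the-shelf assistants are typically priced per seat, so the
    annual cost is simply users x monthly licence fee x 12 months."""
    return users * licence_per_user_per_month * 12


# Hypothetical example: 500 knowledge workers on a $30/user/month licence
print(annual_seat_cost(500, 30.0))  # 180000.0
```

Because the only variables are headcount and licence price, the spend is predictable and scales linearly, which is what makes these quick wins easy to budget for.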
However, with game-changing AI initiatives, costs are more difficult to calculate. “You have the capabilities provided by vendors, which are trained on public data, but you’re also using your own organisation’s data. You have the additive cost: the IT infrastructure costs; the cost of the data; the application development costs.”
There are also what Tachwali calls multiplicative cost elements, which can increase running costs. For instance, alongside per-user licensing, vendors often apply token-based pricing, which IT decision-makers may use to improve the accuracy of the responses a generative AI model produces.
Tokens are words or fragments of words that a large language model consumes as input and generates as output. “These can really blow up your cost by five to 10 times,” he said. “It becomes very variable and it’s very difficult to predict.”
Gartner recommends IT leaders simulate the lower and upper limits of large language model usage to get a better idea of potential cost. These figures can then be used to keep usage within threshold limits, ensuring the cost of running the model does not exceed the value it can deliver.
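The simulation Gartner describes can be sketched as a simple bracketing calculation. Everything here is a hypothetical illustration: the token prices, request volumes and tokens-per-request figures are assumptions, not Gartner's numbers or any vendor's published rates:

```python
def monthly_token_cost(requests: int, tokens_in: int, tokens_out: int,
                       price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimate monthly spend under token-based pricing.

    Cost per request = (input tokens * input price) + (output tokens * output
    price), with prices quoted per 1,000 tokens as many vendors do.
    """
    per_request = ((tokens_in / 1000) * price_in_per_1k
                   + (tokens_out / 1000) * price_out_per_1k)
    return requests * per_request


# Lower bound: light usage with short prompts and short answers.
low = monthly_token_cost(requests=10_000, tokens_in=500, tokens_out=300,
                         price_in_per_1k=0.01, price_out_per_1k=0.03)

# Upper bound: heavy usage with long prompts and long answers.
high = monthly_token_cost(requests=100_000, tokens_in=2_000, tokens_out=1_000,
                          price_in_per_1k=0.01, price_out_per_1k=0.03)

print(low, high)
```

Even with these made-up inputs, the upper bound comes out more than an order of magnitude above the lower one, which illustrates why Tachwali describes token-driven costs as highly variable and hard to predict without modelling both extremes.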