The artificial intelligence (AI) landscape has evolved rapidly in recent years, with investments far outpacing short-term revenue expectations1. This disconnect has created difficulties for tech giants and startups alike. Major companies like Cisco, Intel, and Dell have announced layoffs, while numerous AI startups have shuttered2. The initial euphoria surrounding AI's potential to revolutionize industries has given way to more pragmatic concerns, with even industry leaders like OpenAI facing questions about their long-term viability3. In this article, we examine the current state of AI through the lens of the innovation adoption cycle, exploring the challenges and opportunities as the technology moves from the innovator phase towards mainstream adoption.
The primary hurdle facing AI adoption is the difficulty in effectively applying and integrating the technology. Many projects are failing to meet expectations, as organizations struggle to find practical use cases that deliver tangible value4. This challenge is compounded by the rapid pace of AI development, which often outstrips an organization's ability to adapt its processes and workforce.
One of the most significant issues plaguing large language models (LLMs) is their tendency to produce convincing but false information, known as hallucinations5. This problem undermines trust in AI systems and necessitates careful fact-checking, limiting their usefulness in scenarios requiring high accuracy.
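To make the problem concrete, the approach in the hallucination-detection paper cited above can be sketched very roughly: sample several answers to the same question, group the answers that mean the same thing, and measure how spread out the groups are. The snippet below is a toy illustration of that idea; the `are_equivalent` check is a placeholder assumption standing in for a real semantic-equivalence model.

```python
import math
from typing import Callable, List

def semantic_entropy(answers: List[str],
                     are_equivalent: Callable[[str, str], bool]) -> float:
    """Group sampled answers into meaning-equivalence clusters and return
    the entropy of the cluster distribution. Higher entropy means the model
    gives inconsistent answers, a possible sign of hallucination."""
    clusters: List[List[str]] = []
    for ans in answers:
        for cluster in clusters:
            if are_equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    total = len(answers)
    probs = [len(c) / total for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Toy usage: exact string match stands in for a real semantic-equivalence model.
samples = ["Paris", "Paris", "Paris", "Lyon", "Paris"]
print(semantic_entropy(samples, lambda a, b: a.strip().lower() == b.strip().lower()))
```

A score near zero means the sampled answers agree; a high score flags a question where the model is effectively guessing and its output should be checked.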
There's often a disconnect between what AI models produce and what users expect. This misalignment can lead to disappointment and resistance to adoption, even when the AI's output is objectively good. Bridging this gap requires not only technological improvements, but also better education and expectation management.
Training AI models on client-specific data has proven challenging due to two main factors: the data is often of poor quality6, and there is frequently too little of it to train on effectively. These issues necessitate additional data cleaning and augmentation techniques, increasing project costs and complexity.
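As a rough illustration, the sketch below shows the kind of pre-processing this implies: deduplicating and normalising client records, then padding out a small dataset. The augmentation step is a deliberately simplistic placeholder for techniques such as paraphrasing or back-translation.

```python
# Minimal sketch of cleaning and augmenting client text records before
# fine-tuning. The augmentation step is a simplistic placeholder assumption.
def clean_records(records):
    """Drop empty and duplicate training examples, normalising whitespace."""
    seen, cleaned = set(), []
    for text in records:
        norm = " ".join(text.split())
        if norm and norm.lower() not in seen:
            seen.add(norm.lower())
            cleaned.append(norm)
    return cleaned

def augment(records):
    """Toy augmentation: add a trivially varied copy of each example.
    Real pipelines would use paraphrasing or back-translation instead."""
    return records + [r + " (please answer concisely)" for r in records]

raw = ["  How do I reset my password? ", "How do I reset my password?", "", "Cancel my order"]
print(augment(clean_records(raw)))
```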
The operational costs of running advanced AI models remain high. For instance, it's estimated that ChatGPT costs over $0.36 per query to operate7, while OpenAI's API pricing ranges from $5 to $15 per million tokens8. This pricing structure often results in services being offered below cost, which is unsustainable in the long term.
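A quick back-of-envelope calculation shows how wide the gap can be. The token counts below are illustrative assumptions, not measured values, and the calculation assumes the low end of the quoted price range applies to input tokens and the high end to output tokens.

```python
# Back-of-envelope comparison of per-query API revenue with the ~$0.36
# operating-cost estimate quoted above. Token counts are illustrative
# assumptions, not measured values.
PRICE_PER_M_INPUT = 5.0     # $ per 1M tokens (low end of quoted range)
PRICE_PER_M_OUTPUT = 15.0   # $ per 1M tokens (high end of quoted range)
OPERATING_COST_PER_QUERY = 0.36

input_tokens = 500          # assumed prompt size
output_tokens = 700         # assumed response size

revenue = (input_tokens / 1e6) * PRICE_PER_M_INPUT + \
          (output_tokens / 1e6) * PRICE_PER_M_OUTPUT

print(f"Revenue per query: ${revenue:.4f}")
print(f"Operating cost:    ${OPERATING_COST_PER_QUERY:.2f}")
print(f"Margin per query:  ${revenue - OPERATING_COST_PER_QUERY:.4f}")
```

Under these assumptions a query earns roughly a cent against a cost of 36 cents, which is the shape of the problem even if the exact figures differ.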
Advanced AI techniques like Tree of Thoughts (ToT) require hundreds of model calls to generate a single output. This computational intensity drives up costs and limits the scalability of certain AI applications.
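The arithmetic behind this is straightforward. The sketch below counts model calls for a ToT-style search under assumed values for branching factor, depth, and evaluation votes; real implementations prune the tree, so this is a worst-case illustration.

```python
# Rough count of model calls in a Tree-of-Thoughts-style search.
# Branching factor, depth, and votes per node are illustrative assumptions.
def tot_call_estimate(branching: int, depth: int, votes_per_node: int) -> int:
    """Each level expands every surviving node into `branching` candidates,
    and each candidate is scored with `votes_per_node` extra model calls."""
    calls, nodes = 0, 1
    for _ in range(depth):
        candidates = nodes * branching
        calls += candidates                   # generation calls
        calls += candidates * votes_per_node  # evaluation calls
        nodes = candidates                    # no pruning: worst case
    return calls

print(tot_call_estimate(branching=3, depth=3, votes_per_node=2))  # 117 calls
```

Even this modest search spends over a hundred calls to answer a single question, compared with one call for a plain prompt.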
The current state of AI adoption aligns with the "Crossing the Chasm" model of technology adoption9. We are currently in the innovator phase, characterized by high optimism and a focus on "figuring stuff out" rather than widespread practical implementation.
As the industry moves towards the visionary phase, companies are beginning to demonstrate real solutions in niche applications. However, this transition is accompanied by a crash in hype as the reality of the challenging path to profitability sets in.
Unlike previous technological revolutions, this era of AI is marked by significant investment from large tech companies in the US and China. However, the payoff for these investments may be 10-15 years away, raising questions about the long-term commitment of these corporate giants to funding AI research.
The current situation draws parallels to the research labs of the 1950s and 1960s, such as Bell Labs and Xerox PARC. These institutions produced groundbreaking technology but often failed to capitalize on their innovations. There's a possibility that today's tech giants could face a similar fate, with smaller, more agile startups ultimately reaping the rewards of their research.
Major tech companies are actively pushing AI adoption to avoid falling victim to the innovator's dilemma10. They're attempting to lead their customers towards AI adoption, even in the face of slow uptake. Microsoft's pricing strategy for Copilot, which initially required a 300-license minimum at $108,000 per year before single licenses were offered at $360 per year, illustrates the challenges in finding the right balance.
One of the most significant hurdles in AI commercialization is determining appropriate pricing models. Companies are struggling to balance the need for sustainable revenue with the goal of driving adoption and creating value for customers. The CEO of Cohere recently complained that there is little margin in selling chatbot services11. Several pricing strategies have emerged, each with its own trade-offs12.
The complexity of AI pricing is further compounded by factors such as uncertain operational costs, difficulties in quantifying AI's value, data ownership concerns, rapid technological changes, and competitive pressures.
As the industry matures, we can expect pricing models to evolve, potentially moving towards more sophisticated, value-based approaches and dynamic pricing in AI marketplaces. Successful strategies will need to effectively communicate the value of AI offerings while ensuring sustainable growth for providers.
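To see how much the choice of model matters, the toy comparison below contrasts flat per-seat pricing with usage-based pricing for a single hypothetical customer. Every figure is an illustrative assumption, not a quoted price.

```python
# Hypothetical comparison of two common AI pricing models for one customer.
# All figures are illustrative assumptions.
SEAT_PRICE_PER_MONTH = 30.0       # flat per-seat subscription, $
USAGE_PRICE_PER_1K_TOKENS = 0.02  # usage-based metering, $

def per_seat_revenue(seats: int) -> float:
    return seats * SEAT_PRICE_PER_MONTH

def usage_revenue(tokens_per_month: int) -> float:
    return tokens_per_month / 1_000 * USAGE_PRICE_PER_1K_TOKENS

seats = 100
light_usage = 2_000_000       # tokens per month across all seats
heavy_usage = 200_000_000

print(f"Per-seat:            ${per_seat_revenue(seats):,.0f}/month")
print(f"Usage (light users): ${usage_revenue(light_usage):,.0f}/month")
print(f"Usage (heavy users): ${usage_revenue(heavy_usage):,.0f}/month")
```

With these assumptions, per-seat pricing collects the same revenue whether the customer barely uses the product or runs it constantly, while usage-based pricing swings by two orders of magnitude, which is exactly the trade-off providers are wrestling with.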
As AI becomes more powerful and pervasive, ethical considerations and regulatory challenges are coming to the forefront13. Issues such as bias in AI systems, privacy concerns, industry compliance, and the potential for AI to be used in harmful ways are becoming increasingly important. Navigating this complex landscape will be crucial for the industry's long-term success.
There's a growing need for AI education at all levels, from basic digital literacy to advanced technical skills. Organizations must invest in reskilling and upskilling their workforce to effectively leverage AI technologies. This transformation of the workforce presents both challenges and opportunities for individuals and organizations alike.
As AI systems become more complex, the need for explainable AI (XAI) grows. Stakeholders, including end-users, regulators, and developers, need to understand how AI systems arrive at their decisions. Improving the transparency and interpretability of AI models is crucial for building trust and ensuring responsible deployment14.
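There are many XAI techniques; one simple, model-agnostic example is permutation importance, which asks how much a model's accuracy drops when each input feature is scrambled. The sketch below is a generic illustration of that idea, not a method tied to any particular system discussed here.

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic explainability sketch: shuffle one feature at a time
    and measure how much the model's score drops. A large drop suggests the
    model relies heavily on that feature."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the information in feature j
            drops.append(baseline - metric(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy usage: a "model" that depends mostly on feature 0.
X = np.random.default_rng(1).normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.1 * X[:, 1]
predict = lambda A: 2.0 * A[:, 0] + 0.1 * A[:, 1]
r2 = lambda t, p: 1 - np.sum((t - p) ** 2) / np.sum((t - np.mean(t)) ** 2)
print(permutation_importance(predict, X, y, r2))
```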
The training and operation of large AI models require significant computational resources, leading to high energy consumption. As AI adoption grows, addressing the environmental impact of these systems will become increasingly important. Developing more energy-efficient AI architectures and promoting sustainable AI practices will be key challenges for the industry.
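A rough, illustrative estimate shows the scale involved; every figure below is an assumption chosen only to demonstrate the arithmetic.

```python
# Rough, illustrative estimate of the energy used by one large training run.
# Every figure is an assumption chosen only to show the arithmetic.
gpus = 1_000             # accelerators used in the run
power_kw_per_gpu = 0.7   # average draw per accelerator, kW
hours = 30 * 24          # one month of continuous training
pue = 1.2                # data-centre overhead factor

energy_mwh = gpus * power_kw_per_gpu * hours * pue / 1_000
print(f"Estimated training energy: {energy_mwh:,.0f} MWh")
```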
As AI becomes more prevalent across industries, there's a growing need for standardized governance frameworks and best practices. Establishing industry-wide standards for AI development, deployment, and monitoring will be crucial for ensuring responsible and consistent use of the technology.
Copyright holders in certain countries are concerned about their material being used to train AI models. Japan and the United States exemplify the opposite extremes countries can take. In Japan, AI models can be trained on copyrighted material without legal repercussions. In the US, by contrast, major copyright holders argue that training an AI on copyrighted material is a legal violation.
A legitimate concern is that AI models can consume far more information than any human could absorb in a lifetime. Licensing deals will no doubt be struck that give the largest models access to this material, but it is unclear whether such arrangements genuinely advance AI as a whole.
The AI industry is at a critical juncture. While the technology has shown immense promise, it faces significant challenges in terms of adoption, cost-effectiveness, and practical implementation. As we approach the "chasm" in AI adoption, the focus must shift towards developing quality applications that deliver tangible value to customers.
The future of AI will likely be shaped by how well the industry can address these challenges. This includes improving the technology itself, developing sustainable business models, navigating regulatory landscapes, and effectively managing societal impacts. While the path forward may be challenging, the potential benefits of AI remain enormous, promising to transform industries and society in profound ways.
As we move forward, it will be crucial for stakeholders across the AI ecosystem – from researchers and developers to business leaders and policymakers – to collaborate in addressing these challenges. By doing so, we can work towards realizing the full potential of AI while mitigating its risks and ensuring its benefits are broadly distributed across society.
Written by: Dr Oliver King-Smith, CEO of smartR AI, a company which develops applications based on its SCOTi® AI and alertR frameworks.
References:
1. "Artificial intelligence is losing hype", The Economist, 19 August 2024
2. "Tech layoffs 2024 list", TechCrunch, 15 August 2024, https://techcrunch.com/2024/08/15/tech-layoffs-2024-list/
3. "OpenAI could be on the brink of bankruptcy in under 12 months, with projections of $5 billion in losses", Kevin Okemwa, Windows Central, 25 July 2024
4. "Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025", Gartner, 29 July 2024
5. "Detecting hallucinations in large language models using semantic entropy", Sebastian Farquhar et al., Nature, 19 June 2024
6. "The Impact of Poor Data Quality (and How to Fix It)", Keith D. Foote, Dataversity, 1 March 2023
7. "You won't believe how much ChatGPT costs to operate", Fionna Agomuoh, Digital Trends, 20 April 2023
8. OpenAI API pricing, https://openai.com/api/pricing/
9. "Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers", Geoffrey A. Moore, 2014
10. "The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail", Clayton Christensen, 1997
11. "What margins? AI's business model is changing fast, says Cohere founder", Maxwell Zeff, TechCrunch, 19 August 2024
12. "7 AI pricing models and which to use for profitable growth", Alvaro Morales, Orb, 22 May 2024
13. "Ethical and regulatory challenges of AI technologies in healthcare: A narrative review", Ciro Mennella, Umberto Maniscalco et al., Heliyon, 2024
14. "Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence", Sajid Ali et al., Information Fusion, 2023
Image credit: Freepik https://www.freepik.com/free-photo/workers-using-ai-computing-simulation_134840249.htm#fromView=image_search_similar&page=1&position=0&uuid=6adc7e50-0e55-41bf-9410-f8edcbda3256