The dust storm kicked up by the Generative AI frenzy that followed the launch of ChatGPT a year ago appears to be gradually settling. While it brings the phase of excited delirium to a halt, it also amplifies a mellowing realism among both early adopters (explorers) and passive copycats about the promise and true potential of Artificial Intelligence in general and Generative AI in particular. At the same time, the wide proliferation of LLMs – both open-source and proprietary models – and their easy accessibility have fuelled a mass wave of democratization of AI across business organizations of different sizes and stripes.
Reliability of embryonic LLMs yet to cross the confidence threshold
With frequent news of LLMs spitting out gibberish, generating biased analysis or producing nonsensical images, the signs of immaturity in AI models and the deficiencies in their built-in intelligence are amply highlighted. Even if the question of AI replacing humans is left to conference room debates, persisting concerns about inherent bias and the low explainability of outcomes from AI models, as well as the threat of LLMs being exploited for malicious purposes, are hard to ignore. Meanwhile, despite publicly expressed concerns from regulators and policymakers, the evolution of a harmonized global framework establishing comprehensive guardrails against AI risks remains on an uncertain path.
Against this backdrop, some digital inventors’ fancied march towards Artificial General Intelligence (AGI) is certain to be a long and meandering journey. By extension, the dream of autonomous agents able to take independent decisions and perform self-governed actions appears far off on the horizon, if not an outright pipedream. Even if the promise of productivity gains appears appealing, the present generation of embryonic LLMs fundamentally exhibits low credibility. Thus, their acceptability in wider business contexts in their present form is bound to remain limited – to the extent of familiarization and experimentation.
Continued evolution of LLMs
Amid the uncertainty reigning over the AI evolution path, a good sign is that the AI ecosystem remains abuzz with the launch of hundreds of open-source and proprietary models in recent times. Importantly, there is a marked shift towards smaller, domain-specific models, which consume enterprise data and align with a more nuanced business context rather than relying on public data and domain-agnostic training. Known as small language models (SLMs), these models have a significantly smaller number of parameters – typically in the order of a few million to a few billion (unlike the hundreds of billions or few trillion in the case of LLMs). SLMs typically rely on a knowledge distillation mechanism to transfer knowledge from pre-trained LLMs and offer better efficiency and customization options. A related development is the emergence of ensembling frameworks, which leverage the varied strengths of different open LLMs and attain superior performance by mixing and balancing their outputs. Meanwhile, multi-agent orchestration and Adaptive AI, with the flexibility to deal with evolving business scenarios by learning from experience, are rapid advancements in the larger AI ecosystem.
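For readers curious about the mechanics, the knowledge distillation step mentioned above can be illustrated with a minimal PyTorch-style sketch. The teacher and student here are stand-in tensors, and the temperature and weighting values are illustrative assumptions, not a prescription for how any particular SLM vendor trains its models.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-label loss (imitate the teacher's output distribution)
    with the ordinary hard-label cross-entropy on the ground truth."""
    # Soften both distributions with the temperature, then measure KL divergence.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(soft_student, soft_targets,
                         reduction="batchmean") * (temperature ** 2)

    # Standard supervised loss on the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # alpha controls how strongly the student imitates the teacher.
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Illustrative usage with random tensors standing in for a large teacher LLM
# and a much smaller student model (hypothetical shapes).
teacher_logits = torch.randn(8, 32000)                        # teacher outputs over a vocabulary
student_logits = torch.randn(8, 32000, requires_grad=True)    # student outputs for the same batch
labels = torch.randint(0, 32000, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```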
The dilemma of AI adoption strategy
Witnessing an intensifying AI arms race in the business ecosystem, no firm wants to be seen lagging behind the AI hype cycle. Business firms’ strong urge to be on the AI bandwagon is largely driven by the idea of somehow getting equipped with AI-powered capabilities, which at least creates a veneer of competitive differentiation. However, even after rounds of exploration and experimentation across a variety of novelty use cases, they are yet to solve the puzzle of harnessing the power of AI in specific business contexts to derive the promised business value. Certainly, finding the answer to the next-level consideration – where AI fits best, unravelling unique insights versus performing autonomous actions – becomes the million-dollar question.
Mastering the technological complexity and establishing a well-governed AI / Gen AI environment are issues too complex to have an immediate and all-encompassing answer. At the same time, with a realistic view of a data landscape fragmented across silos of legacy applications and lines of business (LOBs), AI adoption strategy presents a proverbial chicken-and-egg situation for most organizations. Importantly, the perennial challenges of broken data engineering foundations and leaky data pipelines hardly create an ideal situation for smooth integration of AI technologies and tools into the enterprise landscape. With low participation of business stakeholders, and constraints in technology platforms and tooling, harnessing of AI capabilities presently remains limited to productivity-gain use cases – exploiting the abilities of chatbots, virtual assistants and copilots across manual-intensive processes.
As AI stands at a strategic inflection point, a crucial question arises about firms’ AI adoption strategy: what happens to the promise of AI to shape new opportunities for business innovation in newer realms and unravel unconventional constructs of forward-looking ideas? More importantly, are they ready to take a deep dive and swim through the stages of exploration, experimentation, and evaluation for real-life adoption of bold transformation ideas – specifically focused on new constructs of products, partnerships, and platform models?
Accelerating AI journey: Pragmatic shift in the technology management mindset
The dominant thought behind AI adoption strategy remains that investment in technology tools, computing infrastructure and resources is sufficient, and that the expected business outcomes will somehow follow. While CXOs aim to harness AI-led business value, they follow the old ways of conceptualizing and implementing AI-centric projects – very much like handling traditional IT projects through a typical software development lifecycle (SDLC). To dispel the dilemma arising from an outdated technology management outlook accustomed to software development programs, a vital change in basic dimensions of organizational culture and leadership mindset becomes essential.
The way forward
The AI adoption journey of a business firm requires a fine balance between two extremes of technology management. It involves nuanced maneuvers between walking the safe, steady, and time-tested path and letting innovation opportunities go to waste at one end, and judiciously loosening the thresholds of technology practice to channel inventive gusts and reframe the business core at the other.
Acquiring the hard capabilities of technology platforms, software and tools is necessary to enable the AI innovation journey, but it is not a sufficient condition to realize the promised business value from AI. Firmly anchored on Responsible AI tenets, it is augmented team skills and expertise, together with cross-functional collaboration, that ultimately fuel the exploratory urge and accelerate the AI innovation journey of an enterprise. Certainly, a change in mindset and an entrepreneurial stance in technology management put wind in the sails while exploring the uncharted waters of AI.
Disclaimer: The author is an employee of Tata Consultancy Services Limited (TCS). The opinions expressed herein are the author’s own and do not reflect those of the company.