How can AI become more sustainable?


Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

Artificial intelligence (AI) has become a hot topic in the past year, and it is set to skyrocket in usage in the coming years. Every financial organisation is racing to implement AI technology into their services, and every business and social media network is priming for AI integration.

This is an excerpt from The Future of AI in Financial Services 2025 report, a special edition for the inaugural Finextra event, NextGen AI.

While the reception of AI intruding into every aspect of people's lives has been mixed, there is no denying that it is here, and it is everywhere. From ChatGPT to Google’s Gemini all the way back to Apple’s Siri, AI is a pervasive and unstoppable force.

What is another unstoppable force? The inevitable and destructive march towards climate catastrophe. While AI provides us with the tools to make business operations more sustainable, the technology itself is a black hole for energy and power. What solutions are in play to combat AI’s sustainability flaws while still leveraging its potential to change the world?

Monitoring and transparency are key in developing AI systems

In the past few years, greenwashing and greenhushing have been obstacles to the sustainable transition in the financial and banking sector. While there are still major banking giants that continue to neglect the sustainable outcomes of their operations (BlackRock), the Paris Agreement and UN’s Sustainable Development Goals have seen numerous financial institutions embrace green policies and net zero initiatives that encourage transparency.

For transparency to be measured, there needs to be a standardised set of transparency frameworks designed specifically for AI technology. While some frameworks and measurements are already in place, AI is a whole new game and will require its own set of rules.

Isa Goksu, CTO, UKI and DE, at Globant, stated that API usage needs to be monitored more closely within organisations to ensure transparency in AI. Goksu furthered: “By systematically tracking how APIs are used, organisations gain insight into usage patterns, data flow, and potential compliance issues. This transparency ensures resource utilisation is aligned with organisational policies and objectives while identifying any unauthorised or excessive use of APIs.”
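The kind of systematic API tracking Goksu describes can be sketched in a few lines. The sketch below is a minimal, hypothetical illustration (the names `track_usage`, `generate_text`, and `usage_report` are invented for this example; a real deployment would ship records to a monitoring platform rather than an in-memory dict):

```python
import time
from collections import defaultdict
from functools import wraps

# Hypothetical in-memory usage registry; stands in for an
# organisation's monitoring platform.
usage_log = defaultdict(list)

def track_usage(api_name):
    """Decorator that records every call to a wrapped API function."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            usage_log[api_name].append({
                "timestamp": time.time(),
                "duration_s": time.perf_counter() - start,
            })
            return result
        return wrapper
    return decorator

@track_usage("text-generation")
def generate_text(prompt):
    # Stand-in for a real AI model call.
    return f"response to: {prompt}"

def usage_report():
    """Summarise call counts per API, e.g. to flag excessive use."""
    return {name: len(calls) for name, calls in usage_log.items()}
```

Aggregating call counts and durations per API in this way gives compliance teams the usage patterns needed to spot unauthorised or excessive use.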

Pavel Goldman-Kalaydin, head of AI & ML at Sumsub, emphasised that businesses must be mindful of internal AI processes that may include bias, and should be cautious of the data they use for training AI systems.

Goldman-Kalaydin explained: “For instance, on the topic of inclusivity, AI can project gender or racial biases based on existing stereotypes. Businesses can consider measures such as diverse dataset curation and algorithmic fairness testing, ensuring that their AI-driven CX strategies are not perpetuating harmful stereotypes or excluding certain demographic groups.”

Bahadir Yilmaz, chief analytics officer at ING, added: “To ensure transparency and accountability as AI grows, companies should adopt a comprehensive approach focusing on explainability, auditing, thorough documentation, ethical oversight, traceability and transparency. These steps would help align AI growth with societal values and safeguard individual rights.”

Solving the AI energy problem

To make AI more sustainable, we must look for solutions to the energy problem. According to the International Energy Agency (IEA), datacentres used 1.65 billion gigajoules of electricity, around 2% of global demand. This figure can only increase as AI usage continues to grow: the IEA estimates that by 2026, AI energy consumption will increase by between 35% and 128%.
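To put the IEA figure in more familiar units, gigajoules can be converted to terawatt-hours (1 TWh = 3.6 million GJ); a quick check shows 1.65 billion GJ is roughly 458 TWh:

```python
# Convert the IEA datacentre figure from gigajoules to terawatt-hours.
GJ_PER_TWH = 3.6e6          # 1 TWh = 3.6 million gigajoules
datacentre_gj = 1.65e9      # IEA datacentre electricity use, in GJ

datacentre_twh = datacentre_gj / GJ_PER_TWH
print(round(datacentre_twh))  # prints 458
```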

According to the World Economic Forum, GenAI systems use 33 times more energy to complete a task than task-specific software. The Forum estimates that the electricity consumption of datacentres will increase exponentially over the next couple of years, as seen in the figure below.

The AI-energy issue is further detailed in the Finextra long read: AI is eating up our energy – how will sustainable ambitions survive?

UK Lord Chris Holmes stated that “AI has the ability to optimise its own operations and make those real-time adjustments.” He indicated that AI can be used to make itself more sustainable, and that renewable energy could be the solution for powering this technology.

Shaun Hurst, principal regulatory adviser at Smarsh, commented: “The most important approach firms should follow is smart planning. Leaders can identify how best to manage data centres, cool systems and share computing power, and, as a result, this can help organisations to run AI efficiently, cut their energy bills and ensure more sustainable use of their technologies. Importantly, cloud computing has proven to be very effective for achieving this, due to its flexibility and reduced environmental impact.”

Goksu also emphasised the role of cloud computing in preventing energy wastage and providing a flexible and scalable infrastructure: “Federated learning also emerges as a promising field, training AI models on decentralised data, thus minimising the energy, network load, and storage traditionally required for central data aggregation.”
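The federated learning idea Goksu mentions can be illustrated with a toy federated-averaging (FedAvg) round: each client fits a model on its own data locally, and only the model parameters travel to the server, never the raw data. This is a deliberately minimal sketch with a one-parameter linear model; the client data and function names are invented for the example:

```python
def local_update(weights, data, lr=0.1):
    """One gradient step of a 1-D linear model y = w*x on local data.

    Only this client ever sees its own (x, y) pairs.
    """
    w = weights
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights):
    """Server aggregates by averaging client parameters (FedAvg)."""
    return sum(client_weights) / len(client_weights)

# Two clients, each holding private data consistent with y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):  # federated rounds
    w = federated_average([local_update(w, data) for data in clients])
# w converges towards 2.0 without the data ever being pooled centrally.
```

Because only the scalar `w` crosses the network each round, the energy, bandwidth, and storage costs of central data aggregation are avoided, which is the efficiency argument for the approach.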

He continued that as AI develops, it will be updated with optimised algorithms that lower energy consumption by streamlining data processing.

Yilmaz further detailed what strategies could be utilised to lower AI energy consumption: “It is crucial that AI is environmentally and economically viable in the long run. To improve AI efficiency and sustainability, tactics like model optimisation, efficient algorithms, edge computing, energy efficient hardware, renewable energy could be used to reduce energy consumption and thus environmental impact. Additionally, fostering trust is crucial for the long-term adoption of AI.”
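One of the model-optimisation tactics Yilmaz lists can be made concrete with a toy example of post-training quantisation: representing float weights as 8-bit integers, which cuts memory traffic and, on supporting hardware, energy per inference. The sketch below is a simplified illustration (symmetric scaling, invented example weights), not a production scheme:

```python
def quantize(weights):
    """Map float weights to int8 range [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.51, -1.27, 0.03, 0.89]
q, scale = quantize(weights)       # q fits in 1 byte per weight vs 4
restored = dequantize(q, scale)    # close to the originals
```

The rounding error is bounded by half the scale step, which is why quantisation usually costs little accuracy while quartering the storage footprint of each weight.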

Hurst elaborated: “Another important tactic is simply getting the basics right. Think of this as keeping your digital house in order. Clean data, relating to accurate, complete and consistent data; using the right sized tools; and having regular check-ins to identify any waste are all key. It’s like servicing your car; regular maintenance keeps things running smoothly and efficiently for the long haul.”

Regulation keeping AI in check

In Europe and the UK, there are several regulations in place that will drive AI towards being more transparent, and therefore more sustainable. The EU AI Act, enacted by the European Parliament earlier this year, is currently a major player; it addresses the significant risks that AI brings to the market and sets out transparency checks and innovation guidelines.

Goldman-Kalaydin stated: “Companies need to be pragmatic. AI governance does not mean governing all AI systems. Be proportionate, pragmatic and risk based. Focus on what truly matters. For example, despite the hype around the EU AI Act, the reality is that most companies will not even be subjected to it, much less to its heaviest regulatory requirements. In that sense, getting to know your AI technologies, their level of risk, and the regulations that are applicable to the geographies and/or markets your company operates in, is usually a first good step to kick off your journey towards more trustworthiness in AI.

“For fintech companies, regulation is not necessarily a barrier but a familiar landscape. Having long operated within stringent financial compliance frameworks, fintechs should be well-equipped to adapt to new AI regulations, viewing them as a natural extension of established practices in data protection and transaction security.”

Goldman-Kalaydin furthered that AI legislation should include detailed requirements for safety and testing, as well as deepfake-detection measures such as mandatory watermarking, to prevent the further spread of AI fraud and misinformation. They continued that stakeholders, policymakers, and regulators must collaborate to combat AI fraud and create a strong regulatory framework.

Lord Holmes commented on the AI Bill he proposed in the UK Parliament, stating: “In many ways, deploying AI solves for AI. Also, I propose in my Bill that every business which develops, deploys, or uses AI has an AI responsible officer. For this, don’t think burdensome bureaucratic overcompliance, think role rather than individual, all underpinned by a proportionality principle.”

Where do we go from here?

AI is still very new, and it is being applied everywhere at once. Consumers do not want AI in every aspect of their online experience; they want more efficiency, faster speeds, and greater accuracy, and AI can help deliver that. However, it is important to note that AI is not the be-all and end-all of technological innovation.

Moving forward, both governments and businesses need to be more proactive about controlling the fallout of AI technologies, including AI-driven misinformation and energy wastage from AI usage.

Sehrish Alikhan, Reporter at Finextra
