It takes only a quick scan of daily media headlines to know we are collectively riding a wave of artificial intelligence. But, for all the benefits that come with AI — and there are many — there is also a downside to consider, especially in the business arena. While AI is helping make financial institutions smarter, faster, and more efficient, it is also making criminals smarter, faster, and more efficient. The same technologies that are driving innovation and improving decision making are also expanding the threat landscape. Organizations must understand the risks AI can present, and be ready to take proactive steps to ensure they operate in a manner that is both private and secure.
One of the assets foundational to the continued optimization of AI for financial services is data. AI is data hungry, so the availability of broader, richer data sources for training and evaluation/inference means a better chance of effectively leveraging AI in ways that drive meaningful, positive business outcomes. Success in the AI arena takes many forms, but imagine the impact of machine learning (ML) models optimized to efficiently assess customer risk, reduce false positives, and flag fraudulent activity. Or AI-driven process improvements that support automation and improve operational efficiencies. These advances can meaningfully improve the outcomes of day-to-day activity and, ultimately, the organization’s bottom line.
While the data-driven value of AI may be clear, it is also not hard to see that leveraging data assets to fuel these breakthroughs can introduce risk of exposure. Not only do financial institutions need to be mindful of the regulatory boundaries that govern the sector, but they also need to be aware of the increased risk an AI-enhanced threat landscape presents for organizational assets such as intellectual property, competitive advantage, and even their reputation with consumers. It is critical that the benefits gained via AI do not come at the cost of sacrificing privacy and security.
As is often the case, the risks associated with technology advances such as those we’re currently seeing in the AI arena can be offset with other breakthroughs in technology. Privacy Enhancing Technologies (PETs) are a family of technologies uniquely equipped to enable, enhance, and preserve the privacy of data throughout its lifecycle. For AI use cases, they allow users to securely train and evaluate ML models using data sources across silos and boundaries, including cross-jurisdictional, third-party, and publicly available datasets. By protecting data while it’s being used or processed (Data in Use) and complementing existing Data in Transit and Data at Rest protections, PETs can enable AI capabilities that enhance financial service organizations’ decision making, protect privacy, and combat broader legal, societal, and global security risks. In addition to enabling this net new data usage, PETs also help ensure sensitive assets, including ML models trained over regulated data sources, remain protected at all points in the processing lifecycle. This limits the increased risk presented by even the most complex threats within the AI landscape, such as data spoofing, model poisoning, and adversarial ML.
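To make the Data in Use gap concrete before the PETs examples that follow: conventional encryption protects records at rest and in transit, but they must be decrypted before any model can score them, which is exactly the exposure described above. The short Python sketch below illustrates this using the open-source `cryptography` package; the `risk_score` function and the transaction value are hypothetical stand-ins, not part of any real system.

```python
# Minimal sketch: data-at-rest encryption alone still leaves a Data in Use gap.
# Assumes the `cryptography` package is installed (pip install cryptography);
# risk_score() is a hypothetical stand-in for a trained ML model.
from cryptography.fernet import Fernet

def risk_score(transaction_amount: float) -> float:
    """Toy stand-in for an ML risk model."""
    return min(1.0, transaction_amount / 10_000)

key = Fernet.generate_key()              # symmetric key protecting data at rest
vault = Fernet(key)

ciphertext = vault.encrypt(b"8500.00")   # record protected at rest / in transit

# To score the record it must first be decrypted, so the plaintext sits in the
# memory of whichever environment runs the model -- the Data in Use exposure
# that PETs such as SMPC and HE are designed to close.
amount = float(vault.decrypt(ciphertext).decode())
print(f"Risk score computed on exposed plaintext: {risk_score(amount):.2f}")
```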
To understand how PETs protect AI and reduce the risk presented by an AI-powered threat landscape in practice, let’s look at a few examples specific to the financial services industry. Using a core technology in the PETs family, secure multiparty computation (SMPC), organizations can securely train ML models across jurisdictions. For example, a bank looking to enrich an ML risk model using datasets located in another region needs to protect that model during training to ensure the privacy and security of both the regulated data upon which the model was originally trained and the regulated data included in the cross-jurisdictional dataset. If the model is exposed during training, adversaries can reverse-engineer it to extract sensitive information, putting the organization at risk of violating privacy regulations; any exposure of the model itself is a direct liability, and PETs are designed to remove it. By using a PETs-powered encrypted training solution, financial firms can safely train ML models on datasets in other jurisdictions without moving or pooling data, improving the risk model and enhancing the decision-making workflow.
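The article does not prescribe a particular protocol, but a minimal sketch of one of SMPC’s core building blocks, additive secret sharing, shows why no single party ever sees another’s raw inputs during such a joint computation. The branch names and update values below are purely hypothetical, and a production system would rely on a vetted SMPC framework rather than this illustration.

```python
# Minimal sketch of additive secret sharing, a core SMPC building block.
# Values are toy stand-ins for per-jurisdiction model updates (e.g. gradient
# components scaled to integers); illustrative only, not production code.
import secrets

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n_parties random shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Hypothetical private inputs held by three branches in different jurisdictions.
private_updates = {"branch_eu": 412, "branch_us": 397, "branch_apac": 405}
n = len(private_updates)

# 1. Each branch secret-shares its own update, sending one share to each peer.
all_shares = [share(v, n) for v in private_updates.values()]

# 2. Each party sums only the shares it received; individual inputs stay hidden.
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]

# 3. Only the combined result is ever reconstructed.
print("Aggregate update:", reconstruct(partial_sums))        # 1214
print("Sanity check:", sum(private_updates.values()))        # 1214
```

Real encrypted-training systems extend this idea to full model updates, but the privacy argument is the same: parties exchange only random-looking shares, never raw regulated data or the unprotected model.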
Another core member of the PETs family, homomorphic encryption (HE), helps protect models so that they can be securely leveraged outside the financial institution’s trusted walls. Analysts can use sensitive ML models to securely extract insights from data sources residing in other jurisdictions or owned by third parties, even when the models are proprietary or trained on regulated data. For example, a bank may want to enhance its customer risk model by leveraging datasets sourced from another of its operating jurisdictions. Currently, data localization and other privacy regulations limit such efforts, even between branches of the same bank, because of the risk of exposing both the regulated data in the new jurisdiction’s dataset and the sensitive data upon which the model was originally trained. By using HE to encrypt the model, the entity can securely evaluate it across multiple jurisdictions to enrich the model’s accuracy and improve outcomes while ensuring compliance.
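As a hedged illustration of that flow (not a description of any specific product): an additively homomorphic scheme such as Paillier already supports ciphertext addition and ciphertext-by-plaintext multiplication, which is enough to evaluate a simple linear risk model whose weights remain encrypted while a branch in another jurisdiction scores its own data locally. The sketch below uses the open-source python-paillier (`phe`) package; the weights and features are hypothetical.

```python
# Minimal sketch of encrypted model evaluation with Paillier homomorphic
# encryption via python-paillier (pip install phe). Weights and features are
# hypothetical; real HE-for-ML deployments often use lattice-based schemes
# such as CKKS, but the overall flow is the same.
from phe import paillier

# --- Model owner (bank HQ): generates keys and encrypts the model weights. ---
public_key, private_key = paillier.generate_paillier_keypair()
weights = [0.8, -1.2, 0.05]                  # hypothetical linear risk model
bias = 0.3
enc_weights = [public_key.encrypt(w) for w in weights]
enc_bias = public_key.encrypt(bias)

# --- Data holder (branch in another jurisdiction): sees only ciphertexts. ---
features = [1.0, 0.4, 52.0]                  # hypothetical local customer data
# Paillier supports ciphertext + ciphertext and ciphertext * plaintext, so the
# branch can compute the encrypted score without ever decrypting the weights.
enc_score = enc_bias
for enc_w, x in zip(enc_weights, features):
    enc_score = enc_score + enc_w * x

# --- Model owner: only the encrypted score travels back and is decrypted. ---
# The branch's raw data never leaves its jurisdiction, and the model is never
# exposed in the clear outside the bank's trusted environment.
print(f"Risk score: {private_key.decrypt(enc_score):.3f}")
```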
With its increased use, the need for responsible, safe, and trustworthy AI has grown stronger. Globally influential groups including G7 leaders, the White House, and representatives from the 28 countries that participated in the UK's AI Safety Summit have highlighted secure AI as an area of critical importance for businesses across verticals. Technologies like PETs play a key role in addressing this challenge by helping enable security and mitigate data privacy risks, allowing financial institutions to confidently take advantage of the promise of AI despite an ever-increasing threat landscape.