In a recent talk I attended, a legal expert advised against inputting personal data into AI models. But is this blanket statement truly accurate? The reality is far more nuanced, especially under the EU's General Data Protection Regulation (GDPR), widely regarded as the benchmark for personal data protection. This article explores how GDPR intersects with the use of personal information in AI models, focusing on Large Language Models (LLMs).
AI is a vast field, but our focus here is on GPT-style LLMs - the cutting-edge technology powering services from OpenAI, Google, Microsoft, and Anthropic.
LLM deployment involves two key stages: training and inference. While training is a highly technical process undertaken by few, inference - the act of using the model - is accessible to millions. Every time you pose a question to ChatGPT, you're engaging in inference.
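To make the distinction concrete, here is roughly what a single inference call looks like in Python. This is a minimal sketch using the OpenAI SDK; the model name and prompt are illustrative rather than taken from any particular deployment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One inference call: a prompt goes in, a completion comes out.
# The model's weights are fixed at this point - nothing about this
# exchange retrains the model.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarise GDPR in one sentence."}],
)
print(response.choices[0].message.content)
```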
But is it safe to input personal data during inference? The answer is: it depends.
During inference, the model itself doesn't retain data: your input and its output are not recorded in, or remembered by, the model's weights. This means that if both input and output are handled in compliance with GDPR, and if the transformations the LLM applies to the data are permissible under law, then using personal data can be safe.
However, several crucial factors warrant consideration. With a public LLM service, your input is transmitted to a third party: the provider may log prompts and outputs, retain them for abuse monitoring, use them to improve future models, or process them on servers outside the EU - each of which carries GDPR implications.
To mitigate these risks, we recommend using private LLMs - models hosted locally within your controlled ecosystem, where you retain control over data handling. When using such an LLM, you pass GDPR-controlled data into the "context", which exists briefly in RAM before being cleared for the next request. The process is analogous to loading data from a database for display on a screen.
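As a sketch of that pattern, the snippet below loads personal data into the context of a locally hosted model for a single request. It assumes the llama-cpp-python library and a locally stored GGUF model file; the model path, record fields, and helper function are hypothetical.

```python
from llama_cpp import Llama

# Load a model stored on your own infrastructure - no prompt or
# completion leaves this machine. (Path is illustrative.)
llm = Llama(model_path="./models/example-model.gguf", n_ctx=4096)

def summarise_record(record: dict) -> str:
    # Personal data enters only the per-request context, held in RAM...
    prompt = (
        "Summarise this customer record in one sentence:\n"
        f"Name: {record['name']}\n"
        f"Notes: {record['notes']}\n"
    )
    result = llm(prompt, max_tokens=128)
    # ...and is discarded when the request completes; the model's
    # weights are unchanged, much like rendering a database row on screen.
    return result["choices"][0]["text"]

print(summarise_record({"name": "Jane Doe", "notes": "Requested account closure."}))
```

Hosting the model locally removes the third-party transfer, but the processing itself still needs a lawful basis, which brings us to the regulation's core requirements.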
In essence, LLMs are similar to other data-handling software when it comes to GDPR compliance. The regulation requires data processing to be lawful, fair, and transparent, conducted for specified, explicit, and legitimate purposes. This necessitates careful consideration of how you use the LLM.
In conclusion, using LLMs in a GDPR-compliant manner is entirely feasible. While data storage isn't a significant concern during inference, the key lies in how you're transforming the data. By ensuring transparency and fairness in your LLM's data transformations, you can harness the power of this technology while remaining compliant with data protection regulations.
Written by Dr Oliver King-Smith, CEO of smartR AI, a company which develops applications based on its SCOTi® AI and alertR frameworks.
This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.