What is a Deepfake?
The term "deepfake" is a blend of "deep learning" (a form of artificial intelligence) and "fake." A deepfake is synthetic media (video, image, or audio) that has been digitally manipulated or entirely generated using sophisticated AI technology to convincingly show a person appearing to say or do something they never actually said or did.
What is AI Slop?
The term "AI slop" refers to digital content—such as text, images, videos, or audio—that has been created using generative artificial intelligence, and is characterized by a lack of effort, quality, or deeper meaning, often produced in an overwhelming volume.
It has a pejorative connotation, similar to the way "spam" is used to describe unwanted, low-value content.
AI slop is viewed as an "environmental pollution" problem for the internet, where the costs of mass production are nearly zero, but the cost to the information ecosystem is immense.
AI slop contributes significantly to the erosion of trust in the internet by blurring the line between authentic human-created content, machine-generated noise, and outright fraud.
Lies! It’s ALL Damn Lies!
AI slop and deepfakes are fundamentally similar: both are forms of synthetic media created by the same powerful generative AI models (text-to-image and text-to-video). Both contribute to a widespread erosion of trust online by blurring the line between human-made content and digital fabrication. While a deepfake is a targeted, high-quality forgery designed to maliciously deceive (e.g., faking a political speech), AI slop is low-quality content mass-produced with indifference to accuracy or effort, often just for clicks.
Nevertheless, both types of content flood the digital ecosystem, making it increasingly difficult for users to distinguish authentic, verified information from machine-generated noise.
Key Characteristics of AI Slop
Examples of AI Slop
The general concern is that the rapid proliferation of AI slop is polluting the internet, making it harder to find high-quality, authentic human-created content and blurring the lines between real and fabricated information.
The contribution to mistrust is not primarily about malicious deepfakes (though that is a related trust problem); it's about the sheer volume and mediocrity of content that makes the web unreliable.
AI Slop Drives Mistrust
"Enshittification" is a term coined by writer and activist Cory Doctorow. The widespread adoption of AI slop is accelerating what some critics call the enshittification of digital platforms—the degradation of services as platforms prioritize profit (through mass-produced, algorithm-friendly content) over user value.
The core of the mistrust is the inability to answer two simple questions with confidence: "Did a real person make this?" and "Is this true?"
Protecting yourself from AI slop and deepfakes requires a dual approach: critical consumption (protection) and responsible behavior (not spreading). The core defense is applying strong media literacy skills to everything you see online.
Critical Consumption (Protection)
Protecting yourself from the proliferation of AI slop and deepfakes requires developing strong habits of critical consumption. The core practice is to refuse to blindly trust what you see and to develop systematic ways of verifying authenticity. This involves checking the source—prioritizing content from established, fact-checked news outlets over anonymous or clickbait accounts that have a financial motive to spread low-effort content.
You must inspect the media itself by slowing down and looking closely for tell-tale AI errors, such as distorted hands, missing jewelry, or unnatural movements in videos.
Source Verification:
Inspecting Media and Spotting the "Tells":
Slow down and inspect closely. Look for the visual artifacts and anomalies that AI generators frequently get wrong; a minimal programmatic aid is sketched below.
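For readers who want a programmatic starting point, the Python sketch below reads an image's EXIF metadata and flags entries that mention common generator software. This is an illustrative sketch under assumptions, not a reliable detector: the file name and the marker list are hypothetical examples, and metadata can be stripped or forged, so a clean result proves nothing.

```python
# A minimal sketch, not a reliable detector: read an image's EXIF metadata
# and flag entries that mention known AI generator software.
# Assumes the Pillow library is installed (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS

# Illustrative (hypothetical) marker strings; real generator metadata varies
# widely and is often absent or stripped entirely.
SUSPECT_MARKERS = ["stable diffusion", "midjourney", "dall-e", "firefly", "generative"]

def inspect_image_metadata(path: str) -> list[str]:
    """Return metadata entries that hint an image may be AI-generated."""
    hits = []
    with Image.open(path) as img:
        exif = img.getexif()
        for tag_id, value in exif.items():
            tag_name = TAGS.get(tag_id, str(tag_id))
            if any(marker in str(value).lower() for marker in SUSPECT_MARKERS):
                hits.append(f"{tag_name}: {value}")
    return hits

if __name__ == "__main__":
    findings = inspect_image_metadata("downloaded_image.jpg")  # hypothetical file name
    if findings:
        print("Possible generator metadata found:")
        for entry in findings:
            print(" -", entry)
    else:
        print("No generator markers found (this does NOT prove the image is authentic).")
```

At best, a metadata check is one weak signal to combine with the visual inspection and source-verification habits described here; emerging provenance standards such as C2PA Content Credentials aim to make this kind of verification more dependable.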
Fact-Checking and Skepticism:
Your personal sharing habits are the most powerful tool against the spread of synthetic content. By adopting these habits, you move from being a passive consumer to an active filter: a digitally literate consumer engaged in protecting yourself and others from misinformation and lies. These habits are the most effective way to protect the integrity of the digital ecosystem.