Welcome to Deepfaked.Video, the world’s largest collection of synthetic videos found online.

‘Synthetic media’, or AI-generated media, is a revolutionary development. AI’s capability to generate all forms of digital content (from images to video) is accelerating at an exponential rate.

Soon, AI may become the dominant means of all content production. This will dramatically change the media ecosystem, bringing both opportunities and risks. Whilst synthetic media will undoubtedly power and democratise human creativity, some bad-faith actors will try to use it maliciously. AI can amplify the issues we already have with spam, disinformation, fraud and harassment. Reducing this harm is crucial, and it’s important that we work together as an industry to combat the threats AI presents.

We believe that the best way to reduce harm is public education. This is why we created Deepfaked.Video: a resource that helps debunk existing deepfakes and gives you the right tools to identify doctored videos for yourself. This way you can proactively make a judgement call (or ‘prebunk’) when faced with new content.

With more than 130 video examples and expert commentary, Deepfaked.Video can help regulators, journalists and the general public better identify AI-generated media. It is our hope that, by referring back to credible media sources with each case, people will not only be better informed to make judgement calls on authenticity, but that we will also help contribute to a more trustworthy news ecosystem.

We've labelled each case with the above symbols to show whether it is a deepfake, a cheapfake, debunked (meaning it was reported as a deepfake but actually wasn't) or, of course, still unknown and a mystery.


In the database, we also analyse the intent with which the content was created. It is our belief that the public discourse should focus on the good or bad faith intent of the creator. Was the video created for a legitimate purpose, such as education or entertainment? Or was it malicious, questionable or intended to harm? Whilst we recognise that these labels of intent do not cover the full spectrum of human decisions, they serve to portray the numerous contexts in which synthetic media is used.

We have also included case studies of purported deepfakes which are either unverified or which have turned out to be not generated using AI. In many cases, traditional video editing methods are used even though they are reported as AI-generated. This goes some way to illustrate the emergence of a media ecosystem where it becomes increasingly difficult to distinguish synthetic from traditional media.

As we grow this database of videos, we aim to integrate a wealth of expert commentary with the hope that it allows constituent groups (from policy-makers to journalists and the public) to use it as a resource to better understand and identify synthetic media as it revolutionises the future of the information ecosystem.

We welcome public submissions of interesting new cases which we will review and add to our database.


We are a coalition of practitioners dedicated to debunking the myths behind deepfakes, arming the wider public with the tools they need to identify what is and what isn’t AI-generated. We recognise that with the advent of this technology, there has been too much confusion about what is real and what is fake, sometimes leading to scams, fraud and misinformation.

Expert Comments

Nina Schick

Deepfakes Expert and Author
"Malicious deepfakes are an incredibly sophisticated form of visual disinformation in an already corroding information ecosystem. Deepfakes undermine the integrity of visual media as they are not only increasingly prevalent but often also misidentified.

The public need a valuable resource that allows journalists and other interested parties to explore how deepfakes intersect with other forms of visual disinformation, and to understand how malicious AI-generated video is contributing to rising mistrust in visual media. Only by understanding these dynamics can we start to arm ourselves with the knowledge to build an inherently safer information ecosystem."

Victor Riparbelli

CEO & Co-Founder, Synthesia
"Recent advancements in AI-generated speech, image and video seem almost magical. The creative possibilities are endless, but it’s important that we put the right safety measures in place to reduce harm. Increasing the population-level literacy in AI is the starting point for defining effective regulatory, technological and educational measures."

Luisa Verdoliva

Multimedia Forensics Lab Lead at University of Naples and Visiting Scientist at Google AI
“Synthetic media is here to stay. It is the future of communication, entertainment and art. Deepfakes are the unavoidable dark side of that. Can we defeat them? We struggle to design better and better deepfake detection tools, but the ultimate line of defence is people's awareness and consciousness. Our strongest asset is continuous education.”

Siwei Lyu

Professor of Computer Science and Engineering at the University at Buffalo
“Synthetic media, by itself, is the tour-de-force of modern AI and deep learning technologies. However, falling into the wrong hands, it becomes a deepfake. Enhancing awareness and building resilience are key to combating the misuse of synthetic media. This calls for broad collaborations across all stakeholders, including government agencies, researchers, media platforms, and users.”

Henry Ajder

Leading expert on synthetic media and virtuality
"Generative AI is fundamentally reshaping the way we think about creativity, communication, and identity. Whether it’s synthetic voice in Hollywood films or virtual influencers modelling high-end fashion, long gone are the days when the synthetic could automatically be contrasted with the authentic.

We’re still in the nascent stages of this generative revolution; the future will be one where synthetic media is ubiquitous and democratised in daily life, not as a frivolous novelty, but powering groundbreaking advances in entertainment, education, and accessibility.

However, some generative tools and malicious deepfakes are already being weaponised at scale, particularly against women and minority groups, and threaten trust in our already beleaguered digital media ecosystem.  

While the majority of synthetic media tools are used for benign purposes, companies and platforms in this space have a responsibility to shepherd this powerful technology. Deploying careful user safety policies on consent and data processing, balancing tool openness with potential for misuse, and implementing robust content moderation procedures are all critical to minimising harm and setting positive norms and standards. 

Regulation is also a key piece of the puzzle, but it’s crucial lawmakers don’t inadvertently stifle creative and innovative forms of the technology by casting too broad a net or failing to understand the dynamics of how malicious tools are shared and accessed."

To start the conversation about what counts as a ‘good’ or ‘bad’ deepfake, we’ve created the intent flowchart below. You can apply it to any deepfake case you might have at hand. This framework is, of course, shaped by our own views of acceptable use. It aims to facilitate an open dialogue and a better understanding of the intent behind deepfake content.


Manipulating video is nothing new. People have been altering video for decades. The difference is that previously it took a lot of time, required a unique set of skills, and came at a high cost. AI-generated media removes those barriers, which is what makes the phenomenon so groundbreaking.

AI is driving a paradigm shift as big as the printing press. Within five years, the majority of digital content could be synthetic. Within ten, anyone with a laptop may be able to create a Hollywood-grade movie on their device. We don’t necessarily know what the future holds, but we do know that, regardless of new and emerging trends, education is key. Let’s stay informed and inspire those in power to make a real difference for the next generation.

Discover the world of synthetic media alongside a team of interdisciplinary industry experts. Submit your deepfakes so the coalition can help prebunk potential issues and educate others on the power of AI-generated media.