As the world moves into the era of artificial intelligence, the rise of deepfakes and AI-generated media poses a significant threat to the integrity of democratic processes, especially in countries with fragile democracies. The integrity of democratic processes is particularly crucial because it ensures fairness, accountability, and citizen engagement. When compromised, democracy’s foundational values, and society’s trust in its leaders and institutions, are at risk. Protecting democracy in the AI era means staying vigilant and maintaining a database of verified AI manipulations to safeguard the truth and sustain the health of free societies.
In the Global South, political stability is often precarious, and elections can be swayed by mis/disinformation, which is now easier than ever to produce. The barrier to creating disinformation is no longer technical skill or cost: the tools are readily accessible and often free. All it takes is malicious intent to create and amplify false content at scale. There is a growing risk that authoritarian regimes will weaponise AI-generated mis/disinformation to manipulate public opinion, undermine elections, or silence dissent. Through fabricated videos of political figures, false news reports, and manipulated media, such regimes exploit advanced technologies to sow confusion and mistrust among the electorate, further destabilizing already fragile democracies.
While social media platforms and AI companies continue to develop detection tools, these solutions remain limited in their ability to address the growing threat of synthetic disinformation, especially in culturally and linguistically diverse regions like the Global South. Detection algorithms typically depend on recognizing patterns such as unnatural blinking, mismatched lip movements, or anomalies in facial expressions, but these models are often trained on Western datasets that do not account for nuances from the Global South. This limited scope enables deepfake creators to exploit local cultural cues and dialectal subtleties, producing media that automated detection systems struggle to flag accurately. The gap leaves many communities vulnerable to disinformation, particularly during critical events like elections.
The rapid evolution of deepfake technology has shown the need for a stronger approach that combines human and machine intelligence. Human insight is crucial in identifying context-specific inconsistencies that AI might overlook, making a collaborative model essential in countering these challenges in politically sensitive regions. Recognizing this need, Thraets developed Community Fakes, an incident database and central repository for researchers. On this platform, individuals can join forces to contribute and share deepfakes and other AI-altered media. Community Fakes amplifies the strengths of human observation alongside AI tools, creating a more adaptable and comprehensive defence against disinformation and strengthening the fight for truth in media by empowering users to upload and collaborate on suspect content.
Community Fakes will crowdsource human intelligence to complement AI-based detection, allowing users to leverage their unique insights to spot inconsistencies in AI-generated media that machines may overlook, while discussing the observed patterns with other experts. Users can submit suspected deepfakes on the platform, which the global community can then scrutinize, verify, and expose. This approach ensures that even the most convincing deepfakes can be exposed before they do irreparable harm. By combining the efforts of grassroots activists, researchers, journalists, and fact-checkers across different regions, languages, and cultures, Community Fakes will provide datasets that can be used to analyse AI-generated content from the Global South.
To further strengthen the fight against disinformation, Thraets is also providing an API, allowing journalists, fact-checking organizations, and other platforms to programmatically access the Community Fakes database. This will streamline the process of verifying media during crucial moments like elections and enable real-time fact-checking of viral content. With the growing need for robust verification tools, this API offers an essential resource for newsrooms and digital platforms to protect the truth.
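As a rough illustration of how a newsroom might query such an API, the sketch below builds a search URL and parses a JSON response. The endpoint path, query parameters, and response schema here are assumptions for illustration only; the actual Community Fakes API may differ, so consult Thraets' documentation before relying on any of these names.

```python
import json
from urllib.parse import urlencode

# Hypothetical base URL -- an assumption, not the documented endpoint.
API_BASE = "https://community.thraets.org/api/v1"

def build_search_url(query, category=None, verified_only=True):
    """Build a search URL for a hypothetical incidents endpoint."""
    params = {"q": query, "verified": str(verified_only).lower()}
    if category:
        params["category"] = category
    return f"{API_BASE}/incidents?{urlencode(params)}"

def parse_incidents(payload):
    """Extract (url, type, narrative) tuples from an assumed JSON schema."""
    data = json.loads(payload)
    return [(item["url"], item["type"], item["narrative"])
            for item in data.get("incidents", [])]
```

A fact-checking pipeline could call `build_search_url` for each viral link it encounters and feed the response body to `parse_incidents` to check whether the content has already been flagged by the community.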
The launch of Community Fakes comes at a critical time when the world is facing unprecedented challenges in combating disinformation and misinformation. Automated tools alone are not enough, especially in regions where AI may lack the necessary contextual understanding to flag manipulations. The combined power of AI and human intelligence offers the best chance to protect the integrity of information and safeguard democratic processes.
Thraets invites everyone, whether journalists, fact-checkers, or everyday citizens, to collaborate in identifying and exposing deepfakes. We encourage everyone to become part of Community Fakes and join the global effort to combat disinformation. By contributing your skills and insights, you can play a crucial role in protecting the integrity of information during pivotal events like elections.
How to use Community Fakes
1. Logging Into the Platform
Begin by navigating to the login page of the platform at community.thraets.org.
You will be prompted to sign in using your email account. Select the desired account or add a new one if it’s not listed.
Once authenticated, you’ll be redirected to the dashboard.
2. Editing an Incident
Screenshot reference: the Edit Incident screen, showing fields such as URL, Type, Category, and Narrative.
Navigate to the “Edit Incident” page to update or create an entry.
Fill in the necessary fields:
URL: Provide the link to the source of the fake or misinformation.
Type: Select the type of content (e.g., image, video, text).
Category: Categorize the incident appropriately (e.g., Political, Social, Health).
Narrative: Provide a clear and concise description of the issue. For example, “Hanifa, Boniface Mwangi, and the rest. We have one country.”
Optional fields include:
Archive URL: Link to an archived version of the content.
Archive Screenshot: Add a URL linking to a screenshot of the archived content.
Toggle the “Verified” option to confirm authenticity and click Update Incident.
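The fields above can be thought of as a simple record. This is a minimal sketch of that record and a pre-submission check, assuming field names that mirror the form labels; the platform's actual schema and allowed values may differ.

```python
# Field names and allowed values below mirror the form described above,
# but they are assumptions, not the platform's actual schema.
REQUIRED_FIELDS = ("url", "type", "category", "narrative")
ALLOWED_TYPES = {"image", "video", "text"}

def validate_incident(incident):
    """Return a list of problems; an empty list means the record looks complete."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS
                if not incident.get(f)]
    if incident.get("type") and incident["type"] not in ALLOWED_TYPES:
        problems.append(f"unknown type: {incident['type']}")
    return problems

# Example record with the required fields plus the optional archive fields.
incident = {
    "url": "https://example.com/suspect-video",
    "type": "video",
    "category": "Political",
    "narrative": "Fabricated speech circulated ahead of the election.",
    "archive_url": None,         # optional: link to an archived copy
    "archive_screenshot": None,  # optional: screenshot of the archive
    "verified": False,           # toggled once authenticity is confirmed
}
```

Running a check like this before submitting helps keep the database searchable and consistent, which matters once other researchers start querying it.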
Best Practices
When categorizing content, choose the most relevant option to enhance searchability.
Use credible archive tools like Archive.is to document content, ensuring it is preserved even if deleted from its source.
Regularly review your entries to ensure all incidents remain accurate and useful for the community.
Alternatively, you can send an email to thraets@protonmail.com and our analysts will upload the incident for you.
Visit Community Fakes today to help safeguard the truth and ensure a better-informed world.