Category: Mis/Disinformation

  • Community Fakes: A Crowdsourcing Platform to Combat AI-Generated Deepfakes in Fragile Democracies

    As the world moves into the era of artificial intelligence, the rise of deepfakes and AI-generated media poses a significant threat to the integrity of democratic processes, especially in countries with fragile democracies. That integrity is crucial because it ensures fairness, accountability, and citizen engagement; when it is compromised, democracy’s foundational values, and society’s trust in its leaders and institutions, are at risk. Protecting democracy in the AI era means staying vigilant and maintaining a database of verified AI manipulations to safeguard the truth and the health of free societies.

    In the Global South, political stability is often precarious, and elections can be influenced by mis/disinformation, which is easier than ever to produce and spread. The barrier to creating disinformation is no longer technical skill or cost; these tools are now readily accessible and often free. All it takes is malicious intent to create and amplify false content at scale. There is an increasing risk that authoritarian regimes could weaponise AI-generated mis/disinformation to manipulate public opinion, undermine elections, or silence dissent. Through fabricated videos of political figures, false news reports, and manipulated media, such regimes exploit advanced technologies to sow confusion and mistrust among the electorate, further destabilizing already fragile democracies.

    While social media platforms and AI companies continue to develop detection tools, these solutions remain limited in their ability to fully address the growing threat of synthetic disinformation, especially in culturally and linguistically diverse regions like the Global South. Detection algorithms typically depend on recognizing patterns, such as unnatural blinking, mismatched lip movements, or anomalies in facial expressions, but these models are often trained on Western data that does not account for nuances from the Global South. This limited scope enables deepfake creators to exploit local cultural cues and dialectal subtleties, producing media that automated detection systems struggle to flag accurately. This gap leaves many communities vulnerable to disinformation, particularly during critical events like elections.

    The rapid evolution of deepfake technology has shown the need for a stronger approach that combines human and machine intelligence. Human insight is crucial in identifying context-specific inconsistencies that AI might overlook, making a collaborative model essential for countering these challenges in politically sensitive regions. Recognizing this need, Thraets developed Community Fakes, an incident database and central repository for researchers, where individuals can join forces to contribute, submit, and share deepfakes and other AI-altered media. Community Fakes amplifies the strengths of human observation alongside AI tools, creating a more adaptable and comprehensive defence against disinformation and strengthening the fight for truth in media by empowering users to upload and collaborate on suspect content.

    Community Fakes will crowdsource human intelligence to complement AI-based detection, allowing users to leverage their unique insights to spot inconsistencies in AI-generated media that machines may overlook while discussing the observed patterns with other experts. Users can submit suspected deepfakes on the platform, which the global community can then scrutinize, verify, and expose. This approach ensures that even the most convincing deepfakes can be exposed before they do irreparable harm. By combining the efforts of grassroots activists, researchers, journalists, and fact-checkers across different regions, languages, and cultures, Community Fakes will also provide datasets that can be used to analyse AI-generated content from the Global South.

    To further strengthen the fight against disinformation, Thraets is also providing an API, allowing journalists, fact-checking organizations, and other platforms to programmatically access the Community Fakes database. This will streamline the process of verifying media during crucial moments like elections and enable real-time fact-checking of viral content. With the growing need for robust verification tools, this API offers an essential resource for newsrooms and digital platforms to protect the truth.
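
    To illustrate how a newsroom might consume such an API, here is a minimal Python sketch. The endpoint path, query parameters, and response fields used here (such as /incidents, q, and narrative) are hypothetical placeholders, since the API’s public surface has not yet been documented.

      import requests

      # Hypothetical base URL and parameters: placeholders, not published documentation.
      API_BASE = "https://community.thraets.org/api/v1"

      def search_incidents(keyword, category=None, verified_only=True):
          """Fetch incidents matching a keyword, optionally filtered by category."""
          params = {"q": keyword, "verified": verified_only}
          if category:
              params["category"] = category
          resp = requests.get(f"{API_BASE}/incidents", params=params, timeout=10)
          resp.raise_for_status()
          return resp.json()["incidents"]

      # Example: checking whether a viral clip has already been catalogued.
      for incident in search_incidents("election rally video", category="Political"):
          print(incident["url"], "-", incident["narrative"])

    A lookup like this could back a “has this been debunked already?” check inside a newsroom’s verification workflow.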

    The launch of Community Fakes comes at a critical time when the world is facing unprecedented challenges in combating disinformation and misinformation. Automated tools alone are not enough, especially in regions where AI may lack the necessary contextual understanding to flag manipulations. The combined power of AI and human intelligence offers the best chance to protect the integrity of information and safeguard democratic processes.

    Thraets invites everyone—journalists, fact-checkers, and everyday citizens—to collaborate in identifying and exposing deepfakes. We encourage everyone to become part of Community Fakes and join the global effort to combat disinformation. By contributing your skills and insights, you can play a crucial role in protecting the integrity of information during pivotal events like elections.

    How to use Community Fakes

    1. Logging Into the Platform
      Begin by navigating to the login page of the platform at community.thraets.org.
      You will be prompted to sign in using your email account. Select the desired account or add a new one if it’s not listed.
      Once authenticated, you’ll be redirected to the dashboard.

    2. Editing an Incident
    Screenshot Reference: the Edit Incident screen, with fields for URL, Type, Category, and Narrative.
    Navigate to the “Edit Incident” page to update or create an entry.
    Fill in the necessary fields:
    URL: Provide the link to the source of the fake or misinformation.
    Type: Select the type of content (e.g., image, video, text).
    Category: Categorize the incident appropriately (e.g., Political, Social, Health).
    Narrative: Provide a clear and concise description of the issue. For example, “Hanifa, Boniface Mwangi, and the rest. We have one country.”
    Optional fields include:
    Archive URL: Link to an archived version of the content.
    Archive Screenshot: Add a URL linking to a screenshot of the archived content.
    Toggle the “Verified” option to confirm authenticity and click Update Incident.
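
    For contributors who prefer to think in data terms, here is a minimal sketch of what a complete incident record might look like, mirroring the form fields above. The field names and values are illustrative assumptions, not a published schema.

      # Illustrative incident record mirroring the form fields above;
      # field names and values are assumptions, not a published schema.
      incident = {
          "url": "https://example.com/suspect-video",        # source of the fake
          "type": "video",                                   # image, video, or text
          "category": "Political",                           # e.g., Political, Social, Health
          "narrative": "Fabricated clip of a candidate conceding defeat.",
          "archive_url": "https://archive.is/XXXXX",         # optional archived copy
          "archive_screenshot": "https://example.com/screenshot.png",  # optional
          "verified": False,  # toggled on once the analysis is confirmed
      }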

    Best Practices
    When categorizing content, choose the most relevant option to enhance searchability.
    Use credible archive tools like Archive.is to document content, ensuring it is preserved even if it is deleted from its source.
    Regularly review your entries to ensure all incidents remain accurate and useful for the community.

    Alternatively, you can send an email to [email protected] and our analysts will upload the incident for you.

    Visit Community Fakes today to help safeguard the truth and ensure a better-informed world.

  • ‘Ruto Lies’: A Digital Chronicle of Public Discontent 

    The Kenya Finance Bill 2024/25, presented to parliament on May 13, 2024, proposed various tax increases and new fees on essential items and services, including internet data, bread, cooking oil, sanitary napkins, baby diapers, digital devices, motor vehicle ownership, specialized hospitals, and imported goods. The bill quickly sparked widespread protests across Kenya, significantly amplified by social media platforms like X (formerly Twitter) and TikTok. Hashtags such as #RejectFinanceBill2024, #OccupyParliament, #RutoMustGo, and #RutoLies trended widely, fueling the demonstrations.

    In response to these protests, Thraets launched the ‘Ruto Lies’ Portrait, an interactive webpage designed to highlight the grievances of Kenyan citizens against President William Samoei Ruto. This digital portrait comprises over 5,000 tweets from the #RejectTheFinanceBill2024 demonstrations, each dot in Ruto’s face representing a tweet containing the keywords “Ruto” and “lies.” This innovative project aims to spotlight public perceptions of Ruto’s false promises and engage data scientists and civic tech researchers in analyzing these claims.

    The ‘Ruto Lies’ Portrait is both a visual spectacle and a powerful statement. Clicking on each dot allows users to read individual tweets that point out specific instances where the president’s promises were perceived as unfulfilled. This project provides a platform for public sentiment and invites deeper analysis and accountability.

    Our research team meticulously analyzed and filtered over 5,000 tweets to create this portrait. These tweets, collected during the #RejectTheFinanceBill demonstrations, contain various forms of the keywords ‘Ruto’ and ‘lies’. This extensive dataset offers a rich resource for understanding public sentiment and the specific promises that are seen as unfulfilled. We are releasing this research to encourage developers and researchers to support the project by mapping these tweets to real claims or evidence of false promises made by President Ruto. We aim to provide a comprehensive framework for analyzing these claims.
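
    As a rough illustration of that filtering step, the sketch below keeps only tweets containing some form of both keywords. The exact variant lists the team used have not been published, so the patterns here are assumptions.

      import re

      # Keep tweets that contain some form of both "Ruto" and "lies".
      RUTO = re.compile(r"\bruto\b", re.IGNORECASE)
      LIES = re.compile(r"\b(lie|lies|lied|lying|liar)\b", re.IGNORECASE)

      def matches(tweet_text):
          return bool(RUTO.search(tweet_text) and LIES.search(tweet_text))

      tweets = [
          "Ruto lies every time he speaks #RejectFinanceBill2024",
          "The finance bill must go!",
          "He promised cheap unga. Ruto lied. #RutoMustGo",
      ]
      portrait_tweets = [t for t in tweets if matches(t)]
      print(len(portrait_tweets), "of", len(tweets), "tweets match")  # 2 of 3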

    On Saturday, August 17, 2024, Thraets participated in the HakiHack Hackathon, where we introduced and demoed the ‘Ruto Lies’ Portrait. The HakiHack Hackathon is a two-day event focused on developing tools to enhance democracy, good governance, and civic action. This event was a valuable opportunity for data scientists, statisticians, and civic tech researchers to engage with the Ruto Lies Portrait Project, access the data, and contribute to the mapping of tweets to real-world claims.

    Several resources and ongoing civic tech projects could be mapped to the “Ruto Lies” tweet claims shared by Kenyan citizens during the protests.

    For those interested in further exploration, we have made the data and tools available on GitHub and have provided a link to the Twitter data dump. Researchers can access approximately 5,000 tweets related to claims of Ruto’s lies via these resources.

    This project is more than just digital art; it is a call to action for the data science and civic tech communities. Researchers can help hold leaders accountable and ensure that promises made to the public are fulfilled. We invite all interested parties to join this effort and contribute to a more transparent and accountable governance in Kenya.

    For further information or to get involved, contact Thraets at [email protected].

  • “You Can’t Fool Us”: Introducing ‘Spot the Fakes’ Quiz

    Previously, identifying manipulated images was a relatively easy task, since photoshopped pictures often contained obvious inconsistencies: mismatched perspectives (a peasant photo with a supercar background), unnatural lighting, or awkward element combinations.

    However, the emergence of Generative Adversarial Networks (GANs), a form of artificial intelligence (AI) for manipulating images, has ushered in a new era of synthetic media creation.

    Today’s AI-generated images are becoming increasingly sophisticated, making the line between reality and fabrication difficult to draw. More worryingly, we are starting to see these images used to fuel disinformation campaigns, especially during elections, presenting a threat to democracy.

    It is on this foundation that we have built the Spot the Fakes quiz, an educational game that develops and sharpens the critical analytical skills of youth, women, civil society actors, content creators, journalists, and politicians, helping them avoid sharing misinformation and preserve their integrity.

    We aim to:

    1. Enhance digital literacy: By exposing users to a variety of synthetic and genuine content, the platform will help individuals become more adept at recognizing subtle signs of manipulation.
    2. Build critical thinking: Through interactive challenges, users will learn to question and analyze the content they encounter, developing a healthy skepticism towards information presented on social media platforms.
    3. Support journalistic integrity: For media professionals, the platform will serve as a valuable training tool, helping them maintain high standards of accuracy in an increasingly complex information landscape.

    The Quiz

    The Spot the Fakes quiz gives users the opportunity to dive into the world of AI-generated synthetic media and learn to distinguish the authentic from the fake. On visiting the site, users either accept or decline a consent form; once consent is given, each session presents a series of 10-15 carefully curated media items.

    The quiz is currently limited to images, but future versions may include video, audio, or even text-based content, each challenging the user to make a critical decision: is this real or fake? It is a simple binary choice, but one that requires keen observation and analytical thinking, as discussed earlier.

    The quiz employs a simple randomized selection process, drawing from a diverse pool of content contributed by a local Ugandan photographer, Watanda Photography, and previously fact-checked content from African civic tech organizations.
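
    A minimal sketch of what that session logic might look like, assuming a simple in-memory pool; the data structure, labels, and scoring shown are illustrative, not the production implementation.

      import random

      # Illustrative content pool: authentic photography mixed with
      # previously fact-checked fakes. Filenames and labels are made up.
      POOL = [
          {"src": "watanda_001.jpg", "is_fake": False},
          {"src": "genai_014.jpg", "is_fake": True},
          # ... more pool items ...
      ]

      def new_session(pool, size_range=(10, 15)):
          """Draw a random set of 10-15 items for one quiz session."""
          k = min(random.randint(*size_range), len(pool))
          return random.sample(pool, k)

      def score(session, answers):
          """answers[i] is the user's binary guess (True = "fake") for item i."""
          return sum(item["is_fake"] == guess for item, guess in zip(session, answers))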

    While we maintain strict user anonymity, we do collect and analyze aggregated data. However, no personal or sensitive information is collected at any point, now or in the future.

    Spot the Fakes is more than just a game; it’s a training ground for digital discernment. In a world where the line between fact and fiction is increasingly blurred, we’re equipping newsrooms, journalists, and fact-checkers with the skills they need to navigate the complex information landscape. Join us in this mission to foster a more informed, critically thinking online community.

    While journalists, CSOs, and fact-checkers can learn interactively, the quiz is also open source, which matters for an educational product. Any non-profit or individual will be able to download the source code from our GitHub repository in the coming weeks and host a similar quiz with their own content. Stay tuned! We are currently working on the repository and documentation to support this.

    The creation of the quiz has been supported by the Africa Digital Rights Fund, administered by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA).

    By Mark Okello and Samson Monyluak

  • Synthetic Media in Rwanda’s 2024 Elections

    We’ve all seen them—fake images that depict political leaders in a less-than-flattering light, deepfakes of political endorsements that never happened, or manipulated videos spreading inflammatory messages against important figures. The ease with which this content is created and spread emphasizes the significant impact AI-generated media can have on society, especially in politically charged environments.

    A recent example is a clearly AI-generated image that was widely shared on platforms like X (formerly Twitter) and Facebook, misleadingly suggesting that Gen Z in Uganda were rioting in the wake of Kenya’s riots—an event that never actually happened. The image was posted by various accounts and gained significant engagement: posted by X user Nyakundi, it received 19,000 likes and over 600,000 views, while the same photo shared by @inside_afric garnered over 10,200 views. These instances highlight the potent reach and influence of synthetic media in shaping public perception.

    @inside_afric – https://archive.is/PyWTy 
    @C_NyaKundiH – https://archive.is/wip/KWjFC

    Another example comes from the recently concluded South African elections, where deepfakes and AI-generated videos of Joe Biden and Donald Trump were seen on social media. One deepfake showed Donald Trump endorsing Umkhonto we Sizwe (MK) and urging South Africans to vote for the party; it was notably circulated on X (formerly Twitter). Another instance involved an AI-generated video of Joe Biden falsely claiming that if the ANC won the election, the USA would impose sanctions and declare South Africa an enemy of the state. Additionally, there was a manipulated image of Julius Malema of the Economic Freedom Fighters (EFF) crying, intended to mock his defeat in the court of public perception.

    AI-gen Image of Julius Malema of the Economic Freedom Fighters (EFF) crying

    During Rwanda’s recent elections, we saw image manipulations that often exaggerated President Paul Kagame’s facial features, turning his cheeks into caricatures. These examples illustrate the growing prevalence of synthetic media in the region.

    Synthetic media can be defined as content that is artificially created or manipulated using digital technologies, often leveraging advanced techniques such as artificial intelligence (AI) and machine learning. Synthetic media includes deepfakes, which are highly realistic but fake portrayals of famous people, making them appear to say or do things they never did, and cheap fakes, which are low-quality forgeries or manipulations that deceive viewers without using advanced AI. It also encompasses generative media, such as AI-generated text, images, music, or video, based on training data. Synthetic media can be used for a variety of purposes, from entertainment and art to malicious uses like spreading misinformation, creating fake news, or conducting fraud. These synthetic media creations, while often crude, have the power to manipulate public perception, distort reality, and incite unrest.

    In an era where information flows seamlessly across digital channels, synthetic media, while lacking sophistication and often appearing crude, wields a surprising influence on public perception. These “cheap fakes,” born from basic AI tools, can sway public perception and shape critical discourse. Nowhere is this more evident than in electoral processes, where maintaining information integrity is paramount.

    The rise of Generative AI (Gen-AI) has introduced a unique set of challenges and risks to information environments, particularly in the context of elections. For this article, our focus narrows to Rwanda’s recently concluded 2024 general elections, and we explore how synthetic media was used in that setting. Synthetic media can significantly threaten the integrity of electoral processes by enabling the spread of misinformation and potentially undermining public trust in political institutions and media—a trend observed in various global elections. This phenomenon is now on the rise in Sub-Saharan Africa, as the examples above show.

    We conducted a comprehensive examination of synthetic media on social media platforms including X (formerly Twitter), Meta (Facebook/Instagram), TikTok, and YouTube from May 2024 to July 2024, the crucial pre-election period in Rwanda. We searched using specific key terms in both English and Kinyarwanda and leveraged crowdsourced annotations to identify synthetic content. Our analysis revealed only a minimal, often barely identifiable presence of AI-generated media. The synthetic media we did encounter was of very low quality, involving simple manipulations such as face swaps and face enlargement using basic tools like Photoshop. This finding reflects the current state of synthetic media use in Rwanda’s electoral context, providing a neutral yet insightful perspective on its potential impact.
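
    As a rough sketch of that screening step, the snippet below shortlists posts containing any monitored term for human annotation. The English and Kinyarwanda terms shown are illustrative assumptions, not the study’s actual query list.

      # Bilingual keyword screen used to shortlist posts for crowdsourced
      # annotation; the terms are illustrative, not the study's query list.
      SEARCH_TERMS = {
          "en": ["deepfake", "ai generated", "kagame", "election"],
          "rw": ["amatora", "kagame", "videwo"],  # e.g., "amatora" = elections
      }

      def shortlist(posts):
          """Flag any post containing a monitored term, in either language."""
          terms = [t for lang in SEARCH_TERMS.values() for t in lang]
          return [p for p in posts if any(t in p["text"].lower() for t in terms)]

      candidates = shortlist([
          {"id": 1, "text": "Videwo ya Kagame mu matora"},  # kept for annotators
          {"id": 2, "text": "Weather update for Kigali"},   # filtered out
      ])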

    Our research identified five key themes dominating the synthetic media landscape, in the context of Rwanda’s 2024 election:

    1. Manipulation of President Paul Kagame’s Likeness: These manipulations were mainly alterations to President Kagame’s facial features, expressions, and overall image, which could have influenced public perception of Rwanda’s president-elect. The edited images have been widely circulated as memes and GIFs, intended either to mock the president-elect or to elevate his image, as in the photoshopped picture of him with the famous footballer Cristiano Ronaldo. Nevertheless, the content we encountered was consistently unsophisticated and of low quality. Here are some examples:

    2. Content Exploiting Rwanda-D.R. Congo Tensions: There are ongoing diplomatic and security issues between Rwanda and the Democratic Republic of Congo. These tensions intensified with a recent UN report accusing Rwanda of aiding the M23 rebel group that is battling Congolese forces in eastern DRC. Synthetic media related to these tensions frequently featured manipulated images of President Félix Tshisekedi, depicting him as chubbier than he is or as a chicken thief being chased by locals. This trend highlights how regional conflicts can be weaponized through synthetic or AI-generated content to sway public opinion and/or exacerbate existing tensions.
    Cheapfake depiction of Pres. Tshisekedi running
    Cheapfake depiction of Pres. Tshisekedi overweight

    3. Burundi-Rwanda Relations in Focus: In January 2024, it was reported that relations between Rwanda and Burundi had deteriorated again after Burundian President Évariste Ndayishimiye renewed accusations that Rwanda was financing and training the RED-Tabara rebel group. The research observed a few manipulated media pieces concerning the complex relationship between Rwanda and Burundi. These frequently feature altered imagery of President Ndayishimiye and underscore the potential for synthetic media to impact not only domestic politics but also regional diplomatic dynamics. For example, we found a crudely made cheap-fake video on X (formerly Twitter) that used a popular meme, with a face swap of the Burundian president slapping a face swap of Rwanda’s president.
    Face swap of the Burundian president slapping the face swap of Rwanda’s president.

    4. Amplification of Synthetic Pro-Kagame Dance Content: Our analysis revealed a significant trend: supporters of President Kagame actively shared deepfake videos of him performing dance moves popularized by youth on TikTok, as seen below. These videos, primarily promoted by young members of his Rwandan Patriotic Front (RPF) party, feature popular Kinyarwanda songs praising Kagame. This illustrates the strategic use of synthetic media by the younger generation to enhance political campaigns and manage the president’s image.
    AI-gen dance videos of Pres. Kagame

    5. Less Sophisticated Deepfakes (Cheap Fakes) Targeting President Kagame: Our investigation uncovered videos that were rather clumsy deepfakes. In one of these videos, we saw President Kagame’s likeness superimposed over a popular musician performing at a concert. We also found videos of Kagame singing, manipulated to appear as though he was participating in contemporary musical trends, likely to portray him as a modern and relatable leader. In another video, where Kagame was actually talking about President Tshisekedi, his face is manipulated to look like a ghoul, highlighting concerns about the regional focus and potential impact of these deepfakes.
    DeepFake of Pres. Kagame Singing
    DeepFake of Pres. Kagame speaking about Pres. Tshisekedi
    DeepFake video of Pres. Kagame dancing on a stage

    While our research finds that the majority of synthetic media identified appears to be non-malicious, the persistence of cheap fakes and other manipulated content targeting political figures is deeply concerning, to say the least. This trend raises critical questions about the potential misuse of AI technologies in the Global South’s electoral processes, particularly in Rwanda, and its dire implications for democratic integrity.

    The rapid proliferation of such synthetic media presents several challenges. It can confuse voters, manipulate public opinion, discredit politicians, and ultimately undermine the legitimacy of the electoral process. Moreover, the mere existence of deepfake technology, and the fact that it is getting increasingly difficult to distinguish between what is real and what is not, can create a ‘liar’s dividend’: genuine recordings, such as documents, images, audio, and video, can be dismissed as fake even after they have been proven to be true. Consequently, once a fake is exposed, it becomes more challenging for the public to trust any information related to that topic, further eroding trust in the fourth estate and in political discourse. For example, the 2023 Nigerian election was inundated with deepfakes, kicking this conversation into high gear. The populace watched videos of US public figures like Elon Musk and Donald Trump applauding Labour Party presidential candidate Peter Obi. These endorsements appealed to younger voters, but they were deepfakes. The sentiment, though, remained among some voters who wholeheartedly believed the videos to be true.

    This research aims to contribute to a broader understanding of how synthetic media is influencing political landscapes in emerging democracies. Our findings highlight the pressing need for enhanced media literacy, new pre-bunking mechanisms, and potential regulatory frameworks to effectively address the challenges posed by AI-generated content. The rising use of synthetic media, with its capacity to spread misinformation and undermine public trust, necessitates proactive measures to protect the integrity of electoral processes. Strengthening media literacy is crucial to equip the public with the skills to critically evaluate content and discern the real from the fake. Strategies that anticipate and debunk potential misinformation before it spreads can help mitigate the impact of these deepfakes and cheap fakes.

    In examining the specific case of Rwanda’s 2024 elections, we provide a few insights into the current state of synthetic media in Sub-Saharan Africa and its potential impact on democratic processes. Our findings underscore the need for comprehensive strategies to safeguard information integrity in the age of artificial intelligence, including fostering collaboration between policymakers, technology platforms, and civil society to develop effective solutions and create a resilient information environment. By addressing these challenges, we can better protect democratic values and ensure that elections remain fair and transparent in the face of evolving technological threats.

  • It Is Becoming Impossible to Do Internet Research

    Thirty years ago, for a person on the African continent to connect with someone in, for example, Australia, they would have to send a letter that would take many weeks to be delivered. However, these days, connection is as simple as clicking a button. The internet has brought connectivity and made so much possible. It has revolutionized communication, enabling people to connect across vast distances instantly. It has democratized access to information, empowering individuals with knowledge and resources that were previously inaccessible.

    The internet has fostered innovation and entrepreneurship, creating new opportunities for businesses and individuals. However, alongside these positive changes, the internet has become a battleground for misinformation, disinformation, and malicious activities. There is a need for continued vigilance and research to understand and mitigate these emerging threats.

    In the early 2000s, when the East African Internet was still in its infancy, I founded an Internet cafe in Kampala, which started my digital research journey. In the years that I ran the internet cafe, I observed first-hand the risks and interactions that internet users face, creating a new trajectory for my life. Curiosity and passion drove groundbreaking work, and I was fortunate to be part of that pioneering wave.

    For more than 10 years, I have been studying the internet, the online behaviors of people, and the misuse of digital platforms by bad actors. I found myself at the forefront of digital research analysis, shedding light on the power of digital platforms to shape public discourse. My research on internet behaviors began in 2011 with the Google Spam team, where I focused on understanding invalid-click spammers. This experience honed my skills in Open Source Intelligence research.

    In addition to my work on digital platforms, I collaborated with remote communities and organizations like Refunite, which develops tools for refugees, and Medic, which supports health workers providing care in the world’s hardest-to-reach communities. These experiences deepened my understanding of the vulnerabilities faced by marginalized groups, further enriching my perspective on the intersection of technology and society.

    My research continued during the 2016 Uganda elections, a period when social media emerged as a crucial battleground for hate speech and political debates. Through these experiences, I have developed a keen understanding of internet behaviors and their impact on public conversations.

    This work has taught me a lot about how these harmful groups think, act, and exploit the technology systems that we heavily rely on. However, none of this was taken seriously until eight years ago: in 2016, researchers within the continent started to take digital misinformation and disinformation seriously. Part of the reason was the hate speech and violence around Kenya’s 2007-2008 elections, triggered by widespread allegations of vote-rigging, political tension, and ethnic divisions. Another was the surge in the use of social media to express sentiments about the 2016 Ugandan presidential election, which was marked by reports of election irregularities, voter suppression, lack of transparency, and the government’s use of security forces to intimidate opposition supporters.

    In the early days, digital research was accessible even to researchers with limited budgets, and the main focus was the pursuit of knowledge and understanding. The atmosphere was open and collaborative, allowing anyone with a bit of curiosity and dedication to make significant contributions to the field. Publicly available APIs (Application Programming Interfaces) and readily available data streams from social media gave researchers the tools they needed to collect and analyze data without major financial or access barriers. One example was the now-defunct Netvizz, which allowed researchers to download data from Facebook until the company shut it down. This accessibility created a diverse community of researchers from different backgrounds, including academics, independent investigators, and hobbyists, all working to uncover insights and drive innovation. The era was marked by groundbreaking discoveries and significant advances in understanding digital behaviors and online interactions.

    Today, social media research has undergone significant changes. It has become an essential tool for researchers like myself, journalists, and investigators who monitor activities such as influence operations, disinformation campaigns, and online narratives surrounding major events like elections. However, recent changes by platforms such as X (formerly known as Twitter) are making this open-source investigation work significantly more challenging, particularly for independent researchers and those in regions with limited or almost zero financial resources.

    “Social Media research has become an essential tool for researchers, journalists, and investigators who monitor activities such as influence operations, disinformation campaigns, and online narratives surrounding major events like elections.”

    ~ Ngamita

    In the past, researchers could easily access and analyze social media content through publicly available APIs and data streams. Today, platforms are restricting this access by requiring payment and licensing for social data mining partners. This is a significant problem for researchers from the Global South, who face substantial financial barriers. What used to be an open environment for examining online conversations is becoming an exclusive space primarily available to well-funded organizations.

    And it is becoming harder. Meta has recently announced the shutdown of CrowdTangle in August 2024 after shutting down Graph Search in June 2019. CrowdTangle significantly enhanced the coverage of misinformation over the years and provided unique access to trending topics, public accounts, communities, and viral posts on platforms like Facebook, Instagram, and Reddit—information that would otherwise be largely inaccessible. Although the company says its replacement, the Meta Content Library (MCL), is a better tool for researchers, a joint investigation by Proof News, the Tow Center for Digital Journalism, and the Algorithmic Transparency Institute found that Meta’s replacement tool is far less transparent and accessible than CrowdTangle. This will create a problem for researchers.

    TikTok’s unclear data policies make matters worse. While TikTok, as the fastest-growing social media platform, has access to users’ data from all over the world, it only allows researchers in the US and Europe to apply for access to its API. This makes it exceptionally difficult for Global South researchers to study misinformation and disinformation crises in and for highly volatile regions like the D.R. Congo, Sudan, and Gaza. It goes without saying that this locks out many independent actors, including journalists and academics who rely on open data to uncover storylines, monitor influence efforts, and analyze the effects of social media platforms in the Global South. The commercial interests of these platforms conflict with the need for clarity on how social media influences conversations, politics, and societies, especially in regions facing higher risks of instability.

    For example, recent challenges in monitoring Sudan’s Saudi-UAE proxy war and the ongoing influence operations around the conflict have highlighted these issues. Military clashes between the Sudanese Army and the paramilitary Rapid Support Forces (RSF) have killed hundreds and displaced thousands of Sudanese, underscoring a complex interplay of domestic, regional, and global actors contributing to the conflict. Independent researchers have found their access curtailed just when the investigation of disinformation became most critical.

    Similarly, tracking suspected influence operations around elections in Africa and other developing nations is becoming vastly more difficult on a shoestring budget. My team is learning this firsthand as we lead research around AI-generated disinformation on elections across six countries, thanks to a grant from the Africa Digital Rights Fund, managed by CIPESA. Accessing data through different sources eats into the grant money, leaving fewer resources for actual research and data analysis.

    In addition to the commercial barriers, legal and censorship challenges are increasing, and politicians and wealthy individuals make it even more difficult for researchers to gain access to vital information. For example, in 2023 Elon Musk sued disinformation researchers, claiming their work was driving away advertisers; thankfully, that suit was dismissed. This legal pressure adds another layer of difficulty for researchers striving to hold powerful entities accountable with the data available. Furthermore, government actions to shut down social media during critical events, such as the Facebook ban and internet shutdown in Uganda during the 2021 elections and the TikTok shutdown in West Africa, further hinder open research and transparency.

    On July 12, 2024, the EU Commission issued preliminary findings that X (formerly Twitter) is in breach of the Digital Services Act over deceptive verified-account practices, insufficient advertising transparency, and restricted data access for researchers, potentially leading to significant fines and corrective measures. While we welcome this outcome, we know that more pressure and more action are required to address these challenges.

    To address these challenges, several solutions can help regional researchers continue their vital work. To ensure that researchers, especially in the Global South, can continue combating disinformation, we must prioritize open, ethically governed data access and advocate for increased funding to offset data-access costs. Building collaborative networks among researchers, journalists, and NGOs is crucial to amplifying the collective voice for open access and shared resources. For instance, in our AI-generated disinformation research, we have partnered with MEVER, who generously provided us with their tools and expertise, bolstering our detection capabilities. Such collaborations are vital, as they empower low-resourced and budget-constrained organizations in the Global South. I also believe that governments and international bodies should mandate social media platforms to provide transparent and equitable access to data for research purposes. By adopting these measures, we can significantly enhance our ability to understand and mitigate the impacts of digital threats on society.

    All of these solutions may seem ideal, but the reality is that conducting research and producing impactful findings in the current environment is challenging. The digital research landscape has become increasingly restrictive, with platforms locking down data access and imposing significant financial barriers. Despite the importance of transparency and open access, independent researchers, especially those in the Global South, face numerous obstacles that hinder their ability to gather and analyze data effectively. As a result, the ability to understand and combat digital threats like disinformation is severely compromised, highlighting the urgent need for actionable solutions and support.

  • Scammers Target African Journalists with Disinformation on Meta (Facebook/Instagram) Platforms

    As journalists across the region continue to face unprecedented attacks on press freedom, including widespread intimidation, harassment, and detention for their work, a new threat looms: African journalists are increasingly becoming the subject of targeted disinformation, with deepfakes of their images being used to spread fake stories and scam people.

    An investigation by Thraets has found fraudulent adverts targeting African journalists and media personalities on Facebook and Instagram (Meta), with scam ads running false and sensational stories about these journalists from random Facebook pages.

    The scams use fake copies of genuine newspapers from the journalists’ respective countries, such as the Vanguard newspaper from Nigeria, the Daily Monitor from Uganda, The Herald from Zimbabwe, and Ghana Web from Ghana.

    From our investigation, we found that the fraudsters have targeted journalists and media personalities in Kenya, Ghana, Nigeria, Zimbabwe, and Uganda, including Johnie Hughes, Nana Aba Anamoah, Bola Ray, Andrew Mwenda, Jeff Koinange, Rosebell Kagumire, David Hundeyin, Hopewell Chin’ono, Hugo R.P Ribatika, and Mohammed Ali. Some adverts have even announced these journalists dead.

    We believe this targeting goes beyond these countries, as our investigation did not cover French-speaking countries. In this investigation, we aim to uncover the actors behind this behavior and their intentions.

    The first fake post targeted Ugandan journalist Rosebell Kagumire and was shared and sponsored by an account called “Voice of Dipu”; according to Facebook’s page transparency information, the account has changed its name multiple times.

    The account appears to have been compromised, based on the frequent changes to the page’s name and images and the lack of connection between the original user’s posts and the adverts now being run.

    Thraets spoke to Rosebell Kagumire, who confirmed that she is alive and aware of the fake stories but did not fully understand the motive behind these adverts and targeted posts.

    A screenshot of the fake sponsored post on Instagram and the Facebook page transparency information of the account

    Another similar post targeted Nigerian journalist and author David Hundeyin. Although it came from a different account, the post was similar to the one that targeted Rosebell Kagumire and ran a hoax story that the journalist had been arrested. In fact, the post shows David Hundeyin in handcuffs, but the photo has been fabricated.

    The Nigerian journalist has scoffed at these continued bogus posts and has urged readers of his work and his newsletter to beware of the scams and hoaxes.

    “Yes, I am aware of the attacks. I think the purpose of these disinformation attacks is to slowly delegitimize me in the minds of the low-information segment of my audience that is likely to fall for such scams. Even if they are later informed that I had nothing to do with the scam, the negative association between my name and an unpleasant (potentially ruinous) event will have been established in their minds,” David Hundeyin said.

    The adverts have claimed that some of these journalists and media personalities have died, and the sponsored posts have been active for weeks without any action from Meta.

    We found that the adverts share similar text patterns, with clickbait titles such as “Tragic end David Hundeyin”, and the posts appear to reuse roughly five template variants across most of the ads currently running in our data.
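
    One simple way to surface such template reuse is to group ads whose wording is nearly identical. The sketch below is a minimal illustration using Python’s standard library; the similarity threshold and sample texts are assumptions, not our actual pipeline.

      from difflib import SequenceMatcher

      def similar(a, b, threshold=0.8):
          """True if two ad texts are nearly identical after lowercasing."""
          return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

      def group_templates(ad_texts):
          """Greedily cluster ad texts that share almost the same wording."""
          groups = []
          for text in ad_texts:
              for group in groups:
                  if similar(text, group[0]):
                      group.append(text)
                      break
              else:
                  groups.append([text])
          return groups

      ads = [
          "Tragic end David Hundeyin",
          "Tragic end: David Hundeyin!",
          "Sad news about Jeff Koinange today",
      ]
      print(len(group_templates(ads)), "distinct templates")  # prints 2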

    Additionally, to lend credibility to their hoaxes, the posts invoke established media houses in the respective countries, such as the Daily Monitor in Uganda, Vanguard News in Nigeria, Ghana Web in Ghana, and The Herald in Zimbabwe, as well as former media employers; for example, Jeff Koinange of Kenya previously worked at CNN.

    Thraets also found that the scam leads to an investment platform called “Nearest Edge,” which uses the images of popular journalists and media personalities across Africa.

    Although the suspicious website is currently unavailable, the Wayback Machine shows that the website behind this scam was last active, and received an archive save, back in 2021.

    The investigation also reveals that the website was most recently active in June 2024, when there was a surge in targeted adverts against journalists, as seen in the Facebook Ad Library and the site’s web archive history.

    We reached out to some of the targeted journalists, and they shared their views on the attacks:

    Zimbabwean journalist Hopewell Rugoho-Chin’ono believes the attacks are politically motivated and are meant to taint his reputation both locally and internationally. 

    Rugoho-Chin’ono says he is aware of these attacks because he has repeatedly been a victim and has received many messages from people who have also been victims. He also believes he is lucky to have a solid reputation, so few people believe the fake stories.

    Rosebell Kagumire, a Ugandan journalist, pan-African feminist, and socio-political commentator, believes she has been targeted because of her support for various online campaigns, her advocacy work, and her reporting.

    “I don’t have a full idea why I am a target but I am involved in supporting various campaigns online and speak on issues that may have transnational opposition – women’s rights, LGBTQ+ rights, Palestinian freedom, anti-imperialism etc.,” she remarked.

    Rosebell says the attacks took time away from her work as she tried to figure out what was going on and whether they were personal attacks; those efforts have been futile.

    “The attack did not affect my work but it took time away from my work to try and figure out what was going on. If it was a personal attack and for what. And since it was extremely difficult even with contacts to get an ear from platforms to look into this specific attack,” she said. 

    The journalists have called upon tech companies to take action against the continued threats to the fourth estate, especially in Africa.

    “Tech companies must equip specialized teams that are able to respond to the changing nature of threats to journalists and people in prominent public discourse spaces. It is not enough to simply tell journalists that they should use the general report features when faced with coordinated disinformation campaigns. It was shocking when those approached at Meta said they needed a specific public link to the posts in order to respond – even for them, this was not possible,” said Rosebell Kagumire.

    “We need to see the swift responses to these natures of attacks because they are not only personal but political and aimed at silencing and sabotaging the presence, position and perspectives of the targeted journalists and leaders. More study into the motives behind these attacks in order for other actors to support journalists beyond online response,” she emphasized.

    At the height of these scam advertisements, Meta continues to profit from fraudulent ads that harm both the people who believe them and the innocent journalists whose credibility they damage.

    “Regarding the tech companies and social media platforms that enable these bad actors, I think that it is not a case of them merely not doing enough – they are actively complicit in incidents like these because with Facebook for example, there is essentially zero content moderation for sponsored posts. META clearly values revenues and growth-at-all-costs over providing a safe user experience, because the calculation is that the reward from achieving quarterly or annual stock price targets far outweighs the potential financial downsides of lapsed or failed platform regulation,” David Hundeyin explained.

    Thraets reached out to Meta for a comment but at the time of publishing, the company had not responded to our requests. 

    However, following our requests, Meta quietly reviewed some of the ads in the screenshots we shared and has since taken them down, although many more scam ads were still up and running as of the publication of our investigation.

    For any inquiries email the Research Team: [email protected]

  • AI-Generated Misinformation and Disinformation Idea-thon

    Help Solve a Major Societal Problem—Bring Your Ideas to Fight Misinformation and Disinformation

    AI-generated misinformation and disinformation pose a significant threat to democracy. They are spread by local and foreign state actors, businesses, and malign non-state actors bent on undermining and damaging free and liberty-loving republics. A new grassroots strategy is needed: bottom-up rather than top-down.

    That is why Thraets, Outbox Hub, and CIPESA are holding an AI Mis/Disinformation Idea-thon to address the problem and find new solutions. Teams will form to attack misinformation and disinformation from four tracks: government, business/technology, nonprofit, and foreign actor.

    The winning team will receive a prize!

    Examples of work products include novel product changes to social media platforms, new methodologies for investigating trends, legislation and regulations, a business plan for a tech start-up, a mobile app, an academic course, or a new nonprofit. Be creative!

    Technical skills are helpful but not necessary to participate. Just bring your best ideas for combating AI misinformation and disinformation.

    A note on the AI mis/disinformation Idea-thon structure and teams:

    The teams will be organized by the primary category their solution falls under: government, business/technology, nonprofit, and education.

    Coaches: There will be one coach per category, whose role will be to help fill in any knowledge gaps and help the teams best achieve their goal.

    Join us for a two-hour idea-thon! Come join a team, and invite your friends.

    Sign up here or Contact Thraets at [email protected] with questions about the event.

  • Thraets Secures Grant to Protect African Elections from AI-Generated Mis/Disinformation

    We are thrilled to announce that @Thraets, which was incubated under the Outbox Foundation, has been awarded a grant by the Africa Digital Rights Fund (ADRF), managed by CIPESA (the Collaboration on International ICT Policy for East and Southern Africa). This grant will fund our project, “Safeguarding African Elections – Mitigating the Risk of AI-Generated Mis/Disinformation to Preserve Democracy.”

    With numerous African countries, including Tunisia and Ghana, preparing for elections in 2024’s “Year of Democracy,” our project is timely and critical. It aims to counter the increasing threats posed by AI-generated disinformation campaigns that could jeopardize free and fair elections.

    Our research will focus on the upcoming electoral processes in Tunisia (presidential, local and municipal elections in November 2024) and Ghana (presidential and parliamentary elections in December 2024). These countries were selected for their regional significance and the crucial role of supporting democracy in West and North Africa.

    Through this initiative, we aim to:

    1. Develop an open-source AI tracking and knowledge hub to crowdsource and monitor AI-generated content related to the elections.
    2. Conduct red teaming activities to refine investigative methodologies and enhance AI detection tools/tutorials for identifying synthetic and manipulated media.
    3. Train journalists and civil society organizations to detect and counter AI-generated disinformation tactics.
    4. Advocate for platform accountability and robust content moderation policies.

    In an era of advanced AI capabilities that can mislead and divide, defending the integrity of the democratic process is crucial.

    By leveraging technology, building capacity, and fostering multi-stakeholder cooperation, we aim to uphold citizens’ rights to freely express themselves, access accurate information, and participate in shaping their nations’ futures.

    We look forward to sharing more updates as we progress. Stay tuned!