Category: Issues

  • Surveillance or Security? Uganda’s Digital License Plates and the Trade-Off Between Privacy and Governance

    In a bold yet controversial move to modernize its transportation system, Uganda introduced digital license plates as part of a sweeping technological transformation. While the government frames the initiative as a step toward improved road safety and crime prevention, the rollout has sparked fierce public debate. Beneath the veneer of progress, the initiative exposes a darker reality: the creation of a 24/7 surveillance grid with worrying implications for privacy and civil liberties.

    The new system, known as the Intelligent Transport Monitoring System (ITMS), is managed by a Russian contractor, Joint Stock Global Security Company (JSGSC), which has no noticeable online presence. The system will transmit real-time location data, routes, and movement patterns for every vehicle fitted with the plates. It intersects with Huawei’s “smart city” CCTV networks, which already blanket urban areas like Kampala. Together, they form an all-seeing infrastructure capable of tracking citizens’ movements, associations, and activities with alarming precision.

    The digital number plates mark a significant shift from traditional alphanumeric plates, transforming them into sophisticated tracking tools. Embedded with Radio-Frequency Identification (RFID) chips and QR codes, these plates are not mere vehicle identifiers: they expose ownership details and enable continuous, real-time monitoring of vehicle movements by the authorities.

    Critics argue this mirrors tactics used by authoritarian regimes to suppress dissent. For example, Russia’s own surveillance apparatus has been weaponized against activists and journalists. Russia has long exported surveillance technology to African nations as part of its soft power strategy, offering tools that empower autocrats while deepening dependency.

    Uganda’s partnership with Global Security is part of a broader trend of international collaborations in technology and infrastructure. In 2022, the government signed a deal with Russian state-owned Rostec to develop a $4 billion refinery. As with past agreements, questions have been raised about data security and privacy. Ensuring clear safeguards and transparency in managing vehicle tracking data will be essential to addressing public concerns.

    The rollout of these plates raises urgent questions about transparency, governance, and accountability. For instance, who has access to this data? How long will it be stored? What safeguards exist to prevent misuse? And with a foreign contractor like Global Security involved, what guarantees do Ugandans have that their data will not be misappropriated or weaponized against them? As Richard Ngamita, our Research Lead, highlighted in his Bluesky thread, the lack of transparency around data storage and access is alarming: “Who audits Global Security? What stops this data from being sold to third parties or used to target opposition figures?”

    Anatomy of a Surveillance State

    Uganda’s surveillance playbook is not new. Over the past decade, the government has mandated SIM card registration (2013) to link phone numbers to national IDs, and imposed a social media tax in 2018, ostensibly to widen the tax base but widely criticized as a means of curbing online dissent. In 2020, it deployed Huawei’s facial recognition cameras to fight crime, yet the system was often used to monitor protests. The government also allegedly used Cellebrite’s UFED technology to hack into the smartphones of activists and political opponents in 2022, and FinFisher spyware was covertly deployed in 2012 to intercept the communications of journalists, activists, and political opponents.

    Each of these measures was justified as a tool for “security” or for national progress, then quickly repurposed for political control. During the 2021 elections, supporters of opposition leader Bobi Wine were tracked via their phones and social media, leading to widespread arrests and violence.

    Although the new digital plates are touted as a way to modernize Uganda’s transportation infrastructure by enhancing road safety, improving traffic management, and reducing crimes like car theft, critics view the initiative as mass surveillance disguised as technological progress. In a country with limited data protection safeguards, this raises pressing concerns about privacy and potential misuse, since the technology will enable real-time monitoring of vehicles used for rallies, investigative journalism, or even medical care.

    The Illusion of Public Safety

    Supporters of this system claim the plates will reduce car theft and improve traffic management. Yet experts question these assertions. A 2023 Wired investigation revealed that license plate readers in the U.S. and Europe have leaked sensitive data, exposing drivers to stalking, fraud, and government overreach. In Uganda, where cybersecurity safeguards are minimal, these risks can be significantly magnified.

    Moreover, the system’s cost is hefty: $190 (UGX 714,300) per plate for new cars, with replacement costs of $40 for cars and $14 for motorcycles, in a country where the majority already struggles with the high cost of living. This creates a two-tiered society. Motorcycle and taxi drivers, a lifeline for Uganda’s informal economy, face fines or confiscation if they cannot pay.

    The Global Crisis on Privacy 

    Uganda’s dilemma reflects a growing global trend that could well become a crisis. From China’s Social Credit System to India’s Aadhaar biometric database, governments are normalizing mass surveillance under the guise of innovation. Even ‘model’ democracies like the U.S. and U.K. face backlash over unchecked police use of facial recognition.

    What sets Uganda apart is the speed and scale of its digital authoritarianism. With weakly enforced data protection laws and a censored press, citizens have little to no recourse. As the Electronic Frontier Foundation warns, “Once surveillance infrastructure is built, mission creep is inevitable.”

    This concern is amplified by the country’s weak legal framework governing surveillance technologies. Key gaps include the absence of a law regulating CCTV and video surveillance, the weak enforcement of the Data Protection and Privacy Act of 2019—which, in theory, prohibits intrusive data collection without clear justification—and the lack of independent oversight to prevent abuses of the Intelligent Transport Monitoring System (ITMS). Without urgent reforms, Uganda risks entrenching a surveillance state with little accountability and significant threats to civil liberties.

    To address the growing concerns surrounding Uganda’s digital license plate system, several measures must be implemented to ensure transparency, accountability, and fairness. Transparency laws should mandate public disclosure of data contracts, storage policies, and access logs to prevent misuse and build public trust. Additionally, an independent, civilian-led oversight body should be established to audit surveillance systems and hold authorities accountable. Given the steep cost of the plates, the government should introduce subsidies or alternative pricing models to make them more affordable for low-income drivers. On a global scale, international pressure, including sanctions on firms like Global Security that facilitate repressive surveillance, could help curb the misuse of such technologies. Implementing these safeguards would strike a balance between security and civil liberties, ensuring that modernization does not come at the expense of fundamental rights.

    While technological advancements can undoubtedly benefit society, they must be implemented with care to ensure that they do not infringe on fundamental rights. The introduction of digital license plates in Uganda represents a pivotal moment, forcing citizens to weigh the supposed benefits of security against the very real risks to their privacy and freedoms.

    The rollout of digital license plates in Uganda is framed as a step toward enhancing national security, a goal that, on the surface, seems beneficial. However, the numerous concerns surrounding the Intelligent Transport Monitoring System (ITMS) raise valid skepticism. The high costs, unclear implementation strategy, and significant privacy risks make it difficult for Ugandans to embrace the initiative fully. Without clear safeguards, transparency, and accountability, these digital plates risk becoming yet another tool for unchecked surveillance rather than a genuine security solution. Until the government provides more answers than questions, the public will continue to demand clarity. Striking a balance between governance and civil liberties is crucial—without it, this project feels less like progress and more like a step toward an unsettling future of pervasive monitoring.

    Privacy is the right to be let alone. The moment that’s gone, everything else crumbles. For Uganda, that moment is now.

    Research By
    Richard Ngamita, Mercy Abiro, and Mable Amuron

  • Community Fakes: A Crowdsourcing Platform to Combat AI-Generated Deepfakes in Fragile Democracies

    As the world moves into the era of artificial intelligence, the rise of deepfakes and AI-generated media poses a significant threat to the integrity of democratic processes, especially in countries with fragile democracies. The integrity of democratic processes is particularly crucial because it ensures fairness, accountability, and citizen engagement. When compromised, democracy’s foundational values—and society’s trust in its leaders and institutions—are at risk. Protecting democracy in the AI era means staying vigilant and maintaining a database of verified AI manipulations to safeguard the truth and the health of free societies.

    In the Global South, political stability is often precarious, and elections can be influenced by mis/disinformation, which is easier than ever to produce. The barrier to creating disinformation is no longer technical skill or cost; these tools are now readily accessible and often free. All it takes is malicious intent to create and amplify false content at scale. There is an increasing risk that authoritarian regimes could weaponise AI-generated mis/disinformation to manipulate public opinion, undermine elections, or silence dissent. Through fabricated videos of political figures, false news reports, and manipulated media, such regimes exploit advanced technologies to sow confusion and mistrust among the electorate, further destabilizing already fragile democracies.

    While social media platforms and AI companies continue to develop detection tools, these solutions remain limited in their ability to fully address the growing threat of synthetic disinformation, especially in culturally and linguistically diverse regions like the Global South. Detection algorithms typically depend on recognizing patterns, such as unnatural blinking, mismatched lip movements, or anomalies in facial expressions, but these models are often trained on Western data that does not account for nuances from the Global South. This limited scope enables deepfake creators to exploit local cultural cues and dialectal subtleties, producing media that automated detection systems struggle to detect accurately. This gap leaves many communities vulnerable to disinformation, particularly during critical events like elections.

    The rapid evolution of deepfake technology has shown the need for a stronger approach that combines human and machine intelligence. Human insight is crucial in identifying context-specific inconsistencies that AI might overlook, making a collaborative model essential for countering these challenges in politically sensitive regions. Recognizing this need, Thraets developed Community Fakes, an incident database and central repository for researchers. On this platform, individuals can join forces to contribute, submit, and share deepfakes and other AI-altered media. Community Fakes amplifies the strengths of human observation alongside AI tools, creating a more adaptable and comprehensive defence against disinformation and strengthening the fight for truth in media by empowering users to upload and collaborate on suspect content.

    Community Fakes will crowdsource human intelligence to complement AI-based detection and, in turn, this will allow users to leverage their unique insights to spot inconsistencies in AI-generated media that machines may overlook while having conversations with other experts around the observed patterns. Users can submit suspected deepfakes on the platform, which the global community can then scrutinize, verify, and expose. This approach ensures that even the most convincing deepfakes can be exposed before they can do irreparable harm. Community Fakes will provide data sets that can be used to analyse AI content from the Global South by combining the efforts of grassroots activists, researchers, journalists and fact-checkers across different regions, languages, and cultures. 

    To further strengthen the fight against disinformation, Thraets is also providing an API, allowing journalists, fact-checking organizations, and other platforms to programmatically access the Community Fakes database. This will streamline the process of verifying media during crucial moments like elections and enable real-time fact-checking of viral content. With the growing need for robust verification tools, this API offers an essential resource for newsrooms and digital platforms to protect the truth.
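    The article does not publish the API’s actual endpoints or parameters, so any client code is necessarily speculative. As a rough illustration, a newsroom integration might assemble filtered lookups along the lines below; the `/api/incidents` path, the parameter names, and the filter semantics are all assumptions, not the documented interface.

    ```python
    # Hypothetical sketch of building a query against the Community Fakes API.
    # The /api/incidents path and every parameter name are assumptions --
    # consult Thraets' official API documentation for the real interface.
    from urllib.parse import urlencode

    BASE_URL = "https://community.thraets.org/api/incidents"  # assumed path

    def build_incident_query(category=None, media_type=None, verified=None, limit=50):
        """Assemble a filter URL for incident lookups (all filters hypothetical)."""
        params = {"limit": limit}
        if category is not None:
            params["category"] = category      # e.g. "Political"
        if media_type is not None:
            params["type"] = media_type        # e.g. "video"
        if verified is not None:
            params["verified"] = str(verified).lower()
        return f"{BASE_URL}?{urlencode(params)}"

    # A fact-checker screening viral election videos might request:
    url = build_incident_query(category="Political", media_type="video", verified=True)
    ```

    The point of the sketch is the workflow, not the exact interface: a newsroom tool narrows the database to the slice relevant to its verification task, then checks incoming viral content against the returned incidents.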

    The launch of Community Fakes comes at a critical time when the world is facing unprecedented challenges in combating disinformation and misinformation. Automated tools alone are not enough, especially in regions where AI may lack the necessary contextual understanding to flag manipulations. The combined power of AI and human intelligence offers the best chance to protect the integrity of information and safeguard democratic processes.

    Thraets invites everyone— journalists, fact-checkers, or everyday citizens—to collaborate in identifying and exposing deepfakes. We encourage everyone to become part of Community Fakes and join the global effort to combat disinformation. You can play a crucial role in protecting the integrity of information during pivotal events like elections by contributing your skills and insights. 

    How to use Community Fakes

    1. Logging Into the Platform
      Begin by navigating to the login page of the platform at community.thraets.org
      You will be prompted to sign in using your email account. Select the desired account or add a new one if it’s not listed.
      Once authenticated, you’ll be redirected to the dashboard.

    2. Editing an Incident
    Screenshot Reference: Edit the incident screen with fields like URL, Type, Category, and Narrative.
    Navigate to the “Edit Incident” page to update or create an entry.
    Fill in the necessary fields:
    • URL: Provide the link to the source of the fake or misinformation.
    • Type: Select the type of content (e.g., image, video, text).
    • Category: Categorize the incident appropriately (e.g., Political, Social, Health).
    • Narrative: Provide a clear and concise description of the issue. For example, “Hanifa, Boniface Mwangi, and the rest. We have one country.”
    Optional fields include:
    • Archive URL: Link to an archived version of the content.
    • Archive Screenshot: Add a URL linking to a screenshot of the archived content.
    Toggle the “Verified” option to confirm authenticity and click Update Incident.
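    Put together, a complete submission mirrors the form described above. The sketch below models one incident as a plain record and checks that the required fields are filled in before submission; the field names and the record layout are assumptions derived from the form labels, not the platform’s actual data model.

    ```python
    # Illustrative incident record mirroring the "Edit Incident" form.
    # Field names are assumptions based on the form labels, not a real schema.
    REQUIRED_FIELDS = ("url", "type", "category", "narrative")
    OPTIONAL_FIELDS = ("archive_url", "archive_screenshot", "verified")

    def missing_fields(incident):
        """Return the required fields that are absent or empty."""
        return [f for f in REQUIRED_FIELDS if not incident.get(f)]

    incident = {
        "url": "https://example.com/suspect-video",   # invented example source
        "type": "video",
        "category": "Political",
        "narrative": "Fabricated clip of a candidate circulating before the vote.",
        "archive_url": "https://archive.is/example",  # preserved copy, per best practices
        "verified": False,  # toggled on only after community review
    }
    ```

    A pre-submission check like `missing_fields` is a simple way to catch incomplete entries before they reach the shared database, which keeps the crowdsourced records useful for other researchers.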

    Best Practices
    • When categorizing content, choose the most relevant option to enhance searchability.
    • Use credible archive tools like Archive.is to document content, ensuring it is preserved even if deleted from its source.
    • Regularly review updates to ensure all incidents remain accurate and useful for the community.

    Alternatively, you can send an email to [email protected] and our analysts will upload the incident for you.

    Visit Community Fakes today to help safeguard the truth and ensure a better-informed world.

  • An Election For The World: The Global South Stakes of The U.S. Presidential Election.

    The phrase “When America sneezes, the world catches a cold” vividly captures the global influence of the United States. As one of the world’s largest economies and a dominant cultural, political, and technological force, shifts in America’s policies, economy, or even social trends often ripple outward, impacting countries far beyond its borders. 

    From financial markets that react to changes in U.S. economic indicators to political movements that draw inspiration from American ideals, America’s actions frequently set off chain reactions that shape the global landscape. Because of this, the world tunes into the U.S. presidential election. Though far from American soil, countries across the Global South are deeply connected to U.S. policy, especially in terms of trade, security, aid, and governance support. Past election results have reshaped international relations, reconfigured trade dynamics, and shifted the balance of power within global institutions.

    Today, on November 5, 2024, all eyes are once again on America. The Global South is bracing for the potential impacts of the U.S. election, aware that its outcome will inevitably shape their futures for better or worse.

    Historically, U.S. foreign policy has profoundly impacted developing nations, at times destabilizing entire economies and governments through economic sanctions and military interventions. In countries like Cuba and Zimbabwe, prolonged sanctions have crippled local economies, resulting in poverty, limited access to global markets, and stunted growth. Meanwhile, American military interventions in Libya, Iraq, Afghanistan, and Syria, carried out under the promise of democracy and freedom, have often left behind enduring instability, weakened infrastructures, and humanitarian crises. While some U.S. interventions fostered development and growth, others have left a legacy of division, economic collapse, and mistrust.

    Although each new U.S. administration, whether Republican or Democrat, brings its own distinct foreign policy approach, these policies often uphold a core commitment to protecting U.S. interests. For the Global South, which frequently relies on multilateral institutions like NATO, the United Nations and the World Bank for developmental aid, trade, and security, the nature of U.S. engagement remains particularly crucial. Under one administration, foreign aid and cooperation may flourish, while another might lean toward a more self-centred, isolationist approach, leaving vulnerable regions with fewer resources.

    African nations, which former President Trump once dismissed as “shithole countries,” have a unique stake in U.S. elections due to the continent’s reliance on foreign aid, security partnerships, and investment opportunities. In a recent poll conducted by Larry Madowo, a CNN International Correspondent, many respondents from African countries expressed support for former President Trump, citing his outspokenness and conservative values. Both major U.S. political parties usually express interest in African affairs. For example, Trump’s Republican-led administration focused on countering China’s growing influence in Africa and growing private sector investments, while the Democratic Party’s policies have prioritised development aid and human rights. However, Biden’s administration has faced criticism for a perceived disconnect between stated intentions and tangible actions, as seen in his limited direct engagement with the continent during his term. President Biden has yet to visit the continent, promising only a trip to Angola in the final weeks of his presidency, while Vice President Kamala Harris has represented the administration by visiting Ghana, Zambia, and Tanzania to strengthen ties.

    When it comes to relations with China and Russia, the world is seeing how these two giants currently dominate the world economic stage. As the U.S.–China/Russia rivalry intensifies, countries in the Global South face pressure to align with the superpower that offers the most promise for economic growth, even at the cost of keeping authoritarian regimes in power, making this geopolitical divide increasingly difficult to navigate. The U.S. election outcome could either ease or heighten these tensions, pushing developing nations to choose sides or strike a careful balance between competing powers. China’s Belt and Road Initiative (BRI), for instance, offers substantial infrastructural investment, which appeals to many developing countries seeking modernization. However, the BRI has also drawn criticism for creating debt dependencies, with some viewing this as a form of economic neo-colonialism. The results of the U.S. election will significantly impact whether countries in Africa, Asia, and Latin America align more closely with the U.S. or China, particularly in light of the emerging influence of BRICS.

    The U.S. has long been seen as a beacon of democracy, yet recent events have shaken this image, weakening its influence over global governments. The January 6th, 2021 insurrection, in which a sitting president actively refused to transfer power and citizens violently protested election results, highlighted some of the deep divisions that have been brewing in America. This post-election violence, coupled with persistent racial and social inequalities—many of which were spotlighted after the killing of George Floyd—eroded America’s moral high ground and credibility in promoting democratic values abroad. Many in the Global South, who have faced international criticism for similar post-election unrest, were quick to point out the hypocrisy. The insurrection served as a reminder that democratic challenges and civil instability are not limited to developing nations, casting a more critical view of American influence in global governance and human rights advocacy. As a result, America is increasingly viewed as less of a beacon of democratic excellence, and the world is watching closely to see how this situation unfolds.

    With the rise of digital threats like foreign interference, misinformation, and cyber surveillance affecting U.S. elections, similar concerns are now emerging in the Global South. At Thraets, we are closely monitoring these digital threat trends and observing how the U.S. addresses them, recognizing that similar tactics may be used to target elections in the Global South. We aim to anticipate and prepare for potential risks to the electoral integrity in our own democracies.

    As these issues surface, widely broadcast on mainstream and social media accessible to all, countries in the Global South are increasingly scrutinizing the ideals that America has long promoted. What exactly is the democracy that America champions? Could a contentious election further erode U.S. credibility? Will it fuel greater skepticism or lessen America’s influence on global efforts toward democratic reform?

    Kamala Harris or Donald Trump, the world is watching…

  • When Technology Enables Abuse: Rana Ayyub’s Battle with Tech-Facilitated Gender-Based Violence

    The rise of digital platforms has empowered independent and citizen journalism and has offered a vital space for voices that challenge power structures. However, these platforms have also become hotbeds for harassment, disinformation, and orchestrated attacks, particularly against outspoken journalists like Rana Ayyub. Rana Ayyub is an Indian journalist, author of Gujarat Files: Anatomy of a Cover Up, and a columnist for The Washington Post. As a vocal critic of the current Indian government and a staunch advocate for human rights, Ayyub has been relentlessly targeted by a coordinated campaign of harassment, character assassination, and disinformation, aimed at silencing her.

    An analysis of thousands of tweets and other online abuse reveals that the harassment directed at Ayyub is anything but random; rather, it appears to be a highly organized and sustained effort to discredit her. The scale is staggering, with relentless daily torrents of abusive tweets, messages, and hashtags aimed at undermining her credibility. Our random sample from this data highlights specific patterns in these attacks: nearly half of the abusive replies to her X (formerly Twitter) account are in English, while the remaining responses are in Hindi and various regional languages. This multilingual assault demonstrates the strategic coordination behind the campaign, designed to damage her reputation across diverse audiences.
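    A tally of this kind takes very little code once each reply has been labeled with a language (in practice a language-identification model would supply the labels; here the labels and replies are invented placeholders). This sketch shows only the counting step, not the collection or detection pipeline used in the investigation.

    ```python
    # Sketch of computing the language breakdown of a reply sample.
    # The sample data below is invented; real (text, lang) pairs would come
    # from a language-identification step applied to the collected tweets.
    from collections import Counter

    def language_share(labeled_replies):
        """labeled_replies: iterable of (text, lang_code). Returns lang -> fraction."""
        counts = Counter(lang for _, lang in labeled_replies)
        total = sum(counts.values())
        return {lang: count / total for lang, count in counts.items()}

    sample = [
        ("placeholder reply 1", "en"),
        ("placeholder reply 2", "en"),
        ("placeholder reply 3", "hi"),
        ("placeholder reply 4", "mr"),
    ]
    shares = language_share(sample)  # {"en": 0.5, "hi": 0.25, "mr": 0.25}
    ```

    Breaking abuse down by language in this way is what surfaces the multilingual character of the campaign: a roughly even split between English and regional languages suggests coordination aimed at multiple audiences rather than a single organic mob.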

    An example of a threat Ayyub faces daily, including calls for violence and rape.

    The harassment extends beyond verbal abuse. Manipulated images, videos, and harmful links are often employed to worsen smear campaigns against her. What’s even more concerning is the high frequency of violent threats, such as calls for rape and murder, that Ayyub encounters nearly daily.

    Several coordinated tactics have been employed to harass and discredit Ayyub, highlighting the organized nature of this campaign:

    • Reply Spam: The investigation tracked over 6,000 tweet mentions, many of which originated from accounts dedicated to flooding Ayyub’s timeline with abusive replies. This method is intended to drown out meaningful conversation and overwhelm her with toxic content, making it difficult for her to engage or respond effectively.
    Rana Ayyub regularly receives thousands of abusive replies and mentions on Twitter.
    • Hashtag Campaigns: Hashtags like #RanaAyyub and #GujaratLies are frequently deployed to amplify disinformation and rally online mobs to digitally stone Ayyub. These campaigns work to paint Ayyub as a foreign agent or propagandist, casting her as a threat to India’s sovereignty and fueling public outrage by appealing to nationalist sentiments. The latest wave of attacks surged after Ayyub was awarded the International Press Freedom Award, with many accounts expressing anger and intensifying the harassment in response to her international recognition.
    • Disinformation Spread: Ayyub has become a primary target of disinformation campaigns on social media, with false claims accusing her of spreading misinformation or acting as a foreign agent against India—tactics aimed at damaging her credibility and deflecting from her journalism. In one instance, allegations of money laundering led to harassment so severe that the UN intervened. Even then, some people criticized the UN’s stance.

    In March 2022, Indian authorities barred her from boarding a flight to London, citing an investigation into alleged financial misconduct; by October, the Enforcement Directorate filed charges claiming misuse of over $324,000 (Rs 2.7 crore) in raised funds. Ayyub has denied all allegations, describing them as attempts to silence her. This weaponization of disinformation undermines her work and sets a dangerous precedent for journalists everywhere. 

    Coordinated Campaigns and Political Affiliations

    A deeper analysis of the accounts spreading this disinformation reveals a coordinated effort, with many profiles showing strong affiliations to pro-BJP (Bharatiya Janata Party) narratives. While no direct involvement from the Indian government or the BJP has been definitively established, the consistent overlap between accounts promoting disinformation and those aligned with BJP content points to a politically motivated campaign against Ayyub.


    Some of the top accounts repeatedly harassing Ayyub show strong affiliations with pro-BJP narratives.

    Some of the most active accounts in the harassment campaign are linked to high levels of engagement with pro-government and nationalist content, which further suggests that Ayyub’s criticism of the current regime plays a significant role in the attacks.

    Government Censorship

    The investigation also highlights instances where actions taken by the Indian government appear to have directly fueled surges in harassment against Rana Ayyub. One notable example occurred on June 26, 2022, when Ayyub shared a notice from X (formerly Twitter), informing her that her post regarding the Gyanvapi Mosque and the farmers’ protests had been blocked at the request of the Indian government. 

    A Twitter notice showing that Ayyub’s tweet was withheld following a government order.

    This censorship not only suppressed her voice but also triggered a fresh wave of abuse, as trolls intensified their attacks, using the blocked content as an opportunity to malign her. The episode highlights the intersection of government censorship and digital harassment, with state actions inadvertently or intentionally emboldening those who seek to discredit journalists like Ayyub.

    This type of government censorship, combined with the growing disinformation campaigns, showcases a troubling intersection of digital violence and state control of the narrative, where journalists like Ayyub become prime targets.

    Self-Censorship Due to Digital Violence

    We hypothesize that the relentless wave of online abuse has taken a profound psychological toll on Rana Ayyub and that she has been forced to self-censor since 2020 as a defense mechanism. Following X’s introduction of the ‘Choose Who Can Reply’ feature, Ayyub has consistently restricted her posts to limit replies, allowing only selected users to respond. Data from our analysis reveals a marked drop in the volume of replies to her tweets, from thousands before 2021 to just hundreds afterwards. While this measure has significantly reduced the volume of abusive replies in her mentions, it has not prevented the broader disinformation campaign targeting her. The smear campaigns, false narratives, and coordinated attacks continue, circulating across various platforms and amplifying the digital violence she faces.

    The dots denote how the tweet replies dropped sharply starting in 2020 as Ayyub began using Twitter’s ‘restrict replies’ feature to manage harassment.

    In an attempt to protect herself, Ayyub frequently tags law enforcement agencies in her posts to bring attention to the violent threats against her life, including rape and murder. Despite these repeated pleas for intervention, law enforcement has not taken decisive action. Ayyub’s self-censorship is a stark example of how digital violence forces journalists into silence, not through formal censorship, but through the overwhelming weight of abuse. Even though she has taken steps to shield herself, the structural issues surrounding moderation and accountability on platforms like X (formerly Twitter) and Instagram remain unresolved. Social media companies have been criticized for their failure to effectively moderate abuse and curb disinformation, particularly in cases involving public figures in politically sensitive environments. The absence of comprehensive safeguards allows harassment to persist unchecked, further discouraging voices that challenge powerful institutions.

    Journalists Under Siege

    The psychological and professional toll of targeted harassment against journalists like Rana Ayyub extends far beyond the individual. It sends a chilling message to the broader journalism community: speaking truth to power can come with life-threatening consequences. The digital space, originally envisioned as a platform for democratizing information and empowering independent voices, has instead become a battleground where journalists are systematically targeted and silenced. Ayyub’s experience highlights the urgent need for stronger protections, from governments and tech platforms, to ensure that journalists can work without the looming threat of violence or retaliation.

    The harassment campaign against Rana Ayyub is emblematic of a much larger crisis facing journalists globally—particularly women and those who challenge authoritarian regimes. The tactics used against Ayyub are not isolated; they are part of a broader pattern of digital repression aimed at suppressing free speech and curbing independent journalism, especially in regions like the Global South. These methods—reply spam, disinformation, and coordinated smear campaigns—serve to intimidate and discredit journalists, while also eroding public trust in independent reporting.

    This case, which is just one of many, underscores the pressing need for more robust protections for journalists operating in digital spaces. Social media platforms must step up their content moderation efforts and implement stronger accountability mechanisms to curb the spread of abuse and disinformation. Without these safeguards, the digital landscape will continue to serve as a hostile environment for those who dare to challenge powerful institutions.

    The critical question that remains is whether these attacks are purely organic, fueled by online trolls and patriotic zeal, or if there is a deeper, more orchestrated effort by political entities, such as the BJP or the Indian government, to suppress dissenting voices like Ayyub’s. The overlap between pro-government content and accounts engaged in the harassment certainly raises concerns about the role of political forces in enabling or even directing such attacks. As the lines between online harassment and political influence blur, addressing this issue nationally and globally becomes increasingly important.

    Research By
    Mable Amuron, Mercy Abiro and Richard Ngamita. 

  • ‘Ruto Lies’: A Digital Chronicle of Public Discontent 

    ‘Ruto Lies’: A Digital Chronicle of Public Discontent 

    The Kenya Finance Bill 2024/25, presented to parliament on May 13, 2024, proposed various tax increases and new fees on essential items and services, including internet data, bread, cooking oil, sanitary napkins, baby diapers, digital devices, motor vehicle ownership, specialized hospitals, and imported goods. The bill quickly sparked widespread protests across Kenya, significantly amplified by social media platforms like X (formerly Twitter) and TikTok. Hashtags such as #RejectFinanceBill2024, #OccupyParliament, #RutoMustGo, and #RutoLies trended widely, fueling the demonstrations.

    In response to these protests, Thraets launched the ‘Ruto Lies’ Portrait, an interactive webpage designed to highlight the grievances of Kenyan citizens against President William Samoei Ruto. This digital portrait comprises over 5,000 tweets from the #RejectTheFinanceBill2024 demonstrations, each dot in Ruto’s face representing a tweet containing the keywords “Ruto” and “lies.” This innovative project aims to spotlight public perceptions of Ruto’s false promises and engage data scientists and civic tech researchers in analyzing these claims.

    The ‘Ruto Lies’ Portrait is both a visual spectacle and a powerful statement. Clicking on each dot allows users to read individual tweets that point out specific instances where the president’s promises were perceived as unfulfilled. This project provides a platform for public sentiment and invites deeper analysis and accountability.

    Our research team meticulously analyzed and filtered over 5,000 tweets to create this portrait. These tweets, collected during the #RejectTheFinanceBill demonstrations, contain various forms of the keywords ‘Ruto’ and ‘lies’. This extensive dataset offers a rich resource for understanding public sentiment and the specific promises that are seen as unfulfilled. We are releasing this research to encourage developers and researchers to support the project by mapping these tweets to real claims or evidence of false promises made by President Ruto. We aim to provide a comprehensive framework for analyzing these claims.
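    A keyword filter of this kind can be sketched in a few lines of Python. The regular expressions and sample tweets below are illustrative assumptions, not the exact filters used to build the portrait.

```python
import re

# Illustrative tweet texts; the real dataset contains ~5,000 tweets
# collected during the #RejectTheFinanceBill2024 demonstrations.
tweets = [
    "Ruto lies about the cost of living #RejectFinanceBill2024",
    "Great rally today in Nairobi",
    "All the LIES Ruto told about jobs",
]

# Match "Ruto" together with any form of "lie/lies/lied/lying",
# case-insensitively and on word boundaries.
RUTO = re.compile(r"\bruto\b", re.IGNORECASE)
LIES = re.compile(r"\bl(?:ies?|ied|ying)\b", re.IGNORECASE)

def matches(text):
    """True if the tweet mentions both Ruto and a form of 'lies'."""
    return bool(RUTO.search(text)) and bool(LIES.search(text))

selected = [t for t in tweets if matches(t)]
print(len(selected))
```

    Word boundaries keep the filter from matching incidental substrings (e.g. "families"), which matters when a dataset of thousands of tweets feeds a public visualization.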

    On Saturday, August 17, 2024, Thraets participated in the HakiHack Hackathon, where we introduced and demoed the ‘Ruto Lies’ Portrait. The HakiHack Hackathon is a two-day event focused on developing tools to enhance democracy, good governance, and civic action. This event was a valuable opportunity for data scientists, statisticians, and civic tech researchers to engage with the Ruto Lies Portrait Project, access the data, and contribute to the mapping of tweets to real-world claims.

    Several resources and ongoing civic tech projects, like those below, could be mapped to the ‘Ruto Lies’ tweet claims shared by Kenyan citizens during the protests.

    For those interested in further exploration, we have made the data and tools available on GitHub and have provided a link to the Twitter data dump. Researchers can access approximately 5,000 tweets related to claims of Ruto’s lies via these resources.

    This project is more than just digital art; it is a call to action for the data science and civic tech communities. Researchers can help hold leaders accountable and ensure that promises made to the public are fulfilled. We invite all interested parties to join this effort and contribute to a more transparent and accountable governance in Kenya.

    For further information or to get involved, contact Thraets at [email protected].

  • Israeli Gas, Kenyan Tears: Investigating Israel-Supplied Riot Control Agents Used in the Kenya Demonstrations

    Israeli Gas, Kenyan Tears: Investigating Israel-Supplied Riot Control Agents Used in the Kenya Demonstrations

    In June and July 2024, East Africa experienced waves of protests, notably in Kenya, where large-scale youth-led demonstrations erupted in response to the contentious Finance Bill 2024. The demonstrations were part of a broader effort to pressure President William Ruto, with activist groups staging protests every Tuesday and Thursday and calling for his resignation due to the government’s fiscal policies and handling of the economic crisis​​. 

    This kind of peaceful protest is an exercise protected as a fundamental human right enshrined in many constitutions and international human rights treaties, including the African Charter on Human and Peoples’ Rights and the International Covenant on Civil and Political Rights. The right is also protected under Kenya’s Constitution, though it is not absolute. Kenya’s Penal Code states that any gathering, meeting, or procession that may threaten the peace or public order can be dispersed by the police, and the means of dispersal are not clearly defined.

    Police violence, however, was not confined to the protests in the Nairobi Central Business District (CBD). A heightened police presence was observed in the suburbs of Githurai, Zimmerman, Pipeline, and Ongata Rongai, just miles from the CBD, on the nights that preceded the protests, with police deploying non-lethal munitions like tear gas and water cannons inside residential areas long into the night. Over 60 deaths related to the protests have been confirmed by the Kenya National Commission on Human Rights, some of them from indiscriminate police action in Ongata Rongai and Githurai. This further raises questions about police conduct during peaceful protests.

    Footage from these demonstrations provided an opportunity to study and analyse the riot gear used by security forces, expanding our digital library research as part of the Tech Misuse and Abuse Project.

    Using open-source investigation methods, Thraets verified over 40 videos and 120 images documenting the demonstrations in Kenya, particularly around the #RejectTheFinanceBill2024 and #RutoMustGo campaigns. These images and videos, shared on social media platforms like X (formerly Twitter), Facebook, Instagram, and TikTok, documented four different demonstrations where tear gas was misused. Africa Uncensored and the team at Thraets, a network of open-source researchers, confirmed the location, date, and validity of these events, which were crucial for our investigation.

    The Connection to Israel 

    Our open-source research revealed that security forces used tear gas grenades, most of which were manufactured by ISPRA Ltd in Israel. ISPRA Ltd specialises in developing and manufacturing devices for riot control, crowd management, anti-terror equipment, and police gear. The company’s products include tear gas grenades, anti-riot guns, and various types of ammunition, including those used by security operatives in the Kenyan protests​​.

    ISPRA Ltd is an Israeli company founded in 1969, specialising in producing tear gas and other ‘non-lethal’ ammunition. With the tagline, ‘smart solutions for riot control,’ ISPRA exports anti-riot gear worldwide and is a major supplier to Israeli police and military. However, they maintain a limited online presence, with minimal information available on their exports, financial status, employee count, or annual reports. Their LinkedIn page shows 84 associated employees with no posts or articles.

    Many companies, such as ISPRA, do not maintain a prominent online presence, but market their products at arms fairs worldwide. To trace and identify their products, we tracked their participation in Defense Exhibitions and Arms Fairs throughout the years. ISPRA has exhibited at numerous Arms Fairs, including Milipol in Paris since 1993, ADEX in Azerbaijan, KADEX in Kazakhstan, ShieldAfrica in 2021, and many others.  At ADEX 2018, they promoted their Cyclone Anti-Riot Drone System.

    This is not the first time an Israeli company has supplied equipment to an African country. In early 2001, the Zimbabwean Financial Gazette newspaper reported that Zimbabwe had sought at least 30 riot control vehicles as part of a $10 million deal with another Israeli company called the Beit Alfa Trailer Company (BAT). 

    The ‘non-lethal’ tear gas canisters collected by demonstrators during the #RejectFinanceBill2024 protests included impact rounds manufactured by ISPRA Ltd between 2017 and 2022. These rounds were specifically designed to manage rioting crowds in various ways, ranging from causing non-lethal pain to fully incapacitating people without causing permanent harm. The purpose is to control both individual targets and larger groups effectively.

    The equipment identified and documented by our team included products like 37/38/40 mm Anti-Riot Guns, Multi-effect grenades, and 37/38mm ammunition such as the G2020-CS/CS (MULTI EFFECT GRENADE – CS SMOKE + STUN & CS POWDER), C850-XRB (37/38MM ROUND – 24 BULLET BALLS), and C850-1CS (37/38mm ROUND – CS SMOKE), among others.

    (C850-XRB – impact rounds and balls )

    The image above, from the Kenyan riots, shows an ‘Impact Ammunition’ with the product label “C850-XRB,” which matches the labels from products produced by ISPRA Ltd as shown in the 2019 ISPRA Catalog.

    Another example item is the C850-1CS, which also matches the labels from ISPRA Ltd products as shown in this Instagram post.

    The label ‘Model 2020’ on some of the tear gas canisters seen in the videos and images indicates a specific model or version, reflecting updates in design, formulation, or regulatory compliance, such as advancements in safety features, dispersal methods, or changes in the chemical composition of agents used, like CS gas or other irritants. This particular model was mapped to the Multi-effect grenades G2020-CS/CS. Some protesters, however, mistook the ‘2020’ label for an expiry date, as in this video shared here, which is not correct. Another example is this video from 2023, in which a protester makes the same claim in Kisumu city.

    During the protests, Kenyans saw, for the first time, an orange-red substance released onto protesters, raising questions about what exactly it was and how dangerous it might be. It was immediately dubbed “Agent Orange”. Netizens quickly circulated conspiracy theories (here, here and here) about who was responsible for the new type of tear gas.

    “Agent Orange” was a chemical herbicide and defoliant that was used by the U.S. military as part of its herbicidal warfare program, Operation Ranch Hand, during the Vietnam War from 1961 to 1971. 

    According to articles and online reports such as Nation Media, we believe the strange tear gas is Oleoresin Capsicum (OC) spray, derived from capsaicin, the active ingredient in chilli peppers, which falls under the same Multi-effect grenade G4040-CS/* category.

    Based on photos from the #RejectTheFinanceBill2024 demonstrations in Kenya, Thraets identified grenades and canisters visually similar to ISPRA Ltd (Israel) G2020 products, matching exact serial number and part number patterns from the catalogues ISPRA Ltd publishes on its website.

    Archive X @benny_gitau (https://archive.is/PQMYC)
    Google Image Reverse Search of ISPRA LTD G2020-CS Grenade

    The Rise in Crowd Control Tech from Israel: Intermediaries and Questionable Deals

    In 2014, the Times of Israel reported a 40% increase in Israeli weapons exports to African countries compared to the previous year, according to Defense Ministry data. This surge likely included tear gas purchases from ISPRA Ltd., which we estimate began between 2010 and 2015. This assumption is supported by an investigation by the Kenyan Auditor-General into the 3.8 billion KES spent by the Kenyan Ministry of Interior, which is responsible for policing functions and internal security. This investigation was documented by John Ngirachu in Nation Media. The report indicated that ISPRA Ltd. received 272 million KES, with significant payments made on June 30, 2015, the last day of the financial year, into accounts at Kenya Commercial Bank and the National Bank of Kenya. Although the article does not specify tear gas, it lists arms, insurance, vehicles, and helicopter repairs among the purchases (Standard News Archive).

    The Star also reported on the same mysterious account No. 1109896077 at KCB, Moi Avenue Branch, which transacted over Sh8.7 billion without the authority of the National Treasury as required by law. This was the same account flagged earlier as having been used to pay ISPRA Ltd.

    Our research analysts utilised Getty Images to analyze images tagged with ‘Kenya Tear Gas’ from as early as the 2000s. This analysis, which included the examination of 1,663 photos, confirmed that, before 2010, Kenyan police primarily used tear gas from the French company, Nobel Securite. This conclusion is based on the examination of the launchers and tear gas canisters depicted in the images. ISPRA Ltd.’s specific tear gas canisters first appeared around 2010, marking a shift in the type of tear gas used by Kenyan police.

    Our research shows the transition from French-made Nobel Securite tear gas to Israeli-made ISPRA Ltd tear gas around 2010, coinciding with the reported increase in Israeli arms exports to Africa. 

    Timeline of Tear Gas Usage and Protests in Kenya

    2005:

    • Police used tear gas during various protests, with evidence showing tear gas canisters from Nobel Securite, a French company.
    • Example: 2005 anti-Referendum protest

    2006:

    2007-2008:

    2010-2012:

    • The first documented use of ISPRA Ltd’s Model-2020 tear gas canisters in Kenyan protests, as seen in these Getty images, where a policeman holds a blue and orange tear gas canister.
    • Example: 2012 photo from Eastleigh

    2014:

    2023-2024:

    • Protests reignited in response to the Finance Bill, which proposed significant tax increases and other financial reforms in June 2023 and June 2024. Demonstrations continued throughout June and July, with heavy police response and the use of tear gas procured from ISPRA Ltd.
    • Example: Finance Bill Protests, July 16

    The consistent use of intermediaries and secondary channels by Israeli companies in Africa, often through dubious means, has been well-documented. This practice, highlighted by Yotam Gidron in Israel in Africa, exemplifies Israel’s ‘middle-man’ approach to diplomacy on the continent.

    Israeli companies often operate through local intermediaries to facilitate deals, a practice that has raised concerns about transparency and ethics. This method has allowed certain Israeli firms to navigate complex regulatory environments and political landscapes while distancing themselves from direct involvement in potentially controversial transactions. This strategy also allows them to circumvent restrictions and embargoes that might be in place.

    Gidron notes that this approach has been a key aspect of Israel’s engagement in Africa, with intermediaries playing crucial roles in arms deals, security contracts, and other business ventures. The use of intermediaries is seen as a way to maintain plausible deniability and reduce direct accountability for actions taken by local agents on behalf of Israeli interests​.

    HSN Codes and Export Details

    Using open-source platforms like Volza Grow Global, we confirmed recent exports of tear gas to Kenya from ISPRA Israel Product Co Ltd, which made four export shipments. The platform also mentions that its top export market is Kenya, with top export product categories including HSN Codes 7610900000, 8421399000, and 9020000000. HSN code stands for Harmonized System of Nomenclature, a 6-digit uniform code that classifies over 5000 products and is accepted worldwide. Recent exports to Kenya under these codes include:

    • HSN Code 9306900000: ‘42 STEEL DRUMS PARTS FOR GRENADE CLASS 6 1 4 1 UN 1700’ which are tear gas grenades
    • HSN Code 7610900000: ‘1X20 FT CNTR STC 3 UNITS PORTABLE ALUMINIUM LADDER 7 CARTONS TACTICAL GAS MASK FILTERS STEEL DRUMS PARTS FOR G808 ST GRENADE CLASS 6 1 4 1 UN 1700’ 
    Archive – Volza Grow Global (https://archive.is/F8nE0) – 2023 data
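    To make tariff codes like those above easier to read, a national 8- or 10-digit code can be split into its 2-digit HS chapter and 6-digit HS subheading, since national codes extend the international 6-digit HS code. The chapter descriptions in this sketch are informal summaries for illustration, not official WCO text.

```python
# Informal, illustrative summaries of a few HS chapters (not official text).
CHAPTERS = {
    "76": "aluminium and articles thereof",
    "84": "machinery and mechanical appliances",
    "90": "optical, measuring and medical instruments",
    "93": "arms and ammunition",
}

def parse_hsn(code):
    """Split a national tariff code into (chapter, 6-digit HS code, description)."""
    digits = code.strip()
    if not digits.isdigit() or len(digits) < 6:
        raise ValueError(f"not a valid HSN code: {code!r}")
    chapter = digits[:2]          # first 2 digits: HS chapter
    hs6 = digits[:6]              # first 6 digits: international HS subheading
    return chapter, hs6, CHAPTERS.get(chapter, "unknown chapter")

print(parse_hsn("9306900000"))
```

    Applied to the shipments above, code 9306900000 falls under chapter 93 (arms and ammunition), consistent with the grenade parts listed in the export records.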

    The substantial volume of tear gas grenades and related riot control equipment being exported to Kenya suggests a continued and perhaps increased use of tear gas for crowd dispersion. This raises significant concerns about the long-term health impacts on the Kenyan population and the potential escalation in the use of force in managing public demonstrations.

    Quantities of anti-riot equipment exported to Kenya, with the consignee listed as the Kenya Police

    Health Impacts

    According to a recent article by Nation Media, over 958 tear gas canisters were thrown in one night in Nairobi’s Githurai suburb, some of which were thrown into the houses in the suburb. This incident underscores the urgent need for more thorough investigations into the health impacts of tear gas exposure. Tear gas, commonly composed of chemicals like CS (2-chlorobenzalmalononitrile) or CN (chloroacetophenone), is designed to cause temporary incapacitation by irritating the eyes, respiratory system, and skin.


    Some individuals in Kenya exposed to tear gas have reported effects here, here, here, here and here. The health impacts can extend beyond temporary discomfort:

    • Respiratory Issues: Exposure to tear gas can lead to chronic respiratory problems, including asthma and bronchitis. The chemicals can cause severe inflammation of the airways, making breathing difficult and potentially leading to long-term damage.
    • Ocular Damage: Tear gas exposure can result in temporary blindness and long-term eye damage. Symptoms include severe irritation, tearing, and conjunctivitis. In some cases, it can lead to permanent vision impairment.
    • Dermatological Effects: The skin can suffer from severe burns and rashes upon contact with tear gas. Prolonged exposure may lead to chronic skin conditions and infections due to the compromised skin barrier.
    • Reproductive Health: There have been reports of miscarriages and menstrual irregularities following tear gas exposure. The exact mechanisms are not well understood, but the chemicals may disrupt hormonal balances and affect reproductive organs.
    • Genetic Mutations: Although less well-documented, there is concern about the potential for genetic mutations caused by the chemical components of tear gas. This could have far-reaching implications, particularly for children exposed during crucial developmental stages.

    Conclusion

    Israel’s engagement with African nations is evolving, but perhaps not in the expected direction. While recent years have seen Israel exporting advanced surveillance technologies to countries across the continent, there’s a parallel trend of supplying traditional crowd control equipment such as tear gas canisters. This shift to ‘less-than-lethal’ riot gear demonstrates the multifaceted nature of Israel’s strategic relationships in Africa.

    The expansion into supplying tear gas, as seen with companies like ISPRA Ltd., raises questions about the broader implications of Israel’s involvement in African security matters. As Israel leverages its expertise in surveillance and crowd control technologies for diplomatic and economic gains in Africa, concerns grow about the potential consequences. The deepening of dictatorships and weakening of democracy across the continent can no longer be ignored.

    In light of these developments, it is imperative to critically examine how Israel’s involvement in African security matters might be impacting the political landscape. The increasing use of Israeli-supplied crowd control equipment, such as tear gas, not only affects public health but also influences the dynamics of civil unrest and state responses. This calls for a nuanced understanding and rigorous scrutiny of the long-term consequences of such exports on the democratic fabric and human rights conditions in African countries.

    Our cataloging of these non-lethal weapons is an ongoing project. If you have any images or videos not featured in this post, we welcome your contributions. Please share them with our research team at [email protected] to help expand this important resource.

    The investigation into tear gas used in the recent Kenya demonstrations was led by the Thraets team, in collaboration with the Africa Uncensored team.

  • “You Can’t Fool Us”: Introducing ‘Spot the Fakes’ Quiz

    “You Can’t Fool Us”: Introducing ‘Spot the Fakes’ Quiz

    Previously, identifying manipulated images was a relatively easy task, since photoshopped pictures often contained obvious inconsistencies: mismatched perspectives (say, a peasant photo against a supercar background), unnatural lighting, or awkward combinations of elements.

    However, the emergence of Generative Adversarial Networks (GANs), a form of artificial intelligence (AI) for generating and manipulating images, has ushered in a new era of synthetic media creation.

    Today’s AI-generated images are becoming increasingly sophisticated, making the line between reality and fabrication ever harder to draw. More worryingly, we are starting to see these images used to fuel disinformation campaigns, especially during elections, posing a threat to democracy.

    It is on this foundation that we’ve built the Spot the Fakes quiz, an educational game designed to develop and sharpen the critical analytical skills of youth, women, civil society actors, content creators, journalists and politicians, helping them preserve their integrity by not sharing misinformation.

    We aim to:

    1. Enhance digital literacy: By exposing users to a variety of synthetic and genuine content, the platform will help individuals become more adept at recognizing subtle signs of manipulation.
    2. Build critical thinking: Through interactive challenges, users will learn to question and analyze the content they encounter, developing a healthy skepticism towards information presented on social media platforms.
    3. Support journalistic integrity: For media professionals, the platform will serve as a valuable training tool, helping them maintain high standards of accuracy in an increasingly complex information landscape.

    The Quiz

    The Spot The Fakes quiz gives users the opportunity to dive into the world of AI-generated synthetic media and learn how to distinguish the authentic from the fake. On visiting the site, users either accept or decline a consent form; once consent is given, each session presents a series of 10-15 carefully curated media items.

    The quiz is currently limited to images, but future versions could include video, audio or even text-based content, each challenging the user to make a critical decision: is this real or fake? It’s a simple binary choice, but one that requires keen observation and analytical thinking, as discussed earlier.

    The quiz employs a simple randomized selection process, drawing from a diverse pool of content that includes work by the Ugandan photographer Watanda Photography and previously fact-checked content from African civic tech organizations.
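    The session draw described above can be sketched as follows. The pool, function names, and scoring logic are hypothetical illustrations of the design, not the quiz’s actual code.

```python
import random

# Illustrative content pool: (media_id, is_fake) pairs. Real entries would
# mix authentic photographs with fact-checked fakes.
POOL = [(f"img_{i:03d}", i % 2 == 0) for i in range(40)]

def draw_session(pool, rng, low=10, high=15):
    """Randomly pick 10-15 items for one quiz session, without repeats."""
    k = rng.randint(low, high)
    return rng.sample(pool, k)

def score(session, answers):
    """Count answers that match the ground-truth is_fake labels."""
    return sum(1 for (_, is_fake), guess in zip(session, answers) if guess == is_fake)

rng = random.Random(42)  # seeded here only so the sketch is reproducible
session = draw_session(POOL, rng)
print(len(session))
```

    Sampling without replacement guarantees a user never sees the same item twice in one session, while the randomized draw keeps repeat sessions fresh.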

    While we maintain strict user anonymity, we do collect and analyze aggregated data. However, no personal or sensitive information is collected at any point in the game, now or in the future.

    Spot the Fakes is more than just a game; it’s a training ground for digital discernment. In a world where the line between fact and fiction is increasingly blurred, we’re equipping newsrooms, journalists & Fact-Checkers with the skills they need to navigate the complex information landscape. Join us in this mission to foster a more informed, critically thinking online community.

    While journalists, CSOs and fact-checkers can learn interactively, the quiz is also open source, which matters for an educational product. Any non-profit or individual will be able to download the source code from our GitHub repository in the coming weeks and host a similar quiz with their own content. Stay tuned! We’re currently working on the repository and documentation to help with this.

    The creation of the quiz has been supported by the Africa Digital Rights Fund, administered by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA).

    By Mark Okello and Samson Monyluak

  • Synthetic Media in Rwanda’s 2024 Elections

    Synthetic Media in Rwanda’s 2024 Elections

    We’ve all seen them—fake images that depict political leaders in a less-than-flattering light, deepfakes of political endorsements that never happened, or manipulated videos spreading inflammatory messages against important figures. The ease with which this content is created and spread emphasizes the significant impact AI-generated media can have on society, especially in politically charged environments.

    A recent example is a clearly AI-generated image, widely shared on platforms like X (formerly Twitter) and Facebook, misleadingly suggesting that Gen Z in Uganda were rioting in the wake of Kenya’s riots, an event that never actually happened. The image was posted by various individuals and gained significant engagement. For instance, a copy posted by X user Nyakundi received 19,000 likes and over 600,000 views, while the same photo shared by @inside_afric garnered over 10,200 views. These instances highlight the potent reach and influence of synthetic media in shaping public perception.

    @inside_afric – https://archive.is/PyWTy 
    @C_NyaKundiH – https://archive.is/wip/KWjFC

    Another example comes from the recently concluded South African elections, where deepfakes and AI-generated videos of Joe Biden and Donald Trump circulated on social media. One deepfake showed Donald Trump endorsing Umkhonto we Sizwe (MK) and urging South Africans to vote for the party, notably circulated on platforms like X (formerly Twitter). Another involved an AI-generated video of Joe Biden falsely claiming that if the ANC won the election, the USA would impose sanctions and declare South Africa an enemy of the state. Additionally, a manipulated image showed Julius Malema of the Economic Freedom Fighters (EFF) crying, intended to mock his loss in the court of public perception.

    AI-gen Image of Julius Malema of the Economic Freedom Fighters (EFF) crying

    In the wake of Rwanda’s recent elections, we have seen image manipulations that often exaggerate President Paul Kagame’s facial features, turning his cheeks into caricatures. These examples illustrate the growing prevalence of synthetic media in the region.

    Synthetic media can be defined as content that is artificially created or manipulated using digital technologies, often leveraging advanced techniques such as artificial intelligence (AI) and machine learning. Synthetic media includes deepfakes, which are highly realistic but fake portrayals of famous people, making them appear to say or do things they never did, and cheap fakes, which are low-quality forgeries or manipulations that deceive viewers without using advanced AI. It also encompasses generative media, such as AI-generated text, images, music, or video, based on training data. Synthetic media can be used for a variety of purposes, from entertainment and art to malicious uses like spreading misinformation, creating fake news, or conducting fraud. These synthetic media creations, while often crude, have the power to manipulate public perception, distort reality, and incite unrest.

    In an era where information flows seamlessly across digital channels, synthetic media, even while lacking sophistication and often appearing crude, wields a surprising influence on public perception. These “cheap fakes,” born of basic AI tools, can sway public perception and shape critical discourse. Nowhere is this more evident than in electoral processes, where maintaining information integrity is paramount.

    The rise of Generative AI (Gen-AI) has introduced a unique set of challenges and risks to information environments, particularly in the context of elections. In this article, we narrow our focus to Rwanda’s recently concluded 2024 general elections and explore how synthetic media is used in such scenarios. Synthetic media can significantly threaten the integrity of electoral processes by enabling the spread of misinformation and potentially undermining public trust in political institutions and media, a trend observed in various global elections. This phenomenon is now on the rise in Sub-Saharan Africa, as observed in the examples shared earlier.

    We conducted a comprehensive examination of synthetic media on social media platforms including X (formerly Twitter), Meta (Facebook/Instagram), TikTok, and YouTube from May 2024 to July 2024, the crucial pre-election period in Rwanda. We searched with specific key terms in both English and Kinyarwanda, leveraging crowdsourced annotations to identify synthetic content. Our analysis revealed only a minimal, and often barely identifiable, presence of AI-generated media. The synthetic media we did encounter was of very low quality, involving simple manipulations such as face swaps and face enlargement made with basic tools like Photoshop. This finding captures the current state of synthetic media use in Rwanda’s electoral context, providing a neutral yet insightful perspective on its potential impact.
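    Resolving crowdsourced annotations typically comes down to a majority vote per post. The sketch below is a hypothetical illustration of that aggregation step; the post IDs and labels are invented.

```python
from collections import Counter

# Illustrative crowdsourced annotations: post_id -> labels from annotators.
annotations = {
    "post_a": ["synthetic", "synthetic", "authentic"],
    "post_b": ["authentic", "authentic", "authentic"],
    "post_c": ["synthetic", "authentic"],  # tie -> undecided
}

def majority_label(labels):
    """Return the majority label, or 'undecided' on a tie."""
    (top, n), *rest = Counter(labels).most_common()
    if rest and rest[0][1] == n:
        return "undecided"
    return top

resolved = {post: majority_label(labels) for post, labels in annotations.items()}
print(resolved)
```

    Marking ties as undecided, rather than guessing, keeps ambiguous posts out of the final counts of synthetic media.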

    Our research identified five key themes dominating the synthetic media landscape, in the context of Rwanda’s 2024 election:

    1. Manipulation of President Paul Kagame’s Likeness: These manipulations were mainly alterations to President Kagame’s facial features, expressions, and overall image, which could have influenced public perception of Rwanda’s president-elect. The edited images have been widely circulated as memes and GIFs, intended either to mock the president-elect or to elevate his image, as in the photoshopped image of him with the famous footballer Cristiano Ronaldo. Nevertheless, the content we encountered was consistently unsophisticated and of low quality. Here are some examples:

    2. Content Exploiting Rwanda-D.R. Congo Tensions: There are ongoing diplomatic and security issues between Rwanda and the Democratic Republic of Congo. These tensions intensified with a recent UN report accusing Rwanda of aiding the M23 rebel group that is battling Congolese forces in eastern DRC. Synthetic media related to these tensions frequently featured manipulated images of President Félix Tshisekedi, depicting him as chubbier than he is or as a chicken thief being chased by locals. This trend highlights how regional conflicts can be weaponized through synthetic or AI-generated content to sway public opinion and/or exacerbate existing tensions.
    Cheapfake depiction of Pres. Tshisekedi running
    Cheapfake depiction of Pres. Tshisekedi overweight

    3. Burundi-Rwanda Relations in Focus: In January 2024, it was reported that relations between Rwanda and Burundi had deteriorated again after Burundian President Évariste Ndayishimiye renewed accusations that Rwanda was financing and training the RED-Tabara rebel group. The research observed a few manipulated media pieces concerning the complex relationship between Rwanda and Burundi. These frequently featured altered imagery of President Ndayishimiye and underscore the potential for synthetic media to impact not only domestic politics but also regional diplomatic dynamics. For example, we found a rudimentary cheap fake video on X (formerly Twitter) that used a popular meme with a face swap of the Burundian president slapping a face swap of Rwanda’s president.
    Face swap of the Burundian president slapping the face swap of Rwanda’s president.

    4. Amplification of Synthetic Pro-Kagame Dance Content: Our analysis revealed a significant trend: supporters of President Kagame actively shared deepfake videos of him performing dance moves popularized by youth on TikTok, as seen below. These videos, primarily promoted by young members of his Rwandan Patriotic Front (RPF) party, feature popular Kinyarwanda songs praising Kagame. This illustrates the strategic use of synthetic media by the younger generation to enhance political campaigns and manage the president’s image.
    AI-gen dance videos of Pres. Kagame

    5. Less Sophisticated Deepfakes (Cheap Fakes) Targeting President Kagame: Our investigation uncovered videos that were rather clumsy deepfakes. In one of these videos, President Kagame’s likeness is superimposed over a popular musician performing at a concert. We also found videos of Kagame singing, manipulated to appear as though he was participating in contemporary musical trends, likely to portray him as a modern and relatable leader. In another video, in which Kagame was actually talking about President Tshisekedi, his face is manipulated to look like a ghoul, highlighting concerns about the regional focus and potential impact of these deepfakes.
    DeepFake of Pres. Kagame Singing
    DeepFake of Pres. Kagame Speaking about Pres. Tshisekedi
    DeepFake video of Pres. Kagame dancing on a stage

    While our research finds that the majority of the synthetic media identified appears to be non-malicious, the persistence of cheap fakes and other manipulated content targeting political figures is deeply concerning. This trend raises critical questions about the potential misuse of AI technologies in the Global South’s electoral processes, particularly in Rwanda, and its dire implications for democratic integrity.

    The rapid proliferation of such synthetic media presents several challenges. It can confuse voters, manipulate public opinion, discredit politicians, and ultimately undermine the legitimacy of the electoral process. Moreover, the mere existence of deepfake technology, and the fact that it is getting increasingly difficult to distinguish between what is real and what is not, can create a ‘liar’s dividend.’ The liar’s dividend suggests that genuine recordings such as documents, images, audio, and videos can be dismissed as fake, even after they have been proven to be true. Consequently, even after a fake is exposed, it becomes more challenging for the public to trust any information related to that particular topic. This further erodes trust in the fourth estate and political discourse. As an example, the 2023 Nigerian election was inundated with deepfakes, kicking this conversation into high gear. The populace watched videos of US public figures like Elon Musk and Donald Trump applauding Labour Party presidential candidate Peter Obi. These endorsements appealed to younger voters, but they were deepfakes. The sentiment, though, remained among some voters who wholeheartedly believed the videos to be true.

    This research was aimed at contributing to a broader understanding of how synthetic media is influencing political landscapes in emerging democracies. Our findings in this short study highlight the pressing need for enhanced media literacy, new pre-bunking mechanisms, and potential regulatory frameworks to effectively address the challenges posed by AI-generated content. The rising use of synthetic media, with its capacity to spread misinformation and undermine public trust, necessitates proactive measures to protect the integrity of electoral processes. Strengthening media literacy is crucial to equip the public with the skills to critically evaluate and discern between real and fake content. Implementing strategies in which potential misinformation is anticipated and debunked before it even spreads can help mitigate the impact of these deepfakes and cheap fakes.

    In examining the specific case of Rwanda’s 2024 elections, we provide a few insights into the current state of synthetic media in Sub-Saharan Africa and its potential impact on democratic processes. Our findings underscore the need for comprehensive strategies to safeguard information integrity in the age of artificial intelligence. These strategies include fostering collaboration between policymakers, technology platforms, and civil society to develop effective solutions and create a resilient information environment. By addressing these challenges, we can better protect democratic values and ensure that elections remain fair and transparent in the face of evolving technological threats.

  • It Is Becoming Impossible to Do Internet Research

    It Is Becoming Impossible to Do Internet Research

    Thirty years ago, for a person on the African continent to connect with someone in, for example, Australia, they would have to send a letter that would take many weeks to be delivered. However, these days, connection is as simple as clicking a button. The internet has brought connectivity and made so much possible. It has revolutionized communication, enabling people to connect across vast distances instantly. It has democratized access to information, empowering individuals with knowledge and resources that were previously inaccessible.

    The internet has fostered innovation and entrepreneurship, creating new opportunities for businesses and individuals. However, alongside these positive changes, the internet has become a battleground for misinformation, disinformation, and malicious activities. There is a need for continued vigilance and research to understand and mitigate these emerging threats.

    In the early 2000s, when the East African Internet was still in its infancy, I founded an Internet cafe in Kampala, which started my digital research journey. In the years that I ran the internet cafe, I observed first-hand the risks and interactions that internet users face, creating a new trajectory for my life. Curiosity and passion drove groundbreaking work, and I was fortunate to be part of that pioneering wave.

    For more than 10 years, I have been studying the internet, the online behaviors of people, and the misuse of digital platforms by bad actors. I found myself at the forefront of digital research analysis, shedding light on the power of digital platforms to shape public discourse. My research on internet behaviors began in 2011 with the Google Spam team, where I focused on understanding invalid click spammers. This experience honed my skills in Open Source Intelligence research.

    In addition to my work on digital platforms, I collaborated with remote communities and organizations like Refunite, which develops tools for refugees, and Medic, which supports health workers providing care in the world’s hardest-to-reach communities. These experiences deepened my understanding of the vulnerabilities faced by marginalized groups, further enriching my perspective on the intersection of technology and society.

    My research continued during the 2016 Uganda elections, a period when social media emerged as a crucial battleground for hate speech and political debates. Through these experiences, I have developed a keen understanding of internet behaviors and their impact on public conversations.

    This work has taught me a lot about how these harmful groups think, act, and exploit the technology systems that we heavily rely on. However, none of this was taken seriously until eight years ago. In 2016, researchers on the continent started to take digital misinformation and disinformation seriously. This shift was driven partly by the hate speech and violence of Kenya’s 2007–2008 post-election crisis, triggered by widespread allegations of vote-rigging, political tension, and ethnic divisions, and partly by the surge of social media commentary around the 2016 Ugandan presidential election, which was marked by reports of election irregularities, voter suppression, lack of transparency, and the government’s use of security forces to intimidate opposition supporters.

    In the early days, digital research was easily accessible, even for researchers with limited budgets. The main focus was on the pursuit of knowledge and understanding. The atmosphere was open and collaborative, allowing those with just a bit of curiosity and dedication to make significant contributions to the field. Publicly available APIs (Application Programming Interfaces) and readily available data streams from social media gave researchers the tools they needed to collect and analyze data without facing major financial or access barriers. One example was Netvizz, a now-defunct tool that allowed researchers to download data from Facebook before the company shut it down. This accessibility created a diverse community of researchers from different backgrounds, including academics, independent investigators, and hobbyists, all working together to uncover insights and drive innovation. This era was marked by groundbreaking discoveries and significant advancements in understanding digital behaviors and online interactions.

    Today, social media research has undergone significant changes. It has become an essential tool for researchers like myself, journalists, and investigators who monitor activities such as influence operations, disinformation campaigns, and online narratives surrounding major events like elections. However, recent changes by platforms such as X (formerly known as Twitter) are making this open-source investigation work significantly more challenging, particularly for independent researchers and those in regions with limited or almost zero financial resources.

    “Social Media research has become an essential tool for researchers, journalists, and investigators who monitor activities such as influence operations, disinformation campaigns, and online narratives surrounding major events like elections.”

    ~ Ngamita

    In the past, researchers could easily access and analyze social media content through publicly available APIs and data streams. Today, however, platforms are restricting this access by requiring payment and licensing for social data mining partners. This is a significant problem for researchers from the Global South, who face significant financial barriers. What used to be an open environment for examining online conversations is becoming an exclusive space primarily available to well-funded organizations.

    And it is becoming harder. Meta recently announced the shutdown of CrowdTangle in August 2024, after shutting down Graph Search in June 2019. CrowdTangle significantly enhanced the coverage of misinformation over the years and provided unique access to trending topics, public accounts, communities, and viral posts on platforms like Facebook, Instagram, and Reddit—information that would otherwise be largely inaccessible. Although the company says its replacement, the Meta Content Library (MCL), is a better tool for researchers, a joint investigation by Proof News, the Tow Center for Digital Journalism, and the Algorithmic Transparency Institute found that the replacement tool is far less transparent and accessible than CrowdTangle, creating yet another problem for researchers.

    TikTok’s unclear data policies make matters worse. Although TikTok, as one of the fastest-growing social media platforms, collects user data from all over the world, it allows only researchers in the US and Europe to apply for access to its API. This makes it exponentially more difficult for Global South researchers to study misinformation and disinformation crises in and for highly volatile regions like the D.R. Congo, Sudan, and Gaza. It goes without saying that this locks out many independent actors, including journalists and academics who rely on open data to uncover storylines, monitor influence efforts, and analyze the effects of social media platforms in the Global South. The commercial interests of these platforms conflict with the necessity for clarity on how social media influences conversations, politics, and societies, especially in regions facing higher risks of instability.

    As an example, recent challenges in monitoring Sudan’s conflict, shaped in part by Saudi and UAE proxy interests, and the ongoing influence operations around it have highlighted these issues. Military clashes between the Sudanese Army and the paramilitary Rapid Support Forces (RSF) have resulted in the deaths of hundreds and the displacement of thousands of Sudanese, underscoring a complex interplay of domestic, regional, and global actors who have contributed to this conflict. Independent researchers have found their access curtailed just when the investigation of disinformation became most critical.

    Similarly, tracking suspected influence operations around elections in Africa and other developing nations is becoming vastly more difficult on a shoestring budget. My team is learning this firsthand as we lead research around AI-generated disinformation on elections across six countries, thanks to a grant from the Africa Digital Rights Fund, managed by CIPESA. Accessing data through different sources eats into the grant money, leaving fewer resources for actual research and data analysis.

    In addition to the commercial barriers, legal and censorship challenges are increasing. Politicians and wealthy individuals make it even more difficult for researchers to gain access to vital information. For example, in 2023, Elon Musk sued disinformation researchers, claiming their work was driving away advertisers; thankfully, that suit was dismissed. This legal pressure adds another layer of difficulty for researchers striving to hold powerful entities accountable using publicly available data. Furthermore, government actions to shut down social media during critical events, such as the Facebook shutdown in Uganda, the internet shutdown during the 2020 elections, and the TikTok shutdown in West Africa, further hinder open research and transparency.

    On July 12, 2024, the European Commission issued preliminary findings that X (formerly Twitter) is in breach of the Digital Services Act over deceptive verified-account practices, insufficient advertising transparency, and restricted data access for researchers, findings that could lead to significant fines and corrective measures. While we welcome this outcome, we know that more pressure and more action are required to address these challenges.

    Several solutions can help regional researchers continue their vital work. To ensure researchers, especially in the Global South, can continue their crucial work in combating disinformation, we must prioritize open, ethically governed data access and advocate for increased funding to offset data access costs. Building collaborative networks among researchers, journalists, and NGOs is crucial to amplifying the collective voice for open access and shared resources. For instance, in our AI-generated-content research, we have partnered with MEVER, which generously provided us with its tools and expertise, bolstering our detection capabilities. Such collaborations are vital, as they empower low-resourced or budget-constrained organizations in the Global South. I also believe that governments and international bodies should mandate social media platforms to provide transparent and equitable access to data for research purposes. By adopting these measures, we can significantly enhance our ability to understand and mitigate the impacts of digital threats on society.

    All of these solutions may seem ideal, but the reality is that conducting research and producing impactful findings in the current environment is challenging. The digital research landscape has become increasingly restrictive, with platforms locking down data access and imposing significant financial barriers. Despite the importance of transparency and open access, independent researchers, especially those in the Global South, face numerous obstacles that hinder their ability to gather and analyze data effectively. The ability to understand and combat digital threats like disinformation is severely compromised as a result, highlighting the urgent need for actionable solutions and support.

  • Scammers Target African Journalists with Disinformation on Meta (Facebook/Instagram) Platforms

    Scammers Target African Journalists with Disinformation on Meta (Facebook/Instagram) Platforms

    As journalists across the region continue to face unprecedented attacks on their press freedoms, with widespread intimidation, harassment, and detention for their work, a new threat looms. African journalists are increasingly becoming the subject of targeted disinformation, with deepfakes of journalists’ images being used to spread fake stories and scam people.

    An investigation by Thraets has found fraudulent adverts targeting African journalists and media personalities on Facebook and Instagram (Meta), with scam ads running false and sensational stories about these journalists from random Facebook pages.

    The scams use fake copies of genuine newspapers from the journalists’ respective countries, such as the “Vanguard Newspaper” from Nigeria, “The Daily Monitor” from Uganda, “The Herald” from Zimbabwe, and “Ghana Web” from Ghana.

    From our investigation, we found that the fraudsters have targeted journalists and media personalities in Kenya, Ghana, Nigeria, Zimbabwe, and Uganda, including Johnie Hughes, Nana Aba Anamoah, Bola Ray, Andrew Mwenda, Jeff Koinange, Rosebell Kagumire, David Hundeyin, Hopewell Chin’ono, Hugo R.P Ribatika, and Mohammed Ali. Some adverts have even declared these journalists dead.

    We believe this targeting goes beyond these countries, as our investigation did not cover French-speaking countries. In this investigation, we aim to uncover the actors behind this behavior and their intentions.

    The first fake post targeted Ugandan journalist Rosebell Kagumire and was shared and sponsored by an account called “Voice of Dipu”; according to Facebook’s Page Transparency information, the account has changed its name multiple times.

    The account appears to have been compromised, based on its frequent page name and image changes. We also found no connection between the account’s original posts and the adverts it was running.

    Thraets spoke to Rosebell Kagumire, who confirmed that she is alive and aware of the fake stories but did not fully understand the motive behind these adverts and targeted posts.

    A screenshot of the fake sponsored post on Instagram and the Facebook page transparency information of the account

    Another similar post targeted Nigerian journalist and author David Hundeyin. Although it came from a different account, the post was similar to the one that targeted Rosebell Kagumire and ran a hoax story that the journalist had been arrested. In fact, the post shows David Hundeyin in handcuffs, but the photo has been fabricated.

    The Nigerian journalist has scoffed at these continued bogus posts and has urged the readers of his work and his newsletter to beware of the scams and hoaxes.

    “Yes, I am aware of the attacks. I think the purpose of these disinformation attacks is to slowly delegitimize me in the minds of the low-information segment of my audience that is likely to fall for such scams. Even if they are later informed that I had nothing to do with the scam, the negative association between my name and an unpleasant (potentially ruinous) event will have been established in their minds,” David Hundeyin said.

    The adverts have claimed that some of these journalists and media personalities have died, and the sponsored posts have been active for weeks without any action from Meta.

    We found similar text patterns across the adverts, with clickbait titles such as “Tragic end David Hundeyin,” and the posts consistently reuse roughly five text variants across most of the ads currently running that we saw in our data.
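    One simple way to surface such repeated ad variants is to normalize each ad’s text and count the distinct versions that remain. The sketch below illustrates that technique; the ad texts are invented stand-ins, not the actual ads from our dataset.

```python
# Sketch: grouping scam-ad texts into distinct variants by normalized wording.
# The ad texts are invented stand-ins; only the technique (normalize, then
# count unique variants) reflects the analysis described above.

import re
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so near-identical
    ad copies map to the same variant key."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

ads = [
    "Tragic end  David Hundeyin!",
    "Tragic end David Hundeyin",
    "BREAKING: journalist arrested...",
]

variants = Counter(normalize(a) for a in ads)
print(len(variants))            # number of distinct text variants
print(variants.most_common(1))  # the most reused variant and its count
```

    Exact-match normalization like this only catches near-identical copies; fuzzier matching (e.g. shingling or edit distance) would be needed for reworded variants.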

    Additionally, to lend credibility to their hoaxes, the posts impersonate established media houses in the respective countries, such as the Daily Monitor in Uganda, Vanguard News in Nigeria, Ghana Web in Ghana, and The Herald in Zimbabwe, as well as former media employers — for example, CNN, where Kenya’s Jeff Koinange once worked.

    Thraets also found that the scam leads to an investment platform called “Nearest Edge” that uses the images of popular journalists and media personalities across Africa.

    Although the suspicious website is currently unavailable, the Wayback Machine shows that the website behind this scam was previously active, with an archive save dating back to 2021.

    The investigation also reveals that the website was most recently active in June 2024, when there was a surge in the targeted adverts toward the journalists, as seen in the Facebook Ad Library and the web archive history.
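    Archive history like this can be checked programmatically through the Internet Archive’s public CDX API. The sketch below parses a CDX-style JSON response to find the earliest and latest snapshot years; the sample rows are illustrative and do not reflect the scam site’s real archive history.

```python
# Sketch: deriving a site's snapshot timeline from Internet Archive CDX data.
# A live query would request something like:
#   https://web.archive.org/cdx/search/cdx?url=example.com&output=json&fl=timestamp
# Here we parse a hard-coded, illustrative response instead of making a request.

SAMPLE_CDX = [
    ["timestamp"],        # CDX JSON output starts with a header row
    ["20210312080000"],   # snapshot timestamps: YYYYMMDDhhmmss
    ["20210901120000"],
    ["20240615093000"],
]

def snapshot_years(cdx_rows):
    """Return the (earliest, latest) snapshot years from CDX JSON rows."""
    stamps = [row[0] for row in cdx_rows[1:]]      # skip the header row
    years = sorted(int(s[:4]) for s in stamps)     # year is the first 4 digits
    return years[0], years[-1]

print(snapshot_years(SAMPLE_CDX))
```

    A gap between an early first snapshot and a late burst of activity, as in this sample, is the kind of pattern the Ad Library and archive comparison above surfaced.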

    We reached out to some of the journalists who have been targeted, and they shared their views on the attacks:

    Zimbabwean journalist Hopewell Rugoho-Chin’ono believes the attacks are politically motivated and are meant to taint his reputation both locally and internationally. 

    Rugoho-Chin’ono says he is aware of these attacks because he has constantly been a victim and has received many messages from people who have also been victims. Additionally, he believes he’s lucky to have a solid reputation, so few believe in the fake stories.

    Rosebell Kagumire, a Ugandan journalist, pan-African feminist, and socio-political commentator, believes that she has been a target because of her support for various online campaigns, her advocacy work, and her reporting.

    “I don’t have a full idea why I am a target, but I am involved in supporting various campaigns online and speak on issues that may have transnational opposition – women’s rights, LGBTQ+ rights, Palestinian freedom, anti-imperialism, etc.,” she remarked.

    Rosebell says the attacks took time away from her work as she tried to figure out what was going on and whether they were personal attacks; these efforts have been futile.

    “The attack did not affect my work but it took time away from my work to try and figure out what was going on. If it was a personal attack and for what. And since it was extremely difficult even with contacts to get an ear from platforms to look into this specific attack,” she said. 

    The journalists have called upon tech companies to take action into the continued threats against the fourth estate especially in Africa. 

    “Tech companies must equip specialized teams that are able to respond to the changing nature of threats to journalists and people in prominent public discourse spaces. It is not enough to simply tell journalists that they should use the general report features when faced with coordinated disinformation campaigns. It was shocking when those approached at Meta said they needed a specific public link to the posts in order to respond – even for them, this was not possible,” said Rosebell Kagumire.

    “We need to see swift responses to these kinds of attacks because they are not only personal but political, aimed at silencing and sabotaging the presence, position and perspectives of the targeted journalists and leaders. More study into the motives behind these attacks is needed in order for other actors to support journalists beyond online response,” she emphasized.

    At the height of these scam advertisements, Meta continues to profit greatly from the fraudulent ads, which harm both the people who believe them and the innocent journalists whose credibility they damage.

    “Regarding the tech companies and social media platforms that enable these bad actors, I think that it is not a case of them merely not doing enough – they are actively complicit in incidents like these because with Facebook for example, there is essentially zero content moderation for sponsored posts. META clearly values revenues and growth-at-all-costs over providing a safe user experience, because the calculation is that the reward from achieving quarterly or annual stock price targets far outweighs the potential financial downsides of lapsed or failed platform regulation,” David Hundeyin explained.

    Thraets reached out to Meta for comment, but at the time of publishing, the company had not responded to our requests.

    However, following our requests, Meta quietly reviewed some of the ads in the screenshots we shared and has since taken them down, although many more scam ads were still up and running as of the publication of our investigation.

    For any inquiries email the Research Team: [email protected]