We’ve all seen them—fake images that depict political leaders in a less-than-flattering light, deepfakes of political endorsements that never happened, or manipulated videos spreading inflammatory messages against important figures. The ease with which this content is created and spread underscores the significant impact AI-generated media can have on society, especially in politically charged environments.
A recent example is a clearly AI-generated image, widely shared on platforms like X (formerly Twitter) and Facebook, that misleadingly suggested Gen Z in Uganda were rioting in response to Kenya’s riots—an event that never actually happened. The image was posted by various accounts and gained significant engagement: a post by X user Nyakundi received 19,000 likes and over 600,000 views, while the same photo shared by @inside_afric garnered over 10,200 views. These instances highlight the potent reach and influence of synthetic media in shaping public perception.
Another example comes from South Africa’s recently concluded elections, where deepfakes and AI-generated videos of Joe Biden and Donald Trump circulated on social media. One deepfake, notably shared on X (formerly Twitter), showed Donald Trump endorsing Umkhonto we Sizwe (MK) and urging South Africans to vote for the party. Another was an AI-generated video of Joe Biden falsely claiming that if the ANC won the election, the USA would impose sanctions and declare South Africa an enemy of the state. There was also a manipulated image of Julius Malema of the Economic Freedom Fighters (EFF) crying, intended to mock his loss in the court of public perception.
During Rwanda’s recent elections, we saw image manipulations that often exaggerated President Paul Kagame’s facial features, turning his cheeks into caricatures. These examples illustrate the growing prevalence of synthetic media in the region.
Synthetic media can be defined as content that is artificially created or manipulated using digital technologies, often leveraging advanced techniques such as artificial intelligence (AI) and machine learning. It includes deepfakes, which are highly realistic but fake portrayals of well-known people that make them appear to say or do things they never did, and cheap fakes, which are low-quality forgeries or manipulations that deceive viewers without advanced AI. It also encompasses generative media: AI-generated text, images, music, or video produced from training data. Synthetic media can serve a variety of purposes, from entertainment and art to malicious uses such as spreading misinformation, creating fake news, or conducting fraud. These creations, while often crude, have the power to manipulate public perception, distort reality, and incite unrest.
In an era where information flows seamlessly across digital channels, synthetic media, while often lacking sophistication and appearing crude, wields a surprising influence on public perception. These “cheap fakes,” born from basic AI tools, can sway public opinion and shape critical discourse. Nowhere is this more evident than in electoral processes, where maintaining information integrity is paramount.
The rise of Generative AI (Gen-AI) has introduced a unique set of challenges and risks to information environments, particularly in the context of elections. In this article, we narrow our focus to Rwanda’s recently concluded 2024 general elections and explore how synthetic media was used in that setting. Synthetic media can significantly threaten the integrity of electoral processes by enabling the spread of misinformation and undermining public trust in political institutions and the media—a trend observed in elections worldwide. The phenomenon is now on the rise in Sub-Saharan Africa, as the examples above show.
We conducted a comprehensive examination of synthetic media on social media platforms including X (formerly Twitter), Meta (Facebook/Instagram), TikTok, and YouTube from May 2024 to July 2024, the crucial pre-election period in Rwanda. We searched using specific key terms in both English and Kinyarwanda and leveraged crowdsourced annotations to identify synthetic content. Our analysis revealed only a minimal, barely identifiable presence of AI-generated media. The synthetic media we did encounter was of very low quality, involving simple manipulations such as face swaps and face enlargement using basic tools like Photoshop. This finding captures the current state of synthetic media use in Rwanda’s electoral context and offers a measured perspective on its potential impact.
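The collection step described above—screening posts against bilingual key terms before handing candidates to human annotators—can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors’ actual pipeline; the key terms below are hypothetical examples, not the research team’s real search list.

```python
# Illustrative sketch (not the actual research pipeline): screen collected
# posts for election-related key terms in English and Kinyarwanda, then pass
# matches on for crowdsourced annotation. The term list is hypothetical.

KEY_TERMS = [
    "deepfake", "kagame",        # English examples
    "amatora", "perezida",       # Kinyarwanda: "elections", "president"
]

def matches_key_terms(text: str, terms=KEY_TERMS) -> bool:
    """Return True if the post text contains any monitored key term."""
    lowered = text.lower()
    return any(term in lowered for term in terms)

def filter_posts(posts: list[dict]) -> list[dict]:
    """Keep only posts whose text matches at least one key term;
    in the study workflow these would be queued for human annotation."""
    return [p for p in posts if matches_key_terms(p.get("text", ""))]
```

In practice a study like this would also capture platform metadata (engagement counts, posting account, timestamps), since reach figures such as likes and views are central to assessing a manipulated post’s influence.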
Our research identified five key themes dominating the synthetic media landscape in the context of Rwanda’s 2024 election:
- Manipulation of President Paul Kagame’s Likeness: These were mainly alterations to President Kagame’s facial features, expressions, and overall image, which could have influenced public perception of Rwanda’s president-elect. The edited images circulated widely as memes and GIFs, intended either to mock the president-elect or to elevate his image, as in a photoshopped picture of him with footballer Cristiano Ronaldo. Nevertheless, the content we encountered was consistently unsophisticated and of low quality.
- Content Exploiting Rwanda-D.R. Congo Tensions: There are ongoing diplomatic and security disputes between Rwanda and the Democratic Republic of Congo, recently intensified by a UN report accusing Rwanda of aiding the M23 rebel group battling Congolese forces in eastern DRC. Synthetic media related to these tensions frequently featured manipulated images of President Félix Tshisekedi, depicting him as chubbier than he is or as a chicken thief being chased by locals. This trend highlights how regional conflicts can be weaponized through synthetic or AI-generated content to sway public opinion and exacerbate existing tensions.
- Burundi-Rwanda Relations in Focus: In January 2024, relations between Rwanda and Burundi reportedly deteriorated again after Burundian President Évariste Ndayishimiye renewed accusations that Rwanda was financing and training the RED-Tabara rebel group. We observed a few manipulated media pieces concerning this complex relationship, frequently featuring altered imagery of President Ndayishimiye; they underscore the potential for synthetic media to affect not only domestic politics but also regional diplomatic dynamics. For example, we found a crude cheap-fake video on X (formerly Twitter) that face-swapped the Burundian president and Rwanda’s president onto a popular meme of one man slapping another.
- Amplification of Synthetic Pro-Kagame Dance Content: Our analysis revealed a notable trend: supporters of President Kagame actively shared deepfake videos of him performing dance moves popularized by youth on TikTok. These videos, promoted primarily by young members of his Rwandan Patriotic Front (RPF) party, feature popular Kinyarwanda songs praising Kagame. This illustrates the strategic use of synthetic media by the younger generation to enhance political campaigns and manage the president’s image.
- Less Sophisticated Deepfakes (Cheap Fakes) Targeting President Kagame: Our investigation uncovered several rather clumsy deepfake videos. In one, President Kagame’s likeness was superimposed over a popular musician performing at a concert. We also found manipulated videos of Kagame singing, made to appear as though he was participating in contemporary musical trends, likely to portray him as a modern and relatable leader. In another video, in which Kagame was actually speaking about President Tshisekedi, his face was manipulated to look like a ghoul, underscoring the regional focus and potential impact of these deepfakes.
While most of the synthetic media we identified appears to be non-malicious, the persistence of cheap fakes and other manipulated content targeting political figures is deeply concerning. This trend raises critical questions about the potential misuse of AI technologies in the Global South’s electoral processes, particularly in Rwanda, and its implications for democratic integrity.
The rapid proliferation of such synthetic media presents several challenges. It can confuse voters, manipulate public opinion, discredit politicians, and ultimately undermine the legitimacy of the electoral process. Moreover, the mere existence of deepfake technology, and the growing difficulty of distinguishing what is real from what is not, can create a “liar’s dividend”: genuine documents, images, audio, and video can be dismissed as fake even after they have been proven authentic. Consequently, even after a fake is exposed, it becomes harder for the public to trust any information on that topic, further eroding trust in the fourth estate and in political discourse. The 2023 Nigerian election, for example, was inundated with deepfakes, kicking this conversation into high gear. The populace watched videos of US public figures like Elon Musk and Donald Trump applauding Labour Party presidential candidate Peter Obi. These endorsements appealed to younger voters, but they were deepfakes. The sentiment nonetheless lingered among voters who wholeheartedly believed the videos to be true.
This research aims to contribute to a broader understanding of how synthetic media is influencing political landscapes in emerging democracies. Our findings highlight the pressing need for enhanced media literacy, new pre-bunking mechanisms, and potential regulatory frameworks to address the challenges posed by AI-generated content. The rising use of synthetic media, with its capacity to spread misinformation and undermine public trust, necessitates proactive measures to protect the integrity of electoral processes. Strengthening media literacy is crucial to equip the public to critically evaluate and discern between real and fake content, and pre-bunking strategies, in which likely misinformation is anticipated and debunked before it spreads, can help mitigate the impact of deepfakes and cheap fakes.
In examining the specific case of Rwanda’s 2024 elections, we provide a few insights into the current state of synthetic media in Sub-Saharan Africa and its potential impact on democratic processes. Our findings underscore the need for comprehensive strategies to safeguard information integrity in the age of artificial intelligence, including fostering collaboration between policymakers, technology platforms, and civil society to develop effective solutions and create a resilient information environment. By addressing these challenges, we can better protect democratic values and ensure that elections remain fair and transparent in the face of evolving technological threats.