Thirty years ago, a person on the African continent who wanted to connect with someone in, say, Australia had to send a letter that took weeks to arrive. Today, connection is as simple as clicking a button. The internet has brought connectivity and made so much possible. It has revolutionized communication, enabling people to connect across vast distances instantly. It has democratized access to information, empowering individuals with knowledge and resources that were previously inaccessible.
The internet has fostered innovation and entrepreneurship, creating new opportunities for businesses and individuals. However, alongside these positive changes, the internet has become a battleground for misinformation, disinformation, and malicious activities. There is a need for continued vigilance and research to understand and mitigate these emerging threats.
In the early 2000s, when the East African Internet was still in its infancy, I founded an Internet cafe in Kampala, which started my digital research journey. In the years that I ran the internet cafe, I observed first-hand the risks and interactions that internet users face, creating a new trajectory for my life. Curiosity and passion drove groundbreaking work, and I was fortunate to be part of that pioneering wave.
For more than 10 years, I have been studying the internet, people's online behaviors, and the misuse of digital platforms by bad actors. I found myself at the forefront of digital research analysis, shedding light on the power of digital platforms to shape public discourse. My research on internet behaviors began in 2011 with the Google Spam team, where I focused on understanding invalid-click spammers. This experience honed my skills in Open Source Intelligence research.
In addition to my work on digital platforms, I collaborated with remote communities and organizations like Refunite, which develops tools for refugees, and Medic, which supports health workers providing care in the world’s hardest-to-reach communities. These experiences deepened my understanding of the vulnerabilities faced by marginalized groups, further enriching my perspective on the intersection of technology and society.
My research continued during the 2016 Uganda elections, a period when social media emerged as a crucial battleground for hate speech and political debates. Through these experiences, I have developed a keen understanding of internet behaviors and their impact on public conversations.
This work has taught me a great deal about how these harmful groups think, act, and exploit the technology systems we rely on so heavily. Yet none of this was taken seriously until about eight years ago. In 2016, researchers on the continent began to treat digital misinformation and disinformation as a serious threat. Two events drove this shift: the hate speech and violence surrounding Kenya's 2007–08 elections, triggered by widespread allegations of vote-rigging, political tension, and ethnic divisions; and the surge of social media commentary around the 2016 Ugandan presidential election, which was marked by reports of election irregularities, voter suppression, a lack of transparency, and the government's use of security forces to intimidate opposition supporters.
In the early days, digital research was easily accessible, even for researchers with limited budgets. The main focus was on the pursuit of knowledge and understanding. The atmosphere was open and collaborative, allowing anyone with a bit of curiosity and dedication to make significant contributions to the field. Publicly available APIs (Application Programming Interfaces) and readily available data streams from social media gave researchers the tools they needed to collect and analyze data without major financial or access barriers. One example was the now-defunct Netvizz, which allowed researchers to download data from Facebook before the company shut it down. This accessibility created a diverse community of researchers from different backgrounds, including academics, independent investigators, and hobbyists, all working together to uncover insights and drive innovation. The era was marked by groundbreaking discoveries and significant advances in understanding digital behaviors and online interactions.
Today, social media research has undergone significant changes. It has become an essential tool for researchers like myself, journalists, and investigators who monitor activities such as influence operations, disinformation campaigns, and online narratives surrounding major events like elections. However, recent changes by platforms such as X (formerly known as Twitter) are making this open-source investigation work significantly more challenging, particularly for independent researchers and those in regions with limited or no financial resources.
In the past, researchers could easily access and analyze social media content through publicly available APIs and data streams. Today, platforms are restricting this access by requiring payment and licensing for social data mining partners. This is a significant problem for researchers from the Global South, who face substantial financial barriers. What used to be an open environment for examining online conversations is becoming an exclusive space available primarily to well-funded organizations.
And it is becoming harder. Meta recently announced the shutdown of CrowdTangle in August 2024, after shutting down Graph Search in June 2019. Over the years, CrowdTangle significantly enhanced misinformation coverage, providing unique access to trending topics, public accounts, communities, and viral posts on platforms like Facebook, Instagram, and Reddit—information that would otherwise be largely inaccessible. Although the company says its replacement, the Meta Content Library (MCL), is a better tool for researchers, a joint investigation by Proof News, the Tow Center for Digital Journalism, and the Algorithmic Transparency Institute found that the replacement is far less transparent and accessible than CrowdTangle. This will leave researchers with far less visibility into what spreads on Meta's platforms.
TikTok’s unclear data policies are making matters worse. While TikTok, as the fastest-growing social media platform, collects users’ data from all over the world, it only allows researchers in the US and Europe to apply for access to its API. This makes it exponentially more difficult for Global South researchers to study misinformation and disinformation crises in and for highly volatile regions like the D.R. Congo, Sudan, and Gaza. It goes without saying that this locks out many independent actors, including journalists and academics who rely on open data to uncover storylines, monitor influence efforts, and analyze the effects of social media platforms in the Global South. The commercial interests of these platforms conflict with the need for clarity on how social media shapes conversations, politics, and societies, especially in regions facing higher risks of instability.
Recent difficulties in monitoring Sudan’s Saudi-UAE proxy war and the ongoing influence operations around the conflict illustrate these issues. Military clashes between the Sudanese Army and the paramilitary Rapid Support Forces (RSF) have resulted in the deaths of hundreds and the displacement of thousands of Sudanese, underscoring a complex interplay of domestic, regional, and global actors who have contributed to this conflict. Independent researchers have found their access curtailed just when investigating disinformation became most critical.
Similarly, tracking suspected influence operations around elections in Africa and other developing nations is becoming vastly more difficult on a shoestring budget. My team is learning this firsthand as we lead research around AI-generated disinformation on elections across six countries, thanks to a grant from the Africa Digital Rights Fund, managed by CIPESA. Accessing data through different sources eats into the grant money, leaving fewer resources for actual research and data analysis.
In addition to the commercial barriers, legal and censorship challenges are mounting. Politicians and wealthy individuals make it even more difficult for researchers to gain access to vital information. In 2023, for example, Elon Musk sued disinformation researchers, claiming their work was driving away advertisers; thankfully, that suit was dismissed. Such legal pressure adds another layer of difficulty for researchers striving to hold powerful entities accountable with what data remains available. Furthermore, government moves to shut down social media during critical events—such as Uganda’s Facebook ban and internet shutdown around its 2021 elections, and the TikTok shutdowns in West Africa—further hinder open research and transparency.
On July 12, 2024, the EU Commission issued preliminary findings that X (formerly Twitter) is in breach of the Digital Services Act for deceptive verified-account practices, insufficient advertising transparency, and restricted data access for researchers, potentially leading to significant fines and corrective measures. While we welcome this outcome, we know that more pressure and more action are required to address these challenges.
Several solutions can help regional researchers continue this vital work. To ensure that researchers, especially in the Global South, can keep up their crucial work combating disinformation, we must prioritize open, ethically governed data access and advocate for increased funding to offset data-access costs. Building collaborative networks among researchers, journalists, and NGOs is crucial to amplifying the collective voice for open access and shared resources. In our AI-generated disinformation research, for instance, we have partnered with MEVER, who generously provided us with their tools and expertise, bolstering our detection capabilities. Such collaborations are vital because they empower low-resourced and budget-constrained organizations in the Global South. I also believe that governments and international bodies should mandate that social media platforms provide transparent and equitable access to data for research purposes. By adopting these measures, we can significantly enhance our ability to understand and mitigate the impacts of digital threats on society.
All of these solutions may seem ideal, but the reality is that conducting research and producing impactful findings in the current environment is challenging. The digital research landscape has become increasingly restrictive, with platforms locking down data access and imposing significant financial barriers. Despite the importance of transparency and open access, independent researchers, especially those in the Global South, face numerous obstacles that hinder their ability to gather and analyze data effectively. As a result, our ability to understand and combat digital threats like disinformation is severely compromised, underscoring the urgent need for actionable solutions and support.