
Africa News of Friday, 5 July 2024

Source: cisanewsletter.com

AI deepfakes threaten Africa’s democracy and security


Elections usually come with all manner of rumours, propaganda, half-truths and, sometimes, plain lies. In the past, such falsehoods spread only when they managed to escape the scrutiny of media gatekeepers. Over the past decade, however, social media and artificial intelligence have made it far easier for them to spread. If there is any consolation, it is that social media platforms themselves now recognise the threat of being used as conveyor belts of misinformation and disinformation.

This year, almost half of the world’s population will be voting, according to the National Democratic Institute, a non-profit American non-governmental organisation whose stated mission is to “support and strengthen democratic institutions worldwide through citizen participation, openness and accountability.” Elections are being held in at least 61 countries, including eight of the ten most populous nations: Bangladesh, Brazil, India, Indonesia, Mexico, Pakistan, Russia and the United States.

The European Union will also hold elections to the European Parliament in June. All in all, around 2 billion people—about a quarter of everyone on the planet—will have the chance to vote this year. About 15 of those elections will be held in Africa. And as is the case with elections in the United States, Asia and Europe, national elections in African countries are typically fraught with misinformation and disinformation.

In the case of Africa, the threat of misinformation is even greater, as democracy on the continent is still struggling to gain a firm foothold. Misinformation and disinformation are therefore more likely to lead to coups, wars, civil strife, ethnic cleansing and other atrocities.

The Oxford Dictionary defines disinformation as “false information which is intended to mislead, especially propaganda issued by a government organisation to a rival power or the media.” The Wikipedia definition is even more pointed: “Disinformation is false information deliberately spread to deceive people. Disinformation is an orchestrated, adversarial activity in which actors employ strategic deceptions and media manipulation tactics to advance political, military, or commercial goals.”

The Oxford Dictionary, for its part, defines misinformation as “false or inaccurate information, especially that which is deliberately intended to deceive.” According to Wikipedia, “Misinformation is incorrect or misleading information. Misinformation can exist without specific malicious intent; disinformation is distinct in that it is deliberately deceptive and propagated. Misinformation can include inaccurate, incomplete, misleading, or false information as well as selective or half-truths.”

Both misinformation and disinformation have already been witnessed in some of the elections held so far in different parts of the world, and are also manifesting in countries yet to hold elections this year. AI and social media are tools that can be used for good or ill in an election. In the run-up to India’s seven-phase general election, held from 19 April to 1 June, for example, the influence of AI was substantially felt.

Prime Minister Narendra Modi addressed an audience in Hindi and, using the government-created AI tool Bhashini, had his speech translated into Tamil in real time. Just over the border in Pakistan, AI allowed jailed politician Imran Khan to address a rally. Politicians and political parties take advantage of social media to disseminate information (press statements, videos and pictures of political activities, achievements and so on) cheaply and widely, since such platforms offer a larger and more immediate reach than traditional media.

However, the dark side of both AI and social media is beginning to cause worry. In the Indian elections, for example, two viral videos showed Bollywood stars Ranveer Singh and Aamir Khan campaigning for the opposition Congress party, according to the BBC. Both filed police complaints saying these were deepfakes, made without their consent. On 29 April, Prime Minister Modi also raised concerns about AI being used to distort speeches by senior leaders of the ruling party, including him.

The next day, police arrested two people, one each from the opposition Aam Aadmi Party (AAP) and the Congress party, in connection with a doctored video of Home Minister Amit Shah. Deepfakes of popular deceased politicians appealing to voters as if they were still alive have also become a common campaign tactic in India. There was likewise a deepfake video of an opposition lawmaker in Bangladesh, a conservative Muslim-majority nation, wearing a bikini.

According to TechTarget, deepfake AI is “a type of artificial intelligence used to create convincing images, audio and video hoaxes.” The term, it says, describes both the technology and the resulting bogus content, and is a portmanteau of “deep learning” and “fake”. Deepfakes, TechTarget noted, often transform existing source content, swapping one person for another; they can also create entirely original content in which someone is represented doing or saying something they never did or said.

TechTarget warns: “The greatest danger posed by deepfakes is their ability to spread false information that appears to come from trusted sources. For example, in 2022 a deepfake video was released of Ukrainian president Volodymyr Zelenskyy asking his troops to surrender.” It said concerns have also been raised over their potential use in election meddling and election propaganda, adding that while deepfakes pose serious threats, they also have legitimate uses, such as video game audio, entertainment, and customer support and caller response applications like call forwarding and receptionist services.
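
To illustrate the face-swapping technique TechTarget describes, the sketch below shows, in broad strokes, the classic deepfake set-up: a single shared encoder learns a common representation of faces, and a separate decoder is trained for each identity, so that decoding person A’s face with person B’s decoder yields the swap. This is a simplified, hypothetical illustration of the general idea, using untrained toy networks on 64x64 crops; it is not the implementation of any particular deepfake tool.

import torch
import torch.nn as nn

# Shared encoder: maps a 64x64 face crop to a compact face code.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

# Per-identity decoder: reconstructs a face in one specific person's likeness.
class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on faces of person A
decoder_b = Decoder()  # would be trained only on faces of person B

# After training, a "swap" is: encode a frame of person A, then decode it with
# person B's decoder, giving B's likeness with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a cropped, aligned face
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])

Real tools wrap this idea in face detection, alignment and blending back into the original video, which is part of why convincing fakes now require relatively little technical skill.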

Deepfakes have become ingrained in US elections. An article by Emil Sayegh, President and CEO of Ntirety, published by Forbes on 14 May 2024, listed some examples of deepfakes:

1. A manipulated video on Twitter suggesting President Biden incorrectly stated Russia’s occupation duration of Kyiv.

2. Manipulated footage claiming Senator Elizabeth Warren advocated barring Republicans from voting in the 2024 presidential election.

3. Altered audio on TikTok falsely conveying President Biden’s threats to deploy F-15s to Texas.

4. A video alteration making Vice President Harris appear inebriated and nonsensical.

5. An online AI-generated photo falsely showing ex-President Donald Trump with Jeffrey Epstein and an underage girl.

6. AI-created images on X (formerly Twitter) falsely depicting President Biden in military attire.

7. A PAC-supported advertisement misusing AI to replicate Donald Trump’s criticism of Iowa Governor Kim Reynolds.

8. AI-generated portrayals of Donald Trump and Joe Biden in a fictitious debate on Twitch.

9. A DeSantis campaign video with AI-fabricated images attacking Donald Trump.

10. Synthetic speech suggesting President Biden made comments on financial instability, potentially inciting market chaos or misleading corporate leaders.

A 6 June 2024 article by Shanze Hasan published on www.brennancenter.org mentioned that earlier this year, AI-generated robocalls imitated President Biden’s voice, targeting New Hampshire voters and discouraging them from voting in the primary. Additionally, an AI-generated image falsely depicting former President Trump with convicted sex trafficker Jeffrey Epstein and a young girl began circulating on Twitter.

Outside the US, the article listed a few other election-related incidents, such as the circulation of deepfakes in the Slovakian election last year that defamed a political party leader and possibly helped swing the election in favour of his pro-Russia opponent. A 16 March 2024 VOA report on Slovakia said there was fake audio of the country’s liberal party leader discussing changing ballots and raising the price of beer. The VOA also listed a video of Moldova’s pro-Western president throwing her support behind a political party that is friendly to Russia.

Shanze Hasan’s article, titled ‘The Effect of AI on Elections Around the World and What to Do About It’, recalled that in January this year the Chinese government apparently tried to deploy AI deepfakes to meddle in the Taiwanese election. The author also observed that a wave of malicious AI-generated content was appearing in Britain ahead of its election, scheduled for 4 July. One deepfake depicted a BBC newsreader, Sarah Campbell, falsely claiming that British Prime Minister Rishi Sunak had promoted a scam investment platform.

The article mentioned an instance where the leading candidate for president in Indonesia, a former general, deployed an AI-generated cartoon to humanise himself and appeal to younger voters. In Belarus, the country’s embattled opposition ran an AI-generated “candidate” for parliament. The candidate, a chatbot that describes itself as a 35-year-old from Minsk, is part of an advocacy campaign to help the opposition, many of whom have gone into exile, reach Belarusian voters.

Ahead of Singapore’s elections due in 2025, the country has already started experiencing the threat of AI deepfakes. Channelnewsasia.com reported on 1 June 2024 that a WhatsApp message making the rounds claimed that new Prime Minister Lawrence Wong had called for polls to be held on 6 September 2024. According to Channel News Asia, however, no such election can be called without the Electoral Boundaries Review Committee, which, it noted, “has not even been convened – at least as of April 18, 2024 – and this is a necessary step before an election can be called.”

South Africa’s recent election brought the influence of social media and AI in African politics to the fore when tech giants Meta (owner of Facebook, WhatsApp, Instagram and Threads), TikTok, X (formerly Twitter) and Google refused to share detailed election plans or engage with civil society on how they intended to curb social media-enabled fake news and intemperate language. Such content helped spark violence in 2021, when some 300 people were killed in riots that followed a contempt-of-court case against former President Jacob Zuma, who now leads the splinter party uMkhonto we Sizwe (MK). Ahead of this year’s election, Mr Zuma was disqualified from running because of his prior prison sentence for contempt of court. His supporters rejected the decision.

“If these courts, which are sometimes captured, if they stop MK, there will be anarchy in this country. There will be riots like you’ve never seen in this country. There will be no elections,” Theeastafrican.co.ke quoted MK leader Mr Visvin Reddy as threatening in a video widely circulated on social media in March 2024. Mr Reddy, along with other MK party members, is facing charges of inciting public violence over the incendiary comments. Social media posts, including a TikTok video, have also shown individuals wearing MK shirts and brandishing firearms. In January 2024, more than 60 people linked to MK were charged with instigating the deadly riots of 2021, the report said.

Threats of AI-generated misinformation and disinformation in Africa

Africa is a potentially volatile continent when it comes to electoral politics and democracy. Several countries on the continent are now getting used to the idea of democracy, which is slowly taking root. Even without the threat of misinformation and disinformation, elections are often a very tense and fragile exercise on the continent.

Adding AI-driven misinformation and disinformation to the mix makes things even dicier. This development is a serious threat to the continent’s democracy, especially as some regions, such as the Sahel, are turning to military coups and, in doing so, undoing decades of democratic progress achieved through national elections.

It is worth noting, too, that Africa is a highly religious, conservative and multi-ethnic continent with a potpourri of cultures. Even without AI-generated deepfakes, one man on the radio in Rwanda used hate speech to help spark a genocide in which almost one million people were massacred. Even without AI-assisted misinformation and disinformation, wars have been sparked between ethnic groups, some over livestock, others over ethnic and religious rumours and divisions.

AI-generated deepfakes could worsen the situation, since mobile phones and social media have become commonplace on the continent. In this digital age, any misinformation or disinformation spreads like wildfire, and the consequences could be devastating on a scale unseen before the AI era.

Big Tech tackling deepfakes

Tech companies are, however, racing to rein in the dark side of AI-generated content to forestall misinformation and disinformation. TikTok, for instance, announced in May this year that it would begin labelling AI-generated content, according to CNN. The network also reported that Meta said last month that it would begin labelling AI content, and that YouTube has introduced rules requiring creators to disclose when videos are AI-created so that a label can be applied. Elon Musk’s X, however, has not announced any plans to label AI-generated content.

OpenAI, the creator of ChatGPT, which also lets users create AI-generated imagery through its DALL-E model, said last month that it would soon launch a tool that allows users to detect when an image was generated by AI. The company also confirmed that it would launch an election-related $2 million fund with Microsoft to combat deepfakes that can “deceive the voters and undermine democracy.”

In Africa, Dubawa, a project of the Centre for Journalism Innovation and Development (CJID), has developed a chatbot, an AI tool to fact-check content and bust deepfakes. Premiumtimesng.com quoted Monsur Hussain, the head of innovation at the CJID, as saying the Dubawa Chatbot was developed to give accurate and timely responses to claims or questions. He said the tool has access to real-time internet data, unlike other AI tools such as ChatGPT and Meta AI, which “do not have access to real-time internet data”.

“The Dubawa Chatbot is an AI tool built to provide answers to everyday questions regarding viral misinformation and disinformation in the information ecosystem,” the chatbot answered when asked about its function. “It aims to reduce the spread of harmful and misleading content online by offering results and references from credible sources,” it added.
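
To make the idea of a retrieval-backed fact-checking chatbot concrete, the sketch below shows the general pattern such a tool could follow: gather current web sources for a claim, then ask a language model to judge the claim only against those sources. The function names, the search step and the model call are hypothetical placeholders for illustration; they are not Dubawa’s actual code, architecture or API.

from dataclasses import dataclass
from typing import List

@dataclass
class Source:
    url: str
    snippet: str

def search_web(claim: str) -> List[Source]:
    # Placeholder: a real system would call a live search or news API here,
    # which is what gives the bot access to real-time internet data.
    return [Source("https://example.org/fact-check", "Example snippet about the claim.")]

def ask_model(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return "Verdict: unproven. The retrieved sources do not confirm the claim."

def fact_check(claim: str) -> str:
    sources = search_web(claim)
    evidence = "\n".join(f"- {s.url}: {s.snippet}" for s in sources)
    prompt = (
        "You are a fact-checking assistant. Using ONLY the sources below, say "
        "whether the claim is supported, refuted or unproven, and cite the sources.\n"
        f"Claim: {claim}\nSources:\n{evidence}"
    )
    return ask_model(prompt)

print(fact_check("A viral video shows the president announcing a new election date."))

Grounding the model’s answer in freshly retrieved, credible sources is what separates a fact-checking assistant of this kind from a general-purpose chatbot answering from memory.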

In Ghana, which goes to the polls on 7 December 2024, the country’s Cyber Security Authority, according to ClassFMonline.com, has pledged to collaborate with the tech giants to combat the spread of misinformation and disinformation online, particularly on social media platforms. The Authority explained that with the elections approaching, the nation is likely to encounter AI-driven misinformation campaigns due to the swift pace of digitalisation.

Describing deepfakes as a “malicious” activity when he spoke at the West African Regional CSIRTS Symposium in Accra, Dr Albert Antwi Boasiako, Director-General of the Cyber Security Authority, warned that “there will be a pattern of cyber-attacks” as the election approaches.

He said: “Criminals are innovating their process, and we’re likely to see AI-powered disinformation and misinformation campaigns. That makes it a little bit difficult for us, but we’re working with the technical service providers – those who own the platform.

“They also have mechanisms to attack us. So, Facebook, Twitter, which is now X, and others; we’re engaging with them to ensure that as we get close to the elections, we will be able to detect and prevent some of those issues,” Dr Boasiako said.