Deepfakes and Election Integrity: How Vulnerable Are We?


The rise of sophisticated media synthesis technologies has enabled the creation of highly convincing deepfakes depicting public figures such as Pope Francis, Donald Trump, and Elon Musk in fabricated scenarios.

This development poses a substantial risk, particularly to the integrity of elections.

In the lead-up to the 2024 US presidential election, experts are voicing concerns over the potential misuse of advanced generative artificial intelligence (AI) for disinformation campaigns.

The apprehension is that AI could exacerbate the spread of false information, further destabilizing an already vulnerable information ecosystem.

The growing accessibility of AI tools, proficient in creating realistic images, voice audio, and human-like text, amplifies the risk of widespread disinformation. Additionally, such technology could be misused to escalate voter suppression, generate false engagement, and create confusion around voting procedures.

Recent incidents, such as an AI-generated image of an explosion at the Pentagon that caused a brief stock market dip, illustrate the real-world impact this technology can have.

This article explores the potential impact of deepfake technology on voter behavior, election campaigns, and overall election integrity.

Video source: YouTube/ABC News

Deepfake Statistics

Recent data from DeepMedia, a leading company specializing in synthetic media detection, shows that the number of video deepfakes has tripled this year, and the number of voice deepfakes has increased eightfold, compared with the same period in 2022.

The potential impact is massive, with an estimated half a million video and voice deepfakes expected to flood social media platforms worldwide this year.

Moreover, the cost of cloning a voice has dropped sharply. As recently as late last year, it demanded roughly $10,000 in server and AI-training expenses; innovative startups now provide the same service for just a few dollars.

In this era of rapidly evolving technology, the vulnerability to deepfake manipulation raises critical questions about the security and authenticity of the information we encounter during election periods. 

While it’s still uncertain how AI will influence the upcoming election, the rapid increase in generative AI use, coupled with the rollback of content moderation measures on social media platforms, presents a significant challenge.

The Influence of Deepfakes and False Claims in Turkish Elections

In the recent Turkish elections, a troubling wave of disinformation, including deepfakes and false claims, has swept across the political landscape, sowing confusion among voters.

The electoral campaign, a critical juncture in the nation’s history, has been clouded by politicians from all sides making unsubstantiated assertions, obscuring the truth and muddying the waters of public discourse.

One notable instance is President Recep Tayyip Erdogan’s campaign montage video, which allegedly shows the leadership of the Kurdistan Workers’ Party (PKK) expressing support for opposition challenger Kemal Kilicdaroglu.

Even though the video appears to be a fake, it has served to bolster Erdogan’s narrative that the opposition is soft on terrorism.

In the midst of this disinformation storm, the fact-checking organization Teyit has been a beacon of truth. Working tirelessly, they have been verifying or refuting claims made during the campaign, uncovering inaccuracies from all corners of the political spectrum.

What’s more, the current campaign has seen a marked increase in the use of disinformation compared to previous elections. This has led to voter misdirection, potentially swaying voting behavior and deepening the polarization in Turkish society.

The International Press Institute has described the level of organized disinformation during the election campaign as unparalleled. 

Video source: YouTube/MEMO

Understanding the Psychology Behind Deepfake-Driven Propaganda

From a psychological perspective, deepfakes exploit human cognitive biases and the trust we place in visual information, making them potent tools for manipulation and propaganda.

Our minds naturally lean towards information that aligns with our preexisting beliefs, a tendency known as confirmation bias, and we may feel uncomfortable when faced with contradictory ideas.

Influencing Through Repeated Exposure

Additionally, repeated exposure to a particular piece of information makes us more inclined to believe it, a tendency known as the illusory truth effect.

These inherent tendencies make people vulnerable to manipulation, enabling propagandists to effectively spread false narratives and exert a significant influence on public opinion.

Spreading False Narratives

This issue becomes particularly worrisome in the context of political propaganda, where deepfakes can be employed to manipulate public sentiment, sway election outcomes, and destabilize societal structures.

The mere existence of deepfakes can foster distrust in all forms of information, whether true or false, leading to an erosion of trust in news and information within society.

Addressing the Issue

Recognizing the gravity of this issue, the European Parliamentary Research Service conducted a study recommending that policymakers address five key dimensions of the deepfake lifecycle to mitigate its negative effects.

These dimensions encompass:

  1. The technology itself,
  2. The creation of deepfakes,
  3. The circulation of deepfakes,
  4. The targets of deepfakes, and
  5. The audience’s response.

Understanding these psychological mechanisms is critical in developing effective countermeasures and raising public awareness about the dangers posed by deepfake propaganda.

Impacts of Deepfakes on Voter Behavior

Political deepfakes can significantly impact voter behavior by creating confusion and doubt. Even if these deepfakes do not completely deceive individuals, they can still instill uncertainty about what is genuine and what is manipulated.

As a result, the lack of trust in media and information sources can harm online interactions, making people less willing to cooperate with others and less cautious about the truth of the information they share.

This prevailing uncertainty poses a challenge in having meaningful discussions about important topics. Voters might struggle to determine whether to believe what they see or remain vigilant against spreading fake videos.

The blurring of reality and fiction can disrupt democratic processes and impede informed decision-making.

Moreover, traditional methods of fighting false information, like fact-checking, may not work effectively against deepfakes. The problem lies in the difficulty of confirming whether a video is authentic or manipulated.

Deepfakes are technically advanced and often crafted from publicly available videos, making them difficult to distinguish from genuine content.

While some individuals might be capable of identifying deepfakes, it remains unclear whether these efforts will be swift and extensive enough to mitigate the harmful effects of deepfakes on voter behavior.
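To see why automated screening struggles to keep pace, consider a minimal, purely illustrative sketch of how a platform might triage uploaded videos. The `screen_video` function, its thresholds, and the scores below are hypothetical assumptions, not any real platform's system; actual detectors are trained neural networks, but the core tradeoff is the same: convincing deepfakes score near the decision boundary and pile up in a slow human-review queue.

```python
# Hypothetical triage pipeline: average per-frame "synthetic" scores
# from a detector and route each video to one of three outcomes.
# Function name, thresholds, and scores are illustrative assumptions.

def screen_video(frame_scores, fake_threshold=0.8, review_threshold=0.5):
    """Return 'fake', 'review', or 'real' given per-frame fake-probabilities."""
    avg = sum(frame_scores) / len(frame_scores)
    if avg >= fake_threshold:
        return "fake"      # confident enough to auto-label
    if avg >= review_threshold:
        return "review"    # too uncertain; queue for a human
    return "real"

# A crude fake is caught outright, but a convincing one hovers near
# the boundary and lands in the limited human-review queue.
print(screen_video([0.90, 0.95, 0.88]))        # fake
print(screen_video([0.55, 0.60, 0.48, 0.52]))  # review
```

The better the deepfake, the closer its scores sit to the review band, which is exactly where human attention is scarcest during a fast-moving election cycle.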

This raises concerns about the potential exploitation of confusion by powerful individuals, who may propose limiting freedom of speech under the guise of restoring certainty.

Video source: YouTube/Marshall Artist

Ethical Dimensions of Deepfakes in Political Campaigns

Ethical concerns arise when deepfakes are used in political campaigns. An important issue is the erosion of public trust in media and information sources.

As deepfake technology becomes more advanced and widespread, it becomes harder for the general public to distinguish between authentic and manipulated content.

This blurring of reality and fiction can hinder democratic processes and voters’ ability to make informed decisions.

Additionally, malicious actors can exploit deepfakes to spread falsehoods and engage in the character assassination of political candidates.

Disseminating fabricated content can have severe consequences for individuals’ careers and personal lives, as well as for the democratic integrity of the electoral process.

Addressing these ethical challenges requires robust regulations. Policymakers and technology platforms must collaborate to develop mechanisms for detecting and countering deepfake content while safeguarding free speech rights and media freedoms.
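One such mechanism is content provenance, the idea behind emerging standards like C2PA: a publisher cryptographically signs media at creation, so that any later modification can be detected. The sketch below is illustrative only; it uses a shared-secret HMAC as a stand-in for the public-key signatures real provenance systems use, and the key and function names are assumptions for the example.

```python
# Hedged sketch of provenance-based media authentication: the publisher
# signs a hash of the file, and anyone holding the signature can later
# confirm the file is byte-for-byte unchanged since signing.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-secret"  # placeholder; real systems use asymmetric keys

def sign_media(data: bytes) -> str:
    """Sign a SHA-256 digest of the media bytes."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(data: bytes, signature: str) -> bool:
    """Check the signature in constant time to resist timing attacks."""
    return hmac.compare_digest(sign_media(data), signature)

original = b"frame-data-of-a-genuine-campaign-video"
sig = sign_media(original)
print(verify_media(original, sig))                # True
print(verify_media(original + b"tampered", sig))  # False
```

In a real deployment the verification keys would be public and distributed out of band, so anyone, not just the publisher, could check a file's credentials before resharing it.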

Election Integrity and Metaverse Deepfake Threat

In the evolving metaverse, the deepfake threat to election integrity intensifies, compounding existing concerns about verifying people’s true identities. The deceptive use of deepfakes, such as fake celebrity endorsements and political impersonations, adds to these anxieties.

According to Gartner’s projections, around 25% of individuals are expected to spend at least an hour daily in the metaverse by 2026. This virtual space will facilitate connections with partners, friends, colleagues, educators, and businesses.

The fundamental question arises: can we genuinely authenticate the identities of those we interact with in the metaverse? Furthermore, AI could analyze our actions, emotions, and exchanges to influence us.

To replicate actions realistically, the metaverse will rely on various technologies such as sensors, eye tracking, face tracking, and haptics. 

Consequently, metaverse platforms will accumulate vast amounts of user data, including biometric details. Should this data be exploited by malicious actors, it could enable the creation of avatars indistinguishable from reality.

The Silver Lining

Despite the concerning impact of political deepfakes, there is a silver lining. Educating the public about the existence and potential consequences of deepfakes can play a crucial role in reducing confusion and increasing trust in social media news.

Encouraging skepticism and questioning the authenticity of the content we encounter can be a powerful tool in combatting the spread of false information.

As we move forward, the future of political deepfakes will depend on the actions taken by various stakeholders, including tech companies, policymakers, social media platforms, journalists, and users.

Collective efforts to address this issue are essential to safeguarding the integrity of elections and maintaining informed public discussions.

The effects of political deepfakes on public discourse and the shaping of opinions are too significant to be overlooked. 

As we remain alert and discerning, we contribute to a more resilient democratic landscape that stands strong against manipulative forces seeking to exploit the power of deepfake technology.

Deepfakes and Election: Key Takeaways

Deepfake vulnerabilities raise questions about election information security and authenticity. The increasing use of sophisticated AI technology presents a significant risk, with the potential for widespread disinformation and voter misdirection.

The psychological mechanisms behind deepfake-driven propaganda highlight the need to raise public awareness and implement effective countermeasures.

However, there is hope in fostering awareness and skepticism among users to combat the spread of disinformation.

The future of political deepfakes relies on the collective efforts of tech companies, social media platforms, policymakers, journalists, and the public to protect the democratic process. Together, these actions can help mitigate the negative effects of deepfakes and uphold the integrity of elections.


Neil Sahota
Neil Sahota (萨冠军) is an IBM Master Inventor, United Nations (UN) Artificial Intelligence (AI) Advisor, author of the best-seller Own the AI Revolution and sought-after speaker. With 20+ years of business experience, Neil works to inspire clients and business partners to foster innovation and develop next generation products/solutions powered by AI.