Congressional Challenges with Deepfakes: Securing Democracy


“Deepfakes,” a term coined in 2017 to describe convincing forgeries created with artificial intelligence (AI) across media such as photos, audio, and video, pose a significant and growing challenge to national security.

As AI advances, the implications for areas such as congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms become increasingly significant.

This exploration of the challenges posed by deepfake technologies examines the urgent calls for congressional action and the measures introduced to counteract the growing threats.

Navigating Deepfake Threats: Advisory Strategies and Legislative Solutions

In September 2023, a joint advisory from the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, and the NSA urged organizations to strengthen their verification capabilities and enhance their deepfake detection techniques.

The advisory highlighted a concerning surge in threats from synthetic media, posing an escalating challenge for users of modern technology and owners of critical infrastructure.

This challenge extends beyond explicit content, revealing the potential for malicious cyber activity through altered audio and manipulated telecommunications. The breadth of concerns surrounding the proliferation of deepfakes demands a comprehensive, proactive approach to the evolving threat landscape.
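One basic building block behind the verification capabilities the advisory calls for is cryptographic hashing: a recipient can confirm that a media file is byte-for-byte identical to the version its legitimate source published. The short Python sketch below illustrates that idea under stated assumptions; the file name and digest are hypothetical placeholders and are not drawn from the advisory itself.

```python
"""Minimal sketch: verify a media file against a digest published by its source.
Assumes the legitimate publisher distributes a SHA-256 digest out of band;
the file name and digest below are hypothetical placeholders."""

import hashlib
import hmac
from pathlib import Path


def sha256_digest(path: Path) -> str:
    """Compute a file's SHA-256 digest without loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_published_digest(path: Path, expected_digest: str) -> bool:
    """Return True only if the file is identical to the published original."""
    # compare_digest performs a constant-time comparison.
    return hmac.compare_digest(sha256_digest(path), expected_digest.lower())


if __name__ == "__main__":
    # Hypothetical usage: substitute a real file and its published digest.
    print(matches_published_digest(Path("press_briefing.mp4"), "0" * 64))
```

A hash check like this only proves that a file has not been altered since publication; it says nothing about whether the original recording was genuine, which is one reason the advisory also stresses detection techniques.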

Introduced earlier this year, the Preventing Deepfakes of Intimate Images Act aims to prohibit the nonconsensual disclosure of intimate deepfake images and to give affected individuals additional legal avenues for recourse. This legislative measure adds an essential layer of protection against the growing challenges posed by deepfake technologies.

Deepfakes and National Security: Urgent Calls for Congressional Action

During a House Oversight and Accountability Committee hearing on November 8, legal experts and technologists called on the U.S. Congress to take proactive measures against the challenges posed by deepfake technologies.

They stressed the need to impose restrictions on deepfake use and advocated for new protections against digital media manipulation, especially for women and minority communities.

The experts highlighted how deepfakes worsen existing harms, particularly the targeting of vulnerable individuals, such as women, with nonconsensual sexual deepfake content. Notably, research by the Dutch company Sensity AI found that, since 2018, roughly 96 percent of deepfakes online have been sexually explicit content featuring women who did not consent.

Urgently addressing the societal impact of deepfakes is crucial to safeguarding personal privacy, public trust, and national security, and to preserving the integrity of essential institutions.

Democratic Initiative: The Urgency of Combating Deepfakes in the Tech World

Coinciding with this urgent call to action, a group of 30 House Democrats, led by Rep. Derek Kilmer, has taken a stand. In a letter addressed to top executives of influential companies, including OpenAI, Google, Amazon, TikTok, and Microsoft, the lawmakers expressed apprehension about the use of AI to create deepfake content.

Their joint appeal for cooperation and transparency reflects growing worry about the hazards of manipulated synthetic media, highlighting the risk that online users will be deceived and calling for collaborative initiatives to mitigate those dangers.

The letter explicitly raises concerns about malicious actors using generative AI applications for deceptive purposes, particularly on widely used platforms.

With the 2024 presidential election on the horizon, the lawmakers stress the urgency of combating the proliferation of false content that could undermine faith in U.S. democratic institutions, and they ask the companies to disclose, by December 8, 2023, their efforts to monitor and counter deceptive synthetic media.

Confronting the Deepfake Dilemma: Vital Legislative Measures

Amid the challenges posed by deepfakes originating from unidentified foreign entities, concern is intensifying within U.S. intelligence circles about their potential influence on American political leadership. Recognition of this urgency has sparked a growing call for legislative action to address the misuse of real people’s likenesses.

This collective push finds a tangible response in the DEEP FAKES Accountability Act, a legislative initiative that would mandate the labeling of deepfakes on online platforms and impose criminal penalties for failing to identify malicious content.

Moreover, collaborative efforts between the federal government and technology companies to develop anti-deception software are coming to the forefront, underscoring the urgency created by the alarming ease of producing deepfakes.

The societal impact, including damage to individuals’ credibility and the potential to inflame racial tensions, reinforces the call for congressional intervention rather than reliance on the market alone.

Deepfake Defense: Securing Funding and Spearheading Initiatives

Securing additional funding for critical entities, such as the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation (NSF), is essential in developing advanced and practical tools for detecting deepfakes. Experts underscore these agencies’ pivotal role in spearheading initiatives to counter the growing threat of manipulated media.

At the hearing discussed earlier, the emphasis on DARPA’s active role in addressing deepfake challenges aligned with a broader consensus calling for more robust efforts. This echoes a growing recognition of the need to address explainability in AI and to build trust and safety at the grassroots level.

Recent executive orders are viewed as positive steps in the fight against deepfakes. The use of tools such as watermarking was highlighted, demonstrating a commitment to the authenticity of government documents and a strategic approach to combating disinformation.

Additionally, the directive for the Secretary of Commerce to establish standards and best practices for detecting fake content further solidifies the comprehensive effort to tackle the deepfake menace.
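To make the watermarking idea concrete, here is a deliberately simple Python sketch that hides and then reads back a short provenance tag in an image using least-significant-bit encoding. It is a toy illustration under assumed file names and payloads, not the technique used for government documents; production systems rely on far more robust watermarks and signed content credentials.

```python
"""Toy least-significant-bit (LSB) watermark: embed and recover a short
provenance tag in an image. File names and the payload are hypothetical."""

import numpy as np
from PIL import Image


def embed_watermark(src_path: str, dst_path: str, payload: str) -> None:
    """Hide a null-terminated UTF-8 payload in the blue channel's low bits."""
    img = np.array(Image.open(src_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(payload.encode() + b"\x00", dtype=np.uint8))
    blue = img[..., 2].flatten()
    if bits.size > blue.size:
        raise ValueError("image too small for payload")
    blue[: bits.size] = (blue[: bits.size] & 0xFE) | bits
    img[..., 2] = blue.reshape(img.shape[:2])
    Image.fromarray(img).save(dst_path, format="PNG")  # lossless, so the bits survive


def extract_watermark(path: str, max_bytes: int = 256) -> str:
    """Read the low bits back out until the terminating null byte."""
    img = np.array(Image.open(path).convert("RGB"))
    bits = img[..., 2].flatten()[: max_bytes * 8] & 1
    return np.packbits(bits).tobytes().split(b"\x00", 1)[0].decode(errors="replace")


# Hypothetical usage:
# embed_watermark("document.png", "document_marked.png", "issued-by:example-agency")
# print(extract_watermark("document_marked.png"))
```

The lossless PNG output matters: re-encoding with a lossy format would destroy a low-bit watermark like this, which is one reason robust provenance schemes favor signed metadata and perceptual watermarks instead.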


Closing the Gaps

Current approaches to combating synthetic media are inadequate. This realization, voiced by industry authorities, catalyzes a deeper exploration of the need for comprehensive legislative measures. 

These shortcomings extend beyond the visible challenges of nonconsensual content to the unsettling potential for malicious impersonation of government officials, military figures, and law enforcement personnel.

This revelation propels the conversation beyond simplistic detection and removal efforts, highlighting the need for proactive and transformative solutions.

Expert recommendations feed naturally into this ongoing discourse, underscoring the importance of heightened public awareness, improved media literacy, and reinforced privacy protections.

Congressional Challenges with Deepfakes: Key Takeaways

Deepfakes have become a significant national security concern, demanding immediate attention. Calls for congressional action, exemplified by advisory strategies, legislative proposals, and a Democratic initiative, highlight the severity of the issue, particularly its disproportionate impact on vulnerable communities.

The DEEP FAKES Accountability Act is a key legislative measure that emphasizes collaboration between government and technology companies, while additional funding for vital agencies such as DARPA and the NSF is crucial to developing advanced tools for deepfake detection.

The revelation that deepfake threats extend beyond explicit content to malicious impersonation underscores the need for transformative, proactive solutions. Meeting these challenges will require heightened public awareness, improved media literacy, and reinforced privacy protections.

As we bridge the gaps in our defense against synthetic media, this exploration identifies the issues and illuminates a path forward. Through collective and strategic efforts, we can ensure a secure future amid evolving technological risks.


Neil Sahota
Neil Sahota (萨冠军) is an IBM Master Inventor, United Nations (UN) Artificial Intelligence (AI) Advisor, author of the best-seller Own the AI Revolution, and a sought-after speaker. With more than 20 years of business experience, Neil works to inspire clients and business partners to foster innovation and develop next-generation products and solutions powered by AI.