In an era dominated by the transformative power of artificial intelligence (AI), AI-driven disinformation has emerged as a critical challenge to the bedrock of democratic societies. Recognizing the severity of this issue, the Brookings Institution’s Center for Technology Innovation convened an in-person panel discussion titled “The Dangers Posed by AI and Disinformation During Elections” on Wednesday, March 13, 2024, from 2PM to 3PM in Washington, D.C., bringing together a broad spectrum of experts.
The distinguished panel brought together renowned experts from across the policy and technology spaces. Darrell M. West, a Senior Fellow at the Brookings Center for Technology Innovation and the current Douglas Dillon Chair in Governmental Studies at the Brookings Institution, moderated the panel. The panelists included Dr. Soheil Feizi, an Associate Professor in the Department of Computer Science at the University of Maryland (UMD); The Honorable Shana M. Broussard, the current Commissioner of the Federal Election Commission (FEC); and Matt Perault, the Director of the Center on Technology Policy at the University of North Carolina at Chapel Hill (UNC Chapel Hill).
The Brookings Institution, a US-based global think tank, conducts research on domestic and foreign policy, technology, and other issues important to researchers and policymakers. The event aimed to dissect the intricate nexus of AI-driven disinformation and its capacity to influence electoral outcomes across key democracies globally, including the United States, Indonesia, Mexico, and India.
More than a mere panel discussion, this event made a resounding call to action for policymakers, technologists, and citizens, emphasizing the immediate need to combat the dual threats posed by AI and disinformation. The experts highlighted the subtle, yet powerful ways AI can fabricate narratives that obscure the boundary between reality and fiction, undermining the public’s ability to discern truth in the political realm.
With profound consequences, AI and disinformation compromise the integrity of elections, sway public opinion, and destabilize the essence of democratic engagement. As FEC Commissioner Shana M. Broussard stated, “there needs to be a push for disclaimers at the federal and state level on combatting misinformation and disinformation using generative AI models.”
According to Matt Perault, the Director of the Center on Technology Policy at UNC Chapel Hill, “the first step is: how do we think about evaluating public policy problems? I have been motivated by the Knight Foundation’s new investments in translational work around connecting policymakers with academia as a way to combat disinformation.” The Knight Foundation is a national non-profit focused on solving questions such as how societies can shield themselves from the covert influence of artificially crafted false narratives.
The event underscored the complex nature of this challenge, illustrating that it transcends mere technical fixes or advancements in AI models. Instead, it represents a crisis at the intersection of ethics, governance, and human rights, requiring a comprehensive response that includes regulatory frameworks, technological interventions, and, importantly, public education and awareness. Thus, the Brookings Institution forum served not only as a platform for problem identification, but also as a guiding light toward strategies that can strengthen democracies against the unique challenges of the digital era.
The Scope of AI-Driven Disinformation: Demystifying the Digital Maze
“A seismic shift is occurring in the landscape of democracy, marked by the advent of AI-generated disinformation,” FEC Commissioner Shana M. Broussard explained during the panel. This shift represents an existential threat to the foundational pillars of transparency, accountability, and trust that support our electoral systems.
The concern extends beyond the sheer volume of disinformation to its sophistication and credibility. AI’s ability to learn and adapt is being exploited to craft and disseminate false narratives with pinpoint accuracy. These narratives, designed to exploit societal divides and individual biases, can subtly shape public opinion and influence voter behavior, often unbeknownst to the public.
Indirectly, there’s a more insidious effect on society’s fabric: the erosion of trust in public discourse. As the moderator of the panel, Darrell M. West, pointed out, “there has been an emergence in new generative AI tools designed to create – among other things – fake content videos and audio tapes.” When citizens struggle to distinguish between authentic and fabricated realities, the foundation of informed consent—the cornerstone of democratic governance—is compromised.
Panelists at the Brookings event explored case studies worldwide, showing that no democracy is immune to these challenges. From the United States to Indonesia, Mexico, and India, AI-driven disinformation campaigns have been customized to exploit local vulnerabilities, disrupting elections and polarizing societies. This global perspective underscores the need for a coordinated international response.
Addressing AI-driven disinformation requires reevaluating the legal and ethical frameworks that govern political campaigning and speech. “We are tasked with leveraging AI’s benefits while guarding against its potential to deceive and divide,” explained Dr. Soheil Feizi, emphasizing the need for balance.
The battle against AI-driven disinformation calls for a collaborative approach involving governments, tech companies, civil society, and the media. Developing and enforcing standards for AI use in the public sphere, alongside education and awareness initiatives, is crucial. These measures can empower citizens to critically evaluate information and resist the allure of false narratives.
The Brookings Institution’s event highlighted the vast scope and profound implications of AI-driven disinformation. Navigating this challenge demands a united effort to protect democratic processes and ensure that the digital age remains a force for empowerment rather than disenfranchisement.
Legislative and Regulatory Responses: Navigating Towards Transparency and Accountability
“Establishing clear guidelines for AI’s role in political campaigns is crucial for maintaining democratic integrity,” stated FEC Commissioner Shana M. Broussard during the discussions. The other panelists echoed this sentiment, highlighting the urgency of crafting regulatory safeguards for AI’s electoral use.
The dialogue ventured into the complex territory of developing legislation that keeps pace with AI’s rapid evolution without compromising democratic values. One proposed strategy was introducing disclaimers on AI-generated content to enhance transparency, enabling voters to make more informed choices.
However, these measures face challenges, including balancing regulatory efforts with First Amendment rights. “Efforts to enhance transparency must not infringe upon free speech,” stated panel moderator Darrell M. West, emphasizing the need for a balanced approach that respects fundamental democratic rights while also keeping the public safe.
The discussion also considered various approaches at state and federal levels, with some states leading innovative regulatory solutions that could inform national policy. These solutions range from disclaimers to more comprehensive transparency requirements for political advertising, acknowledging the need to modernize existing legal frameworks to address the digital age’s challenges.
Moreover, the event highlighted the importance of stakeholder engagement in regulatory processes, offering a democratic platform for public input on policies affecting the information ecosystem. “Public participation is key to ensuring that AI regulations in elections are both effective and equitable,” emphasized panelist Matt Perault.
The Risks of AI in Elections: Charting Unknown Waters
The integration of AI into political discourse introduces a new layer of complexity and risk, especially concerning elections. Dr. Soheil Feizi highlighted the dual-edged nature of AI in democracy. “The capacity for misinformation creation and evidence falsification stands as one of the most pressing concerns of AI in electoral contexts,” he stated, capturing the core issue.
This perspective sparked a broader discussion among the panelists, emphasizing the need for a comprehensive understanding of AI’s multi-faceted risks to electoral integrity. The concern extends beyond the volume of misinformation to its authenticity and targeted nature: AI’s algorithms can aim disinformation campaigns at specific demographics with precision, exploiting societal vulnerabilities and influencing voter perceptions in subtle, yet significant ways.
Accountability challenges were another focus, as the complex network of AI-driven content creation and dissemination complicates pinpointing disinformation sources and enforcing accountability. AI’s anonymity and scalability can shield malicious actors, making it difficult to deter and penalize the spread of false narratives. “We must cultivate a digital ecosystem where transparency, accountability, and ethical AI use prevail,” Dr. Soheil Feizi advocated, underscoring the need for innovative and adaptable strategies to navigate the evolving landscape of AI and disinformation.
“We require a multi-faceted approach that encompasses robust regulations, cutting-edge technical solutions like digital watermarking, and extensive public education initiatives,” summarized panel moderator Darrell M. West, stressing the complexity of the challenge. This societal issue requires wide-ranging solutions, not technological fixes alone.
Panelists called for dynamic regulatory measures to keep pace with AI’s rapid development. This involves updating electoral laws to reflect the nuances of digital campaigning and disinformation, ensuring transparency in political advertising, and establishing accountability for those who misuse AI for electoral manipulation. “Regulations need to evolve as quickly as the technologies they are meant to oversee,” explained FEC Commissioner Shana M. Broussard, emphasizing the importance of agility in legislative updates.
Advancements in Technological Solutions
Technical innovations, such as digital watermarking, were highlighted as essential in the fight against disinformation. These tools can help authenticate content, aiding users in distinguishing between real and AI-generated misinformation. “Integrating technical safeguards is vital for rebuilding trust in our information ecosystem,” stated Dr. Soheil Feizi, underlining innovation’s role in combating disinformation.
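To make the authentication idea concrete: one simple provenance mechanism (a minimal sketch, not the specific watermarking scheme discussed on the panel) is for a publisher to attach a keyed cryptographic fingerprint to content it releases, so that any later alteration fails verification. The key name and content below are hypothetical illustrations.

```python
import hashlib
import hmac

# Hypothetical secret key held only by the legitimate publisher
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Compute a keyed fingerprint (HMAC-SHA256) over the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Return True only if the content still matches the publisher's fingerprint."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"Official campaign statement, March 2024"
tag = sign_content(original)

print(verify_content(original, tag))            # authentic content verifies
print(verify_content(b"Altered statement", tag))  # tampered content fails
```

This illustrates only the authentication half of the problem; statistical watermarks embedded directly in AI-generated text or media work differently, but serve the same goal of helping users distinguish genuine from fabricated content.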
The panel universally recommended widespread public education campaigns. These efforts aim to equip citizens with the critical thinking skills needed to navigate the complex information landscape. “Empowering the public to assess AI-generated content critically is foundational to democratic resilience,” noted Matt Perault. Such initiatives can enable voters to make informed decisions, reinforcing the democratic process against the erosive effects of disinformation.
The Brookings Institution’s “The Dangers Posed by AI and Disinformation During Elections” event marked a significant advancement in the dialogue on technology’s role in democracy. The forum catalyzed a comprehensive examination of the challenges posed by AI-driven disinformation by gathering experts from diverse fields.