AI and US Election Rules

As the use of AI in political campaigns becomes more prevalent, the Federal Election Commission (FEC) is weighing whether AI-generated content constitutes fraud or legitimate electioneering. The question at hand is whether candidates should be allowed to use AI to create deepfaked media for political advertisements. The answer is probably yes, since political ads have always involved a degree of deception, but the concern over AI should draw attention to what those ads say and how they are distributed rather than to the technology alone.

AI will be used not only for deepfaked images but also for personalized communications and interactive campaigning. AI chatbots representing campaigns can respond to voters' questions instantly and at scale, creating a personalized town hall experience. However, it is unclear who is responsible for keeping political advertisements grounded in reality. The FEC's remit is campaign finance, while the Federal Communications Commission (FCC) regulates political advertising in broadcast media. The Federal Trade Commission (FTC) enforces truth-in-advertising standards, but political campaigns have been exempted from those requirements on First Amendment grounds.

To address these gaps, platforms like Google have adopted policies requiring disclosure when AI-generated images, audio, or video appear in political advertisements. But relying on the voluntary actions of private companies is not enough to protect democracy. The FEC should use what limited authority it has to regulate AI-generated content, and its existing rule against fraudulent misrepresentation should be expanded to explicitly cover deepfaked AI materials.

Congress also has a role to play in strengthening regulation. The Honest Ads Act and the proposed REAL Political Ads Act would expand disclosure requirements for online content and legally require campaigns to disclose their use of AI-generated content. Beyond disclosure, however, laws against false or misleading media in political campaigns need strengthening regardless of how that media is generated or where it is published. Congress should allocate more funding for enforcement and clarify the boundaries between the FCC's, FEC's, and FTC's roles in governing political speech.

The media can also play a crucial role in reporting on the authenticity of videos, images, and audio recordings. While deepfake technology may make it difficult to verify the truth of private conversations, the media can still provide valuable information to the public.

Individuals concerned about the impact of AI on political campaigns can submit comments through the FEC's open public comment process before October 16. Demonstrated public interest is what will prompt action and ensure that AI-generated content is regulated.

In conclusion, addressing the use of AI in political campaigns requires a comprehensive approach involving the FEC, Congress, and the media. By focusing on the content and distribution of political advertisements, rather than solely on AI technology, we can work towards safeguarding our democracy.
