Deepfake Election Interference in Slovakia
In a recent incident of deepfake election interference, a well-produced and well-timed audio recording surfaced, purporting to capture Michal Šimečka, leader of the Progressive Slovakia party, and a journalist from the daily newspaper Denník N discussing how to rig the election. Both Šimečka and Denník N immediately denounced the audio as fake, and the fact-checking department of the news agency AFP confirmed that it showed signs of AI manipulation. However, the recording was posted during the 48-hour moratorium ahead of the polls opening, making it difficult to debunk widely under Slovakia’s election rules. The post also exploited a loophole in Meta’s manipulated-media policy, which treats only faked videos as violating its rules.
This incident raises concerns about the growing threat of deepfake election interference. As I recently highlighted in an article, countries like Russia and China often test their attack techniques on smaller nations before turning them on larger ones. The Slovakia incident can therefore be read as a preview of what to expect in next year’s US election, and it underscores the urgent need for countries to strengthen their defenses against AI-driven disinformation campaigns.
The use of deepfakes in election interference poses a significant challenge for social media platforms and fact-checkers. While efforts have been made to combat manipulated media, the focus has mainly been on videos in which a person has been edited to appear to say something they never said. Audio deepfakes like this one fall outside the purview of current policies, allowing them to spread unchecked during critical periods such as election moratoriums.
Key Points:
1. A deepfake audio recording surfaced in Slovakia, purporting to capture a politician and a journalist discussing how to rig the election.
2. The audio was difficult to debunk because it was posted during the election moratorium, and it exploited a loophole in social media platform policies that cover only video.
3. This incident serves as a preview of potential deepfake attacks in larger elections, such as the upcoming US election.
4. The incident highlights the need for countries to strengthen their defenses against AI-driven disinformation campaigns.
5. Current policies mainly cover video-based deepfakes, leaving audio-based deepfakes free to spread during critical periods.