
Large Language Models and Elections

Large language models (LLMs) are rapidly transforming the political landscape, and campaigners are already pressure-testing their possible uses. In the 2024 presidential election campaign, political operatives could use AI-generated personalized fundraising emails, text messages, and even deepfaked campaign avatars to connect with potential voters. LLMs could conduct micro-polling and message testing, and solicit perspectives and testimony from voters individually and at scale. At its best, AI could make political engagement more accessible and ease polarization. At its worst, it could propagate misinformation and increase the risk of voter manipulation.

LLMs have the potential to help people think through, refine, or even discover their own political ideologies. Research has shown that many voters come to their policy positions reflexively, out of a sense of partisan affiliation, and that the very act of reflecting on those positions through discourse can change and even depolarize them. LLMs could give campaigns what is essentially a printing press for time, letting candidates exchange essay-length thoughts with each voter on the issues they care about.

However, this use of AI could also go badly. In the time-honored tradition of demagogues worldwide, an LLM could inconsistently represent the candidate's views to appeal to the individual proclivities of each voter. The current generation of LLMs is fundamentally obsequious: the models tend to tell each user what that user wants to hear, which is exactly the demagogue's pandering. Current LLMs are also known to hallucinate, going entirely off-script and producing answers that have no basis in reality. And while these models do not experience emotion in any way, some research suggests they have a sophisticated ability to assess the emotion and tone of their human users.

Campaigns should have to clearly disclose whether a text agent interacting with a potential voter, via traditional robotexting or the latest AI chatbots, is human or automated. A public, anonymized log of chatbot conversations could help hold candidates' AI representatives accountable for shifting statements and digital pandering. Candidates who use chatbots to engage voters may not want to make all transcripts of those conversations public, but their users could easily choose to share them.
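
To make the logging idea concrete, here is a minimal sketch in Python of what such an anonymized transcript log could look like. Everything in it is an assumption for illustration: the function names (`anonymize`, `log_exchange`), the salted-hash anonymization, the `chatbot_log.jsonl` file, and the record fields are not drawn from any existing system.

```python
# Hypothetical sketch of a public, anonymized chatbot-transcript log.
# Each reply is recorded against a salted, one-way hash of the voter's
# identifier, so auditors can compare what the bot told different voters
# without learning who those voters are. All names are illustrative.

import hashlib
import json
import time


def anonymize(voter_id: str, salt: str) -> str:
    """Replace a voter identifier with a salted, one-way hash."""
    return hashlib.sha256((salt + voter_id).encode("utf-8")).hexdigest()[:16]


def log_exchange(log_path: str, voter_id: str, salt: str,
                 question: str, bot_reply: str) -> None:
    """Append one question/reply pair to an append-only JSON-lines log."""
    record = {
        "voter": anonymize(voter_id, salt),
        "timestamp": time.time(),
        "question": question,
        "reply": bot_reply,
        "disclosed_as_ai": True,  # the disclosure requirement, made explicit
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: if two voters ask about the same issue, the public log makes
# any inconsistency in the bot's answers visible to auditors.
log_exchange("chatbot_log.jsonl", "voter-123", "campaign-salt",
             "Where does the candidate stand on energy policy?",
             "The candidate supports expanding renewable energy.")
```

One design note: a secret salt matters here, because voter identifiers such as phone numbers come from a small, guessable space, and an unsalted hash could be reversed by brute force.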

To help voters chart their own course in a world of persuasive AI, we need stronger nationwide protections on data privacy, as well as the ability to opt out of targeted advertising, to shield us from the potential excesses of this kind of individualized persuasion. LLMs have the potential to be a force for good in politics, but only if we take steps to ensure they are used ethically and transparently.
