Opinions of Tuesday, 8 April 2025

Columnist: Joseph Opoku Mensah

AI at the Polls: Are voters making informed choices or programmed decisions?


At a time when artificial intelligence (AI) is drastically changing how people interact with one another, one of the most hotly contested questions is whether generative AI has influenced voting behavior.

AI-powered tools such as chatbots, deep learning algorithms, and language models are becoming essential to political communication as elections are increasingly fought online as well as on the ground.

Generative AI tools such as ChatGPT, Snapchat's My AI, Meta AI, and Google's Gemini (formerly Bard) use large language models (LLMs) to create content that mirrors how voters think and to hold convincing, human-like conversations.

Political campaigns, advocacy groups, and independent actors now use AI to produce messages tailored to different voter segments. These systems analyze social media trends, voting records, and demographic data so that customized political messaging can be delivered to each segment.

AI-powered political advertising reaches voters individually, adjusting language, tone, and arguments to shape how each person perceives a candidate or an issue.
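To make that targeting concrete, here is a minimal sketch of how a campaign tool might vary a single message by voter segment. The segment names, issues, tones, and templates are hypothetical illustrations, not details drawn from any real campaign system.

    # Illustrative sketch only: the segments, issues, and templates below are
    # hypothetical and are not taken from any real campaign tool.

    VOTER_SEGMENTS = {
        "young_urban": {"top_issue": "jobs", "tone": "energetic"},
        "rural_farmers": {"top_issue": "farm support", "tone": "reassuring"},
        "retirees": {"top_issue": "pensions", "tone": "formal"},
    }

    TONE_OPENERS = {
        "energetic": "Big changes are coming, and you are part of them.",
        "reassuring": "You have worked hard, and that work will be protected.",
        "formal": "Our commitment to you remains firm and fully costed.",
    }

    def tailor_message(segment_name: str, candidate: str) -> str:
        """Build one ad variant per segment: same candidate, different emphasis."""
        profile = VOTER_SEGMENTS[segment_name]
        opener = TONE_OPENERS[profile["tone"]]
        return f"{opener} {candidate} will make {profile['top_issue']} the first priority."

    if __name__ == "__main__":
        for segment in VOTER_SEGMENTS:
            print(f"{segment}: {tailor_message(segment, 'Candidate A')}")

Even this toy example shows the core idea: the same candidate is framed three different ways, and no voter sees the version written for someone else.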

Before AI entered the picture, voters received their political information from the news media, from family and friends, and from campaign representatives who told them about candidates and issues.

Voters now turn to chatbots and other AI-based systems to find political information, understand policy positions, and discuss contested issues. These systems are trained on large bodies of political data and produce responses that sound neutral, factual, and knowledgeable.

Yet AI shapes how that information is delivered: it chooses what to emphasize, how to structure themes, and which data to highlight, all of which can influence voters' perceptions. AI-generated responses present themselves as objective while carrying biases absorbed during training, biases that a listener would more readily detect in a conversation with another person.

AI-driven algorithms largely determine which online information reaches voters and which information they interact with. Social media platforms and search engines use AI to serve customized content, routinely feeding users material that matches their existing opinions rather than exposing them to different viewpoints.

When voters engage in AI-assisted political discussions, the chatbot's responses often reflect what they have previously searched for and discussed on the platform. A feedback loop develops: the system keeps producing statements that fit the user's preferred beliefs, strengthening existing political preferences and crowding out balanced dialogue.

Social media platforms such as Facebook, X (formerly Twitter), and TikTok use AI-driven algorithms to curate and recommend political content to users. These recommendation systems are designed to maximize engagement by showing users content that aligns with their interests and past interactions.

While this curation increases engagement, it also creates echo chambers: digital spaces where individuals are exposed primarily to information that reinforces their existing beliefs. During elections, the consequences can be significant.
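The sketch below shows, in deliberately simplified form, how engagement-driven ranking can narrow a feed over time. The scoring rule and the sample articles are hypothetical; real platform recommenders are far more complex, but the feedback dynamic is similar in spirit.

    # Illustrative sketch only: the scoring rule is deliberately simple and
    # hypothetical; real recommender systems are vastly more sophisticated.
    from collections import Counter

    ARTICLES = [
        {"id": 1, "topic": "party_A", "stance": "pro"},
        {"id": 2, "topic": "party_A", "stance": "anti"},
        {"id": 3, "topic": "party_B", "stance": "pro"},
        {"id": 4, "topic": "party_B", "stance": "anti"},
    ]

    def rank_feed(articles, click_history):
        """Rank articles by how often the user has clicked similar content.

        Engagement-maximizing logic: content resembling past clicks scores
        higher, so each click narrows what surfaces next (the echo-chamber
        effect described above).
        """
        counts = Counter((a["topic"], a["stance"]) for a in click_history)
        return sorted(articles,
                      key=lambda a: counts[(a["topic"], a["stance"])],
                      reverse=True)

    if __name__ == "__main__":
        history = []
        # Simulate a user who always clicks the top-ranked item.
        for round_no in range(3):
            feed = rank_feed(ARTICLES, history)
            clicked = feed[0]
            history.append(clicked)
            print(f"Round {round_no + 1}: feed order "
                  f"{[a['id'] for a in feed]}, clicked {clicked['id']}")

Because every click raises the score of similar items, the simulated user sees the same kind of content at the top of the feed round after round, which is the echo-chamber effect in miniature.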

AI-powered content curation may unintentionally amplify misinformation, deepen political divisions, and make it harder for voters to access balanced perspectives. AI-generated deepfake videos, false narratives, and misleading statements have already spread online, further complicating the information landscape for voters.

The growing influence of AI in elections raises ethical concerns. Is it possible for voters to make fully independent decisions when AI tools influence the content they encounter? Should AI-generated political advertisements be regulated?

Some governments and election commissions worldwide are beginning to explore policies to monitor AI-driven campaign strategies. Moreover, there is concern about AI’s potential to manipulate public opinion. Unlike human campaigners, AI lacks ethical judgment and can be programmed to prioritize engagement over truth. If left unchecked, AI could be exploited to mislead voters through hyper-personalized, emotionally charged messaging.

As AI continues to evolve, it is crucial for policymakers, tech companies, and civil society to establish guidelines for its ethical use in political communication. Transparency in AI-generated content, fact-checking mechanisms, and voter education on digital literacy are essential steps in ensuring that AI serves democracy rather than distorting it.

While AI has the potential to enhance political engagement and make information more accessible, its role in elections must be carefully managed to prevent unintended consequences. The question remains: are we truly making independent choices, or are we subtly guided by AI-driven narratives?

While the 2024 elections marked a significant milestone, the discussion about AI's role in democracy is still in its early stages.