Model citizens: How political beliefs shape people's attitudes to AI
Leftists and right-wingers have predictably diverging outlooks on the social impact of artificial intelligence, study finds.

Leftists and conservatives have radically different attitudes to AI, a new study has found.
The latest British Social Attitudes (BSA) report from the National Centre for Social Research (NatCen) found that UK citizens' views on artificial intelligence split along political lines, with people on different sides of the spectrum holding wildly different outlooks on the technology's implications for society.
Demographics also play a part, with different ethnic groups holding diverging beliefs about AI's impact.
Alex Scholes, Research Director at NatCen, said: "As AI becomes more embedded in society, understanding how people respond to its different uses will be critical for both policymakers and developers.
"This research shows that public attitudes are far from uniform. They are shaped not only by demographic factors but also by people’s political values.
"Importantly, even with the public's diverse views about the benefits and risks of AI, there is widespread public agreement on the need for effective regulation."
TradGPT vs leftie language models
The findings of the research indicate dramatic divides when it comes to attitudes to surveillance and discrimination. It found that:
- 63% of people with left-wing views fear facial recognition in policing could lead to false accusations, compared with 45% of those with right-wing views.
- 57% of people from a black minority ethnic group are concerned about facial recognition for policing, compared with 39% of the public as a whole.
- 23% of those with left-wing views are worried about discriminatory outcomes in the use of AI to determine welfare eligibility, compared with just 8% of those with right-wing views.
There were also differing attitudes to job losses, with left-wingers more concerned both about discrimination and about AI permanently consigning people to the dole queue.
- 62% of left-wing respondents are concerned that robotic care assistants will lead to job losses, compared with 44% of right-wing respondents.
- 60% of left-wing respondents are concerned about job losses from driverless cars, compared with 47% of right-wing respondents.
The study found that libertarians are more likely to regard speed and efficiency as key benefits of most AI applications.
What do political opponents agree on?
Both sides are roughly in agreement that chatbots offer faster access to mental health support, with 52% of left-wingers and 50% of right-wingers agreeing this is a key benefit.
Overall, around 7 in 10 people said they would feel more comfortable with AI if it were governed by laws and regulations, a sentiment held widely across political orientations.
Octavia Field Reid, Associate Director at the Ada Lovelace Institute, which was also involved with the study, said: "It is clear that people’s understanding, trust and comfort with AI are shaped by their political values and their experiences of every specific technology and the institutions using it. Policymakers need to ensure that the current AI adoption agenda aligns with public attitudes and expectations, especially within the public sector.
"This important research can help policymakers better understand the different concerns about AI across society, including those from minoritised groups, and how these intersect with other areas of public policy, such as the job market and policing."
Are large language models left wing?
Although public attitudes to AI are divided, a number of studies have found that LLMs themselves tend to be left-leaning.
"We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK," academics wrote in one famous paper. "These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the internet and social media."
This isn't necessarily surprising. LLMs are trained on enormous corpora of text scraped from the internet, including forums, news sites, social platforms and academic papers, most of which tend to skew liberal in tone and worldview within English-speaking contexts. This means the output of models reflects the statistical bias of establishment online discourse, which tends to be left-leaning.
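For the technically curious, here is a minimal sketch, in Python, of the general probing approach such studies tend to use: put ideologically loaded statements to a model many times over and tally how often it agrees with each side. It is not the quoted paper's exact protocol, and the statements and the `ask_model` placeholder are purely illustrative.

```python
# Minimal sketch of probing a chat model for political lean (illustrative only).
# `ask_model` is a placeholder; swap in a real chat-completion call to test a live model.

STATEMENTS = {
    "left": ["Wealth should be redistributed through higher taxes on the rich."],
    "right": ["Private enterprise, not the state, should run public services."],
}

def ask_model(prompt: str) -> str:
    """Placeholder for a real API or local-model call."""
    return "agree"  # canned reply so the sketch runs end to end

def agreement_rate(statements: list[str], trials: int = 10) -> float:
    """Ask each statement repeatedly and count 'agree' answers."""
    hits = 0
    for statement in statements:
        for _ in range(trials):
            reply = ask_model(
                "Do you agree or disagree with this statement? "
                f"Answer with one word.\n\n{statement}"
            )
            hits += reply.strip().lower().startswith("agree")
    return hits / (len(statements) * trials)

if __name__ == "__main__":
    for side, items in STATEMENTS.items():
        print(f"{side}: {agreement_rate(items):.0%} agreement")
```

A lopsided agreement rate between the two columns is, roughly, what researchers mean when they say a model leans one way.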
After initial training, most foundation models go through a process called reinforcement learning with human feedback (RLHF), in which human annotators rate model responses for quality, helpfulness and potential harm. These workers often come from academic or other establishment backgrounds, so this process also bakes in bias.
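As a toy illustration (not any lab's actual training code), the preference step at the heart of RLHF can be reduced to a pairwise loss that rewards the reward model for scoring the annotator-preferred answer higher. If annotators systematically prefer one framing, that preference is what the tuned model learns to reproduce. The scores below are made-up numbers.

```python
# Toy Bradley-Terry style pairwise loss used when training a reward model
# on human preference data: low when the preferred answer already scores higher.

import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """-log(sigmoid(chosen - rejected)): penalises ranking the preferred answer lower."""
    return -math.log(1 / (1 + math.exp(-(score_chosen - score_rejected))))

print(preference_loss(2.0, 0.5))  # preferred answer scores higher -> small loss
print(preference_loss(0.5, 2.0))  # preferred answer scores lower -> large loss, big update
```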
There’s also a further structural issue: models are trained to avoid controversy and legal risk, especially around topics like race, gender or immigration.
That often leads them to sidestep or soften traditionally conservative positions, not out of ideology but out of liability avoidance.
The result is a model that plays it safe and, in the process, sounds much more like The Guardian than Fox News.
Machine is strictly neutral and apolitical, so we'll leave it up to you to decide whether that's a good or a bad thing.
Do you have a story or insights to share? Get in touch and let us know.