Superlatives get thrown around whenever artificial intelligence (AI) comes up in tech circles. Depending on who you ask, it’s either the end of the world or the future of everything. But AI has a different reputation outside of boardrooms, research labs, and LinkedIn posts, where real life takes place: in living rooms, classrooms, supermarkets, and on buses. To understand where we really are with AI, asking the experts is no longer enough. We also need to know what the public thinks.

The Public Mood: Intrigued but Uneasy

The latest data paints a complex picture of public sentiment. A nationally representative survey of 3,513 UK residents in November 2024, conducted by the Ada Lovelace Institute and The Alan Turing Institute as part of their “How do people feel about AI?” study, reveals significant gaps in public understanding and trust.

While the survey shows widespread AI interaction, trust remains limited. Even so, it is rising among younger adults: according to the UK government’s Public Attitudes to Data and AI Tracker Survey (Wave 4, 2024), trust increased from 34% to 41% among those aged 18-34, and from 26% to 38% among those aged 35-54.

This disconnect is more than academic. As AI systems are increasingly tasked with decisions that affect people’s lives, from credit approvals to job screenings, many feel they’re being impacted by something they don’t fully understand.

“It’s like there’s a second brain running the world,” said Nadine, a retail manager in Croydon. “But nobody tells you the rules.”

A Tale of Two Experiences

AI is not a homogeneous experience in daily life. The UK government’s Public Attitudes to Data and AI Tracker surveys indicate that people’s perceptions of AI’s effects vary considerably from one field to another.

AI offers real convenience and control to some users. Sarah, a working mother in Manchester, explains how artificial intelligence benefits her family: “My banking app detects fraudulent transactions before I ever notice them, Spotify puts together the perfect playlists for the kids’ car rides, and Google Assistant keeps track of our shopping lists. It’s like having a helpful assistant.”

Others, however, have a murkier experience. Consider Michael, a recent graduate looking for work in Birmingham. After tailoring his CV for an online application, he found it was rejected “within seconds” of submission. “No feedback. Just a flat no. I’m not even sure anyone saw it,” he says.

This raises a crucial point: many people feel AI is being used against them rather than with them. That distinction matters. When technology functions as a collaborator, it builds trust. When it acts as a gatekeeper, it breeds resentment.

Even positive experiences can feel unsettling when they’re not transparent. David, a teacher in Cardiff, appreciates how his school’s new system automatically generates lesson plan suggestions, but admits: “It saves me hours, but I worry: what if it’s missing something important? What if I become too dependent on it?”

The Trust Gap

According to the Alan Turing Institute’s Public Attitudes to Data and AI research, concerns about AI oversight remain significant. The UK government’s tracking research shows mixed public awareness and concern about AI use in public services.

The worry isn’t just about automation; it’s about accountability. If an algorithm denies you a loan, flags your CV, or labels your behaviour as “risky,” who do you turn to?

“You can’t argue with a machine,” said Tom, a bus driver in East London. “And half the time, you don’t even know a machine’s made the decision.”

Interestingly, attitudes vary significantly by demographics, as shown in government research. The data reveals clear generational differences in AI trust and acceptance, though finer-grained demographic breakdowns require more detailed analysis of the survey results.

There’s also a deeper cultural tension at play. Many people believe that AI is fundamentally shaped by those who design it, and right now, those designers are largely based in Silicon Valley, not Stoke-on-Trent. This geographic and cultural distance matters more than tech companies often realise.

Everyday AI Myths (and Realities)

Many common beliefs about AI don’t quite line up with current capabilities. The Ada Lovelace Institute and Alan Turing Institute research explores public perceptions of AI capabilities across different applications. Let’s examine some key misconceptions:

Myth: AI can “think” for itself
Reality: AI processes information through statistical pattern recognition across large datasets. Unlike people, modern systems have no consciousness or genuine understanding, even though they can display complex behaviour and unexpected emergent capabilities.
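
To make that concrete, here is a minimal sketch of what “statistical pattern recognition” means in practice. It is a toy predictive-text model (the corpus is invented for illustration) that suggests the next word purely from co-occurrence counts, with no grasp of what any word means:

```python
# A toy "predictive text" model: pure frequency counting, no understanding.
from collections import Counter, defaultdict

# Illustrative training corpus (made up for this sketch).
corpus = (
    "add milk to the shopping list "
    "add bread to the shopping list "
    "play songs for the car ride"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in training."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(predict_next("shopping"))  # -> "list": frequency, not comprehension
print(predict_next("the"))       # -> "shopping" (seen twice, vs "car" once)
```

Large language models are vastly more sophisticated, but the underlying principle is the same: prediction from learned patterns, not comprehension.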

Myth: AI is always more objective than humans
Reality: AI systems inherit biases from their training data and design decisions, and can even amplify them. For instance, Amazon’s experimental hiring algorithm, trained on historically male-dominated hiring data, learned to discriminate against women.
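
The mechanism is easy to demonstrate. The toy screener below, which is purely illustrative and not Amazon’s actual system, is “trained” on hypothetical historical hiring decisions that skewed male; because one CV keyword correlates with gender, the model penalises it even though it says nothing about skill:

```python
# A toy CV screener that inherits bias from skewed historical decisions.
from collections import Counter

# Hypothetical past screening outcomes (1 = hired), skewed against women.
history = [
    ({"python", "football"}, 1),
    ({"python", "rugby"}, 1),
    ({"java", "football"}, 1),
    ({"python", "womens_chess_club"}, 0),  # qualified, rejected historically
    ({"java", "womens_chess_club"}, 0),
]

seen, hired = Counter(), Counter()
for keywords, outcome in history:
    for kw in keywords:
        seen[kw] += 1
        hired[kw] += outcome

def score(cv: set[str]) -> float:
    """Average historical hire rate of the CV's known keywords."""
    rates = [hired[kw] / seen[kw] for kw in cv if kw in seen]
    return sum(rates) / len(rates) if rates else 0.0

# Two equally skilled candidates; one keyword merely signals gender.
print(round(score({"python"}), 2))                       # 0.67
print(round(score({"python", "womens_chess_club"}), 2))  # 0.33: penalised
```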

Myth: Only tech-savvy people use AI
Reality: Anyone who uses Google Maps, Instagram filters, predictive text, or Netflix recommendations is already using AI, often without knowing it. The technology has become deeply embedded in everyday digital tools.

Myth: AI will inevitably replace most human jobs
Reality: While AI will automate certain tasks, research suggests it’s more likely to transform jobs than eliminate them entirely. The 2022 Public Attitudes to Data and AI (PADAI) Tracker Survey found public perception almost evenly split: 34% of UK adults believed AI’s impact on job opportunities had been negative, while 33% believed it had been positive.

Without clear public education, these myths persist, creating both unrealistic fears and expectations that can hinder thoughtful AI governance.

Global Perspectives: Not Just a UK Story

While this article focuses on UK data, international research hints at fascinating cultural differences. The comparative data available suggests that AI acceptance varies significantly across countries, though comprehensive cross-cultural studies are still scarce.

Why This Disconnect Matters in 2025

The way people feel about AI is not just a PR problem; it’s a policy and business imperative. In 2025, company leaders can no longer address AI governance inconsistently or in isolated pockets of their organisations. As AI becomes intrinsic to operations and market offerings, companies need systematic, transparent approaches to building public trust.

The European Union’s AI Act, now in effect, includes transparency obligations: people must be informed when they are interacting with an AI system rather than a human. Similar rules are emerging globally, from California’s proposed AI transparency requirements to the UK’s AI Safety Institute guidelines, reflecting growing recognition that public acceptance isn’t optional for AI’s continued development.

The business case is compelling too. Companies that proactively address public concerns may see competitive advantages, though comprehensive data on consumer preferences for AI transparency remains limited.

Recommendations: Making AI Feel Less Alien

How do we bridge the gap between innovation and public understanding? Here are evidence-based approaches that leading organisations are implementing:

Transparency as Standard Practice

Companies should make it clear when and how AI is being used in everyday services. This means going beyond generic privacy policies to provide specific, understandable explanations. Leading technology companies are beginning to open their decision-making processes to public scrutiny, though comprehensive transparency standards are still evolving.

Best practice examples include features that explain algorithmic decisions, though widespread adoption of such transparency measures remains limited.
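
As a sketch of what such a feature could look like, the hypothetical loan-assessment function below never returns a bare “no”: every verdict carries human-readable reasons. The thresholds and criteria are invented for illustration, not drawn from any real lender:

```python
# Sketch: a decision function that always explains itself.
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    reasons: list[str] = field(default_factory=list)

def assess_loan(income: float, debt_ratio: float, missed_payments: int) -> Decision:
    """Rule-based verdict with hypothetical criteria; reasons always included."""
    reasons = []
    if debt_ratio > 0.40:
        reasons.append(f"Debt-to-income ratio {debt_ratio:.0%} exceeds the 40% limit")
    if missed_payments > 2:
        reasons.append(f"{missed_payments} missed payments in the past year (maximum 2)")
    if income < 18_000:
        reasons.append("Income is below the £18,000 qualifying threshold")
    if reasons:
        return Decision(approved=False, reasons=reasons)
    return Decision(approved=True, reasons=["All lending criteria met"])

result = assess_loan(income=25_000, debt_ratio=0.45, missed_payments=0)
print(result.approved)            # False
print("\n".join(result.reasons))  # the applicant sees why, not just "no"
```

Had Michael’s job application been handled this way, he would at least have known why the answer was no.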

Meaningful Human Oversight

Maintain genuine “human-in-the-loop” approaches, not human review as a rubber stamp. People should know that there’s someone accountable behind the technology who can intervene when needed. This isn’t just good practice; it’s increasingly a legal requirement under emerging AI regulations.
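
As a minimal sketch of the difference, assuming a hypothetical scoring system that attaches a confidence value to each automated decision: anything below a set threshold is routed to a named human reviewer rather than auto-finalised, and the accountable party is recorded either way.

```python
# Sketch: low-confidence decisions go to people, not to an auto-stamp.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # hypothetical cut-off; below this, a person decides

@dataclass
class Outcome:
    decision: str
    confidence: float
    decided_by: str  # the accountable party, recorded for audit

review_queue: list[dict] = []

def route(case_id: str, suggested: str, confidence: float) -> Outcome:
    """Auto-finalise only high-confidence cases; queue the rest for humans."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Outcome(suggested, confidence, decided_by="model-v2 (automated)")
    review_queue.append({"case": case_id, "model_suggestion": suggested})
    return Outcome("pending human review", confidence, decided_by="casework team")

print(route("A-101", "approve", 0.97))
print(route("A-102", "reject", 0.61))  # queued for a human instead
print(review_queue)
```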

Community-Centred Design

Before launching new AI systems, engage with the communities they’ll affect. What problems do they want solved? What are their specific concerns? Research suggests areas where public support may exist for thoughtful AI implementation, though specific community engagement methods require further development.

AI Literacy for Everyone

Introduce accessible AI education in libraries, community centres, and workplaces, not just schools. AI literacy programmes are being developed across a range of institutions, though systematic evaluation of their effectiveness is still in its early stages.

Stories Over Statistics

To shift perception, share more stories of AI making real lives better, not just boosting efficiency or profit. Focus on human outcomes and relatable benefits. Highlight diverse voices and experiences, not just success stories from tech-savvy early adopters.

Cultural Sensitivity and Localisation

Recognise that AI acceptance varies significantly across communities and cultures. What works in London may not resonate in rural Wales, and what succeeds in Western markets may fail elsewhere. Tailor communication and implementation strategies accordingly.

Looking Forward: The Opportunity for Inclusive AI

The conversation around AI has, for too long, been driven by engineers and entrepreneurs. But as these systems quietly become part of everyday life, it’s time to bring more voices to the table, not as an afterthought but as co-creators of our AI-enabled future.

The public doesn’t want AI to be perfect; they want it to be fair, understandable, and aligned with human values. With 78 percent of organisations now using AI in at least one business function, up from 72 percent in early 2024 and 55 percent a year earlier, according to McKinsey’s latest State of AI report, this isn’t a future concern—it’s today’s reality.

The path forward requires acknowledging that technology adoption is fundamentally a human challenge, not just a technical one. When we listen to everyday voices—from Nadine’s confusion to Sarah’s satisfaction to Michael’s frustration—we create the opportunity for AI that truly serves everyone.

The question isn’t whether AI will continue advancing; it will. The question is whether that advancement will happen with public understanding and trust, or despite their absence. The choice is ours, and the window for making it thoughtfully is narrowing.

Maybe, just maybe, building AI that everyday people can understand and trust isn’t too much to ask. In fact, it might be the most important technical challenge we face.

References

Ada Lovelace Institute and Alan Turing Institute. “How Do People Feel about AI? A Nationally Representative Survey of Public Attitudes to Artificial Intelligence in Britain” (2025). Available at: https://attitudestoai.uk/findings-2025
UK Government Department for Science, Innovation and Technology. “Public attitudes to data and AI: Tracker survey (Wave 4) report” (December 2024). Available at: https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-4/
Office for National Statistics. “Understanding AI uptake and sentiment among people and businesses in the UK” (June 2023). Available at: https://www.ons.gov.uk/businessindustryandtrade/itandinternetindustry/articles/understandingaiuptakeandsentimentamongpeopleandbusinessesintheuk/june2023
McKinsey & Company. “The state of AI: How organisations are rewiring to capture value” (2025). Available at: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
Reuters. “Amazon scraps secret AI recruiting tool that showed bias against women” (2018). Available at: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/

About the author

Uju Eziokwu is a data analytics professional with a passion for AI-driven business transformation. She specialises in applying machine learning, automation, and predictive analytics to help organisations in sectors like healthcare, finance, and manufacturing optimise operations and make smarter, data-informed decisions. With a strong background in Applied AI, she is committed to advancing ethical, transparent, and impactful AI systems.