The Trust Deficit in the Age of AI: Why People Hesitate — and How to Bridge the Gap
- Cecilia Machuca
- Oct 17
- 4 min read
The Paradox of Progress
Artificial Intelligence has quietly become the operating system of modern life — filtering resumes, approving loans, writing code, and diagnosing patients. Yet as AI grows smarter, human trust is shrinking.
According to a 2025 KPMG global survey, 61% of people worldwide are wary of trusting AI systems, and only 41% of Americans say they’re comfortable relying on them — well below the global average.¹ This paradox reveals a widening gap between AI’s technical power and society’s emotional confidence in it.
The result is what experts now call the trust deficit — a structural tension shaping the future of work, technology, and leadership.
Why People Don’t Trust AI
1. Opacity: The “Black Box” Problem
Most AI models, especially deep neural networks, don’t easily show how they reach decisions. That’s fine when recommending music — but not when deciding who gets a mortgage or a medical treatment.
People trust what they understand. A 2024 ScienceDirect study found that lack of explainability is the single most cited barrier to trusting AI across industries.²
2. High Stakes, Low Accountability
AI’s entry into high-risk domains — healthcare, HR, law enforcement — amplifies anxiety. In healthcare, only 36% of AI users trust chatbots for medical information.³
When mistakes happen, it’s unclear who’s accountable: the developer, the data scientist, or the user? This absence of clear responsibility is one of the most significant predictors of low trust.
3. Declining Institutional Confidence
Trust in AI mirrors trust in institutions — and both are sliding. Edelman’s 2025 Trust Barometer shows trust in AI companies fell from 61% to 53% globally over five years. In the U.S., it’s just 32%, compared with 72% in China.⁴
This isn’t just cultural — it’s systemic. People doubt whether corporations or governments deploying AI will act transparently and ethically.
4. Ethical Ambiguity & Bias
AI reflects the data it’s trained on. When those data sets contain bias, AI scales inequity. In hiring, two-thirds of Americans (66%) say they would refuse to apply to a company using AI to screen job applications.⁵
Even small errors in bias detection can have large reputational consequences, especially in HR, finance, and healthcare — fields where fairness isn’t optional.
5. Cognitive Dissonance: We Use What We Don’t Trust
Paradoxically, use and trust don’t move together. KPMG’s “Trust, Attitudes and Use of AI” report found that 66% of workers rely on AI outputs without verifying them, and 56% have made mistakes because of AI-generated content.⁶
Even more telling, 57% of employees hide their AI use from managers — worried it may be perceived as dishonest.⁷ This dual behavior — dependence without transparency — is the essence of the trust deficit.
The Trust Deficit in Numbers
| Domain | Trust Level | Research Source |
| --- | --- | --- |
| General AI trust | 61% wary globally; 41% trust in the U.S. | KPMG (2025) |
| Trust in AI companies | Down from 61% to 53% globally | Edelman (2025) |
| AI in healthcare | Only 36% trust chatbots for medical information | KFF (2025) |
| AI in hiring | 66% would refuse to apply to AI-screened jobs | Pew Research (2024) |
| Hidden workplace use | 57% conceal AI reliance from managers | Business Insider (2025) |
Why This Matters
Low trust doesn’t just slow innovation — it reshapes the human-technology contract.
- Adoption bottlenecks: Distrust stalls implementation, even for systems that outperform humans.
- Reputational risk: Companies deploying AI without transparency risk public backlash.
- Regulatory pressure: Mistrust fuels political appetite for restrictive legislation.
- Workplace friction: Employees who don’t trust AI tools disengage or bypass them altogether.
As one global executive summarized: “AI will not replace humans — but humans who don’t trust AI may be replaced by those who do.”
Rebuilding Trust: What the Data Suggests Works
1. Explainable, Human-Centric Design
Transparent systems build trust. Research shows users are more likely to accept AI decisions when given simple, human-readable explanations, not technical summaries.⁸
- Offer clear reasoning (“The system recommended this because…”)
- Disclose uncertainty or confidence intervals
- Enable users to review or appeal outcomes (a minimal sketch of this pattern follows this list)
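To make this concrete, here is a minimal Python sketch of what a human-readable decision payload could look like. The `Decision` shape, the toy loan-scoring rule, and the appeal route are illustrative assumptions, not any particular vendor’s API:

```python
# A minimal sketch of an explainable decision payload.
# The scoring rule and field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str                  # e.g. "approved" / "declined"
    confidence: float             # stated model confidence, 0.0-1.0
    reasons: list[str] = field(default_factory=list)  # human-readable factors
    appeal_url: str = "https://example.com/appeal"    # route to human review

def score_application(features: dict) -> Decision:
    # Toy linear rule standing in for a real model; the point is the
    # shape of the output, not the scoring logic.
    income_part = 0.35 * (features["income"] / 100_000)
    credit_part = 0.65 * (features["credit"] / 850)
    score = income_part + credit_part
    return Decision(
        outcome="approved" if score >= 0.5 else "declined",
        confidence=round(min(score, 1.0), 2),
        reasons=[
            f"Income contributed {income_part:.2f} to the score",
            f"Credit history contributed {credit_part:.2f} to the score",
        ],
    )

print(score_application({"income": 72_000, "credit": 690}))
```

The point is that every automated outcome ships with its reasons, its confidence, and a path to a human reviewer: the three ingredients the research above associates with acceptance.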
2. Accountability & Governance
Trust needs visible accountability. Organizations that establish AI ethics committees, third-party audits, and bias review processes show higher employee confidence and public credibility.⁹ Governance is not bureaucracy — it’s reputation insurance.
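Part of a bias review can be automated. As a rough sketch, the Python below computes a demographic parity gap (the difference in positive-outcome rates between groups) over an audit sample; the sample data and the idea of flagging a large gap for human review are illustrative assumptions:

```python
# A minimal sketch of one check a bias-review process might run:
# the demographic parity gap between groups. Data is illustrative.
from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Positive-outcome rate per group (e.g. hire rate per demographic)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["selected"]
    return {g: positives[g] / totals[g] for g in totals}

audit_sample = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 0}, {"group": "B", "selected": 1},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]
rates = selection_rates(audit_sample)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")  # a large gap goes to the ethics committee
```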
3. Trust Calibration & Transparency at Scale
Studies show that disclosing some model uncertainty (“70% confidence this is correct”) boosts trust more than projecting false certainty.¹⁰ Overconfidence erodes credibility; calibrated humility builds it.
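One way to test this in practice is a calibration check: among predictions made with roughly 70% stated confidence, roughly 70% should turn out correct. The sketch below bins a hypothetical prediction history by stated confidence and compares it with observed accuracy; the history and bin count are illustrative assumptions:

```python
# A minimal calibration check: group predictions by stated confidence
# and compare against observed accuracy. History is illustrative.
def calibration_report(preds: list[tuple[float, bool]], bins: int = 5) -> None:
    buckets = [[] for _ in range(bins)]
    for conf, correct in preds:
        buckets[min(int(conf * bins), bins - 1)].append((conf, correct))
    for i, bucket in enumerate(buckets):
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        print(f"bin {i}: stated {avg_conf:.2f} vs. actual {accuracy:.2f} (n={len(bucket)})")

history = [(0.95, True), (0.92, True), (0.90, False), (0.72, True),
           (0.70, False), (0.68, True), (0.55, False), (0.51, True)]
calibration_report(history)
```

When stated confidence runs persistently above actual accuracy, the honest fix is to report lower confidence, not to hide the numbers.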
4. Co-Design With Humans in the Loop
AI shouldn’t be a black box handed down from engineers; it should be a system co-designed with the people it serves. When users participate in training, feedback, or oversight, trust levels rise by up to 40%, according to a 2025 University of Queensland global study.¹¹
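A common engineering pattern for keeping humans in the loop is a confidence gate: the system auto-applies only high-confidence decisions and routes everything else to a reviewer queue. The threshold and queue below are illustrative assumptions, not a specific product’s API:

```python
# A minimal human-in-the-loop gate: low-confidence decisions are queued
# for a human reviewer instead of being auto-applied. Threshold is illustrative.
from queue import Queue

REVIEW_THRESHOLD = 0.80
review_queue: Queue = Queue()

def route(decision_id: str, outcome: str, confidence: float) -> str:
    if confidence >= REVIEW_THRESHOLD:
        return f"{decision_id}: auto-applied '{outcome}' ({confidence:.0%})"
    review_queue.put((decision_id, outcome, confidence))  # human signs off later
    return f"{decision_id}: queued for human review ({confidence:.0%})"

print(route("case-001", "approve", 0.91))
print(route("case-002", "decline", 0.62))
print("pending human reviews:", review_queue.qsize())
```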
5. Ethical Communication & Cultural Sensitivity
Different regions perceive AI differently. Emerging economies (Asia, LATAM) show higher trust (60%) than developed economies (40%).¹² Global companies must localize not just language — but ethics narratives — to match expectations of fairness, privacy, and inclusion.
The Way Forward
Trust isn’t a feature — it’s infrastructure. In the coming decade, the organizations that win won’t be those that automate faster, but those that build confidence deeper.
As the World Economic Forum put it in its 2025 briefing:
“AI trust is the currency of the future of work — and right now, it’s devalued.”
To close the trust deficit, we need transparent algorithms, accountable leadership, and emotionally intelligent communication. Because the future of AI won’t just depend on how powerful machines become — but on how much we humans believe in them.
Sources
1. KPMG (2025). Trust, Attitudes and Use of AI Report.
2. ScienceDirect (2024). Explaining the Role of Transparency in AI Acceptance.
3. KFF (2025). Health Misinformation Poll.
4. Edelman (2025). Trust Barometer.
5. Pew Research Center (2024). AI and Hiring Attitudes.
6. KPMG (2025). Global Survey.
7. Business Insider (April 2025). “Employees Are Hiding AI Use.”
8. Frontiers in AI (2025). Short Trust in Automation Scale (S-TIAS).
9. arXiv (2024). AI Accountability Frameworks.
10. arXiv (2025). Trust Calibration and Uncertainty in AI Models.
11. University of Queensland (2025). Global AI Trust Study.
12. Reuters (2025). Emerging Economies Lead in AI Trust.