
The Age of Agentic AI: Building Trust and Transparency

Image by GoldenDayz via Envato Elements

From autonomy to accountability: what the Guardian–AWS backlash tells us about AI ethics, regulatory compliance, and rebuilding digital trust.

When The Guardian US, in partnership with Amazon Web Services (AWS), published “The age of agentic AI: Building trust and transparency”, it was intended as a forward-looking exploration of how artificial intelligence can be deployed responsibly across industries. Written by Clarke Rodgers of the AWS Office of the CISO, the article promised to unpack how organisations can balance autonomy, transparency, and security in an era of increasingly “agentic” AI: systems capable of making independent decisions.

On the surface, it struck all the right chords: a message of responsible innovation, a call for proactive governance, and a vision of AI systems that not only perform tasks but do so ethically, securely, and transparently. Yet what followed was an unexpected storm of public backlash: hundreds of readers across social media dismissed it as corporate propaganda, advertorial spin, and even “AI gaslighting”.

This disconnect between the intended message and public perception is telling. It reflects not just scepticism toward one article, but a much deeper crisis of trust in AI and, by extension, in the institutions and corporations that promote it.

In this blog, Dr Richard Dune explores what this controversy reveals about the growing trust gap between AI innovation and public confidence, as well as what responsible governance must look like in the age of autonomous technology.

What the Guardian–AWS article set out to say

The article framed “agentic AI” as the next frontier: autonomous systems capable of performing actions without direct human input. It promised a future where such systems could:

  • Automate complex workflows,

  • Reduce human error, and

  • Create new efficiencies across industries, including finance, healthcare, and logistics.

However, Rodgers acknowledged the obvious tension: autonomy introduces risk. If AI can act independently, what safeguards ensure those actions remain ethical, secure, and accountable?

The piece proposed a familiar triad of solutions:

  1. Security-first design - Embedding zero-trust architectures and real-time monitoring
  2. Human oversight - Through “human-in-the-loop” and “human-on-the-loop” frameworks, ensuring critical decisions always involve human judgment (see the sketch after this list)
  3. Transparency-by-design - Making the inner workings, limitations, and data pathways of AI systems explainable to both regulators and customers.
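
To make the human-oversight principle concrete, here is a minimal sketch in Python of a human-in-the-loop checkpoint. It is purely illustrative; the class names, risk scores, and threshold are assumptions for this example, not any vendor’s actual framework. The idea is simply that the system may propose an action, but anything above a defined risk threshold is held until a named person approves it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ProposedAction:
    description: str        # what the AI system wants to do
    risk_score: float       # 0.0 (trivial) to 1.0 (critical), supplied upstream
    approved: bool = False
    reviewer: str | None = None
    reviewed_at: str | None = None

RISK_THRESHOLD = 0.3  # anything above this requires explicit human sign-off

def record_human_decision(action: ProposedAction, reviewer: str, approve: bool) -> ProposedAction:
    """Capture a named person's decision; nothing high-risk executes without one."""
    action.approved = approve
    action.reviewer = reviewer
    action.reviewed_at = datetime.now(timezone.utc).isoformat()
    return action

def execute(action: ProposedAction) -> str:
    """Low-risk actions proceed automatically; higher-risk ones need sign-off."""
    if action.risk_score <= RISK_THRESHOLD:
        return f"Executed automatically: {action.description}"
    if action.approved:
        return f"Executed with approval from {action.reviewer}: {action.description}"
    return f"Held for human review: {action.description}"

# A high-risk action is held until a named reviewer approves it
refund = ProposedAction("Issue £5,000 refund to customer 1042", risk_score=0.8)
print(execute(refund))                                          # held for review
print(execute(record_human_decision(refund, "j.smith", True)))  # executed with approval
```

The detail matters less than the principle it encodes: oversight only means something if the hold is enforced in the workflow itself, not merely described in a policy document.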

Rodgers positioned these principles not only as risk-mitigation strategies but also as competitive advantages: companies that get AI governance right, he argued, will build stronger customer relationships, brand trust, and long-term resilience.

The conclusion was simple yet powerful:

“Success in this space belongs to organisations that balance innovation with responsibility.”

Between innovation and influence: The fine line of corporate narratives

In theory, few would disagree. Responsible AI governance, grounded in transparency, accountability, and security, is critical.

But this was no ordinary opinion piece. It was paid content, commissioned by AWS and published through The Guardian Labs, the paper’s branded content division. That disclosure, though transparent, changed the tone entirely.

Readers didn’t see AI thought leadership. They saw corporate messaging, dressed in journalistic form. And that subtle difference mattered.

To many, it felt like a powerful tech corporation telling the public: “Trust us, we’re handling AI responsibly.” But for a growing number of people, trust in big tech and media partnerships has already eroded beyond repair.

The comments sections across Facebook and X (Twitter) were a case study in disillusionment.

“Advertorial”, “Gaslighting”, “Propaganda”: The public reaction

The backlash was swift, sharp, and emotionally charged.

Some readers dismissed the article outright as “PR spin or advertorial masquerading as journalism”. Others went further, calling it “pure propaganda and corporate gaslighting”.

One commenter wrote:

“Building trust assumes that trust has to be manufactured. No, trust is given, not asked for - the word you’re looking for is ‘gaslighting.’”

Another took issue with the very terminology:

“Agency ≠ autonomy. In philosophy, agency implies self-directed will - something AI doesn’t have. Calling it ‘agentic’ is misleading.”

Others were more visceral:

“F*** off with AI.”
“Artificial Idiots.”
“I’m tired of being screwed over by automated systems and then told it’s progress.”

This wasn’t the voice of a few outliers. It was a cross-section of frustration, scepticism, and fatigue: evidence that for many people, AI isn’t an abstract ethical challenge or a technological marvel. It’s a daily irritation, a source of job insecurity, and a symbol of corporate overreach.

The real story: A breakdown of trust

The public response revealed several intersecting truths about the current AI landscape:

  1. The credibility gap
  2. The semantics of hype
  3. The lived experience of automation
  4. The emotional economy of AI.

The credibility gap

When a major newspaper runs an AI ethics piece paid for by a major AI corporation, credibility collapses. Readers perceive it not as education but as image management. The more these narratives invoke trust, the less trustworthy they appear.

This aligns with broader sociological research on digital governance: trust cannot be engineered through messaging. It must be earned through independence, accountability, and demonstrated fairness.

The semantics of hype

As one commenter astutely observed, the language of “agentic AI” smuggles in a dangerous assumption: that AI has intentions or self-directed will.

In reality, today’s large language models (LLMs) and autonomous systems follow pre-defined scaffolds and execute pre-programmed calls. The illusion of autonomy is just that: an illusion.
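
To see why the terminology grates, consider a deliberately simplified, hypothetical sketch of what sits behind most “agentic” behaviour: a loop over a fixed, developer-defined registry of tools. Every name below is invented for illustration; the point is that the model can only choose among calls a human has already written.

```python
# A hypothetical "agentic" workflow: every capability is a pre-programmed
# function, and the model's only freedom is choosing which one to call next.
def look_up_order(order_id: str) -> str:
    return f"Order {order_id}: dispatched"

def send_email(to: str, body: str) -> str:
    return f"Email queued to {to}"

TOOLS = {"look_up_order": look_up_order, "send_email": send_email}

def run_agent(plan: list[dict]) -> list[str]:
    """Execute a model-generated plan, but only against the fixed tool registry."""
    results = []
    for step in plan:
        tool = TOOLS.get(step["tool"])
        if tool is None:
            results.append(f"Refused: '{step['tool']}' is not a registered tool")
            continue
        results.append(tool(**step["args"]))
    return results

# The "autonomy" extends no further than what the developers put in TOOLS.
plan = [
    {"tool": "look_up_order", "args": {"order_id": "A-1042"}},
    {"tool": "delete_database", "args": {}},  # not registered, so it is refused
]
print(run_agent(plan))
```

Whatever the marketing language, the system’s “will” extends no further than that registry.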

When corporations blur those distinctions, whether intentionally or not, they erode public confidence. People recognise hype when they see it, and they react accordingly.

The lived experience of automation

While the article celebrated efficiency and workflow automation, many people associate AI with lost jobs, frustrating customer service bots, and opaque decision-making systems that affect their finances, healthcare, or benefits.

For those who have experienced algorithmic bureaucracy firsthand, assurances of “trust and transparency” ring hollow. The human cost of automation (alienation, disempowerment, and dehumanisation) rarely makes it into glossy corporate pieces.

The emotional economy of AI

Public anger toward AI isn’t just about the technology. It’s about power and accountability.

AI represents a system where human decision-making is displaced, yet responsibility remains opaque. When something goes wrong, from a denied loan to a customer service nightmare, it’s never clear who is accountable.

This vacuum of accountability fuels a deep moral resistance. As one reader put it, “AI doesn’t need to build trust; it needs to earn it from people, not marketing teams.”

What this means for AI governance and compliance

For organisations operating in regulated sectors such as health and social care, early years, education, and beyond, this debate is more than philosophical. It’s deeply practical.

Lessons for decision makers

The Guardian–AWS episode underscores three key lessons for anyone deploying or governing AI in real-world settings:

  1. Governance must be independent, not performative
  2. Transparency must include limitations, not just assurances
  3. Human-centred design is non-negotiable.

Governance must be independent, not performative

AI governance frameworks that live inside the same organisations developing or profiting from AI cannot claim true independence. Effective governance requires external scrutiny, ethical oversight, and regulatory alignment, not just corporate policies and self-assessments.

Within the health and social care context, this is analogous to regulation by the Care Quality Commission (CQC): accountability mechanisms must exist outside the organisation being evaluated.

For regulatory compliance systems like ComplyPlus™, this principle translates into transparent audit trails, human verification checkpoints, and documented evidence that decisions are traceable, reviewable, and reversible.
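
As a purely illustrative sketch (not ComplyPlus™ code, and with hypothetical field names and values), a traceable decision record might capture who or what made each decision, on what evidence, who can review it, and whether it can be reversed:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    decision_id: str
    made_by: str        # "system" or a named member of staff
    rationale: str      # the evidence or rule relied upon
    reviewable_by: str  # the role with authority to review the decision
    reversible: bool    # whether the decision can be undone
    timestamp: str

record = DecisionRecord(
    decision_id="TRN-2024-0031",
    made_by="system",
    rationale="Mandatory training certificate expired on 2024-05-01",
    reviewable_by="Registered Manager",
    reversible=True,
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Append-only log: every automated decision leaves evidence a human can audit.
with open("decision_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```

Logged consistently, records like this give inspectors and internal reviewers an evidence base that assurances alone cannot provide.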

Transparency must include limitations, not just assurances

Too often, transparency is treated as a communications exercise: a way to explain how AI works.

But meaningful transparency also involves acknowledging what AI cannot do, where it fails, and who remains accountable when it does.

In regulatory compliance, this could mean documenting the boundaries of AI-driven tools, clarifying that they support, not replace, human professional judgment.

A culture of “humble transparency” is far more credible than promises of perfection.

Human-centred design is non-negotiable

If there’s one consistent message from public reactions, it’s this: people want human connection, not just automation.

That’s why “human-in-the-loop” governance models, highlighted positively in the AWS piece, remain essential.

But human oversight must be meaningful, not tokenistic. It’s not enough to say a person can “override” AI decisions; they must have the training, authority, and ethical mandate to do so.

This applies equally to inspectors, clinicians, educators, and compliance officers who increasingly rely on digital tools to support their work.

A mirror to a larger crisis

The Guardian–AWS backlash is not just about one article. It’s a mirror reflecting the broader crisis of trust in the digital age.

The public has grown weary of being told that complex systems are “secure”, “ethical”, or “transparent”, especially when those assurances come from the same corporations driving the transformation.

In many ways, AI has become a metaphor for the larger human dilemma in the digital era:

  • We crave innovation but fear manipulation

  • We demand efficiency but resent dehumanisation

  • We seek transparency but distrust the narrators. 

Until organisations, whether media, tech firms, or regulators, confront this contradiction honestly, public scepticism will persist.

Towards authentic trust

True trust in AI will not come from glossy narratives or sponsored content.
It will come from a new kind of governance: one that is participatory, accountable, and demonstrably human.

For regulated sectors, this means embedding AI within frameworks that already uphold ethical accountability, such as:

  • The CQC Assessment Framework’s “I statements”, which amplify lived experience as a measure of quality

  • Robust data governance models, ensuring decision-making remains traceable and legally defensible

  • Continuous professional development (CPD) that empowers staff to understand and question AI outputs rather than simply follow them.

AI’s future in compliance, healthcare, and education depends not on autonomy, but on alignment: ensuring that technology advances organisational values rather than replacing them.

Conclusion: From agentic AI to accountable AI

The Guardian–AWS article tried to portray AI as a bridge between innovation and responsibility. But the public response revealed the opposite: a chasm between corporate optimism and societal reality.

The lesson is clear.

We cannot market our way into public trust. We must govern our way there.

At The Mandatory Training Group and through our ComplyPlus™ platform, we see technology not as a replacement for human judgment, but as a tool that strengthens accountability, transparency, and safety across regulated sectors.

The goal isn’t to make AI “agentic”; it’s to make compliance and governance intelligent, inclusive, and human-led.

Until trust is earned through action, openness, and shared accountability, AI will continue to be viewed not as an ally, but as another system of control.

The challenge for all of us, regulators, providers, and innovators alike, is to rebuild that trust, one transparent decision at a time.

About the author

Dr Richard Dune

With over 25 years of experience, Dr Richard Dune has a rich background in the NHS, the private sector, academia, and research settings. His forte lies in clinical R&D, advancing healthcare technology, workforce development, governance, and compliance. His leadership ensures that regulatory compliance and innovation align seamlessly.
