
April 19, 2026 · 6 min read

Efficient Is Not the Same as Trustworthy: What AI Automation Is Really Costing Your Team

The hidden cost of generic message patterns in a world shaped by AI in communication

Standing at a Virgin departure gate last week, I heard the boarding announcement deliver a line I hadn't noticed before: "This is a full flight. Overhead locker space will be limited. If you'd like, we can check your cabin bag."

I looked around. The gate was packed. The message made sense. I moved to the front of the line and held onto my bag.

On the return flight, something shifted.

Between security and the boarding gate, I'd been quietly moved from a middle seat to an aisle seat. The flight wasn't full. But the announcement at the gate was exactly the same. Word for word.

The message was wrong for the moment: a generic message delivered without context. And I checked out. I added it to the already mostly ignored pile of flight announcements and filed the whole system as not worth listening to.

The Real Problem Is Not the Message - It’s the Impact of AI Without Context

That story is about what happens to trust when AI automation is designed for efficiency rather than relevance.

Research on algorithm aversion, most notably from work by Berkeley Dietvorst and colleagues, shows that people penalise automated systems more harshly than humans who make a similar mistake.

One generic or contextually wrong automated interaction does disproportionate damage to trust. People disengage. They work around the system. Or they check out altogether.

This is the dynamic playing out in retail and across AI in business operations right now.

Status reports generated with AI assistance that are technically complete but strategically empty. Automated customer updates shaped by AI in communication that create false confidence, or just confusion.

Automation rolled out across project management environments to teams whose real operational reality was never included in the design.

Efficient? Often. Useful? Rarely.

"Automated and trusted are not the same thing. Once people stop believing the announcement, they don't just ignore that one message. They stop listening entirely."

What Most Leaders Get Wrong About AI and Leadership

The dominant framing right now is speed. AI and automation tools are being adopted at pace, and the pressure on senior leaders to pursue them is real. But in most organisations I work with, ROI measurement is still patchy.

The question being asked is: does it make us move faster? The question rarely asked is: at what cost?

The impact of AI is not just operational. It is behavioural.

A 2024 Wharton and GBK Collective survey of senior decision-makers found that 71% believe generative AI will lead to the atrophy of employee skills and will replace workers for at least some tasks. If your team is afraid, that fear is not a weakness. It is an operational signal.

Fear narrows options, accelerates protective instincts, and reduces people's capacity for learning.

Brené Brown's research on emotional granularity shows that most people can only name three emotions with any precision: happy, mad, and sad. Everything else sits unnamed and therefore unmanaged.

When leaders skip past that fear in the rush to deploy, they're not building momentum. They're building fragile structures, increasing the automation risk already present in the system.

What We Are Seeing: The Risk of Automation in Practice

At a practical level, the risk of automation is rarely visible at the point of deployment. It shows up later, in quieter ways.

Teams begin to second-guess outputs but stop raising them. Workarounds emerge outside formal systems. Decision-making slows, even as processes appear faster.

This is how automation risk accumulates - not through failure, but through erosion.

What We Are Doing About It - Building Trust in AI Systems

At 6R, we've been pressure-testing a simple framework with the project teams we work alongside. We call it ACE.

Acknowledge what’s actually there.
Courage isn't pretending the fear doesn't exist. It's naming it specifically enough to make it useful. That means getting into the details of what's underneath the fear for each particular role:

  • The project manager worried about accountability for AI-influenced decisions

  • The functional lead anxious about shadow systems undermining months of governance work

  • The team member wondering whether a decade of domain knowledge is now redundant

Create confidence through clarity, not compliance.

Confidence in AI automation comes from two things: clear ways of working, and deep system awareness. We now declare AI use in client contracts. We define at the start of every project where AI will and won't be used.

We test on ourselves before releasing into the wild. When automations touch external relationships like suppliers or customers, we build communication scaffolding around those interfaces, not just around the technology.

Engineer checkpoints that return decision-making to humans at the moments that matter.

We've introduced formal gated reviews, assigned sceptic roles in walkthroughs, and built in post-implementation review cycles.

Not just during rollout, but three, six, and twelve weeks later. An automation you never revisit is just complexity accumulating in the dark.

This is the work of building trust in AI systems - not through statements, but through consistent, visible practice.

A Practical Reflection for AI in Business Operations

Before your next automated workflow goes live, ask:

Is this relevant to the specific situation the person is actually in, or is it designed generically to cover every situation?

Who is named as accountable for the final decision when this automation influences the output?

When will you return to this, and what does it look like when you add the next automation on top of it?

"Thoughtful automation is relevant, helpful and true. Putting AI over a broken process is like putting lipstick on a pig."

TL;DR

The Hidden Mechanism: Efficient AI automation doesn't fail because of errors. It fails because it trains people to stop paying attention, and inattention in a complex project is a death sentence.

Diagnostic Questions:

1. When did you last audit your automated communications for contextual relevance, not just technical accuracy?

2. Do you know which team members have disengaged from a system they no longer trust, and what workarounds they've built instead?

3. Is the fear in your team being treated as a signal or dismissed as resistance?

Decision Framework: Before automating, ask whether the person receiving this message will trust it enough to act on it. Not just once, but the twentieth time.

If you can't guarantee relevance at the moment of delivery, you're not buying efficiency. You're spending trust.



Twenty years in retail transformation teaches you one thing: change only sticks when people do. Leonie McCarthy has spent her career guiding some of Australia’s leading retailers through organisational change, operational shifts and the quiet, behind-the-scenes decisions that shape real outcomes.

Her writing carries that same steadiness - clear thinking on change leadership, retail operations, strategic communication and the human side of transformation. 

No clutter. No theatrics. Just grounded insight shaped by the work itself.

Leonie McCarthy

