Your AI Strategy Has a Feelings Problem
Call me an optimist, but when organizations roll out AI, I like to think the intentions are usually pretty sincere. Leaders want productivity, yes. I’m not naïve. But I don’t think any leader sets out trying to turn the employee experience into a dystopia. Even the most spreadsheet-oriented executive knows that “we saved time” is a hollow victory if the humans doing the work feel like they’ve wandered onto the set of Severance.
That tension – between the promise of AI and the lived experience of people on the ground – is one of the (many) topics we dug into this past week in a webinar with Eric M. Bailey and Jeff Harry: Human-Centered Leadership: Transformation Through Creativity and Connection. It was one of those conversations that had real energy: funny, sharp, occasionally blunt, and (my favorite kind) practical in a way that made you rethink things you’ve been nodding along to for months.
If you missed it (or enjoy seeing people in Lego bowties get passionate about work), you can still watch the recording here.
Putting Humans at the Center of Work
One of the reasons I enjoyed the conversation so much is that Eric and Jeff didn’t treat “AI adoption” or organizational change like a technical hurdle you can clear with training and a comms plan. They treated it like what it actually is: a human decision that happens in a particular emotional environment, with particular risks and incentives – many of them invisible.
Bestselling author, speaker, and brain scientist Eric M. Bailey put it perfectly when he said: “It's really important for people to understand the way in which we emotionally process things… through emotion first, our feelings first, and then our brain will maybe sometimes create some reasoning around it and analytical thinking. But the emotion happens first. And if I could tell one thing to all leaders around the world, it is this: You cannot reason somebody out of a position they never reasoned themselves into.”
In other words, you can have the most elegant AI use cases in the world, but if people feel threatened, exposed, or vaguely replaceable, their brains will do what brains do: protect the organism first, and worry about “innovation” later.
Global expert on play Jeff Harry described what that looks like in a workplace without psychological safety – where people are operating in survival mode, managing perception, and keeping their best thoughts to themselves: “I feel like when we have psychological safety, I'm more likely to share my best ideas with you, but if I'm in survival mode, I'm not saying a thing. I'm just keeping my head down, right? I'm just trying to keep my job, right? And survival mode is when we feel most like machines, right?”
That one sentence explains a remarkable number of stalled AI pilots (and other failed strategic initiatives).
It also helps clarify why this moment is so tricky. AI rollouts tend to begin with a lot of high-minded language – about transformation, empowerment, the future of work – while employees are privately doing a very different kind of math. Not “Can I learn this?” but “What will it cost me to learn this out loud?” Not “Is this useful?” but “Is this safe?” Not “Will this help?” but “What’s the catch?”
Watch Jeff's TEDx Talk: Beyond Hierarchy: How Play Can Heal the Division Between Us

What’s the AI Adoption Problem, Really?
When you put it that way, it becomes obvious why the same pattern keeps playing out across industries: hope is high, but so is ambivalence. In the webinar, I mentioned recent Betterworks research that found something telling: the people most excited about AI are also the most afraid, as 50% of AI enthusiasts fear it’s going to replace them.
That is not a small footnote. That is the emotional weather system your rollout is happening inside.
This is also the moment where organizations tend to reach for the wrong explanations and prioritize the wrong things, mostly because they’re easier to operationalize. If adoption is slow, we blame the technology. The model isn’t good enough. The outputs aren’t reliable. The data isn’t clean. The tool isn’t integrated.
Or we blame the humans. "People are resistant." "People don’t like change." People “just need to be trained.” Sometimes those things are true. But what’s often truer is that people aren’t refusing to learn, they’re refusing to take a risk in an environment that doesn’t feel safe enough to reward the risk. To paraphrase the cheeky James Carville: “It’s the culture, stupid.”
Further Reading: 5 Ways Recognition Can Turn Your People Into AI Power Users
Recognition Makes Safety (and Strategy) Real
One of the most practical ways to move people out of survival mode is also one of the most underestimated: recognition. In the Global Research we dropped in January, Recognition as an Engine for Strategy, we found that when people had been recognized in the past month, their psychological safety scores were 21% higher.
And it’s not only receiving recognition – giving it matters too: people who thanked others recently showed a 15% increase in psychological safety. The reason HR should care about this (beyond the fact that psychological safety is foundational) is that it’s where strategy can become a shared reality that moves the business forward.
When psychological safety is high, alignment jumps: teams are 40% more likely to understand values. They are also 78% more likely to feel aligned to values, 44% more likely to understand strategic goals, and 79% more likely to feel aligned to those goals.
Read the research: Global Research: Recognition is the Strategy Engine
And when recognition is connected to strategic initiatives, people are 129% more likely to understand how their work contributes. That’s not a lot of happy talk. That’s a measurable shift in whether people feel safe enough – and clear enough – to invest in where the company is trying to go.
Understanding the Human Response to Change
Love it or hate it. Savior or destroyer of worlds. We can all agree that AI is a major disruption.
When you introduce AI into work, you are introducing change. And with it comes a new relationship: between employees and leadership, between employees and evaluation, between employees and their own sense of competence and identity. You’re asking people to become beginners again, in public, while still being measured like experts. You’re asking them to experiment while keeping performance high. You’re asking them to be honest about what’s broken early enough to fix it – without always giving them proof that honesty will be welcomed rather than punished.
“One of the things that is really important to understand is that change, in and of itself, is scary,” said Eric on the webinar. “There's research showing that when we experience anything novel, anything new, our amygdala, the part of our brain that recognizes threat, is triggered – literally anything new… And if people are coming into something with a fear, maybe irrational or maybe rational, you can't just talk them out of it.
“As leaders, this is the world that we're working with, and so how can we create environments where people do feel psychologically safe, where people do feel they have an opportunity to express their fears, their worries, how they feel, and to connect with people?” he asked. “I think that's something that we're really missing.”
That is why rolling out the AI initiative is rarely the hard part. The hard part is what happens in the space between rollout and real adoption, where people decide – individually, and often without telling you – whether this initiative is something they can safely invest in, or something they should politely comply with and wait out.
The Risk of Ignoring Your Humans
When they can’t invest, we tend to see the same few behaviors on repeat. Some are harmless. Some are expensive. Most are deeply human.
The first is polite compliance: everyone attends the training, a few people ask questions, a lot of people smile, and then the tool becomes something you “have access to,” not something you use.

The second is shadow adoption – people using AI quietly and unofficially, struggling alone, because it feels safer than raising their hand and admitting uncertainty.

The third is quiet workarounds, where teams route around your new tools to keep familiar systems alive, because familiarity is a form of safety.

And the fourth is the one Jeff warned about most memorably: defensive hostility, which often happens when AI starts taking the work people actually enjoy – the work that makes them feel competent and useful. Adoption begins to feel like a threat: less like improvement and more like erosion of self.
“Stop having AI take all the cool jobs,” Jeff said. “I talk about where AI initiatives are failing, and it's like, stop having AI take all the cool jobs. Stop stealing the fun jobs. I wish AI would just do my laundry... And that's where I feel like we're losing track of what's the point of having this… Companies are in such a race to get market share when it comes to AI that they're forgetting the people that it's supposed to be helping.”
It’s funny. It’s also a genuinely important design principle.
Because AI doesn’t just change workflows. It changes how people feel about work. And if you’re trying to pull efficiency levers without tipping the employee experience into Severance territory, that emotional layer isn’t a distraction from your strategy.
It is the strategy.
If you want to hear more from Jeff Harry and Eric M. Bailey (and even me!), please consider joining us at Workhuman Live in April, where we'll all be speaking! Workhuman Live is an incredible conversation among people who are thinking hard and creatively about work, and we'd love to have you there.

And please watch the webinar recording to hear the rest of this amazing conversation and share it with your team.
About the author
Darcy Jacobsen
Darcy is a passionate storyteller and champion of workforce transformation, human connection, and recognition-driven culture. As an author on the Workhuman Live Blog, she loves to connect deep research insights with modern workplace dynamics to uncover what really drives engagement, belonging, and happiness at work. With a background in communications and a master's in medieval history, she brings a unique perspective to her writing, taking deep dives into all topics around organizational psychology and the science of gratitude.