There is a feeling most of us know but rarely name.
It’s the moment right before you release something you’ve held tightly: a skill, a habit, a way of doing things that has defined you.
Your hands know how to do the work. Your brain has built the grooves. And now something arrives that asks you, gently or not, to open your grip.
We are living in that moment right now, millions of us at once, across every industry and discipline. And the thing asking us to let go is artificial intelligence.
But here’s the part we don’t talk about enough: this feeling isn’t new.
It is one of the oldest feelings in human history. And every single time we’ve felt it, what waited on the other side was a version of ourselves we couldn’t have imagined from where we stood.
AI Anxiety: Why This Feeling is Ancient
The Hands Remember
Think about the scribe. For thousands of years, the written word belonged to a small class of people who spent their lives mastering the art of transcription.
In ancient Mesopotamia, becoming a scribe meant years of training, learning to press cuneiform into wet clay with a reed stylus. Your value to society was literally in your hands.
Then the printing press arrived. Gutenberg didn’t just invent a machine. He made an entire class of expertise feel suddenly fragile.
The monks who spent decades illuminating manuscripts by candlelight produced work that was breathtaking, irreplaceable in its beauty.
And yet the world chose speed and scale. Not because the handwritten word was worthless, but because something larger was trying to emerge: widespread literacy, the democratization of knowledge, the Reformation, the Scientific Revolution.
All of it downstream from a willingness to let the old way go.
The scribes didn’t disappear overnight. The world didn’t end. But something shifted permanently.
The humans who leaned into the new tool – who saw the press not as an insult to their craft but as an amplifier of their purpose – shaped what came next.
Trust the Horse You Can’t See
When the automobile first appeared on American roads in the early 1900s, people didn’t just resist it on practical grounds. They resisted it on deeply emotional ones.
The horse wasn’t just transportation. It was a relationship. You fed it. You knew its temperament. You could feel it respond to your commands through the reins in your hands. There was trust built through thousands of small interactions.
The car asked you to trust combustion. To trust engineering you couldn’t see. To sit inside a metal box and believe the machine would do what the horse had always done – get you where you needed to go – but faster, farther, and without the bond.
People mocked it. “Get a horse!” they’d shout at stalled motorists. Towns passed laws limiting automobiles to walking speed. In Britain, earlier “red flag” laws had even required a person to walk ahead of self-propelled vehicles waving a red flag.
It sounds absurd now. But at the time, the apprehension was real. You were letting go of something alive, something you understood, in exchange for something mechanical, something you had to simply trust.
And yet we did. Not because the fear wasn’t valid, but because the ambition was greater. The desire to move faster, to connect farther, to build something beyond the reach of a single horse – it pulled harder than the fear pushed.
The Manager’s Paradox
There’s a quieter version of this same story that plays out in every organization, every day.
An engineer gets promoted to engineering manager. She was exceptional with code. She could feel when something was off in a system. She could debug by instinct. And now her job is to stop doing the thing she’s great at and instead trust other people to do it.
This is one of the hardest transitions in any career. Not because the new role is technically harder, but because it requires letting go. You have to watch someone else write the code you would have written differently. You have to resist the urge to jump in. You have to trust that the outcome will be good enough, or even better, because you gave someone else the room to own it.
The best engineering leaders will tell you: the moment you learn to let go is the moment your impact multiplies. You stop being a single point of capability and become a force multiplier.
This is exactly what’s happening right now with AI agents. Exactly.
The New Letting Go
Today, many engineers are no longer coding by hand the way they did even two years ago. They’re working with agents – AI systems that can write, debug, test, and ship code with increasing autonomy. The shift has been staggering in its speed.
And the emotional experience of it mirrors every letting-go moment in history.
You watch the agent write code. It’s not how you would have written it.
Something in your gut tightens. You want to take over. You want to rewrite the function, refactor the logic, add your fingerprints to the thing. Not because the agent’s code is wrong – often it’s perfectly fine, sometimes better than what you would have produced – but because the act of writing it was yours. It was your identity. It was the thing your hands knew how to do.
Letting go of that isn’t a technical challenge. It’s a human one.
And it’s not just engineers. Writers are facing this. Designers are facing this. Analysts, marketers, researchers, lawyers, educators – anyone whose work involves creating, synthesizing, or problem-solving is encountering some version of the same question: can I trust this thing to do what I’ve always done myself?
The answer, increasingly, is yes. But the feeling of releasing that grip doesn’t get easier just because the answer is rational.
Artificial Intelligence And Identity: What We’re Really Afraid Of
The Loom and the Luddites
We’ve been here before, and the history is instructive.
In early 19th-century England, textile workers watched as mechanized looms began replacing the hand-weaving that had sustained their communities for generations.
The Luddites, named after the perhaps-mythical Ned Ludd, didn’t destroy machines because they were stupid or afraid of progress. They destroyed machines because they understood, correctly, that something precious was being taken from them: their agency, their craft, their economic security.
They were right to feel that loss. What they couldn’t see was what would emerge on the other side: an industrial economy that, for all its brutality and inequality, eventually produced a standard of living that hand-weavers could never have achieved.
Entire new categories of work that didn’t exist before. A world where the thing you let go of was replaced by something you couldn’t have built while your hands were full.
The pattern is remarkably consistent. We grieve what we release. And then we build something astonishing in the space that opens up.
Calculators in the Classroom
In the 1970s and 1980s, a quieter but equally revealing version of this debate played out in schools across America. Should students be allowed to use calculators?
The resistance was fierce. Teachers argued that if students couldn’t do long division by hand, they would lose some fundamental understanding of mathematics. The fear was that convenience would erode competence. That the tool would make us weaker, not stronger.
What actually happened was more interesting. Students who used calculators didn’t become worse at math. They became capable of engaging with higher-level mathematical concepts earlier. The calculator didn’t replace understanding – it freed students from the mechanical drudgery that was consuming the time they could have spent thinking more deeply.
The letting go wasn’t a loss. It was a reallocation. Time and energy moved from computation to comprehension. From doing to thinking.
This is the quiet promise embedded in every moment of technological letting go: you don’t lose capacity. You redirect it.
Mental Health In The Age Of Artificial Intelligence
What We’re Really Afraid Of
Let’s be honest about the fear, because it deserves honesty.
When people resist AI, when they feel that tightness in their chest watching an agent do their work, the fear isn’t really about the technology. It’s about identity. It’s about the question: if AI can do what I do, then what am I?
This is a profound and legitimate question. It deserves to be taken seriously, not dismissed with hand-waving about “upskilling” or “adapting.”
The farmer who watched the tractor replace the plow wasn’t just losing a task. He was losing a story he told himself about who he was. The typesetter who watched desktop publishing emerge wasn’t just losing a job. She was losing a craft that had given her life structure and meaning.
These losses are real. The grief is real. And anyone who tells you the transition to AI will be painless is either selling something or not paying attention.
But here’s what’s also true, and what history shows us again and again: humans are remarkably good at finding new stories. We are meaning-making machines. When one source of identity is disrupted, we don’t collapse into nothing – we expand into something new.
The farmer became a logistics manager. The typesetter became a graphic designer. The scribe became a publisher. Not always smoothly. Not always quickly. But inevitably.
AI Adoption: The Ambition That Outweighs The Fear
There’s a reason humanity keeps choosing to let go, even when it hurts. Even when the risks are real, and the path forward is uncertain.
It’s because the drive to grow, to explore, to reach for something just beyond our current grasp runs deeper than the fear. It’s wired into us at a level that precedes rational thought.
The same species that crossed oceans in wooden ships, not knowing if there was land on the other side. The same species that strapped itself to rockets and pointed them at the moon. The same species that connected every human on Earth through an invisible network of light and electricity.
We do these things not because they’re safe. We do them because something in us refuses to stay still.
AI is the next version of this. It asks us to trust something we can’t fully see. To release control over processes we’ve spent careers mastering. To believe that what emerges on the other side will be worth what we left behind.
And the evidence so far, if we’re being honest, is encouraging. Scientists and engineers report dramatic productivity gains, shipping in days what used to take weeks. Writers are producing in minutes drafts that would have taken hours, then spending their time on the part that actually matters: thinking.
The tool isn’t replacing the human. It’s replacing the part of the work that was never the most human part to begin with.
Addressing AI Anxiety: Delegation Is Not Surrender
There’s a crucial distinction that gets lost in the anxiety around AI, and it’s this: letting go is not the same as giving up.
When a CEO delegates strategy to her leadership team, she isn’t surrendering control. She’s exercising a higher form of it. She’s saying: I trust this system. I’ve set the direction. Now I’m going to focus my attention where it matters most.
This is what the best AI practitioners are learning to do right now. They’re not abdicating their expertise. They’re elevating it. Instead of spending their energy on the execution, they’re spending it on the judgment. On the taste. On the direction. On the questions worth asking.
The engineer who lets an agent write the code isn’t becoming less of an engineer. She’s becoming more of an architect. The writer who lets AI produce a first draft isn’t becoming less of a writer. He’s becoming more of an editor, a thinker, a voice.
The work doesn’t disappear. It transforms. And it tends to transform in the direction of the things that are most distinctly, irreplaceably human: judgment, creativity, empathy, vision.
AI Use: How Teams Shift From Execution To Judgment
A Confession
I have a confession. This article was written with the help of Claude Opus 4.6.
I’ve had this idea – this feeling about letting go – ruminating in my mind for months. I could sense the shape of it. I knew the thread I wanted to pull. But I couldn’t articulate it in a way that matched the weight of what I felt. And I didn’t have the hours it would take to sit down and wrestle it into something worth reading.
So it stayed trapped. Another idea that never saw the light of day.
Until I let go.
I brought the idea, the perspective, the thesis, the historical instinct, the conviction – and I let an AI help me give it form. And now you’re reading it. You felt something while reading it. Maybe you’ll share it with someone who needed to hear it.
If you think about it, that’s the entire point. The idea was human. The impulse to share it was human. The meaning you took from it was human. The tool just made sure it actually reached you.
That’s what letting go makes possible. Not less humanity. More of it.
What’s on the Other Side
Every generation faces its own version of the letting-go moment. Every generation fears the same things. And every generation, eventually, walks through it.
The printing press didn’t destroy the love of beautiful writing. It made it possible for millions more people to experience it.
The automobile didn’t destroy our connection to the natural world. It expanded the world we could explore.
The calculator didn’t destroy mathematical thinking. It elevated it.
The internet didn’t destroy human connection. It multiplied it – imperfectly, chaotically, but unmistakably.
And AI will not destroy human capability. It will do what every great tool has done before it: it will ask us to let go of the version of ourselves that was defined by limitation, and step into the version that is defined by what we choose to do with abundance.
The transition will not be smooth. There will be real disruption, real loss, and real grief. Some people will resist, and their resistance will be understandable. Some systems will fail. Some trust will be misplaced.
But the arc is clear. It has been clear for ten thousand years of human history. We build tools. The tools ask us to change. We hold on as long as we can. And then we let go.
And we fly.
The hardest part of progress has never been building the new thing. It has always been releasing the old one.
The engineers who thrive in the age of AI agents will not be the ones who code the fastest. They will be the ones who learn to trust the fastest. The writers, the designers, the thinkers, the builders – the same. Not because the old skills don’t matter. But because the new skills require something harder than technique. They require surrender. Not the surrender of defeat, but the surrender of a leader who has learned that their greatest power is knowing when to let someone else – or something else – carry the weight.
Frequently Asked Questions (FAQ)
1) What Is AI Anxiety in the Workplace?
AI Anxiety in the Workplace is the stress and uncertainty employees feel as Artificial Intelligence changes how work gets done—often tied to identity, performance, and control. It can show up as worry, lower confidence, and a strong sense of disruption at work.
2) Why Are Workers Worried About Job Security and Job Displacement?
Many workers are concerned that AI use will eliminate tasks—or entire roles—leading to layoffs and reduced job security. Surveys show that roughly half of U.S. workers feel more worried than hopeful about the future use of AI at work.
3) How Does AI Anxiety Relate to Mental Health?
AI-driven uncertainty can raise stress and affect mental health—especially when people feel their capabilities are being replaced or their careers are at risk. Some workers report feeling overwhelmed by workplace AI, even when it makes them more efficient.
4) How Can Employers Address AI Anxiety With Human-Centered Support?
To address AI Anxiety, leaders should communicate what’s happening, what it means for jobs, and what support exists (training, clear policies, escalation paths). A “leadership vacuum” on AI can amplify fears; consistent, human-centered support reduces uncertainty.
5) What Does It Mean to Adopt AI Responsibly While Driving AI Adoption?
Adopting AI responsibly means using approved tools with data/security guardrails, clear use cases, and accountability for outcomes. Pair that with practical training—many employees say they want more formal upskilling to build confidence and accelerate AI adoption.
6) What’s a Practical Example of Safer AI Use at Work?
Use Generative AI for drafts, summaries, and first-pass analysis—but require human review for client-facing outputs, decisions, and sensitive data. This keeps humans in the loop, protects security, and helps employees feel heard rather than threatened.
7) Why Are Gen Z Professionals Often Mentioned in Workplace AI Anxiety?
Gen Z professionals may adopt Generative AI quickly, yet still feel anxious about job security and the future of work, so AI adoption and AI anxiety can rise together. Some surveys and reports suggest younger workers are among the most worried about AI’s impact on jobs.
Essential AI Skills for Your Team
AI Training for Teams gives your team foundational AI knowledge, preparing them to integrate AI effectively into your business.