AI Can Already Do 80% of Your Job. What Are You Going to Do About It?
What an uncomfortable experiment taught me about leadership in the age of AI
Deb’s Note: From time to time, I bring in guest authors to Perspectives to share their point of view and bring a different take on a topic I want to learn more about. This month, I asked my friend, Charlene Li, to talk about her experience as a thought leader, AI strategist, and leadership coach. She has seen firsthand the changes that AI is bringing to individuals and organizations through her work at Quantum Networks Group. She just launched her new book, Winning with AI: The 90-Day Blueprint for Success, which lays out a master plan for AI strategy.
In early 2023, a few weeks after ChatGPT launched, I ran an experiment I haven’t been able to stop thinking about.
I fed the model a set of questions from a podcast I’d recently recorded — real questions, the kind that require judgment about leadership, disruption, and organizational change. I told it to answer as if it were me: Charlene Li. Then I had people rate the responses blind, on a scale of 1 to 10, without knowing which answers were mine and which were the AI’s.
The results shook me. On some questions, the AI scored higher than I did. It was more concise, better structured. I outperformed it on the genuinely creative answers: the unexpected angles, the questions nobody had thought to ask yet. But averaged across the whole set, AI performed at roughly 80% of my level.
I felt like chopped liver. Like my time was already running out.
The anxiety that followed was specific and unsettling. It felt like a new version of “publish or perish,” a race between me and my AI doppelgänger, where every new model release would close the gap a little further. Future fatigue set in. I couldn’t see how to stay ahead of something that was already so close behind.
And then I did something that changed how I saw the whole thing. I looked very carefully at that 20% difference.
What I found wasn’t a list of skills or topics. It was something harder to name. My stronger answers had a different quality — I was reading the emotional timbre of the question, digging below the surface to the question that lay underneath it, connecting it to a conversation or idea I’d encountered recently that wasn’t in any AI training set. I wasn’t just answering. I was bringing myself to the answer in a way the model couldn’t replicate, because I was the only one who had lived my specific combination of experiences, relationships, and half-formed thoughts.
That’s when I stopped asking the wrong question.
The wrong question is: Can AI replace me?
The right question is: Who does this force me to become?
When your foundation dissolves
For most of our careers, professional identity has been built on what we know. For thirty years, mine looked like this: I ran a research and advisory firm, Altimeter Group, with a full team — researchers, editors, a controller, and sales. We produced reports slowly and deliberately. I wrote, spoke, and advised in a traditional manner. That work, and the expertise it represented, was my identity.
Maybe your career looks different. But the foundation is the same: we got promoted for knowing things and built authority on competence.
AI doesn’t just threaten that foundation. It dissolves it. I can now do everything Altimeter did — research, synthesis, content creation — at a pace and scale that would have been unimaginable then. The team I once needed? Agents. And when the foundation goes, so does the identity built on top of it.
I’ve spent thirty years studying how technology disrupts organizations and the people in them — sitting inside the transformation as it happened, not just analyzing it afterward. At Adobe, I watched leaders navigate one of the most wrenching pivots in Silicon Valley history: abandoning the packaged software model that had made them great and betting everything on the cloud. Customers signed petitions. Employees raged internally. People would pull me aside and say, in the same breath, “You won’t believe the amazing things we’re doing and the insane things we’re doing.” But the leaders never wavered. They saw the fear, understood the anxiety, and held the line anyway. That steadiness — while staying deeply human — is what separates leaders who create transformation from those who just survive it.
My central conviction has never wavered: it’s never really about the technology. It’s always about the people. That’s what Dr. Katia Walsh and I found after two years interviewing over fifty executives for Winning with AI: The 90-Day Blueprint for Success, published earlier this week. The leaders who are winning aren’t the most technically fluent. They’re the most human.
I am probably not the first person you would expect to write a book on AI. I’m not a technologist. For most of my career, AI was inaccessible to me: too technical, too opaque, requiring skills I didn’t have. ChatGPT was the first AI tool I could actually use. And even then, it took time. That early version made plenty of mistakes; you couldn’t fully trust it. So when it turned out it could do a pretty decent job of imitating me, my head was spinning. I felt the pressure: as someone who advises leaders on disruption, I was supposed to have answers. Instead, I had vertigo.
What I’ve come to realize is that this is exactly why I was the right person to write a book on AI. Not despite being a non-technologist, but because of it. If I could find my footing in AI, understand it deeply enough to advise the world’s largest organizations, and build an entirely new way of working with it, so can you.
That’s when I stopped trying to outrun the feeling and started asking what it was telling me. What I heard was this: I had a lot to learn. And so did everyone else. The muscle I needed wasn’t AI expertise I didn’t have. It was adaptability. The ability to learn something new, feel lost, and keep going anyway. I’d been building that muscle for thirty years without knowing it had a name.
I still feel it. In 2025, I tried vibe coding by building a group scheduling app almost entirely through AI prompts. I barely know how to use Terminal on my MacBook. I went in circles for weeks, stuck on things any real developer would have solved in an afternoon. I eventually handed it off to my son, an AI product engineer, who sorted it out in a fraction of the time. It would have been easy to read that as failure. But I’d pushed myself into unfamiliar territory, stayed there longer than was comfortable, and understood the architecture of what I’d built in a way no tutorial could have taught me. The frustration was the curriculum.
I still get imposter syndrome about AI. I probably always will. But I’ve stopped thinking that disqualifies me. If anything, it means I’m still in the game.
If you’ve felt that disorientation, you’re not having a breakdown. You’re having an awakening. The disorientation is the signal.
What leaders actually do
Here’s the distinction that changes everything: managers maintain the status quo. Leaders create change. They step into voids. In every era of technological disruption — the printing press, electricity, the internet — the defining question was never whether things would change. It was who would show up to shape what the change became.
This is that moment.
I know a truck driver, now in his mid-forties, who understood this instinctively. A few years ago, he could see autonomous vehicles on the horizon — not tomorrow, but coming. So he got licensed to transport hazardous materials. His reasoning was clear-eyed: that’s where automation will go last. The regulatory complexity, the liability, the edge cases — it would take decades. He knew that. So he invested in extending the years he could do the job he loves, not by resisting change but by reading it clearly and moving to where it would reach him last.
He’s driving with that license now, though not every run requires it. The full transition is still probably a decade away, he figures. But as someone who genuinely loves his work, he didn’t wait for disruption to arrive at his door. He led himself first.
That’s the sequence that matters right now. Lead yourself first. Then lead others. Then lead your organization. It’s not easy to disrupt yourself, especially when you can see the change coming, which goes against every instinct to protect what you’ve built. But there’s a difference between being swept along by disruption and steering into it. My truck driver friend didn’t wait to find out which one he’d be. He decided.
Lead it, or it leads you
As AI grows more powerful, the fears grow louder, and they’re not wrong. Nearly two-thirds of workers expect AI to make the workplace feel less human this year. When something feels that large and threatening, the temptation is to step back and let others sort it out.
I understand that impulse. But I’ve come to believe it runs exactly the other way.
AI is being built right now by companies, governments, and individuals with wildly different values and intentions. The leaders who will shape how this goes need to be people who care about how it goes. Here’s the data point that stops me: according to BCG’s AI at Work 2025 report, employee positivity about AI rises from 15% to 55% with strong leadership support. Your presence in this conversation is not a neutral choice.
You can only push back on something you understand. You can only advocate for responsible, ethical use of AI if you’ve actually engaged with it. Avoidance isn’t neutrality. It’s ceding the room to whoever shows up.
What the 20% actually contains
Back to my experiment. What AI couldn’t replicate was the 20% that came from a unique perspective: the unexpected angle, the creative leap, the question that only emerged from reading the emotional undercurrent of what was actually being asked.
That 20% is not a consolation prize. It’s the whole game.
But here’s the risk: it’s easy to let that edge go dull. Not because AI takes it from you, but because you stop reaching for it. When a tool can synthesize, summarize, and structure on demand, the temptation is to stop asking the questions that only you can ask — the ones born from your specific combination of everything you’ve lived, built, and learned. That synthesis isn’t in any training set. It’s yours alone. And it only compounds if you keep investing in it.
What it actually looks like to start
There’s probably something you know you should be using AI for but have been putting off. Maybe you don’t have time. Maybe it feels hard. Maybe you feel like you need to know more before you try.
You already have what you need. It’s called curiosity. Here are three places to start:
1. Use AI to learn how to use AI. Open a tool — ChatGPT, Claude, whatever you’ve been meaning to try — and just ask it how to use it. Ask it to walk you through a task you do every day. You are one question away from starting.
2. Bring it into real work, not a test. Use AI on something that actually matters to you this week: a communication you’ve been struggling with, or a document you’ve been avoiding. The stakes make the learning real.
3. Do it in public. Use AI visibly, with your team. Make a mistake in front of them. Say “I don’t know how this works yet, let’s figure it out together.”
And here’s what it will feel like when you start: uncertain. Messy. Like you’re missing something obvious.
Good. That feeling is called leading.
Your team isn’t watching for certainty. They’re watching for courage, for someone willing to move toward the unknown before it’s completely mapped. I didn’t have Terminal figured out when I tried vibe coding. I still don’t! But I showed up anyway, and that mattered more than the app I was trying to build.
The identity crisis AI is giving you isn’t a threat to who you are. It’s an invitation to become who you need to be.
So here’s the question I’ll leave you with: What’s the experiment you’ve been putting off and what would it mean for the people around you if you stopped waiting and just began?
Step into it.
Charlene Li is the co-author of Winning with AI: The 90-Day Blueprint for Success (with Dr. Katia Walsh), available now. She is a New York Times bestselling author and one of the leading voices on leadership, disruption, and digital transformation.