
What "empowered teams" actually means when AI does half the work

Marty Cagan's concept of empowered product teams has been the north star for product organizations for years now. Give a team a problem to solve, not a feature to build. Trust them with the outcome. Let them figure out the solution.

I still believe in this model. But I think we need to have an honest conversation about what happens to it when AI starts doing a significant portion of the actual work.

The tension nobody is naming

Here's the thing that's been nagging at me. Empowered teams are built on the idea that the people closest to the problem are best positioned to solve it. The product trio (PM, designer, engineer) brings together complementary perspectives, and the magic happens when they collaborate directly with customers and with each other.

AI changes the shape of that collaboration. When an engineer can generate boilerplate code in minutes, when a designer can produce mockups from a text description, when a PM can synthesize hundreds of customer conversations overnight, the nature of each role shifts. The question isn't whether AI replaces these roles. It doesn't. The question is what these roles become when the execution floor drops out from under them.

I've been talking to PMs at companies that have aggressively adopted AI tooling, and a pattern keeps emerging. The teams that were already empowered are getting more empowered. The teams that were feature factories are becoming faster feature factories. AI amplifies what a team already is; it doesn't transform it.

What changes about the product trio

PMs shift toward more strategic work, not less. When AI can synthesize data faster, the PM's value moves toward asking better questions and making harder judgment calls. You spend less time processing information and more time deciding what to do with it. This sounds like an upgrade, and in some ways it is. But it also means PMs who relied on information gathering as their primary contribution are exposed. If your main value was knowing what customers are saying because you read all the tickets, AI just made that table stakes.

Designers move up the abstraction ladder. When AI can generate UI variations quickly, the designer's job shifts toward systems thinking. What's the right interaction pattern? What principles should guide the design system? How should the experience feel at a conceptual level? The craft of pixel-pushing matters less. Understanding human behavior matters more.

Engineers focus on architecture and judgment. When AI handles routine coding, engineers spend more time on hard problems: system design, performance, reliability, and deciding which AI-generated code is actually good. The engineer's judgment becomes more important, not less. There's more code to evaluate and more risk of subtle errors in production.

The new collaboration dynamics

Something interesting happens when everyone on the team can prototype faster. The feedback loops tighten. A PM can describe a concept, a designer can mock it up in an hour, an engineer can have a working prototype by end of day, and the team can test it with users the next morning.

That sounds great, and it can be. But it also creates new problems.

Speed can kill discovery. When you can build things fast, the temptation to skip the "should we build this?" question gets stronger. I've seen teams ship three experiments in a week and learn nothing because they didn't set up proper success criteria before they started. Velocity without direction is just expensive wandering.

There's also the "just try it" trap. AI makes it cheap to build things, which makes it tempting to build instead of think. Sometimes "let's just try it and see" is the right call, but when it becomes the default, you stop doing the hard work of understanding the problem space. You end up with inconclusive experiments and no clear product direction.

Another risk is decision fatigue. When the team can generate more options faster, someone still needs to decide which option to pursue. Without clear strategy and success criteria, more options just means more arguments.

What empowerment looks like now

The definition of an empowered team needs updating for the AI era. The original concept still holds: give teams problems, not solutions. But the operating model needs to evolve.

Empowered teams need stronger product strategy. When teams can move faster, the cost of moving in the wrong direction goes up. A clear product strategy that everyone understands and can reference when making decisions becomes critical. This is Melissa Perri's point about the product operating model, and it's become even more relevant as execution speed increases.

They also need better success metrics. If your team can ship three experiments a week, you need clear metrics to evaluate them. Otherwise you're just generating activity. Outcome orientation isn't optional anymore—it's the only way to keep pace with the speed AI enables.

Empowered teams need higher-quality people. This is the uncomfortable truth: when AI handles routine execution, the bar for human contribution goes up. You need PMs who can think strategically, not just organize backlogs. Designers who understand behavior, not just tools. Engineers who can evaluate code quality, not just write it. The "empowered team" model always assumed you had strong people. Now it demands it.

Finally, empowered teams need new working agreements. When an AI tool generates something that's 80% right, who reviews it? Who's accountable for the quality? These questions don't have standard answers yet, and each team needs to figure out their own. The best teams have explicit agreements about when AI output gets used directly and when it needs human review.

A practical framework

If you're leading a team trying to figure this out, here's a starting framework.

First, audit where your team spends time today. Categorize activities into "judgment work" (decisions, strategy, customer understanding) and "execution work" (building, documenting, analyzing). AI will accelerate the execution work. Your job is to make sure the time saved gets redirected toward judgment work, not just toward shipping more features.

Second, strengthen your strategy and success criteria before you accelerate your execution. It doesn't help to build faster if you're building the wrong things. Spend the first few weeks getting clear on outcomes before you optimize for speed.

Third, invest in your team's judgment skills. Run more design reviews, more customer interviews, more strategy discussions. These are the skills that matter most in an AI-augmented team, and they atrophy if you don't exercise them.

Fourth, set up explicit quality gates for AI-generated work. Not because AI output is bad, but because unchecked output of any kind leads to accumulated technical and design debt. Someone needs to own quality, and "the AI did it" isn't an acceptable answer when something breaks.

The leadership question

Ultimately, whether AI helps or hurts your empowered team depends on leadership. Leaders who see AI as a way to get more features shipped faster will create faster feature factories. Leaders who see AI as a way to give teams more time for discovery, strategy, and judgment will create genuinely empowered teams.

The technology is neutral. The organizational choices around it are not.

I keep coming back to something Cagan has said in different ways over the years: the difference between good product companies and bad ones isn't the tools they use. It's whether they've actually embraced a product operating model or just bolted product titles onto a project management culture.

AI doesn't change that diagnosis. It just makes the consequences of getting it wrong more visible, and more expensive, faster.


This article is part of a series on product management in an AI-transformed landscape.