You're Not a Bridge Anymore: How AI Is Redefining Product Management

For 20 years, I translated between customers and engineers. Now I architect decision systems with AI. But here's the catch: you're now designing for two customers, humans and their AI agents. Design for one and forget the other, and the product fails.

Dr. Yoram Friedman
7 min read

I left my radiology residency because I couldn't reconcile myself to implementing evidence without questioning it. I wanted to innovate, to push back, to be the contrarian in the room. Medicine doesn't have space for people like that.

So I became a product manager. First at Microsoft as an outbound PM, then at SAP on the NetWeaver team, building the foundation for what would eventually become the Business Technology Platform. I had no training for this. No framework. No roadmap.

But my hiring manager, another physician, told me something that made sense immediately: your skill at reading between the lines with patients, pinpointing what they really need, and architecting a safe, evidence-based solution to that need isn't something you're leaving behind. It's the foundation of product management.

He was right, but not in the way I understood at the time.

The Framework That Works

Over the past twenty years, I've hired many product managers, and I've asked them all the same opening question: "How do you define the role of a product manager?"

I have never heard the same answer twice. This variation doesn't tell me the profession is confused. It tells me the profession is evolving.

A few weeks after I started at SAP, I attended a training from the Haas School of Business. Nothing the professor said was new to experienced product people. But it gave me a systematic way to think about the work: Customers are buying benefits. Benefits are delivered through capabilities. A segment is a group of customers interested in the same benefits.

The implication was immediate, and it has guided me for twenty years: when you sit with a customer, understand what benefits they're willing to pay for. When you sit with the development team, understand what capabilities need to be built to deliver those benefits.

You speak benefits with one audience and capabilities with another. You translate between them. You are, in essence, an observer representing the customer, not yourself.

This is where I need to be direct. Too many times I've heard PMs say "I don't think it should do XYZ" or "I don't like this approach." Your personal taste isn't the point. The only opinions that matter are grounded in customer value and evidence. You are rarely part of the customer segment you're building for. You don't get to have preferences. You get to have convictions about what the data shows, what customers need, what trade-offs are worth making.

The bridge is objective. It observes. It translates. It routes. It doesn't choose based on what the bridge builder prefers.

This definition has held for twenty years. The frameworks changed. The fundamental job didn't. Until now.

What Changed: The Administrative Tax Dropped to Zero

Over the past five years, the profession has undergone a structural shift. In 2020, product management was defined by output: features shipped, velocity, roadmap adherence. By 2025, the profession shifted to outcomes, measured by customer retention, revenue influence, and problem resolution.

The decision-making engine shifted from intuition and HiPPO decisions (the Highest Paid Person's Opinion) to real-time data, predictive AI, and evidence. Product is now a peer to engineering, not a supplier to engineering. The PM interaction model shifted from manual coordination to orchestration of intelligent systems.

This shift coincided with AI, but AI didn't cause it. AI enabled it.

Here's what changed: the administrative tax on being a bridge dropped to near zero.

Research tracking PM productivity shows substantial time savings on artifact-heavy work. Early studies suggest 40-90% reductions in the time required for PRD development, customer feedback synthesis, competitive intelligence, and roadmap documentation. In practice, a modern PM with AI assistance can reclaim a substantial share of the 30-40 hours per week previously consumed by artifact creation and information gathering.

The question is not: "How do I use those hours to create more artifacts?"

The question is: "What becomes possible now that I have that time back?"

The answer is clear: framing, trade-off decisions, and stakeholder alignment. You move from gathering information to judging quality. You shift toward deeply understanding market signals and making high-stakes trade-offs that no system can navigate alone.

But there's something deeper happening. It's not just faster artifact creation. It's that you can now architect the bridge as a system.

From Bridge Operator to Bridge Architect

For twenty years, I was the bridge. I understood both languages: benefits and capabilities. I translated on the fly, continuously, in meetings and in documents and in conversations. I was the mechanism of translation.

Now, because of AI, I don't just optimize that process. I can architect multiple types of bridges without engineering's involvement.

I can architect a decision council. When I ran a design thinking workshop with nine AI agents on a healthcare problem, each agent embodied a different stakeholder perspective: clinical safety, workflow integration, regulatory governance, business viability, equity considerations. They disagreed by design. The disagreement was the point. The system generated insights no single perspective could produce alone, and those insights changed the product direction. It took three hours and cost under ten dollars.

I can architect a prototype and validation loop. Using no-code tools and Claude, I can explore whether a capability concept actually resonates with customers before engineering builds it. I can show customers what we're thinking about, iterate on the concept, stress-test the strategy. All without asking engineering to invest in an MVP first.

I can architect a customer conversation at scale. I can run a workshop with multiple perspectives, explore trade-offs, stress-test assumptions, all with AI agents that embody expertise and skepticism I couldn't represent alone.

These aren't artifacts. They're structures I build to validate thinking, test concepts, make better decisions about what to build and why. They don't replace engineering; they make engineering's work more strategic.
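The decision-council pattern above can be sketched in a few lines. This is a minimal illustration of the orchestration, not a real implementation: the personas, positions, and rationales are placeholders where, in practice, each would come from an LLM prompted with a stakeholder role.

```python
from dataclasses import dataclass

@dataclass
class Perspective:
    role: str
    position: str   # e.g. "support" | "oppose" | "conditional"
    rationale: str

def run_council(perspectives):
    """Group positions and surface conflict. The council's value is the
    disagreement between stakeholder viewpoints, not forced consensus."""
    by_position = {}
    for p in perspectives:
        by_position.setdefault(p.position, []).append(p)
    return {
        "positions": by_position,
        "disagreement": len(by_position) > 1,  # the point, by design
    }

# Illustrative personas for a hypothetical healthcare feature review.
council = [
    Perspective("clinical_safety", "conditional", "Needs an override audit trail"),
    Perspective("workflow_integration", "oppose", "Adds clicks at end of shift"),
    Perspective("business_viability", "support", "Reduces support cost"),
]

result = run_council(council)
assert result["disagreement"]  # conflicting perspectives surfaced
```

The structure matters more than the code: each role is prompted independently, so no single perspective can average the others away.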

Using AI to write PRDs faster is still being the bridge. Architecting decision councils, prototype validation loops, and strategy stress-tests, all without engineering, is being the bridge architect. The distinction matters.

Two Masters: Designing for Humans and AI Agents

Here's the complexity. My Benefits/Capabilities/Segments framework still works, but now I have two distinct customer segments with fundamentally different benefit structures.

First, the human customer. They want outcomes: task completed, decision made, problem solved. The AI agent is a capability to them. They don't care how it works; they care that it works.

Second, the AI agent itself. It's also a user of my system, with its own distinct needs: clear, structured data; well-defined APIs and decision boundaries; transparent reasoning; clear escalation rules.

I'm designing for two audiences that want different things. A feature might deliver perfect human benefits but be opaque to an AI agent. An API might be crystal clear to an agent but invisible to the human user.

This is what Agent-Based Experience means: humans and AI agents as distinct customer segments, both using the product natively. The question I ask changes from "How do customers use this?" to "How do both humans and agents use this, and do they need different capability structures?"
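One way to see the two-segment problem concretely is a response object that serves both audiences at once. This is a sketch under assumptions: the field names, the confidence score, and the 0.8 threshold are all illustrative, not a standard.

```python
import json

def build_response(summary, decision, confidence, threshold=0.8):
    """Serve two customer segments from one payload: a readable outcome
    for the human, and structured fields with an explicit decision
    boundary for the agent."""
    return {
        "human": {"summary": summary},  # benefit: the outcome, in plain words
        "agent": {
            "decision": decision,                          # machine-actionable
            "confidence": confidence,                      # transparent reasoning signal
            "escalate_to_human": confidence < threshold,   # clear escalation rule
        },
    }

resp = build_response("Refund approved for order #123", "approve_refund", 0.72)
assert resp["agent"]["escalate_to_human"]  # low confidence routes to a person
print(json.dumps(resp, indent=2))
```

A feature that returns only the human half is opaque to the agent; one that returns only the agent half is invisible to the person. Both halves are the product.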

Why Methodology Shifts Even Though the Framework Holds

JTBD (Jobs to Be Done), GIST (Goals, Ideas, Steps, Tasks), and PLG (product-led growth) are adapting not because they broke, but because the conditions changed on both sides. In my experience, there are now two parallel realities I must account for:

First, users increasingly arrive with their own AI assistants. I'm not designing for humans alone anymore. I'm designing a product that humans will use through AI agents. The interface might be invisible. The user might never see my design. The AI agent will. This changes what value means and how I measure success.

Second, my team plans and executes with AI as a teammate. This is where methodology adapts. I'm no longer just thinking about what users need. I'm thinking about what users' AI agents need. I'm designing for two audiences simultaneously.

The frameworks survive because they're about understanding jobs and building outcomes. What changes is the implementation, the speed, and the governance.

This matters because the statistics are clear: most AI pilots fail to scale. In healthcare AI, 80-95% of pilots fail to move into production or generate ROI. The pattern is always the same. The PM optimizes for what the AI system does. It's brilliant at the technical problem. But then a physician has to use it at the end of a twelve-hour shift, in a workflow that's already at capacity. The system adds cognitive load instead of reducing it. It works at 2pm when the team is fresh, but breaks at 3am when critical care is understaffed. It doesn't talk to the EHR. The nurse has to re-enter data. The integration point isn't there because nobody planned for how the human actually works.

You designed for the agent. You forgot to design for the human using that agent. If you plan for only one, it fails. You have to plan in parallel. The business case has to account for both. The workflow integration has to account for both. The benefits you promise have to be true for both the system and the person running it.

What Stays the Same

My accountability doesn't change. I remain accountable for the quality of the systems I architect. I still need to understand customer pain deeply. I still own the outcomes. I still get to say "no" when the team wants to optimize a feature when the real problem is something else.

The bridge principle endures: observe, translate, route, prioritize, own outcomes. The core skill remains unchanged.

How to Start: From Bridge Operator to Bridge Architect

If you're a PM today, moving from bridge operator to bridge architect looks like:

  • Document your decision policies. When do you escalate? What principles guide your trade-offs? What's non-negotiable? Write it down. This becomes the guardrail layer for systems you build.
  • Design one small decision system with defined autonomy. Start simple. One problem. Clear roles. Specify which decisions the system makes alone, which need human review, which escalate to leadership.
  • Audit an AI-assisted workflow. Take something you already do with AI (feedback synthesis, PRD drafting, competitive analysis). What can be safely automated? What must stay under human judgment? What's currently fuzzy?
  • Learn model evaluation. You don't need to be a data scientist. You need to know when an AI recommendation is trustworthy and when it's fluent nonsense. This is the new critical skill.
  • Separate tool use from system design. Using Claude to write faster is still being the bridge. Building a decision council with irreconcilable perspectives is architecting the bridge.

Start with one. The others follow.
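The first two steps can be made concrete as a small policy object. This is a minimal sketch: the decision categories and routing levels are hypothetical placeholders, and a real guardrail layer would be richer, but the shape is the point.

```python
# Autonomy levels for a decision system (step 2 above): which decisions
# the system makes alone, which need human review, which escalate.
AUTONOMOUS, HUMAN_REVIEW, ESCALATE = "autonomous", "human_review", "escalate"

# Documented decision policy (step 1 above). Categories are illustrative.
POLICY = {
    "copy_tweaks": AUTONOMOUS,               # system decides alone
    "feature_prioritization": HUMAN_REVIEW,  # PM reviews before acting
    "pricing_change": ESCALATE,              # leadership decides
}

def route(decision_type):
    """Non-negotiable guardrail: anything fuzzy or undocumented escalates."""
    return POLICY.get(decision_type, ESCALATE)

assert route("copy_tweaks") == AUTONOMOUS
assert route("new_market_entry") == ESCALATE  # unknown cases go up, by default
```

Writing the policy down forces the audit the third bullet asks for: every decision type must land in exactly one level, and the default is escalation, not autonomy.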

The Fundamental Shift

For twenty years, PMs operated the bridge. You were the translator. The mechanism. You kept information flowing between customers and teams.

Now, you don't operate the bridge. You architect bridges that operate themselves.

The bridge is automated now. But someone still has to architect it. Someone still has to own the quality of its decisions. Someone still has to ensure it remains grounded in what customers actually need, not what the system thinks they need.

That someone is you.
