CPO

The product lens: how Chief Product Officers are using AI to reshape workflows, ship customer-facing features, and rethink what a PM does day to day.

Inside the engineering engine: how 42 CTOs (83% of whom are founders) are building, buying, and scaling AI across the software development lifecycle.

// HIGHLIGHT 01

Every CPO uses AI for design. But barely 25% trust it for analysis.

AI adoption across the PM value chain follows a striking gradient, one that mirrors the CTO's SDLC curve but with a distinctly product-flavoured twist.

At the creative end of the PM workflow, AI has become the default.


Design assistance has reached 100% adoption, with 62.5% of CPOs already scaling AI within their teams. By contrast, PRD generation and user research synthesis lag at the "scaled to team" level, signalling that these tools are still used more by individual PMs than embedded as standard practice.

That confidence drops sharply when the work shifts from creation to judgment.

For analytical, decision-shaping tasks like A/B test analysis and feature prioritisation, adoption falls to just 25%. The further a task moves from "make something" toward "decide something", the less PMs turn to AI.

But as models mature and agentic tooling becomes more tightly woven into PM workflows, the judgment gap may narrow faster than expected. Given the pace at which AI capabilities have evolved, it would be worth watching whether analytical tasks move up the adoption curve in the next 12 months.

PMs have embraced AI where the cost of a mediocre output is as little as a revision. They haven't embraced it where the cost is a wrong decision.

This gap reflects a fundamental difference in what PMs are willing to delegate. Generating a first draft of a PRD is low-risk; a human will refine it. But deciding which feature to build next, or interpreting what an A/B test result means for the roadmap, carries real product consequences.

Where is AI used in your PM workflow?

// HIGHLIGHT 02

Faster prototyping means more bets, cheaper experiments, and quicker funerals for bad ideas.

AI has dramatically accelerated the creative half of the PM workflow. 100% of CPOs report a reduction in time from idea to prototype, with 62.5% reporting a reduction of more than 50%.

This confirms a broader survey finding: 45% of founders cite "faster experimentation" as their most unexpected AI benefit. The CPO data provides the product-specific proof point. The experimentation speed-up is real, it's dramatic, and it's being felt most acutely in the prototyping phase.

And the compounding effect matters. When a product team can turn an idea into a testable prototype in a fraction of the time it used to take, the entire development cadence shifts. More ideas get tested. Bad ideas die faster. Good ideas reach users sooner. For product teams, speed is more than just a benefit of AI adoption; it has become the primary driver of it.

How has AI changed the time from idea to prototype?

// HIGHLIGHT 03

Every PM has a stack, but no two stacks look the same.

Across four categories of PM tools (PRDs, prototyping, design, and user research), a small number of tools dominate, with one major gap.


Figma Make leads design at 75%. But practitioners are candid: a significant portion of Figma Make's output needs heavy rework before it's usable. It's a starting point, not a finishing line.

For everything else, CPOs are reaching for Claude. It dominates PRDs with a 50% share and anchors prototyping workflows alongside ChatGPT. Most CPOs now build functional prototypes through dialogue rather than a canvas.

Tool | Category | Primary uses
Figma Make | Design | UI design, component generation, design iteration
Claude | PRDs + Prototyping | Writing specs, conversational prototyping, code generation
ChatGPT | Prototyping | Rapid ideation, interface mocking, dialogue-based prototyping
Cursor | Prototyping + Dev | Vibe coding, pushing designer-built code to production
Lovable | Prototyping | No-code/low-code app prototyping, PM-led concept building
Granola | User Research | Meeting notes, transcript capture, session summaries
Dovetail | User Research | Interview synthesis, affinity mapping, insight extraction
Obsidian + Claude | Knowledge Management | Second brain, context management, personal AI workflows

The one glaring absence is user research. No tool has won here. Granola handles meeting notes and transcript capture. Dovetail surfaces for interview synthesis. But a coherent research stack that takes CPOs from raw signal to validated insight doesn't exist yet.

Cursor has entered the stack too, particularly for designers who've started pushing code to production themselves. And Lovable is gaining ground fast, especially with PMs and POs looking to prototype without design involvement.

// HIGHLIGHT 04

AI assists execution. Humans still own judgment. The line between them is quality.

When asked about the biggest challenges in adopting AI for product work, one answer towers above the rest: quality concerns about AI outputs, at 62.5%.

This echoes what CTOs report: 68% cite accuracy and hallucinations as their top technical hurdle, and 44% name it the leading POC killer.

But the quality problem looks different from the PM’s chair. It’s less about model hallucinations in a technical sense and more about whether the output meets a product's quality bar. When a PM evaluates AI-generated content, designs, or recommendations, the standard is “would I ship this to users?”

For design and prototyping, AI output is good enough to iterate on as a starting point. For user research, feature prioritisation, and A/B test analysis, the quality bar is much higher because these outputs directly shape product decisions. Until AI can consistently meet that bar, CPOs will continue to draw a line between “AI-assisted execution” and “human-led judgment.”

PM-Specific AI Adoption Challenges (N=8, multi-select):

// HIGHLIGHT 05

The CPO’s Playbook: How product teams are driving AI

CPOs are believers, but measured ones. 71% report being more excited about AI than a year ago, a strong signal, though slightly more tempered than the 83% ecosystem average. The gap comes from their proximity to the user. CPOs have seen enough of what AI can and can't do in a product context to be enthusiastic without being evangelical.

That measured confidence shapes how they're building.

When asked how product teams are formalising AI adoption, the most common approaches are documentation and individual tool budgets: one captures what works, the other gives PMs permission to find it. These teams are also actively spreading what's working, through workshops, hackathons, and dedicated AI champions, rather than leaving it to individual discovery.

Driving AI Adoption in Product (N=8, multi-select):

Documentation/playbooks: 37%
Individual AI tool budgets: 37.5%
AI champion recognition: 25%
Nothing formal yet: 25%
Regular training sessions: 25%
Internal hackathons: 25%
External expert workshops: 25%

But the ceiling, when it comes, isn't cultural or organisational. Tool maturity and output quality register as scaling barriers at 86%, the highest of any persona in this survey. CPOs are waiting for the tools to be good enough to trust at scale, which is exactly the kind of bar you'd expect from the person who ships what users actually experience.

Hiring reflects the same patience. 37.5% are freezing or reducing headcount, mirroring the broader ecosystem trend. Another 37.5% say it's too early to tell. Product teams are doing more with the same team while they wait for the tools to close the gap.

AI Adoption Survey Report · Indian Startups 2025