How to Build an AI Product Roadmap When the Technology is Changing Every Six Months
AI Product Management: The PM who ties their roadmap to a model version will be wrong by Q3. The PM who ties it to a user outcome will still be right in two years.
Dear readers,
Thank you for being part of our growing community. Here’s what’s new today.
AI Product Management: How to Build an AI Product Roadmap When the Technology is Changing Every Six Months
Note: This post is for our paid subscribers. If you haven’t subscribed yet, consider subscribing to read the full article.
In a traditional product role, a twelve-month roadmap is already ambitious. In an AI PM role, a twelve-month roadmap can be invalidated by a single model release. GPT-4 launched. Claude updated. Gemini shipped multimodality. Llama went open-source. Each of those events forced product teams to rewrite plans they had spent weeks building.
The interviewer asking this question is not testing whether you can plan. Every PM can plan. They are testing whether you understand the specific instability of AI as a platform, and whether your planning philosophy has adapted to that reality.
Most candidates answer this like a traditional roadmap question. They talk about sprints, backlogs, and quarterly reviews. That answer is incomplete. The best answer acknowledges the problem directly, names the structural reason traditional roadmaps fail in AI, and then explains a planning approach built for uncertainty rather than despite it.
“The question is not whether you can plan. It is whether your planning system is built for a world where the technology you are planning around could be superseded before you ship.”
Feature-Based Roadmaps Are Brittle by Design
A feature-based roadmap says: in Q2 we will ship a summarisation feature using Model X, in Q3 we will add image understanding using API Y, in Q4 we will integrate retrieval from vector database Z. That roadmap looks like a plan. It is actually a list of technology bets.
The problem is not the ambition. The problem is the coupling. When you tie roadmap items to specific model versions, vendor APIs, or architecture choices, you create a plan that can be invalidated from the outside. A competitor releases a better base model. A vendor changes pricing. A new open-source alternative appears at a tenth of the cost. Each event does not just delay a feature — it obsoletes the rationale for building it the way you planned.
The numbers that make this urgent
A 2025 McKinsey survey reports that 78% of organisations now use AI in at least one business function, up from 55% just a year earlier. The pace of capability change is accelerating. According to Gartner, 30% of generative AI projects will be abandoned after proof of concept. The leading cause is not funding. It is misaligned planning assumptions that break when the technology shifts.
The deeper issue is that most AI PMs learned roadmapping from traditional software, where features are deterministic. You build a button, the button appears. You write a rule, the rule executes. AI products are probabilistic. You build a summarisation feature, and the quality depends on the model, the prompt, the data, and the evaluation criteria — all of which can change independently. A plan built for deterministic execution will fail when applied to probabilistic delivery.
The Framework: Outcome-Anchored Horizon Planning
The solution is to restructure the roadmap around two principles. First, anchor everything to user outcomes, not technical implementations. Second, organise time into horizons with different levels of specificity, so the roadmap is detailed where you have confidence and directional where you do not.
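The two principles above can be sketched as a simple data structure. This is an illustrative sketch only — the outcomes, horizons, metrics, and candidate approaches below are hypothetical examples, not items from any real roadmap. The key property is that each item is anchored to a user outcome and a success metric, while the technology choices sit in a swappable list whose detail shrinks as the horizon gets further out.

```python
from dataclasses import dataclass, field

@dataclass
class RoadmapItem:
    outcome: str                 # a user outcome, never a model version
    horizon: str                 # "now" (0-3 mo), "next" (3-6 mo), "later" (6+ mo)
    success_metric: str          # how we know the outcome was achieved
    # Technology bets are listed, not committed — any entry can be swapped
    # when the platform shifts without invalidating the roadmap item.
    candidate_approaches: list[str] = field(default_factory=list)

roadmap = [
    RoadmapItem(
        outcome="Users triage support tickets in under 30 seconds",
        horizon="now",
        success_metric="median triage time < 30s",
        candidate_approaches=["hosted LLM API", "fine-tuned open-weight model"],
    ),
    RoadmapItem(
        outcome="Users trust summaries enough to skip reading full threads",
        horizon="next",
        success_metric="summary acceptance rate > 80%",
        candidate_approaches=["retrieval-augmented summarisation"],
    ),
    # "Later" items stay directional: outcome only, no committed approach.
    RoadmapItem(
        outcome="Users resolve issues without opening a ticket at all",
        horizon="later",
        success_metric="deflection rate (target TBD)",
    ),
]

for item in roadmap:
    print(f"[{item.horizon}] {item.outcome} -> {item.success_metric}")
```

Notice that specificity decreases with distance: the "now" item names concrete candidate approaches, while the "later" item carries only the outcome. When a new model release changes what is feasible, only `candidate_approaches` needs to change — the outcomes, and the roadmap built on them, survive.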
Outcome-based roadmaps have been advocated by practitioners for over a decade. What is new is how urgently AI demands this approach. In traditional software, outcome-based planning is a best practice. In AI product development, it is a survival requirement.



