Artificial Intelligence

Building Things That Actually Matter

Every product leader I know is drowning in advice. Build faster. Ship more features. Be customer-obsessed. Use the available data and trust your gut.

Brandon Green

Dec 30, 2025

What I know from experience is that most product failures aren't execution problems; they're the result of building the wrong thing for the wrong people at the wrong time.

I've spent over a decade building and leading product and UX teams through PE acquisitions, leadership changes, market pivots, and those terrifying moments when you realize your roadmap is solving problems nobody actually has. The difference between teams that ship successfully and teams that don't comes down to discipline. Specifically, a disciplined approach to answering a handful of critical questions before you write a single line of code.

This isn't another think-piece about "customer-centricity." This is the playbook I wish I'd had ten years ago. It would have stopped a lot of problems before they ever became real.

Start with the correct problems (or waste everyone's time)

The first trap most teams fall into? Jumping straight to solutions. Someone in Sales wants a feature, a competitor launched something shiny, an executive has a "vision," and suddenly you're prioritizing based on who yells the loudest, not what moves the business forward.

AI won't magically tell you what to build. But it can surface patterns in the data you already have. The support tickets, user interview notes, and analytics buried in Confluence pages? AI can synthesize them faster than any human. The trick is asking better questions: What problems keep recurring? What's the business impact if we solve them? Where are users getting stuck?
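
To make that concrete, here's a rough sketch of one way to surface recurring problems from a ticket export. The file name, column name, and cluster count are placeholders; swap in whatever your helpdesk actually gives you.

```python
# Sketch: cluster support tickets to surface recurring problems.
# Assumes tickets are exported to a CSV with a free-text "description" column;
# the file name, column name, and cluster count are placeholders.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tickets = pd.read_csv("support_tickets.csv")
texts = tickets["description"].fillna("").tolist()

# Turn ticket text into TF-IDF vectors, then group similar tickets together.
vectors = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(texts)
tickets["cluster"] = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(vectors)

# The biggest clusters are your recurring problems; read a handful of tickets from each.
print(tickets["cluster"].value_counts().head(10))
```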

Most teams already have 70% of what they need scattered across Slack threads and old research decks. You don't need to gather more data; you need to connect what you have to outcomes that matter. Spend a week doing this well, and you'll save yourself three months of building the wrong thing.

But data only gets you 80% of the way there. The last 20% requires human judgment, politics, and strategic fit. AI can surface the pattern that support tickets mentioning "export" have tripled, but it can't tell you that your exec team has privately decided to pivot to enterprise. You need both pattern recognition and a grasp of the business reality.

Once you think you know the problem, validate it by asking customers two questions: "What do you do today to solve this?" and "How much would you pay for a solution?"

Build for the customers who'll stay

Every product team faces the same dilemma: double down on current customers, or chase the shiny new segment that could 10x your TAM?

The answer is almost always to fix retention first. Unless you're hitting 60%+ penetration in your core segment, you can't afford to leak customers while chasing new ones. If you're losing 5% of customers each month, no amount of new logo acquisition will save you.
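
If that math doesn't feel visceral, run it: 5% monthly churn compounds into losing nearly half your cohort within a year.

```python
# Why 5% monthly churn is fatal: retention compounds month over month.
monthly_churn = 0.05
retained_after_year = (1 - monthly_churn) ** 12
print(f"{retained_after_year:.0%} of the cohort is left after 12 months")  # ~54%
```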

Here's the framework: segment your customers by revenue, retention, engagement, and velocity, then use AI to find patterns. Which segment has the highest LTV and lowest CAC? That's your winning ticket. If 80% of revenue comes from 20% of customers, you just found your ICP (Ideal Customer Profile). Everything else is either not worth your time or just a future bet.
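
Here's a rough sketch of that segmentation pass, assuming you can pull one row per customer out of your CRM or warehouse. Every file name and column name below is a placeholder.

```python
# Sketch of the segmentation pass: rank segments by LTV:CAC and check revenue concentration.
import pandas as pd

# One row per customer: segment, revenue, ltv, cac, retained_12mo (placeholder schema).
customers = pd.read_csv("customers.csv")

by_segment = customers.groupby("segment").agg(
    revenue=("revenue", "sum"),
    avg_ltv=("ltv", "mean"),
    avg_cac=("cac", "mean"),
    retention=("retained_12mo", "mean"),
)
by_segment["ltv_to_cac"] = by_segment["avg_ltv"] / by_segment["avg_cac"]
print(by_segment.sort_values("ltv_to_cac", ascending=False))  # top row = likely ICP

# 80/20 check: what share of revenue comes from the top 20% of customers?
top_20 = customers["revenue"].nlargest(int(len(customers) * 0.2)).sum()
print(f"Top 20% of customers drive {top_20 / customers['revenue'].sum():.0%} of revenue")
```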

If your core customers love you and you're hitting a growth ceiling, expansion makes sense. But test it cheaply first by running landing page experiments. Do 10-15 interviews with the new segment. Analyze competitor reviews to understand what they're missing. Spending $5K and two weeks on validation beats $500K and six months building features nobody cares about.


Validate before you build

Most teams skip validation because they think they need to "get it live." Common sense tells you that spending three weeks validating an idea beats spending three months building something nobody wants.

Validation means testing behavior, not opinions. Build a fake door: a landing page that describes the feature, with signups as the measure. Run prototype tests with 8-12 users. Do a "Wizard of Oz" test where you manually deliver the outcome before automating it. AI can help here too: use it to rapidly synthesize user feedback, identify patterns across test sessions, and flag where assumptions are breaking down.
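
A fake door doesn't need to be fancy. Here's a minimal sketch of the backend half: the "feature" button records interest and tells the user the truth. The endpoint path and log file are made up for illustration.

```python
# Minimal fake-door sketch: the button exists, the feature doesn't (yet).
from datetime import datetime, timezone
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/api/fake-door/export")  # wired to the "Export" button on the landing page
def record_interest():
    # Log a timestamped signal of demand; this is the metric the fake door exists to collect.
    with open("fake_door_signups.log", "a") as f:
        f.write(f"{datetime.now(timezone.utc).isoformat()},{request.remote_addr}\n")
    # Be honest with the user: nothing is built yet, but their interest was noted.
    return jsonify(message="Thanks! This feature is coming soon; we'll email you when it ships."), 202
```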

You're not looking for perfection. You're looking to prove demand with minimal investment. If users won't engage with a fake version, they won't use the real one. This single step kills 60% of bad ideas for less than 5% of the build cost.

And here's the part nobody seems to do. Set a kill threshold before you start. "If 40% of users don't adopt in 30 days, we cut it." Otherwise, you end up arguing over how bad it has to be before you pull the plug.

Ship, measure, iterate

Launching a feature isn't the finish line. Most teams declare victory at launch and move on, then, six months later, nobody's using it, and nobody knows because nobody's measuring.

Success measurement starts before you build. Define what "good" looks like. Adoption rate. Retention rate. Impact on your north star metric. If you don't set the bar pre-launch, you'll rationalize any outcome as success. Humans are great at that.

Track leading indicators in the first 48 hours. Are users finding it? Completing the core action? Dropping off at a specific step? The first week tells you if you have a catastrophic UX problem. Don't wait 30 days to discover nobody can figure out how to use it.

By day 30, measure retention. By day 60, measure business impact. A feature with solid engagement but zero revenue impact is a nice-to-have, not a priority.

Good benchmarks? 30-50% adoption in the first month, 40-60% retention, and measurable movement on your north star within 60 days. Anything under 20% adoption and 30% retention is a signal to kill or pivot fast. AI can accelerate this analysis, but the hard part is having the discipline to act on what it tells you.
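
One way to keep yourself honest is to write the scorecard down as code before launch. A rough sketch, using the benchmarks above as the bar and placeholder numbers for the observed metrics:

```python
# Launch scorecard sketch: compare observed numbers to the pre-agreed bar.
# Observed values are placeholders; thresholds mirror the benchmarks above.
launch = {"adoption_30d": 0.24, "retention_30d": 0.35, "north_star_lift_60d": 0.02}

benchmarks = {"adoption_30d": 0.30, "retention_30d": 0.40, "north_star_lift_60d": 0.0}
kill_floor = {"adoption_30d": 0.20, "retention_30d": 0.30}

for metric, value in launch.items():
    status = "on track" if value >= benchmarks[metric] else "below bar"
    if metric in kill_floor and value < kill_floor[metric]:
        status = "kill or pivot"  # under the floor means the decision is already made
    print(f"{metric}: {value:.0%} ({status})")
```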


Build AI where it actually helps

Every product team is feeling pressure to "add AI." Most AI features are solutions looking for problems. Demos that get good tweets but zero adoption.

Real AI value comes from automating high-frequency, low-judgment tasks. Summarizing support tickets. Auto-tagging content. Suggesting next actions based on patterns. Not "write our product strategy" or "make high-stakes decisions."

A simple test: Is the task repetitive and time-consuming? Can you quantify the time saved? Is accuracy important but not life-or-death? If yes to all three, AI might help. If the task requires deep context or creativity, you need an intelligent human.

And this is non-negotiable. Always include human-in-the-loop controls. AI suggests, users approve. Show confidence scores. Let users override with one click. Users will forgive an 85% accurate AI that explains its reasoning; they'll abandon a 95% accurate black box.
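
What that looks like in the simplest possible form: the model proposes, the user disposes. `model_suggest` below is a stand-in for whatever classifier or LLM call you're actually making.

```python
# Sketch of human-in-the-loop AI: the model suggests, the user approves or overrides.
from dataclasses import dataclass

@dataclass
class Suggestion:
    label: str
    confidence: float  # always shown to the user, never hidden

SUGGEST_THRESHOLD = 0.60  # below this, don't even pre-fill a suggestion

def review_ticket(ticket_text: str, model_suggest) -> str:
    suggestion = model_suggest(ticket_text)  # placeholder for your model call; returns a Suggestion
    if suggestion.confidence < SUGGEST_THRESHOLD:
        return input("No confident suggestion. Tag this ticket yourself: ").strip()
    answer = input(
        f"Suggested tag: {suggestion.label} ({suggestion.confidence:.0%} confident). "
        "Press Enter to accept or type a different tag: "
    ).strip()
    return answer or suggestion.label  # one keystroke to accept, one field to override
```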

Execute without breaking things

The final challenge is balancing speed and quality. "Move fast and break things" works until broken things kill your business. "Measure twice, cut once" works until competitors ship 10x faster.

The answer is to risk-tier your features. High-risk features, anything touching payments, data integrity, or security, get extensive testing, staged rollouts, and rollback plans. Low-risk features like UI polish and experimental betas get basic testing and ship fast (yes, this is coming from a UX leader). Use feature flags to decouple deployment from release. If something breaks, kill it in 30 seconds.
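
A feature flag doesn't have to mean buying a platform. A minimal sketch of the idea, with made-up flag names and tiers:

```python
# Sketch of risk-tiered rollout behind feature flags; flag names and percentages are illustrative.
FLAGS = {
    # High-risk features ship dark, roll out to a small slice, and can be killed instantly.
    "new_payment_flow": {"enabled": True, "rollout_pct": 5, "risk": "high"},
    # Low-risk polish ships to everyone right away.
    "sidebar_redesign": {"enabled": True, "rollout_pct": 100, "risk": "low"},
}

def is_enabled(flag: str, user_id: int) -> bool:
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:  # flipping "enabled" off is the 30-second kill switch
        return False
    return (user_id % 100) < cfg["rollout_pct"]  # stable per-user bucketing for staged rollout

# Deployment and release are decoupled: the code is live, exposure is a config change.
if is_enabled("new_payment_flow", user_id=42):
    pass  # serve the new flow
```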


The real difference

So I'll repeat it: here's what separates teams that ship successfully from teams that don't. Discipline.

Discipline to validate before building. Discipline to focus on one customer segment at a time. Discipline to kill features that aren't working. Discipline to say "this one needs to be slow" when it touches money or data integrity.

AI can accelerate pattern recognition, scenario modeling, and feedback analysis, but it can't replace the hard conversations, the strategic bets, or the courage to kill your darlings when the data says they're not working.

You don't need to be perfect. You need to be disciplined about asking the right questions, testing your assumptions, and iterating based on what you learn. Do that consistently, and you'll outperform 90% of product teams still building based on who yells loudest.

Now go build something that actually matters.
