Side build · AI Tooling

Dispatch

Built an AI-powered support routing system for a US fintech during a one-week product audit. Automated classification, deduplication, team routing, and workload balancing across 2000+ monthly tickets.

Fractional product leadership 2025

Overview

From manual chaos to instant AI routing

Product

AI Support Pipeline

End-to-end system: ticket classification, duplicate detection, intelligent routing to the right team and person, auto-generated FAQ, and a conversational query interface.

Client & context

US Fintech

Fast-growing company processing ~2000 support tickets per month, with a 10-person support team spread across multiple specialized squads.

My role

Fractional product leadership

One-week engagement: identified the bottleneck, designed the system, built a working prototype with OpenAI API, and validated it against real ticket data.

The problem

2000 tickets a month, all routed by hand

Manual triage

One person sorting everything

Every ticket had to be manually categorized, then assigned to the right team, then balanced across agents. One person spent a significant chunk of their time just routing, not solving customer problems.

Downstream waste

Duplicates, imbalance, no self-serve

Duplicate tickets clogged queues. Some agents were overloaded while others sat idle. Many questions already had answers buried somewhere, but there was no FAQ, no knowledge base, nothing.

The system

Four layers of automation, one pipeline

Each ticket passes through a multi-step AI pipeline: classify, deduplicate, route, and balance. The same data then feeds a live FAQ and a conversational query layer.

1

Classify & deduplicate

OpenAI API scans each incoming ticket, assigns a category from the company's taxonomy, detects near-duplicates, and flags tickets that already have known answers.
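The two checks in this step can be sketched in plain Python. This is a minimal illustration, not the client's actual code: the taxonomy, the prompt wording, and the 0.9 similarity threshold are all hypothetical, and the near-duplicate check assumes tickets have already been embedded as vectors (e.g. via an embeddings API).

```python
import math

# Hypothetical taxonomy -- the real one belongs to the client.
TAXONOMY = ["billing", "refunds", "account_access", "fraud", "other"]

def build_classification_prompt(ticket_text: str) -> str:
    """Assemble the instruction sent to the LLM for category assignment."""
    categories = ", ".join(TAXONOMY)
    return (
        f"Classify the support ticket into exactly one category from: {categories}.\n"
        'Reply with JSON: {"category": ..., "confidence": 0-1}.\n\n'
        f"Ticket: {ticket_text}"
    )

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_near_duplicate(new_vec: list[float],
                      known_vecs: list[list[float]],
                      threshold: float = 0.9) -> bool:
    """Flag a ticket whose embedding is close to an already-seen ticket."""
    return any(cosine_similarity(new_vec, v) >= threshold for v in known_vecs)
```

In practice the prompt goes to the model and the reply is parsed as JSON; the dedup check runs against embeddings of recent open tickets.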

2

Human-in-the-loop fallback

If the model's confidence score for classification or deduplication falls below 95%, the system holds the ticket in a dedicated verification queue for human review, so low-confidence decisions never ship unchecked.
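The fallback itself is a simple gate. A minimal sketch, assuming the model's output has been parsed into a dict with a `confidence` field (the function and queue names are illustrative):

```python
CONFIDENCE_THRESHOLD = 0.95  # below this, a human verifies the decision

def triage(decision: dict) -> str:
    """Send a model decision onward, or hold it for human review."""
    if decision["confidence"] < CONFIDENCE_THRESHOLD:
        return "verification_queue"
    return "auto_route"
```

The threshold is a product decision, not a technical one: it trades automation rate against how much the team trusts unreviewed output.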

3

Route & balance workload

High-confidence tickets are distributed evenly across the correct specialized squad based on current load, preventing one person from drowning while others wait.
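Load balancing within a squad can be as simple as "least open tickets wins". A sketch under that assumption (squad names, agent names, and the in-memory load table are all hypothetical):

```python
# Hypothetical load table: category -> {agent: open ticket count}
SQUADS = {
    "refunds": {"alice": 4, "bob": 1},
    "fraud": {"carol": 2, "dave": 2},
}

def route_ticket(category: str) -> str:
    """Assign a ticket to the least-loaded agent in the matching squad."""
    squad = SQUADS[category]
    agent = min(squad, key=squad.get)  # fewest open tickets
    squad[agent] += 1  # update load so the next ticket balances correctly
    return agent
```

A production version would read live queue counts rather than an in-memory dict, but the balancing rule is the same.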

4

Generate FAQ & enable queries

Recurring patterns are surfaced to auto-generate a structured FAQ. A conversational chatbot lets the team query the ticket base directly, e.g. "How many refund requests last month?"
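Surfacing recurring patterns is essentially frequency counting over classified tickets. A minimal sketch (the `min_count` cutoff is an assumed tuning knob, not a figure from the engagement):

```python
from collections import Counter

def top_faq_candidates(ticket_categories: list[str], min_count: int = 3) -> list[str]:
    """Surface categories that recur often enough to deserve an FAQ entry."""
    counts = Counter(ticket_categories)
    return [cat for cat, n in counts.most_common() if n >= min_count]

def count_matching(ticket_categories: list[str], category: str) -> int:
    """The kind of aggregate a conversational query resolves to,
    e.g. 'How many refund requests last month?'"""
    return sum(c == category for c in ticket_categories)
```

The chatbot layer translates a natural-language question into this kind of filter-and-count over the ticket base.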

Execution

One week from audit to working prototype

Product thinking

Map the real bottleneck first

Started by observing the support workflow end-to-end. The problem wasn't response quality; it was everything before the response: sorting, routing, finding existing answers. That became the scope.

Quality metrics

Evaluation-driven iteration

Didn't just "guess" whether the prompt worked. Built an evaluation dataset of 200 past tickets and ran testing scripts to measure hallucination rates and categorization accuracy across prompt iterations.
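The two metrics named here reduce to simple ratios over the labeled dataset. A sketch of what such a testing script measures (function names are illustrative):

```python
def accuracy(predictions: list[str], labels: list[str]) -> float:
    """Share of tickets where the model's category matches the human label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def hallucination_rate(predictions: list[str], taxonomy: set[str]) -> float:
    """Fraction of predicted categories that don't exist in the taxonomy."""
    invented = sum(p not in taxonomy for p in predictions)
    return invented / len(predictions)
```

Running both metrics on the same 200-ticket set after each prompt change turns "does this prompt work?" into a number you can compare across iterations.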

Trust & transparency

Designing AI UX patterns

To get support agents to trust the new tool, routing decisions included citations (showing exactly which part of the ticket triggered the classification) and an easy human override if the AI was wrong.

Co-construction

Build with the operators, not for them

Involved the support team from day one: understanding their workflow, getting feedback on routing rules, and adapting to how they actually work. Internal tools need the same product rigor as customer-facing ones: without buy-in, nothing ships.

Delivery

Working prototype in 5 days

Delivered a functional prototype validated against real ticket data. By prototyping standalone first, we de-risked the logic and the human-in-the-loop workflows before writing any integration code into their live system.

Result

From hours of manual sorting to instant routing

~15h/week

Manual triage eliminated

Tickets classified and routed in seconds instead of ~3 minutes of manual sorting each, freeing the equivalent of ~15 hours per week of pure routing work.

~92%

Classification accuracy

Automated category assignment matched the company's existing taxonomy with high accuracy, validated against a sample of manually classified tickets.

~30%

Duplicate tickets flagged

Near-duplicate detection surfaced ~30% of incoming tickets as repeats or variations of existing issues, reducing queue clutter and enabling a self-serve FAQ.

Takeaway

What I learned

Prototype standalone, integrate later

Don't wire into a live system on day one. Build a working prototype on the side, validate it with real data, prove the logic works, then connect. This approach is faster, less risky, and makes stakeholder buy-in much easier because you can show results before asking anyone to change their workflow.

Treat internal tools as real products

The biggest risk with AI automation isn't technical, it's adoption. If you build something and drop it on a team, they'll resist it. Co-constructing with operators from the start creates something that actually fits their workflow, and people who feel ownership don't push back on change.