Since day one of using an AI agent for support at RB2B, I've tracked one stat: resolution rate. Not ticket volume, not response time. Whether the AI actually solved the user's problem without a human stepping in.

Lately, the numbers have been sitting in the high-70s to low-80s, occasionally touching 90. For an AI support agent handling a technical product, that's a range worth protecting. Here are the three changes that got us there.

1. Real-Time Installation Debugging

Script installation is where a significant chunk of our support volume comes from. Users either can't get the script to fire, or they think it's working when it isn't.

A few weeks ago we shipped an in-app installation debugger. It's comprehensive: covers the major failure modes (wrong placement, conflicts with tag managers, CSP issues) and a long tail of minor ones. Users can self-diagnose without opening a ticket.

The more interesting part is what happens when they go to the AI for help anyway. Knowing that would happen, I built an API into the debugger that the AI can call at query time. When a user asks about their installation, the AI doesn't just pull from documentation. It pulls live diagnostic data about that specific user's install. It knows what's broken right now, not just what can go wrong in theory.

That distinction matters. An AI that says "here are common installation issues" is fine. An AI that says "your script is firing on page load but not on route changes, here's why" is actually useful.
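The pattern behind this is simple: fetch live diagnostics at query time and put them in front of the model alongside the question. Here's a minimal sketch in Python, assuming a hypothetical diagnostics endpoint and hypothetical field names (nothing here is RB2B's actual API):

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint for illustration; the real diagnostics API is not public.
DEBUGGER_API = "https://api.example.com/v1/install-debug"

def fetch_install_diagnostics(account_id: str) -> dict:
    """Pull live diagnostic data for one account's script install."""
    with urlopen(f"{DEBUGGER_API}?account_id={account_id}") as resp:
        return json.load(resp)

def build_support_context(question: str, diagnostics: dict) -> str:
    """Pair the user's question with live diagnostics so the model answers
    about this specific install, not installs in general."""
    return (
        f"User question: {question}\n"
        f"Live install diagnostics: {json.dumps(diagnostics)}\n"
        "Answer using the diagnostics above, not generic troubleshooting."
    )
```

The key design choice is that the diagnostics are fetched per conversation, at answer time, so the model sees what's broken right now rather than a stale snapshot.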

2. Expanded Real-Time Account Data

The AI now has live access to a meaningful slice of account data: credit usage, integration errors, account and script statuses, and a few other fields that tend to come up in support conversations.

This matters because a large category of support questions isn't really about how the product works. It's about why something isn't working for a specific account right now. "Why aren't I seeing leads?" is a different question if the answer is "your script is inactive" vs. "you've used your credits for the month" vs. "your CRM integration has an auth error."

Without account data, the AI is guessing. With it, the AI can close the loop.
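To make "closing the loop" concrete, here's the same question resolved three different ways depending on account state. This is an illustrative sketch; the field names are hypothetical, not RB2B's actual schema:

```python
def diagnose_missing_leads(account: dict) -> str:
    """Map one question ("Why aren't I seeing leads?") to the
    account-specific cause. Field names are illustrative only."""
    if not account.get("script_active", True):
        return "Your script is inactive, so no visitor data is being captured."
    if account.get("credits_remaining", 1) <= 0:
        return "You've used your credits for the month; leads resume next cycle."
    if account.get("crm_auth_error"):
        return "Your CRM integration has an auth error, so leads aren't syncing."
    return "No account-level blockers found; checking install diagnostics next."
```

Without the account data, every one of these branches collapses into a generic checklist. With it, the first matching condition is the answer.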

3. Procedures Replace Tasks

I've been using Intercom's Tasks feature in beta since September. It was solid. The upgrade to Procedures, the official release of the feature, is on another level.

The logic is cleaner. Conditions are more precisely defined. And the addition of multi-step procedures with sub-procedures unlocks workflows that previously required stitching together workarounds.

The practical result is that the AI can now handle more complex, conditional support paths without falling back to "let me connect you with a human." The paths that used to dead-end now have exits.
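To show what "multi-step procedures with sub-procedures" means structurally, here's a generic sketch of the pattern in Python. This is my own illustration of the concept, not Intercom's implementation or API:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Procedure:
    """A conditional support step. Sub-procedures are tried first;
    the step's own action is the fallback."""
    name: str
    condition: Callable[[dict], bool]        # when this procedure applies
    action: Callable[[dict], Optional[str]]  # returns a resolution, or None
    sub_procedures: list = field(default_factory=list)

    def run(self, ctx: dict) -> Optional[str]:
        if not self.condition(ctx):
            return None
        for sub in self.sub_procedures:
            result = sub.run(ctx)
            if result is not None:
                return result
        return self.action(ctx)

# Example: a billing procedure with a refund sub-procedure.
refund = Procedure(
    name="refund",
    condition=lambda ctx: ctx.get("topic") == "refund",
    action=lambda ctx: "Refund issued." if ctx.get("within_window") else None,
)
billing = Procedure(
    name="billing",
    condition=lambda ctx: ctx.get("category") == "billing",
    action=lambda ctx: "Routed to billing FAQ.",
    sub_procedures=[refund],
)
```

The nesting is what removes the dead ends: a sub-procedure that can't resolve the conversation returns None, and control falls through to the next option instead of defaulting to a human handoff.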

The Through-Line

Each of these updates does the same thing in a different context: it closes the gap between what the AI knows and what it needs to know to actually resolve the problem.

Documentation is a foundation. Real-time data, diagnostic APIs, and conditional logic are what sit on top of it. Resolution rate is a lagging indicator. These are the inputs.