A user asked our AI support agent a simple question: "Why can't I export more than 30 days?"

The agent knew the answer. The knowledge was all there. But this is what it said:

Before
why can't i export more than 30 days?

You can’t export more than 30 days because CSV exports don’t support a custom date picker right now. You can only choose preset ranges, and the largest preset is Past 365 Days (so 30 days is just one of the smaller options).

Also, there’s no workaround to export data older than 365 days, and you can’t combine multiple exports to bypass that limit.

Good news: the 30-day limit was expanded recently, so you can now pick 90, 120, or 365 days instead of being stuck at 30.

Arbi · AI Agent

Starts with "you can't", then immediately explains how you can. Answers a different question than was asked. Volunteers limitations nobody asked about.

The user asked why. The agent answered how — and then contradicted itself by explaining exactly what the user could actually do. Three paragraphs to say almost nothing useful.

This is a very common failure mode in AI agents, and it's not really an AI problem. It's a prompt problem. The agent had no guidance on how to interpret a "why" question, so it defaulted to throwing everything it knew at the wall.

The Fix

I added four rules to the agent's instructions:

  1. Answer the question that was asked. If the user asks "why", explain the reason.
  2. Don't pivot to explaining the mechanics of the feature unless that's directly relevant to the why.
  3. Don't say "you can't X" and then immediately describe how to do X. If you're going to mention what's available, frame it positively from the start.
  4. Don't volunteer limitations that weren't asked about. Adding those unprompted creates confusion and makes the response feel like it's hedging.
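To make the mechanics concrete, here's a minimal sketch of how rules like these can be wired into an agent's instructions as plain language appended to the system prompt. The names (`ANSWER_GUIDANCE`, `build_system_prompt`, the base prompt text) are hypothetical illustrations, not Arbi's actual implementation:

```python
# Hypothetical sketch: answer-shaping rules embedded as plain-language
# guidance in a system prompt. Not RB2B's actual code.

ANSWER_GUIDANCE = [
    'Answer the question that was asked. If the user asks "why", explain the reason.',
    "Don't pivot to explaining the mechanics of the feature unless that's "
    "directly relevant to the why.",
    "Don't say \"you can't X\" and then immediately describe how to do X. "
    "Frame what's available positively from the start.",
    "Don't volunteer limitations that weren't asked about.",
]

def build_system_prompt(base_prompt: str) -> str:
    """Append the numbered answer-shaping rules to the agent's base instructions."""
    rules = "\n".join(f"{i}. {rule}" for i, rule in enumerate(ANSWER_GUIDANCE, start=1))
    return f"{base_prompt}\n\nWhen answering:\n{rules}"

print(build_system_prompt("You are Arbi, the AI support agent for RB2B."))
```

The point of the structure is that each rule was written against a specific observed failure, so the list stays short and every line earns its place.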

Same question. Same knowledge base. Here's what the agent said after:

After
why can't i export more than 30 days?

Actually, you can export data for more than 30 days! We recently expanded the available timeframes for CSV exports.

You can now choose from these preset ranges: Today, Past 3 Days, Past 7 Days, Past 30 Days, Past 90 Days, Past 120 Days, and Past 365 Days.

The maximum export window is 365 days — this is because we're designed to deliver profile data in real-time rather than serve as a long-term data store, and data older than a year is considered stale anyway.

To export a longer timeframe, just go to your Exports page, click the "Select a Date Range" dropdown, and choose one of the longer options.

Arbi · AI Agent

Answers the actual question. Leads with what the user can do. Gives the reason clearly. No contradictions, no unprompted caveats.

The Takeaway

AI agents drift toward completeness. When they don't have explicit instructions about how to interpret a question, they try to cover everything they know. That's not helpful — it's noise.

The intervention isn't more training data or a better model. It's clearer rules about what kind of answer each kind of question deserves. "Why" questions get reasons. "How" questions get steps. You don't mix them unless one genuinely requires the other.

Four rules. The response went from confusing to correct.

How I Actually Train Arbi

Arbi is our AI support agent at RB2B. I didn't build it once and walk away. I treat it more like a new hire I'm actively coaching — the difference being that when Arbi gets something wrong, I can fix it immediately and the fix applies to every conversation from that point forward.

My process is simple: I read real conversations, spot patterns in where the responses break down, and write guidance in plain language that addresses exactly what went wrong. Not abstract rules. Specific ones, derived from actual failures.

The example in this post is a good illustration of how that works. I didn't go looking for a response about CSV exports. I found it while reviewing conversations, recognized the pattern immediately, and had the fix written and deployed in about ten minutes. The next user who asked the same question got a completely different answer.

Most of Arbi's improvements have come from exactly this kind of iteration. The agent doesn't get better because the underlying model gets better — it gets better because the instructions get more precise. That's the work. Reading conversations, finding the drift, writing the correction.

It compounds quickly. Each rule added on the back of a real failure makes the agent sharper, and the failures get rarer. The job shifts from fixing obvious mistakes to catching subtler ones. That's a good sign.