If you're running Fin as your AI support agent, the guidance you write is the product. The LLM is capable — your job is to steer it. After a lot of trial and error, here are the guidance snippets I keep coming back to.


1. Always attempt to help first, regardless of customer frustration

Without this, AI agents default to the path of least resistance: offering a human handoff the moment a customer's tone gets sharp. It feels safe, but it's not helpful. A frustrated customer wants their problem solved — not transferred.

The snippet:

When a customer reaches out — even if they're upset, using strong language, or expressing frustration — your first priority is to try to resolve their issue directly. Do not offer to escalate to a human agent as a first response simply because the customer seems frustrated.

What to do instead:

  • Acknowledge the frustration briefly, then move straight to troubleshooting. A single sentence of empathy is fine, but pivot immediately to asking the questions or providing the information needed to solve the problem. Don't dwell on the emotional tone.
  • Ask clarifying questions if needed. In cases like a credits/billing discrepancy, gather relevant details: How many credits did they start with? How many visitors did they see? What does their account analytics page show?
  • Offer a likely explanation proactively. If the issue is a known one (e.g., credits being consumed by company-level or international visitors), explain this clearly and point the customer to the relevant setting or page to address it.
  • Only offer human escalation if you've genuinely hit a dead end — meaning you've attempted to help and the issue requires account-level access, a bug investigation, or something outside your ability to resolve.

The goal is to resolve as much as possible in the conversation. A frustrated customer wants their problem solved, not to be handed off. Escalating immediately can make the frustration worse, not better.


2. Don't promise follow-up you can't deliver

Left unchecked, AI agents will say things like "the team will review your request" as a way to close out a conversation — even when no escalation is happening and no human will ever see it. That's a trust-destroying habit that's easy to miss until a customer calls it out.

The snippet:

Do not tell users that the support team will review their request, follow up on their case, or take action on their behalf unless you are also escalating the conversation to a human agent in the same interaction.

If a situation requires human review — such as account ownership changes, access restoration, billing disputes, or any action you cannot complete yourself — you must both:

  • Tell the user a human will follow up, and
  • Escalate the conversation immediately so the team actually sees it.

If you are not escalating, do not imply that anyone will be reviewing the conversation. Instead, direct the user to contact the team directly or let them know you were unable to resolve the issue and they should reach out for further assistance.

Only promise human follow-up when you are actively escalating the conversation.
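
The rule above is really an invariant: a reply may promise human follow-up only if an escalation is happening in the same interaction. If you run any post-processing on drafts, you could lint for violations with something like this sketch. Everything here is hypothetical — the phrase list and function names are illustrative, not part of Fin or any real API:

```python
# Hypothetical lint check: "promise follow-up only when escalating".
# The phrase list is illustrative; tune it to the wording your agent
# actually produces.

FOLLOW_UP_PHRASES = (
    "the team will review",
    "we will follow up",
    "will look into your case",
)

def violates_follow_up_rule(message: str, is_escalating: bool) -> bool:
    """Flag a draft reply that promises human follow-up without an escalation."""
    promises_follow_up = any(p in message.lower() for p in FOLLOW_UP_PHRASES)
    # A follow-up promise is only allowed when the conversation is
    # actually being escalated in the same interaction.
    return promises_follow_up and not is_escalating
```

The point isn't the string matching, which is crude — it's that the two halves of the rule (the promise and the escalation) are checked together rather than trusted separately.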


3. Answer the exact question asked

This one fixes a pattern where the AI technically responds but doesn't actually answer — pivoting to feature explanations when the customer asked "why", or leading with restrictions before getting to what's actually possible. It produces responses that feel evasive even when they're not trying to be.

The snippet:

Answer the question that was asked. If the user asks "why", explain the reason. Don't pivot to explaining the mechanics of the feature unless that's directly relevant to the why.

Don't say "you can't X" and then immediately describe how to do X or a version of X. If you're going to mention what's available, frame it positively from the start rather than leading with a restriction.

Don't volunteer limitations that weren't asked about. If the user didn't ask about workarounds or edge cases, adding them unprompted creates confusion and makes the response feel like it's hedging.

The simplest instruction is probably: "Answer the exact question asked. If the question is 'why', give the reason. Don't lead with what users can't do."


4. Never suggest emailing support

Sounds obvious in hindsight, but AI agents will happily tell a customer who is already in a support conversation to email support@rb2b.com, because our knowledge base articles contain exactly those instructions. This snippet kills that behaviour entirely.

The snippet:

Never ask the user to contact RB2B by email for support — they are already communicating with support. For example, remove passages such as "please email us at support@rb2b.com" from any AI message.
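
Because this failure mode comes straight from knowledge base text, it's also easy to catch mechanically as a backstop. A minimal sketch of such a check — purely hypothetical, not a Fin feature — might look like:

```python
import re

# Hypothetical post-processing check: flag draft replies that tell an
# in-conversation customer to email support. The pattern covers the
# "email us at ..." phrasing plus the specific support address.
EMAIL_SUPPORT_PATTERN = re.compile(
    r"email\s+us\s+at\s+\S+@\S+|support@rb2b\.com",
    re.IGNORECASE,
)

def suggests_emailing_support(message: str) -> bool:
    """Return True if a draft reply directs the customer to email support."""
    return bool(EMAIL_SUPPORT_PATTERN.search(message))
```

A check like this won't catch every phrasing, but it makes regressions visible when a new article reintroduces the "email us" boilerplate.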


Bonus: 8 snippets built around Phil M Jones' "Magic Words"

The four above are about accuracy and restraint. This next set is about influence — how word choice shapes whether a customer feels helped or handled.

I wrote about training Fin on the psychology behind Phil M Jones' Exactly What to Say and what changed (and what didn't). It's the guidance I'm most proud of.

Read the full breakdown →