I made a small but ultimately significant change to RB2B's AI support agent. I gave it a name and a face.
Not a generated face. A real one — a stock photo of a person who almost certainly has no idea they're now the face of an AI support agent for a B2B SaaS company.
The results were immediate and, honestly, a little unsettling.
Three things changed straight away
Users became friendlier. People started saying hi to Arbi. They thanked it after it helped them. Pleasantries that nobody bothers with when they're typing into a faceless chat widget suddenly reappeared.
They became more patient. Less likely to fire off a frustrated one-liner and bounce. More willing to work through a problem, try a suggestion, come back with a follow-up.
And they became more descriptive. The frequency of one- and two-word messages dropped noticeably. Instead of "it's broken" or "not working," people started explaining what they were trying to do, what they'd already tried, and what outcome they were after. Which meant Arbi had more to work with and could actually help.
None of this was hidden. Arbi introduces itself as an AI agent. Every message it sends is labelled "AI Agent" directly below it. There is no ambiguity about what you're talking to.
And yet.
The psychology of faces
Humans are wired to respond to faces. It's not a preference or a habit — it's structural. Infants track faces within hours of being born. The fusiform face area, a region of the brain dedicated almost entirely to processing faces, responds to faces faster and more reliably than to almost any other visual stimulus. We are, at a fundamental level, face-detection machines.
What's interesting is that this response doesn't require the face to be real. Studies on anthropomorphism — the tendency to assign human qualities to non-human things — consistently show that the threshold for triggering it is remarkably low. A circle with two dots reads as a face. A robot with a name gets treated differently than one without. A customer service bot with a photo gets thanked.
The name came first — conceptually, at least. Arbi is a play on RB2B, and once the name existed, it shaped everything that followed, including the face we chose to go with it. In practice we rolled them out together, but the name set the tone. And even so, the face crossed a different threshold entirely. Names are abstract. Faces are immediate. Your brain processes a face before you've consciously registered anything else about the interaction.
There's also something worth noting about what a face signals in a support context specifically. A face implies presence. It implies that something is paying attention to you, not just processing your input. Whether or not that's true is almost beside the point — the perception of being seen changes how people behave.
What this means practically
Better user behaviour produced better outcomes. More descriptive messages gave Arbi more context, which meant more accurate and useful responses. More patience meant problems got resolved instead of abandoned. Resolution rates improved, and users left conversations having actually been helped.
All of that from a name and a photo.
The lesson isn't that you should trick people. It's that interface decisions carry psychological weight that we often underestimate. How something looks and what it's called shapes how people relate to it — and how they relate to it shapes what's actually possible in the interaction.
We didn't set out to run a psychology experiment. We just wanted to make Arbi feel a bit more approachable.
Turns out that's not a small thing.