The mistakes brokerages make when they train agents on AI

Emma Pace · 2026-03-30 · Business Building

Most brokerages that run AI training and see no adoption have made the same five mistakes. The problem is rarely the tool and rarely the agents. The pattern I see most often is a training design that was never built to produce behavioral change, just to signal that the brokerage is "doing AI."

Here's what each mistake looks like and what to do instead.

Mistake 1: Picking the tool before identifying the workflow

The brokerage books a ChatGPT lunch-and-learn. Agents show up, see a demo, nod along, go back to their desks, and open their actual work. The tool doesn't fit anywhere obvious, so nothing changes.

This is the most common starting point and the most fixable. The question isn't "which AI tool should we adopt?" The question is "which task are our agents doing manually that AI could plausibly handle?" That question has a different answer for every brokerage depending on team size, transaction volume, and where time is actually going.

In my experience, the highest-leverage entry points for most residential teams come down to three tasks: first-draft listing descriptions, follow-up email sequences after open houses, and market summary notes before buyer consultations. All three are high-frequency, time-intensive, and easy to evaluate. Pick one. Train to that one. Let the tool choice follow the problem, not the other way around.

If you run the process in reverse — here's the tool, now figure out how to use it — agents spend the training session pattern-matching to their own workflows and often don't find the connection before the lunch ends.

Mistake 2: Skipping the voice work

AI output sounds generic by default. That's not a flaw in the model. It's a prompt design problem. Most training sessions skip the step that fixes it.

Voice work means teaching agents to give the model enough context that the output sounds like them. That includes their market, their typical client's vocabulary, their personal tone (formal or conversational), and examples of their best past writing. Without that context, ChatGPT produces a listing description that reads like it came from a neutral third party who has never met the agent or seen the unit.

The consequence is predictable. Agents use the tool once, edit the draft heavily, decide it's slower than just writing the thing themselves, and stop using it. They're not wrong in that moment. The tool didn't help because it didn't have enough signal.

The fix is a 20-minute session I'd run before anything else: have each agent produce a voice sample, build a reusable prompt that includes that sample, and then test it on something real. When the output comes back sounding like them, adoption tends to follow. When it doesn't, the tool feels like extra work.
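To make that concrete, here's a minimal sketch of what a reusable voice-calibrated prompt can look like. The bracketed pieces are placeholders each agent fills in once and then reuses; the exact wording is illustrative, not a script:

    You draft listing descriptions for [agent name], a residential agent in
    [market]. Match the tone of the writing sample below: [conversational or
    formal], [phrases to avoid, e.g. "must-see," "stunning"]. Writing sample,
    taken from my past listings: [paste two or three of your best
    descriptions]. Property to describe: [address, key features, target
    buyer]. Keep it under [word count] words.

Once that prompt exists as a saved snippet, the per-listing work drops to pasting in the property details, and the output starts closer to the agent's voice instead of a neutral third party's.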

Mistake 3: No workflow target

Training without a measurable target produces enthusiasm for a week and no change after. I've seen this across brokerages of different sizes. The enthusiasm isn't the problem. The lack of a specific, accountable next action is.

A workflow target sounds like this: "By end of next week, every agent on this team will have used Claude to draft the follow-up email after their next showing, and we'll review five drafts together on Friday." That's a target. Agents know what they're supposed to do, when to do it, and that someone will look at the output.

Without a target, "use AI more" competes with every other thing on an agent's task list. In a volume-driven practice, the tasks that don't have an explicit deadline or accountability structure tend to slip. AI training is no different.

The target doesn't have to be ambitious. One task, one tool, one week, one check-in. That's enough to generate real data about what's working and what needs adjustment before you scale the training further.

Mistake 4: Lecturing instead of pairing

Most AI training sessions look like this: someone at the front of the room demonstrates the tool, agents watch, agents ask a few questions, everyone leaves. That format is optimized for information transfer, not skill development.

There's a meaningful gap between watching someone use a tool on a demo prompt and using the tool yourself on your actual work at 9pm with a deadline. The second one is the skill that matters. Training sessions that are entirely demonstration-based don't build it.

Pairing means sitting with an agent and using the tool together on something real. A live listing. An actual follow-up email. A real market summary. The agent drives. You coach. The session ends with a specific thing that exists and that the agent produced, not a thing you produced while they watched.

Pairing doesn't scale to 40 agents in a single session. But it scales better than a lecture that produces no lasting change. A model that works reasonably well: demonstrate to the full group once, then pair with the five most willing early adopters, and let them become the internal coaches for the next wave. Agents trust other agents more than they trust a training session.

Mistake 5: No follow-through

The pattern I see most reliably is this: training happens, there's a short spike in tool usage, then it drops back to near-zero within two to three weeks. This isn't because agents didn't like the tool. It's because they hit a specific problem — a prompt that produced bad output, a workflow step that didn't quite fit, a question they didn't know who to ask — and with no structure for follow-through, they just stopped.

Sustained adoption requires a structure for when agents get stuck in real work, not just in a training room. That means a check-in at one week, another at one month, and a lighter touch at three months. It also means a low-friction way for agents to ask questions. A Slack channel, a shared prompt library, a 15-minute weekly office hour. Something.

The brokerages I've seen get genuine adoption built a container for the questions that come after training. The ones that didn't ran a great lunch-and-learn and then wondered why nothing changed.

What I'd do differently

If I were designing AI training for a brokerage today, I'd run it in this order: identify one high-frequency, time-intensive workflow; build a voice-calibrated prompt for it; set a specific target with an accountability date; pair with five agents on real work before rolling out to the full team; then build a follow-through structure for the first 90 days.

That's slower than a lunch-and-learn. It's also the version that actually changes how agents work. The brokerages that treat AI training as a one-time event get one-time results. The ones that treat it as a behavior-change problem get adoption.

The tools are good enough. The workflows are there. The gap is almost always in the training design.


FAQ

Why do brokerages fail to get AI adoption after training? The most common reasons are picking a tool before identifying a workflow problem, skipping voice and prompt calibration, running a one-time lecture with no follow-through, and failing to set a specific workflow target agents are accountable to. Training without a measurable target tends to produce enthusiasm for a week and no change after.

What is the right order to introduce AI tools to agents? Start with the workflow problem, not the tool. Identify the task agents spend the most time on that AI can plausibly improve, then pick the tool that fits that task. Reversing this order tends to produce demos that don't stick.

What does "voice work" mean in AI training for realtors? Voice work means training agents to give the AI enough context about their tone, market, and client base that the output sounds like them. Without it, AI drafts sound generic and agents stop using the tool because the output needs too much editing to be useful.

Should brokerages buy AI tools for their whole team at once? Not necessarily. Starting with a small group of willing agents and one high-value workflow tends to produce better adoption data than a brokerage-wide rollout. Broad rollouts before the workflow is proven tend to produce broad disengagement.

How much follow-through is needed after an AI training session? More than most brokerages budget for. A single session produces short-term interest. Sustained adoption tends to require check-ins at one week, one month, and three months, plus a structure for agents to ask questions when they get stuck in real work, not just in a training room.

What is pairing in the context of AI training? Pairing means sitting with an agent and using the AI tool together on their actual work, not demonstrating it on a fictional example. Watching someone else use a tool in a training session is a different skill than using it yourself on a live listing email at 9pm.


Emma Pace — strategic marketing consultant, AI coach for realtors, keynote speaker. Realtor at Monstera Real Estate. Builds AI-operated marketing systems at emmapace.ca.

Want AI-operated marketing in your business?

I install the systems I write about here — for SMBs, realtors, and teams that need ROI in 90 days. Book a 20-minute discovery call.
