What brokerages are actually asking about AI in 2026

Emma Pace · 2026-04-04 · Market Commentary

The questions brokerages are asking about AI in 2026 are not the questions from 2023. They've moved past "should we do this?" Most brokerages are somewhere in the middle of doing it, whether they sanctioned it or not. The questions now are operational, legal, and strategic: who's liable, who controls the output, where the data goes, and whether any of this is actually building a competitive edge or just producing busier agents.

Here's a synthesis of what I hear consistently across brokerage conversations.

"Who's liable when the AI says something wrong?"

This comes up more than anything else. And it's the right question to ask first.

The short answer is: the brokerage is still liable. AI-generated content doesn't reassign responsibility. In Ontario, the managing broker is accountable for the communications that go out under the brokerage's name or from its agents. That doesn't change because a model generated the first draft.

What this means practically is that any AI output touching a client, a prospect, or a legally sensitive document needs a human review step before it leaves the building. That's not an AI problem. It's the same standard that applies to a junior agent's first offer letter.

Where brokerages get into trouble is assuming that automated means unsupervised. Speed is valuable. Speed without a review layer on material communications is where liability exposure lives. The brokerages handling this well have defined which AI outputs need review and which don't: social media drafts, internal summaries, and CRM notes are low-stakes; client-facing pricing conversations, offer communication, and anything that could be read as advice are not.

"How do we control what our agents are actually using?"

The honest answer is: you probably can't, not fully. And attempts to ban AI tools outright tend to produce the same outcome every compliance-by-prohibition policy produces: agents use the tools anyway, just without oversight.

The brokerages moving most thoughtfully on this are building preferred-stack models. They're saying: here are three tools we've vetted, here are the account types that protect your data, here are the use cases we've approved. That's not a ban. It's a standard. Agents who want to use something outside the preferred stack can, but they're operating outside the guidance and they know it.

The enforcement challenge is real. But a documented policy with a preferred stack does two things even if it's imperfect. It protects the brokerage from liability ("we had a published standard") and it gives compliant agents a clear path that doesn't require them to figure out data governance on their own.

"Where does our client data actually go?"

This question has gotten sharper as AI tool use has normalized. In 2023, brokerages were asking whether their agents were using AI. In 2026, the question is where prospect and client data is traveling when it passes through these tools.

The answer varies dramatically by tool and account type. Consumer-tier accounts on platforms like ChatGPT and Claude may, depending on settings and terms, use inputs to improve models unless the user actively opts out or uses a business or enterprise plan. Enterprise plans on most major platforms offer contractual data protections, no training on inputs, and clearer retention policies.

Brokerages should do a simple audit: which AI tools agents are actively using (ask, don't assume) and which account types those agents are on. Free personal accounts and enterprise accounts are different products with different data handling. OpenAI's enterprise privacy page and Anthropic's privacy policy are starting points, but the terms evolve, and a brokerage's legal counsel should be in the loop on anything material.

No AI tool is going to be zero-risk on data handling. The goal is understanding the actual risk level, not finding a tool with no risk.

"Are we actually differentiating, or just adding noise?"

This is the question I find most interesting to sit with, because the brokerages asking it tend to already sense the answer.

The brokerages that are building actual competitive separation are not the ones with the most AI tools. They're the ones that have standardized a small number of high-volume, high-leverage workflows. Lead follow-up speed. Listing content quality and turnaround. CRM hygiene. Agent onboarding. These are not glamorous use cases. They're also the ones where consistent execution at scale compounds into a real advantage.

The brokerages adding AI tools without workflow discipline tend to end up with agents who have ten subscriptions and no repeatable process. That produces cost, not edge.

In my experience, the differentiator question is better asked as: what are the three workflows that happen most often in this brokerage, and what does it look like if those three workflows run consistently and well? Start there. The tools are secondary.

"How fast do we need to move?"

Faster than feels comfortable to the cautious voices in the room. Slower than the vendor's sales rep is suggesting.

The honest context is that the brokerages that built even a few reliable AI workflows in 2024 and 2025 have a genuine operational advantage now. The gap between early adopters and late majority is real and it's widening, not narrowing. The underlying models are improving quickly. The workflow infrastructure, once built, compounds.

That said, the "move fast" pressure from vendors and consultants has also produced a lot of expensive bad decisions. Brokerages that deployed AI tools across their entire agent base without a training plan, a use policy, or a clear problem to solve ended up with confused agents and a bill for tools nobody was using well.

The practical middle path: identify one high-volume problem the brokerage has right now. Build or adopt one AI-assisted workflow that addresses it. Run that workflow for 60 days. Measure whether the problem got better. Then expand. This is slower than buying five platforms at once and faster than waiting for perfect policy.

What I'd actually do in a brokerage right now

If I were advising a managing broker today, I'd prioritize in this order.

First, an honest audit of what AI tools agents are already using. Not what the brokerage has licensed. What agents are actually running. That tells you the real risk surface and the real appetite.

Second, a clear data policy. Acceptable tools, acceptable account types, and the data categories that should never go into an AI prompt (SINs, full legal names on sensitive communications, financial details). Simple and enforceable beats thorough and ignored.

Third, one standardized workflow the whole brokerage agrees to run the same way. Lead follow-up is the obvious candidate because speed matters, volume is high, and the variability between agents is usually enormous. Getting that one workflow consistent does more for brokerage performance than a library of AI tools nobody uses the same way.

The compliance questions, the liability questions, the differentiation questions: they're all real. But they tend to get cleaner once there's a working system to reason about. Abstract AI policy is harder to write than policy for a specific, observable workflow.

That's the pattern I keep seeing. Brokerages that grounded the AI conversation in a specific operational problem moved faster, made better decisions, and built something defensible. Brokerages that tried to solve AI in the abstract first are still in the policy-writing phase.


FAQ

What are brokerages most concerned about with AI adoption in 2026? Compliance exposure and liability top the list, followed closely by agent autonomy (who controls what the AI says to clients) and data ownership (where prospect and client data lives when it passes through third-party AI tools). Differentiation is also a growing concern. Brokerages that waited to adopt AI are now asking how to catch up without making expensive mistakes.

Who is liable when an AI-generated communication to a client is wrong? This is unsettled in most jurisdictions, including Ontario, but the brokerage's managing broker is generally responsible for communications sent under the brokerage's name or by its agents. AI-generated content doesn't shift that responsibility. Brokerages should treat AI drafts the same way they treat agent-drafted communications: review them before they go out, especially on anything material or legally sensitive.

Can brokerages control what AI tools their agents use? They can set policy, but enforcement is difficult. Most brokerages are moving toward a preferred-stack model, recommending specific tools with approved configurations, rather than a blanket ban. Blanket bans tend to produce shadow usage with no oversight, which is the worst outcome from a compliance standpoint.

Where does brokerage prospect data go when agents use AI tools? It depends on the tool and the account type. Consumer-tier accounts may use inputs to improve models by default unless the user opts out or uses an enterprise plan. Brokerages should audit which AI tools agents are actively using and check each tool's data retention and training policies. Enterprise plans typically offer stronger protections.

Is AI actually differentiating brokerages yet, or is it still theoretical? It's differentiating in pockets. The brokerages seeing real separation are the ones that have standardized a small number of high-volume workflows, not the ones that have added the most AI tools. More tools without workflow discipline tends to produce more noise, not more edge.

How fast should a brokerage move on AI adoption? Faster than feels comfortable, slower than the vendor's sales rep is suggesting. Waiting for a perfect policy before allowing any AI use tends to push adoption underground, which creates more risk, not less. Starting with one high-volume problem and one workflow is the pattern that tends to produce durable results.


Emma Pace — strategic marketing consultant, AI coach for realtors, keynote speaker. Realtor at Monstera Real Estate. Builds AI-operated marketing systems at emmapace.ca.

Want AI-operated marketing in your business?

I install the systems I write about here — for SMBs, realtors, and teams that need ROI in 90 days. Book a 20-minute discovery call.

Work with Emma →