
What a brokerage AI policy should actually say in 2026
A brokerage AI policy in 2026 needs to cover four things: which tools are approved at which account tier, what client data can and cannot enter a prompt, when and how output gets reviewed before it reaches a client, and what disclosure standard the brokerage holds. Most policies I've seen are either too broad to be useful or so granular they're obsolete before the ink dries.
Why most brokerage AI policies fail before anyone reads them
The pattern I see most often is one of two extremes. Either a broker of record hands down a one-liner ("don't use AI for anything client-facing") that gets ignored immediately, or legal sends over a 12-page document full of definitions no agent will parse.
Neither works. Agents are using AI tools whether the policy exists or not. The question is whether they're using them well or carelessly.
A policy that bans AI use entirely tends to push usage underground, which is worse. Agents will still prompt ChatGPT for listing descriptions; they just won't tell you. A policy that's too long to read produces the same result: agents skim it, miss the key rules, and operate on instinct.
The goal of a brokerage AI policy is to set three or four clear guardrails that agents actually follow, not to create a compliance document that protects the brokerage on paper while changing nothing in practice.
The approved-tools list (and why account tier matters)
This is the section most policies skip entirely, and it's the most operationally important one.
The same AI tool can behave very differently depending on which plan you're on. Most major AI platforms, including OpenAI and Anthropic, offer enterprise or API tiers that include data opt-outs and do not use your inputs for model training. Their consumer and free tiers may not offer the same protections by default.
A brokerage AI policy should specify:
- Which tools are approved for agent use.
- Which account tier is required for each tool (consumer, pro, enterprise, API).
- Which tools are approved for internal use only versus client-facing output.
An approved-tools appendix is better than baking specific product names into the core policy. The appendix gets updated every six months. The core policy stays stable.
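For teams that want the appendix to stay maintainable, here is a minimal sketch of the fields each appendix entry could capture. The tool names, tiers, and dates below are placeholders for illustration, not recommendations:

```python
# Illustrative structure for an approved-tools appendix entry.
# Tool names, tiers, and dates are placeholders, not endorsements.
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedTool:
    name: str                  # e.g., the product name as agents know it
    required_tier: str         # "consumer" | "pro" | "enterprise" | "api"
    client_facing_ok: bool     # approved for client-facing output, or internal only
    data_terms_reviewed: date  # last time someone read the vendor's data terms

APPROVED_TOOLS = [
    ApprovedTool("ExampleAssistant", "enterprise", client_facing_ok=True,
                 data_terms_reviewed=date(2026, 1, 15)),
    ApprovedTool("ExampleDraftTool", "api", client_facing_ok=False,
                 data_terms_reviewed=date(2026, 1, 15)),
]
```

Even if the appendix lives in a shared document rather than code, capturing those four fields per tool keeps the six-month review concrete: you can see at a glance which entries are stale.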
One practical note: if your brokerage has a CRM with built-in AI features, such as Follow Up Boss or Sierra Interactive, review that platform's data terms separately. The policy should reference how each integrated tool handles data, not just standalone AI assistants.
Client-data rules: what can and cannot go into a prompt
This is where regulatory exposure actually lives. Canada's PIPEDA requires that personal information be collected only for the purpose disclosed to the person and be protected against unauthorized access. Feeding a client's full name, address, financial situation, or mortgage details into a consumer-tier AI assistant almost certainly does not meet that standard.
A brokerage AI policy should draw a clear line:
Permitted: anonymized scenarios, general drafting tasks, market commentary, document templates, listing description drafts using the property address (not client names or details).
Not permitted without a data agreement in place: client names, contact information, financial details, offer terms, mortgage specifics, legal names on title, anything from a status certificate tied to a specific buyer.
The test is simple: if you can remove the client's name from the prompt and it still works, you didn't need the name in there. Start there as a default rule.
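To make that default concrete, here is a minimal sketch of what a pre-prompt check could look like if a team routes prompts through a small internal helper. The function name and patterns are illustrative assumptions, not a complete PII filter; a real deployment would need a proper PII-detection library:

```python
import re

# Illustrative patterns only. A real filter needs a dedicated
# PII-detection tool, not three regexes.
BLOCKED_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "sin":   re.compile(r"\b\d{3}[\s-]?\d{3}[\s-]?\d{3}\b"),  # Canadian SIN format
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

flags = check_prompt("Draft a follow-up email to jane@example.com about her offer")
if flags:
    print(f"Prompt blocked: contains {', '.join(flags)}. Anonymize and retry.")
```

The point is not these specific patterns. It's that the "remove the name and see if it still works" test can be enforced mechanically before a prompt ever leaves the brokerage, rather than relying on every agent remembering the rule.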
This section of your policy should also address what happens if an agent has already entered client data into a non-approved tool. Define a remediation step, even if it's just logging the incident and notifying the client. The habit of reporting matters as much as the rule itself.
The output-review step: human eyes before it goes to the client
AI output errors in real estate can range from mildly embarrassing to materially significant. A listing description that overstates a feature, a market analysis that cites an outdated stat, a clause explanation that's technically wrong — these are not hypotheticals.
The pattern I see in brokerages that use AI well is a consistent output-review step built into the workflow, not treated as optional. The policy should state this clearly: no AI-generated content goes to a client without review by the responsible registrant.
This applies to:
- Listing descriptions and marketing copy.
- Client emails drafted by AI.
- Market summaries, CMA narratives, or area commentary.
- Any document the client will sign or rely on for a decision.
It does not need to apply to internal drafts, personal notes, or brainstorming outputs the agent never sends.
The output-review step is also the right place to address factual accuracy. Agents should be checking AI-generated statistics against primary sources, not forwarding them to clients based on the assumption that the model got it right. AI tools, including the best current models, will occasionally produce confident-sounding incorrect information. That's not a flaw to design around; it's a workflow step to design in.
Disclosure standards: what to tell clients and when
There is no universal Canadian legal requirement to label marketing copy as AI-generated as of 2026. The policy question for a brokerage is not just "are we legally required to disclose?" but "when does disclosure serve our clients and our agents?"
A reasonable internal standard looks like this:
- Marketing copy and listing descriptions reviewed and approved by the agent: no disclosure required, same standard as agency-drafted copy.
- AI-generated market analysis or opinion content presented to a client as advice: disclose that AI tools assisted in drafting, and ensure the agent has reviewed and can stand behind the content.
- AI-generated communications where the client has a reasonable expectation of direct communication from the agent (e.g., a personal note on a price-reduction conversation): review closely; in many cases, the agent should write these from scratch or heavily edit the draft.
The brokerage's disclosure standard does not need to be lengthy. One clear sentence in the policy, applied consistently, is more useful than a tiered matrix no one remembers.
What to leave out
A few things that often appear in broker-drafted AI policies and tend to create more confusion than clarity:
Banning specific tools by name in the core policy. Tool names change faster than policies do. If Claude becomes Sonnet 5.1 and the policy says "Claude Pro," you've created a gap. Keep the core policy about principles; put specific tools in the appendix.
Prohibiting all AI use for client communication. If your agents are using AI to draft a first-pass email that a human then reads and rewrites, that's fine. Blanket bans tend to produce selective compliance.
Long definitions sections. Agents don't need three paragraphs on "what is a large language model." They need to know what they can do, what they can't do, and what to do if they're not sure.
Provisions that require sign-off on every AI-assisted output. This creates a compliance bottleneck and incentivizes agents to just not tell you they're using AI. A review requirement placed on the agent is more durable than a sign-off requirement placed on the broker.
What I'd actually write
If I were drafting a brokerage AI policy for a team right now, it would be one page, maximum. The core policy covers the principles: approved-tier requirement, data minimization, human review before client delivery, and the current disclosure standard. A separate appendix covers the approved-tools list, updated twice a year.
The policy should also name someone responsible for keeping the appendix current. That person should be doing a brief review every time a major tool updates its data terms, and a fuller review every six months. This is not a large time investment. It's a 30-minute task, done consistently.
The brokerages I've seen handle AI well tend to treat the policy as a live document rather than a one-time compliance exercise. The ones that struggle tend to write the policy once, file it away, and never look at it again while the tool landscape moves on without them.
Consult your brokerage's legal counsel before finalizing any AI policy. The regulatory environment around AI and data privacy is active and province-specific obligations may apply beyond the general framework outlined here.
FAQ
What should a brokerage AI policy include? At minimum: an approved-tools list specifying required account tier, rules on what client data can and cannot enter an AI prompt, a disclosure standard for client-facing AI-generated content, a required output-review step before anything goes to a client, and a process for updating the policy as tools change. Keep the core policy to principles; put specific tool names in a separate appendix.
Do Ontario brokerages have a legal obligation to have an AI policy? As of 2026, there is no Ontario regulation that explicitly mandates a written AI policy for real estate brokerages. However, RECO's general obligations around competence, honesty, and client confidentiality apply to AI use. Brokers of record are responsible for the conduct of their registrants, which includes how registrants use AI tools with client data. Consult your brokerage's legal counsel for current obligations.
Can realtors put client information into ChatGPT or Claude? It depends on the tool, the tier, and the type of information. Consumer-tier accounts may use inputs to train future models by default. Enterprise or API tiers typically offer data-use opt-outs. A brokerage AI policy should specify which tier is permissible and prohibit inputting PIPEDA-covered personal information — names, addresses, financial details, mortgage information — into any tool without a proper data-handling agreement in place.
Does AI-generated listing copy or marketing need to be disclosed to clients? There is no universal Canadian legal requirement to label marketing copy as AI-generated as of 2026. A reasonable internal standard: disclose when AI output is presented directly to a client as advice or analysis, and treat AI-drafted copy reviewed and approved by the agent the same as agency-drafted copy.
How often should a brokerage AI policy be updated? Write the core policy around principles, not specific product names, and review it annually or after any regulatory change. Keep a separate living appendix for the approved-tools list and review that every six months. The tool landscape moves faster than most policy review cycles.
What are the biggest mistakes brokerages make when drafting an AI policy? Three common ones: writing a policy so long no one reads it; banning AI entirely rather than creating guardrails, which pushes usage underground; and failing to specify which account tier is required, so agents use free consumer accounts with weaker data protections without realizing it.
Emma Pace — strategic marketing consultant, AI coach for realtors, keynote speaker. Realtor at Monstera Real Estate. Builds AI-operated marketing systems at emmapace.ca.
Want AI-operated marketing in your business?
I install the systems I write about here — for SMBs, realtors, and teams that need ROI in 90 days. Book a 20-minute discovery call.
Work with Emma →