by Courtney Poulos, ACME CEO and Founder
AI put it in writing—people move to California because of the lack of state income tax.
I stared at the beautifully formatted presentation from Gamma, an AI tool that claims it can create real estate marketing materials from simple prompts. The layout looked professional. The fonts were clean. Everything appeared polished.
Except California has one of the highest state income tax rates in the country.
This wasn’t a minor typo. This was the kind of fundamental error that could humiliate an agent or, worse, create legal liability. And it perfectly captures the imminent crisis facing our industry.
After 20 years in real estate, I’ve developed instincts about how transactions flow. I know which questions matter and when to ask them.
But lately, something strange happens when clients use AI to craft their responses to me. First-time buyers start asking for title reports before we’ve even written an offer, even though title reports typically come after contract acceptance, during due diligence. They ask in-depth questions that aren’t relevant to their particular transaction. Radon, for example: in Los Angeles there isn’t a lot of subterranean living, while in DC a radon test is as common as a termite inspection.
It’s like the client doesn’t trust an expert’s guidance and is letting the computer try to take the wheel.
This erosion of trust puts experienced agents on the defensive. I’ve spent two decades learning how to navigate the purchase and sale process in a way that makes buyers feel reassured and protected.
But what happens to newer agents who don’t have that foundation when clients come armed with AI-generated advice?
Here’s the reality nobody wants to face: you can’t sue AI after it causes injury.
I haven’t seen lawsuits yet, but they’re coming. Agents are using AI to review legal contract language and passing those responses along to clients. When that advice proves wrong, who holds the liability with the state? And who is accountable for the training of these AI agents in the first place?
The answer will likely fall in the professional’s lap. Stanford research reveals that AI models produce hallucinations between 69 and 88 percent of the time when queried about legal matters.
Think about that. We’re building new professional practices on technology that’s wrong more often than it’s right about legal questions.
I wouldn’t trust a robot to raise my child, perform surgery, or provide psychoanalysis. Why would I trust a robot with someone’s multimillion-dollar investment?
Real estate is a soft science. Data is data, but AI is only as good as the information fed into it.
Take Sidekick, which somehow appeared on our MLS platform. Its comparative market analyses (CMAs) can’t see off-market sales or assess property condition from photos and descriptions. An agent might know about a recent off-market sale that changes everything about a property’s value. AI doesn’t.
This problem is getting worse, not better. Some companies are doubling down on private listings, creating exclusive data pools that leave both AI systems and many agents operating with incomplete information. And pulling only from assessor records gives agents no guidance about a property’s condition. If off-market listings persist as a thorn in the industry’s side, expect comp data to become even harder to get.
We’re heading toward a fragmented future where only agents with access to exclusive data can compete, while everyone else works with flawed algorithms.
Zillow is attempting to become the new gatekeeper. Could the MLS become optional? Or we could go back to selling real estate brokerage by brokerage, a return to the Stone Age. One way or the other, we are handicapped by a lack of correct information, and AI cannot fix that. Don’t trust an AI CMA, or any app that claims it can make one, until it can review off-market sales and deduce their condition, monitor for differences between advertised square footage and the assessor’s square footage, and understand the differences between neighborhoods rather than just drawing a radius. HouseCanary is the closest I’ve seen, and even that you have to check with a fine-tooth comb.
The most dangerous part isn’t the technology itself. It’s how desperately agents are adopting AI for marketing, content creation, and CMAs without proper vetting.
Our industry leadership is failing to test these products before encouraging widespread adoption. Similar to how we sold ourselves out to portals years ago, we’re racing to embrace technology without considering the incorrect information and homogenization that current AI applications provide.
I see agents losing their voice, the one they’ve fought for years to develop. Even with the most specific prompts, AI-generated content feels robotic and impersonal.
Here’s where emotional intelligence wins every time. Picture the buying cycle: you get preapproved for more than you think you should spend, start hunting, nothing looks good, then you find the perfect property at a great price.
You want to write an offer. The AI app gives you a valuation that lacks accurate comparables. Your agent knows the property will sell for more based on market nuances AI can’t detect.
If you trust the AI, you lose your dream home.
This isn’t theoretical. Studies show AI systems routinely charge Black and Latinx borrowers higher rates for identical loans, and content creation apps frequently use terms like “family-friendly” in their draft MLS descriptions (a fair housing violation, even though I don’t think it should be). Now we’re not just risking individual transactions. We’re risking fair housing violations on a massive scale.
Before any AI technology goes live on MLS platforms or gets promoted by professional associations, it needs to be tested the way vaccines are tested.
Scientific. Rigorous. With focus groups examining real-world scenarios.
We need to test for accuracy, bias, legal compliance, and impact on professional relationships. We need to understand what happens when these tools fail, not just when they work.
The testing should happen at the national level, through MLSs, or by professionals who understand both technology and real estate practice.
The foundation of consumer experience and our professional esteem is at risk.
We’re at a crossroads. We can continue rushing toward AI adoption without proper safeguards, or we can demand the rigorous testing that protects both agents and consumers.
The technology isn’t inherently evil. In fact, it’s very useful. But deploying it without understanding its limitations and biases puts everyone at risk.
Some aspects of real estate transactions will always require human judgment. Pricing strategy, emotional intelligence, local market knowledge, and ethical decision-making can’t be reduced to algorithms.
The question isn’t whether AI will change real estate. It’s whether we’ll implement it responsibly or let it quietly break the trust and expertise our industry depends on.
I choose rigorous testing over reckless adoption. I choose human expertise enhanced by technology, not replaced by it.
Watch Courtney vs the AI at youtube.com/@realestateAIcoach.