The case for AI in CRE is well established. Faster underwriting, wider market coverage, more consistent analysis across a portfolio: the efficiency gains are real and well documented. Adoption has followed. Roughly 82% of U.S. adults are now using AI tools for real estate insights, a number that shows how quickly the technology has gone from new to normal.
What gets less attention is what’s happening when teams actually try to make decisions with it.
The gains in speed and efficiency are real. But there’s a growing gap between how fast AI produces outputs and how much confidence people actually have in those outputs. In low-stakes environments, that gap is manageable. In CRE, where a pricing call, a lease decision, or an acquisition recommendation can move millions of dollars, it’s a real problem.
The Confidence Problem
A February 2026 survey by Keyway and The Appraisal found that 44% of investment committees don’t trust AI-generated analysis, and only 27% have any level of trust in AI for financial underwriting. The top concern, named by 41% of those surveyed: unreliable outputs and hallucinations.
The core issue isn’t just accuracy; it’s explainability. In a professional setting, an output has to be defensible, not just correct. When a pricing recommendation or market assessment reaches an investment committee, the question isn’t only whether the number is right. It’s whether the reasoning behind it can be clearly articulated, examined, and stood behind. AI outputs that can’t meet that bar don’t get acted on, regardless of how good the underlying model is.
Why Bad Outputs Are Hard to Catch
The issue isn’t that AI can’t produce useful outputs; it often does. The problem is that it’s hard to tell in the moment when it’s getting something wrong. Research from MIT found that AI models are 34% more likely to use confident language (words like “definitely” and “certainly”) precisely when they’re producing incorrect information. In a leasing or acquisition context, an output that sounds confident but turns out to be wrong isn’t just an analytical mistake. It’s a credibility problem for whoever acted on it.
This is what makes the trust problem so difficult. The errors don’t announce themselves. A hallucinated comp, a market trend that doesn’t hold up, a recommendation built on a flawed assumption: these can look just as polished and confident as a correct output. Without a process for checking the work, it’s easy to miss them until it’s too late.
The Right Model for High-Stakes Decisions
The firms handling this well aren’t treating AI outputs as final answers. They’re treating them as starting points, something experienced professionals review, question, and pressure-test before acting on. That’s not a flaw in the technology. It’s the right way to use it in a high-stakes environment where the cost of being wrong is real.
The value of AI in leasing isn’t just speed; it’s whether the people using it trust it enough to act on it with confidence. Building that trust takes transparency in how outputs are made, consistency in how they’re reviewed, and a clear sense of where human judgment still needs to lead. The teams that get that balance right will move faster and with more confidence than those still working it out.