
Artificial intelligence is advancing at a breakneck pace, likely faster than many in the real estate industry can keep up with.
Agents are constantly told that they must adapt to the new AI era or be left behind. Proptech companies are rapidly releasing AI-powered tools that promise to streamline workflows. At the same time, broader questions about AI safety are being raised as dissatisfaction grows in some quarters.
Amid these rapid changes, new dangers are emerging: AI-powered cybersecurity threats.
The topic recently came into the spotlight when Anthropic announced a new AI model called “Mythos,” which is currently available only to a limited number of users. Anthropic reportedly delayed the model’s release because of the strength of its capabilities and launched an initiative called Project Glasswing.
According to Anthropic, Mythos has already discovered software vulnerabilities across “all major operating systems and all major web browsers.” And according to a growing number of cybersecurity experts, such tools could fundamentally change the threat landscape.
Historically, many serious cybersecurity vulnerabilities persisted not because they were impossible to discover, but because discovery required a rare combination of expertise, time, and patience.
AI tools like Mythos could change that equation. Just as AI makes real estate agents’ jobs easier, this technology also lowers the barrier to entry for cybercriminals and has the potential to significantly enhance their capabilities. In that scenario, vulnerability discovery would no longer be the bottleneck, and the balance between defenders and attackers would be much more difficult to predict.
AI is amplifying familiar threats
In the real estate industry, Anthropic’s Mythos is just one facet of the growing cybersecurity threat posed by AI. Artificial intelligence has already proven highly useful to real estate fraudsters.
According to the FBI’s Internet Crime Complaint Center, cybercriminals stole more than $275 million from at least 12,368 victims through real estate fraud last year, a significant increase from the combined total for the two previous years.
The agency defines real estate fraud broadly, including bogus investment deals as well as rental and timeshare scams. The report notes that victims span all age groups, with similar incident levels among people in their 20s through 50s. FBI officials say AI is a key accelerator, making scams bigger, more convincing, and harder to detect before they cause harm.
Cybersecurity experts warn that scammers are increasingly leveraging AI tools like ChatGPT to generate sophisticated and convincing phishing emails that erase many of the traditional red flags used to spot scams.
Technically, OpenAI prohibits using its models to generate malware, promote fraud or deception, or engage in illegal activity, and its systems are designed to refuse direct requests such as writing phishing emails or building fraudulent websites.
Even so, these models can still lower the barrier to malicious activity by streamlining research, polishing language, and expanding the kinds of content that underpin phishing campaigns.
Deepfakes and low-cost generative AI tools that can create realistic voice clones are also pushing phishing into more sophisticated and difficult-to-detect territory.
Traditionally, business email compromise (BEC) attacks have relied on gaining access to legitimate email accounts, often through phishing, or on spoofing domains to trick employees into sending money or sharing sensitive information. Because these scams were primarily text-based, they could be flagged by spam filters and scrutinized for telltale signs such as suspicious domains or email headers. Although BEC remains widespread, improvements in filtering and awareness have made these tactics harder to pull off.
Voice cloning is changing that dynamic. A cloned voice that conveys urgency and familiarity taps into instincts that email can’t replicate. A recipient might pause to check where an email came from, but is far less likely to hesitate when the boss calls, sounds stressed, and immediately asks for help.
This evolution has accelerated the rise of “vishing,” voice phishing that uses AI-generated voices. These attacks can evade traditional email defenses and even some voice authentication systems. By creating high-pressure, real-time scenarios, attackers increase the likelihood that victims will act quickly and without verification.
Vulnerable systems meet smarter tools
The tools that facilitate real estate fraud are becoming increasingly sophisticated. But cybersecurity experts say the bigger risk lies in the weak defenses many agents and brokerages still have in place.
“The question is not whether Anthropic’s new model introduces new vulnerabilities to the real estate industry,” Luke Irwin, CEO and principal consultant at Aegis Cybersecurity, told Inman. “The more accurate concern is that they’ll find something that’s already there.”
Irwin said the vulnerabilities already exist in the platforms used by real estate agents and brokers. “What Mythos represents is a way to identify those weaknesses more quickly across large codebases,” he said. “That increases the risk for organizations that do not patch and properly maintain their systems, or that rely on providers who do not do the same.”
He said tools like Claude and ChatGPT already provide strong support for phishing, spoofing, and social engineering. Variants circulating in the criminal underground, such as FraudGPT, already show how AI can increase the scale and quality of malicious communications.
“When this is combined with poor email security, weak management, and inconsistent staff awareness, the potential for wire fraud, unauthorized access to CRM platforms, and leakage of sensitive customer and commercial data increases,” Irwin said.
Irwin said the fundamentals of cybersecurity matter more than ever for agents and brokerages looking to use AI securely. “First, you need clear policies that define which AI tools can be used and what data can and cannot be entered into them,” Irwin said. “Second, there needs to be a risk assessment process covering safety, effectiveness, bias, and business suitability.”
Finally, he said staff and agents need training on how to use these tools properly and where the boundaries lie. If an organization bans AI outright (which seems highly unlikely these days), staff will often use it anyway, creating what is commonly referred to as “shadow AI.”
“In many cases, shadow AI simply reflects an organization’s failure to modernize in line with employee expectations, creating risk in any case,” Irwin said.
Expanding risk – often without realizing it
The use of AI is widespread in the real estate industry. In a recent RPR survey of 225 real estate professionals, 82% reported actively using AI in their business. But while real estate agents may use AI, they don’t always consider its cybersecurity implications.
Amy Simpson, director of product marketing at Huntress, said general knowledge about AI safety is quite limited among companies and brokerages that don’t have large cybersecurity departments.
“It’s not uncommon for employees to upload files directly to models like Claude or ChatGPT to get help completing tasks,” Simpson told Inman. “What they don’t realize is that by uploading this content, they are essentially allowing the model to read, access, and potentially store that data.”
Simpson said this is a problem because that data can begin to surface in other users’ searches, expanding a business’s attack surface in ways that are completely invisible.
“Typically, companies can map their attack surface and take steps to protect it,” Simpson said. “The same is not true for AI-based threats, which are notoriously harder to visualize and build controls against.”
In other words, the use of AI could “significantly expand” a company’s attack surface without giving it much opportunity to build effective defenses. Simpson said this is a complex situation that few companies or real estate agents are paying enough attention to.
Traditional security tools are increasingly outclassed by the rise of AI-powered cyber threats. Last year, the World Economic Forum reported that 87 percent of cybersecurity leaders identify AI-related vulnerabilities as the fastest growing risk, yet 90 percent of organizations admit they are not yet ready to defend against AI-powered attacks.
Hidden risks in AI-generated answers
Simpson also pointed to cases in which malicious actors have created phishing links and seeded them into organic search results in hopes of appearing in chatbot responses.
“When an AI tool scrapes these websites, it includes the links as ‘proof’ or references that what it is saying is correct,” Simpson said. “Without realizing it, the tool presents phishing links directly to users in the chat window.”
She said the ability to manipulate these results with AI agents is very concerning, especially in real estate, where customers research neighborhoods, companies, and brokers.
“AI companies need to take stronger measures to improve traceability and verify the information their systems collect so they can protect their customers,” Simpson said.
So, given all these threats, how can brokerages and agents protect themselves? Simpson said effective AI adoption must be paired with strong data protection and safety practices.
“Before using AI tools and systems, you first need a detailed framework for what data employees can share with these systems and what is off-limits,” she said. “It may seem overly strict, but AI systems pose enormous data risks when misused.”
Email Nick Pipitone
