Devolutions Blog

Security

Goodbye “obvious scam”, hello “seems legit”: The rise of AI phishing

AI phishing is making scams harder to detect. Learn simple user checks and key security controls to reduce risk, protect workflows, and limit the impact of compromised credentials.

Steven Lafortune

Most people recognize conventional phishing because they have lived with it for years. Their personal inboxes and phones are full of fake delivery texts, “your bank account is locked” emails, and, of course, the “urgent plea from a diplomat” asking for help moving an absurd amount of money out of their country. These blatant, often laughable scam attempts are no longer shocking, surprising, or newsworthy. They are background noise.

What has changed, however, is the speed and quality of the deception. Social engineering that once looked sloppy and easy to dismiss is now being mass-produced with generative AI. The result is cleaner writing, fewer obvious red flags, and messages that sound more like real coworkers and vendors. Microsoft’s Digital Defense Report 2025 reports that AI-driven phishing can be significantly more effective than traditional campaigns, and Verizon’s 2025 Data Breach Investigations Report notes that AI-generated text in malicious emails has doubled over the past two years.

For IT pros, this shift is expected. Many have anticipated it, understand the patterns, and know how to protect themselves. The bigger issue is that many non-technical employees and clients are still anchored to an older mental model of phishing: suspicious-looking emails and texts that are easy to spot. Today’s AI-enabled lures do not always look dubious, and they increasingly target the areas where the damage is biggest: corporate identity systems, internal communications, and financial workflows.

AI phishing: Why it’s now a major workplace problem, not just a personal one

Conventional phishing aimed at personal accounts usually targets credential theft, payment card fraud, or account takeover. Now, AI phishing in the workplace is adding two key factors:

AI phishing examples

Here are a few examples IT pros can share with non-technical colleagues and clients to help them grasp what modern AI phishing in the workplace looks like. Note that this is not an exhaustive list, but it highlights some of the most common snares:

The key message here is that “this message looks obviously fake” is no longer a dependable safeguard. Attackers do not need flawless writing or deep knowledge of their victims’ workflows. AI fills those gaps.

Essentially, AI phishing succeeds when speed beats verification. The most effective defenses combine two things: simple user habits that interrupt the “act now” impulse, and organizational controls that limit damage if someone slips. Let us start with the first part, since users have always been, and will remain, the weakest link in the IT security chain.

Educating users: 4 essential checks

As mentioned, AI phishing has made training that centers on “spot the obvious scam” far less effective, because many malicious messages now look polished, plausible, and familiar. Instead, it is more practical and effective to teach users a set of repeatable checks they can apply in seconds. Four in particular are essential: the process check, the identity check, the link/attachment check, and the information/verification check.

1. Process check

Before evaluating who a message is from, users should first pay attention to what the message is asking them to do. AI phishing often succeeds by exploiting business processes, not by looking obviously suspicious. The process check is a quick way to spot requests that do not fit normal workflows, especially when the message is urgent or tries to bypass standard controls.

Users should ask: “Does this request match a normal workflow?” Specifically, they should watch for:

2. Identity check

Before responding to a suspicious message, users should verify the sender through a channel that is known to be safe. This could mean calling the sender using a saved directory number, sending a new email to a trusted address (not replying to the original one!), or confirming in person when this is practical and efficient.

These checks shift decision-making from “does this look fake?” to “does this make sense?” and “am I confident about the identity?”

3. Link and attachment check

Even when a message looks legitimate, users should treat links, attachments, and QR codes as high-risk, especially when they are unexpected, urgent, or tied to logins. Advice in this area includes:
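To make the link check concrete for technical colleagues, the mental habit of “does this domain only *look like* the real one?” can be sketched as code. This is a hypothetical illustration, not a production filter: the trusted-domain list and homoglyph table are made up for the example, and real tooling would use proper public-suffix parsing.

```python
# Hypothetical sketch of a lookalike-domain check, NOT a production filter.
# TRUSTED_DOMAINS and the homoglyph table are illustrative assumptions.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example.com", "devolutions.net"}  # assumed allowlist

# A small sample of character substitutions attackers commonly use.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def is_lookalike(url: str) -> bool:
    """Flag domains that imitate a trusted domain via character swaps."""
    host = urlparse(url).hostname or ""
    # Keep only the registered domain (naive: last two labels).
    domain = ".".join(host.split(".")[-2:])
    if domain in TRUSTED_DOMAINS:
        return False  # genuinely the trusted domain
    # Suspicious if it matches a trusted domain after normalizing homoglyphs.
    return domain.translate(HOMOGLYPHS) in TRUSTED_DOMAINS

print(is_lookalike("https://portal.example.com/login"))    # False: real domain
print(is_lookalike("https://examp1e.com/reset-password"))  # True: "1" mimics "l"
```

The point is not that users should run scripts, but that the check is mechanical: hover, read the registered domain character by character, and distrust anything that is almost, but not exactly, the expected name.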

4. Information and verification check

AI phishing gets more convincing when attackers have more context. Users should assume that small overshares can be weaponized. The rules in this area include:

Taken together, these four checks do not require non-technical staff to become security experts. Rather, they create consistent habits that could thwart AI phishing attempts, and prevent identity theft and breaches.

Empowering organizations: 4 critical control areas

While good user habits are essential in this fight, they cannot carry the entire burden, especially when AI makes phishing more convincing and scalable. As such, the next layer is organizational: controls that reduce how often attacks succeed, limit what an attacker can do with stolen credentials or sessions, and build verification into the workflows that control access and move money. Here are some recommendations that are applicable for all organizations, including small and mid-sized businesses with lean IT teams and budgets:

1. Make phishing-resistant authentication the default, not the exception

Assume that passwords will be phished, reused, or stolen. The key question is therefore not “Can we stop every phish?” but “What happens when a password leaks?” Recommended controls include:

Strong authentication does not eliminate social engineering, but it turns many credential theft attempts into dead ends.
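Why phishing-resistant authentication turns stolen credentials into dead ends can be shown with a toy model. This sketch uses a symmetric HMAC as a stand-in for the asymmetric signatures real FIDO2/WebAuthn authenticators use; the key idea it demonstrates is origin binding: the signature covers the site the browser actually reports, so a signature captured on a lookalike domain fails at the real one.

```python
# Toy model of origin-bound authentication (NOT a real WebAuthn implementation).
# HMAC stands in for the authenticator's private-key signature.
import hashlib
import hmac

SECRET = b"device-private-key-stand-in"  # never leaves the user's authenticator

def sign_challenge(challenge: bytes, origin: str) -> bytes:
    # The authenticator signs the challenge *together with* the origin the
    # browser reports, so an attacker cannot strip or swap the origin.
    return hmac.new(SECRET, challenge + origin.encode(), hashlib.sha256).digest()

def verify(challenge: bytes, expected_origin: str, signature: bytes) -> bool:
    expected = hmac.new(SECRET, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"server-random-challenge"

# Legitimate login: the browser reports the real origin.
good = sign_challenge(challenge, "https://login.example.com")
assert verify(challenge, "https://login.example.com", good)

# Phishing relay: the victim signs on a lookalike origin. Replaying that
# signature to the real site fails, because the wrong origin is baked in.
phished = sign_challenge(challenge, "https://login.examp1e.com")
assert not verify(challenge, "https://login.example.com", phished)
```

Contrast this with a one-time code, which carries no origin at all: whatever site the victim types it into, the attacker can relay it to the real login page in real time.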

2. Reduce the payoff of a stolen session or token

AI phishing increasingly targets session tokens and OAuth consent, and not just passwords. Prioritize controls that limit persistence and lateral movement. This includes:

These controls often determine whether an incident stays contained or becomes an internal phishing platform and data exfiltration event.
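Two of the simplest persistence-limiting controls, short session lifetimes and fast revocation, can be sketched in a few lines. This is an assumed policy shape for illustration, not the API of any specific identity provider; the lifetime value is arbitrary.

```python
# Sketch of session-token containment (assumed policy, not a real product API).
import time

MAX_AGE_SECONDS = 900   # illustrative 15-minute lifetime: stolen tokens go stale
REVOKED = set()         # token IDs revoked after a phishing report

def token_is_valid(token_id, issued_at, now=None):
    """A token is usable only while fresh and not explicitly revoked."""
    if now is None:
        now = time.time()
    if token_id in REVOKED:
        return False          # incident response killed this session
    return (now - issued_at) <= MAX_AGE_SECONDS

issued = 1_000_000.0
assert token_is_valid("t1", issued, now=issued + 600)       # still fresh
assert not token_is_valid("t1", issued, now=issued + 1200)  # expired
REVOKED.add("t2")
assert not token_is_valid("t2", issued, now=issued + 60)    # revoked immediately
```

The design point: even if an attacker exfiltrates a session cookie, its value is bounded by the shorter of the token lifetime and the time it takes your team to revoke it.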

3. Protect the workflows attackers target most

Many high-loss incidents happen because a legitimate business process is exploited. Strengthen high-risk workflows with built-in verification through the following:

When the process itself demands verification, organizations stop relying on individual judgment under pressure.
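“Built-in verification” for a high-risk workflow often reduces to dual control: no single rushed employee can complete the change alone. The sketch below shows the shape of such a rule for a vendor bank-detail change; the class, field names, and sample data are hypothetical.

```python
# Hypothetical dual-control rule for a high-risk workflow (illustrative only).
from dataclasses import dataclass, field

@dataclass
class BankDetailChange:
    vendor: str
    new_account: str
    approvals: set = field(default_factory=set)

    def approve(self, employee: str) -> None:
        self.approvals.add(employee)  # a set: the same person can't count twice

    def can_apply(self) -> bool:
        # Dual control: two *distinct* approvers before the change takes effect.
        return len(self.approvals) >= 2

change = BankDetailChange("Acme Corp", "account-ending-0130")
change.approve("alice")
assert not change.can_apply()   # one approval is never enough
change.approve("alice")         # re-approving by the same person is a no-op
assert not change.can_apply()
change.approve("bob")
assert change.can_apply()       # two distinct approvers: proceed
```

Because the second approver is structural, a perfectly convincing AI-written “urgent request from the CFO” still stalls at a checkpoint the attacker cannot talk their way past.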

4. Make reporting fast, safe, and rewarding

The faster a suspected phishing attempt is reported, the smaller the damage. To that end, organizations should enable:

Insight & advice from our Chief Security Officer, Patrick Pilotte

What I find particularly relevant in this article is the paradigm shift it proposes: we should no longer rely on “what seems suspicious,” but on “what doesn’t follow a normal process.”

Where I would add a nuance is that AI didn’t create sophisticated phishing, it made it accessible at scale. What was previously reserved for targeted attacks (BEC, CEO fraud) can now be reproduced massively with little effort. To my mind, the real issue is therefore not just the quality of the messages, but their industrialization.

I would also add that attacks are no longer email-only. We’re seeing more and more hybrid scenarios combining emails, internal messaging platforms (Teams, Slack), and even voice phishing with vocal cloning, which dramatically increases credibility.

Finally, rather than viewing the user as the weakest link, it’s more accurate to say that our business processes are often optimized for speed rather than verification, and that’s precisely where attackers exploit the gap.

The final word

AI phishing works when people are rushed and systems are permissive. The answer is not to expect perfect vigilance from users. Rather, it is to build a strong safety net: verification habits that are easy to apply under pressure, reinforced by organizational controls that limit the impact of mistakes.
