Most people recognize conventional phishing because they have lived with it for years. Their personal inboxes and phones are full of fake delivery texts, “your bank account is locked” emails, and, of course, the “urgent plea from a diplomat” asking for help moving an absurd amount of money out of their country. These blatant, often laughable scam attempts are no longer shocking, surprising, or newsworthy. They are background noise.
What has changed, however, is the speed and quality of the deception. Social engineering that once looked sloppy and easy to dismiss is now being mass-produced with generative AI. The result is cleaner writing, fewer obvious red flags, and messages that sound more like real coworkers and vendors. Microsoft’s Digital Defense Report 2025 finds that AI-driven phishing can be significantly more effective than traditional campaigns, and Verizon’s 2025 Data Breach Investigations Report notes that AI-generated text in malicious emails has doubled over the past two years.
For IT pros, this shift is expected. Many have anticipated it, understand the patterns, and know how to protect themselves. The bigger issue is that many non-technical employees and clients are still anchored to an older mental model of phishing: suspicious-looking emails and texts that are easy to spot. Today’s AI-enabled lures do not always look dubious, and they increasingly target the areas where the damage is biggest: corporate identity systems, internal communications, and financial workflows.
AI phishing: Why it’s now a major workplace problem, not just a personal one
Conventional phishing aimed at personal accounts usually targets credential theft, payment card fraud, or account takeover. AI phishing in the workplace adds two aggravating factors:
- Corporate identity is a master key. One compromised mailbox or single sign-on session can lead to internal phishing, data theft, vendor fraud, and persistence. Microsoft incident response data found that 28% of breaches were initiated through phishing or social engineering. That is why “it is just one click” is not a harmless framing. It is often the first domino.
- Business processes move money. Invoice changes, payroll redirects, gift card scams, and vendor onboarding are designed to be fast, not forensic. Data from the FBI’s IC3 underscores the scale: in 2024, IC3 received 21,442 Business Email Compromise complaints with adjusted losses of more than $2.7 billion.
AI phishing examples
Here are a few examples IT pros can share with non-technical colleagues and clients to help them grasp what modern AI phishing in the workplace looks like. Note that this is not an exhaustive list, but it highlights some of the most common snares:
- The “executive urgency” trap: A message that appears to be from the CEO instructs someone on the finance team to send an urgent wire before a board meeting (or some other high-stakes event). Often, the attacker will quickly try to move the conversation to a personal number because “I’m traveling.”
- The HR document lure: A message claims to involve a candidate complaint, a policy update, or termination paperwork, and directs the recipient to a fake Microsoft 365 sign-in page.
- The vendor invoice swap: A real supplier thread is hijacked or convincingly spoofed, and the attacker slides in “updated banking details” (often with a feasible reason for the change).
- The MFA reset/helpdesk pretext: An attacker posing as an employee says they lost their phone, asks IT to reset MFA, and pressures for a quick exception to “keep work moving.”
- The scan-to-view QR code: An email or PDF includes a QR code that claims the recipient must scan it to view a secure document, leading to credential harvesting or token theft.
- Domain impersonation at scale: Look-alike domains are registered quickly and used to impersonate brands, vendors, and internal services across broad campaigns.
The key message here is that “this message looks obviously fake!” is no longer a dependable safeguard. Attackers do not need flawless writing or deep knowledge of their victims’ workflows. AI fills those gaps.
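For IT teams that want to get ahead of the look-alike domain problem specifically, even a lightweight check can surface suspicious sender domains before users ever see them. The following is a minimal sketch, assuming you maintain a short list of domains your organization legitimately corresponds with; the trusted-domain list, homoglyph map, and similarity threshold are illustrative placeholders, not tuned values or a substitute for a proper email security gateway.

```python
# lookalike_check.py - minimal sketch for flagging look-alike sender domains.
# The trusted-domain list, homoglyph map, and threshold below are illustrative.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"contoso.com", "fabrikam.com"}  # domains you legitimately correspond with

# A few common visual substitutions attackers rely on; extend as needed.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def normalize(domain: str) -> str:
    """Lower-case the domain and collapse common look-alike characters."""
    return domain.lower().translate(HOMOGLYPHS).replace("rn", "m")

def is_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble a trusted domain without matching it exactly."""
    if sender_domain.lower() in TRUSTED_DOMAINS:
        return False  # exact match: legitimate domain
    norm = normalize(sender_domain)
    for trusted in TRUSTED_DOMAINS:
        if norm == normalize(trusted):
            return True  # identical after homoglyph folding, e.g. c0ntoso.com
        if SequenceMatcher(None, norm, normalize(trusted)).ratio() >= threshold:
            return True  # very close spelling, e.g. contosso.com vs contoso.com
    return False

if __name__ == "__main__":
    for d in ("contoso.com", "c0ntoso.com", "contosso.com", "example.org"):
        print(d, "->", "FLAG" if is_lookalike(d) else "ok")
```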
Essentially, AI phishing succeeds when speed beats verification. The most effective defenses combine two things: simple user habits that interrupt the “act now” impulse, and organizational controls that limit damage if someone slips. Let us start with the first part, since users have always been, and will remain, the weakest link in the IT security chain.
Educating users: 4 essential checks
As mentioned, AI phishing has made training that centers on “spot the obvious scam” far less effective, because many malicious messages now look polished, plausible, and familiar. Instead, it is more practical and effective to educate users on a set of repeatable checks they can apply in seconds. Four in particular are essential: the process check, the identity check, the link/attachment check, and the information/verification check.
1. Process check
Before evaluating who a message is from, users should first pay attention to what the message is asking them to do. AI phishing often succeeds by exploiting business processes, not by looking obviously suspicious. The process check is a quick way to spot requests that do not fit normal workflows, especially when the message is urgent or tries to bypass standard controls.
Users should ask: “Does this request match a normal workflow?” Specifically, they should watch for:
- Urgency paired with secrecy (“do not tell anyone,” “need this before the meeting”)
- Unusual payment or account change requests (bank details, vendor remittance changes, payroll updates)
- Credential re-entry prompts (unexpected sign-in pages, “session expired” links, QR “secure document” prompts)
- Channel switching mid-workflow (email to SMS, Teams to personal WhatsApp, “use this new number”)
- Requests to bypass approvals (“just this once,” “I’ll approve later”)
2. Identity check
Before responding to a suspicious message, users should verify the sender through a channel that is known to be safe. This could mean calling the sender using a saved directory number, sending a new email to a trusted address (not replying to the original one!), or confirming in person when this is practical and efficient.
These checks shift decision-making from “does this look fake?” to “does this make sense?” and “am I confident about the identity?”
3. Link and attachment check
Even when a message looks legitimate, users should treat links, attachments, and QR codes as high-risk, especially when they are unexpected, urgent, or tied to logins. Advice in this area includes:
- Do not use links in the message to sign in. Open the service normally (bookmark, typed URL, official app) and navigate from there.
- Be cautious with “view document” attachments and QR codes, particularly if they trigger an unexpected sign-in page or credential prompt.
- If a link or file creates pressure to act fast, stop and run the process check and identity check first.
4. Information and verification check
AI phishing gets more convincing when attackers have more context. Users should assume that small overshares can be weaponized. The rules in this area include:
- Avoid sharing sensitive operational details in broad channels (such as vendor banking updates, payment processes, approver names, internal screenshots, travel plans, project specifics).
- Never share one-time passcodes, MFA prompts, or “verification” details with anyone. Users should know that IT will never ask for them.
- Verify through a safe, known channel if someone requests sensitive details to “confirm identity” or “unblock an urgent task.”
Taken together, these four checks do not require non-technical staff to become security experts. Rather, they create consistent habits that can thwart AI phishing attempts and prevent identity theft and breaches.
Empowering organizations: 4 critical control areas
While good user habits are essential in this fight, they cannot carry the entire burden, especially when AI makes phishing more convincing and scalable. The next layer is therefore organizational: controls that reduce how often attacks succeed, limit what an attacker can do with stolen credentials or sessions, and build verification into the workflows that control access and move money. The recommendations below apply to all organizations, including small and mid-sized businesses with lean IT teams and budgets:
1. Make phishing-resistant authentication the default, not the exception
Assume that passwords will be phished, reused, or stolen. The key question is therefore not “Can we stop every phish?” but “What happens when a password leaks?” Recommended controls include:
- Phishing-resistant MFA where possible (FIDO2/passkeys/security keys), and strong MFA everywhere else.
- MFA enforcement for privileged roles and high-risk actions (admin portals, payment changes, mailbox rule creation, vendor management).
- MFA fatigue protection (number matching, device binding, and controls that reduce push-spam acceptance).
Strong authentication does not eliminate social engineering, but it turns many credential theft attempts into dead ends.
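To make this concrete for organizations running Microsoft 365, here is a minimal sketch of what enforcing MFA for privileged roles through a conditional access policy could look like via the Microsoft Graph API. It assumes you already hold an access token with the Policy.ReadWrite.ConditionalAccess permission; the role template ID and policy shape are illustrative and should be verified against current Microsoft documentation before use.

```python
# ca_policy_sketch.py - sketch: require MFA for privileged roles via Microsoft Graph.
# Assumes an access token with Policy.ReadWrite.ConditionalAccess; acquiring it is out of scope here.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def require_mfa_for_admins(token: str) -> dict:
    policy = {
        "displayName": "Require MFA for privileged roles",
        # Start in report-only mode to review sign-in impact before enforcing.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "applications": {"includeApplications": ["All"]},
            # Global Administrator role template ID (example); add the other privileged roles you use.
            "users": {"includeRoles": ["62e90394-69f5-4237-9190-012177145e10"]},
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }
    resp = requests.post(
        f"{GRAPH}/identity/conditionalAccess/policies",
        headers={"Authorization": f"Bearer {token}"},
        json=policy,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Starting the policy in report-only mode is a deliberate choice: it lets you measure who would be affected before enforcement locks anyone out.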
2. Reduce the payoff of a stolen session or token
AI phishing increasingly targets session tokens and OAuth consent, and not just passwords. Prioritize controls that limit persistence and lateral movement. This includes:
- Conditional access and risk-based policies for impossible travel, unfamiliar devices, and anomalous sign-ins.
- Tight mailbox controls (auto-forwarding, suspicious inbox rules, newly created delegates).
- OAuth and third-party app governance (limit user consent, require admin approval for risky scopes, review new app grants).
- Least privilege by default (standing admin rights kept rare, time-bound, and auditable).
These controls often determine whether an incident stays contained or becomes an internal phishing platform and data exfiltration event.
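As one concrete example of the mailbox controls above, a periodic audit of inbox rules can catch the silent auto-forwarding attackers often configure after compromising a mailbox. The sketch below assumes a Microsoft Graph access token with permission to list users and read their mailbox rules, and it flags rules that forward or redirect mail outside the organization; pagination, throttling, and error handling are trimmed for brevity, and the internal-domain list is a placeholder.

```python
# forwarding_audit_sketch.py - sketch: flag inbox rules that send mail outside the org.
# Assumes a Graph access token with permission to list users and read mailbox rules.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
INTERNAL_DOMAINS = {"contoso.com"}  # placeholder: your accepted domains

def external_forwarding_rules(token: str) -> list[tuple[str, str, str]]:
    headers = {"Authorization": f"Bearer {token}"}
    users = requests.get(f"{GRAPH}/users?$select=id,userPrincipalName",
                         headers=headers, timeout=30).json().get("value", [])
    findings = []
    for user in users:
        rules = requests.get(
            f"{GRAPH}/users/{user['id']}/mailFolders/inbox/messageRules",
            headers=headers, timeout=30,
        ).json().get("value", [])
        for rule in rules:
            actions = rule.get("actions") or {}
            recipients = (actions.get("forwardTo") or []) + (actions.get("redirectTo") or [])
            for recipient in recipients:
                address = recipient.get("emailAddress", {}).get("address", "")
                domain = address.rsplit("@", 1)[-1].lower()
                if domain and domain not in INTERNAL_DOMAINS:
                    # Rule sends mail to an external address: review it.
                    findings.append((user["userPrincipalName"], rule.get("displayName", ""), address))
    return findings
```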
3. Protect the workflows attackers target most
Many high-loss incidents happen because a legitimate business process is exploited. Strengthen high-risk workflows with built-in verification through the following:
- Out-of-band confirmation for banking and payroll changes using trusted contact details already on file.
- Two-person approval for urgent payments and new vendor setup.
- Verified vendor communication (approved domains, contact lists, remittance change controls).
- Clear “never do this” rules (never share MFA codes, never approve unexpected MFA prompts, never install remote tools from unsolicited requests).
When the process itself demands verification, organizations stop relying on individual judgment under pressure.
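The same principle can be enforced directly in whatever system records these changes. The toy sketch below is an illustration of the idea rather than a reference implementation: a vendor remittance change cannot take effect until an out-of-band callback has been logged and two distinct approvers have signed off.

```python
# remittance_change_sketch.py - toy sketch: a bank-detail change only applies after
# out-of-band callback verification and approval by two different people.
from dataclasses import dataclass, field

@dataclass
class RemittanceChange:
    vendor: str
    new_account: str
    callback_verified: bool = False      # set True only after calling the number already on file
    approvers: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        self.approvers.add(approver)

    def can_apply(self) -> bool:
        """Out-of-band verification plus two distinct approvers, with no exception for urgency."""
        return self.callback_verified and len(self.approvers) >= 2

change = RemittanceChange(vendor="Acme Supplies", new_account="CH93 0076 2011 6238 5295 7")
change.approve("ap.clerk@contoso.com")
assert not change.can_apply()   # one approver, no callback: blocked
change.callback_verified = True
change.approve("controller@contoso.com")
assert change.can_apply()       # verified and dual-approved: the change may proceed
```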
4. Make reporting fast, safe, and rewarding
The faster a suspected phishing attempt is reported, the smaller the damage. To that end, organizations should enable:
- One-click reporting in email and collaboration tools.
- Rapid feedback loops (“This was malicious, great job” or “This was safe, and here is why”).
- A no-shame culture that rewards all reporting, even false positives.
Insight & advice from our Chief Security Officer, Patrick Pilotte
What I find particularly relevant in this article is the paradigm shift it proposes: we should no longer rely on “what seems suspicious,” but on “what doesn’t follow a normal process.”
Where I would add a nuance is that AI didn’t create sophisticated phishing; it made it accessible at scale. What was previously reserved for targeted attacks (BEC, CEO fraud) can now be reproduced massively with little effort. To my mind, the real issue is therefore not just the quality of the messages, but their industrialization.
I would also add that attacks are no longer email-only. We’re seeing more and more hybrid scenarios that combine email, internal messaging platforms (Teams, Slack), and even voice phishing with voice cloning, which dramatically increases credibility.
Finally, rather than viewing the user as the weakest link, it’s more accurate to say that our business processes are often optimized for speed rather than verification, and that’s precisely where attackers exploit the gap.
The final word
AI phishing works when people are rushed and systems are permissive. The answer is not to expect perfect vigilance from users. Rather, it is to build a safety net of verification habits that are easy to apply under pressure, reinforced by organizational controls that limit the impact of mistakes.

Steven Lafortune


