Multiple industry reports confirm that phishing remains the single most prevalent attack vector in cybercrime, with nearly half of employees admitting that they’ve fallen for phishing scams.
Unfortunately, things are unlikely to get better any time soon: cybercriminals are now using AI to create even more advanced phishing campaigns with flawless writing and deeply personalized messaging. Armed with AI, today’s phishing scams feel so natural that even seasoned security teams struggle to tell them apart from legitimate emails.
The effectiveness of AI-supported phishing is confirmed by a study published in November 2024, where researchers automated spear phishing campaigns using ChatGPT and Claude, and achieved a staggering 54% click-through rate, compared to just 12% for baseline attacks.
This article will explain exactly why AI phishing attacks are much harder to spot, even for security experts, and what you can do to stay ahead.
AI phishing is unlike anything we’ve seen in the past. Traditionally, phishing emails relied on generic templates, often written by non-native speakers, resulting in awkward phrasing and typos. Personalization was minimal, since tailoring each message would require hours of manual work, and the goal was to hit as many inboxes as possible.
But AI has completely eliminated these weak points. With a simple prompt, criminals can now generate grammatically perfect content with a professional tone that seamlessly matches corporate branding, or even personal writing quirks.
Within seconds, AI agents can crawl publicly available information about a victim through their social media profiles or websites, and build a highly personalized target profile. They then weave these genuine details into the phishing message, making it far more believable.
This process can be easily repeated at scale, which likely contributes to the rising volume of phishing emails. So not only does AI improve the quality of the messages, it also makes them far quicker to distribute – quite a scary combination.
With no grammatical mistakes or glaring red flags, employees are more likely to fall for these AI-driven scams. Ironically, AI-written emails sound more natural and “on-brand,” creating a false sense of familiarity. There is still an element of urgency, but it isn’t blatant enough to raise suspicion.
It isn’t just humans: spam filters are also struggling to flag AI-generated messages, because they rely on traditional cues like misspellings and poor formatting. One recent study evaluating major email detectors (Gmail Spam Filter, Apache SpamAssassin, Proofpoint) found that LLM-rephrased phishing emails saw “notable declines” in detection accuracy across the board.
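To see why, consider a toy rule-based filter of the kind these detectors grew out of. The sketch below is purely illustrative (the rules, weights, and sample emails are all invented): it scores classic cues like misspellings and all-caps shouting, so a clumsy template trips every rule while a polished, LLM-style lure with none of those cues scores zero.

```python
import re

# Toy rule-based spam scorer: counts classic phishing cues.
# Rules, weights, and sample emails are invented for illustration only.
RULES = [
    (re.compile(r"\b(?:recieve|acount|verifcation|imediately)\b", re.I), 2),  # misspellings
    (re.compile(r"[A-Z]{5,}"), 2),           # runs of ALL CAPS
    (re.compile(r"!{2,}"), 1),               # repeated exclamation marks
    (re.compile(r"\bclick here\b", re.I), 1),
]

def spam_score(text: str) -> int:
    """Sum the weights of every rule that fires at least once."""
    return sum(weight for pattern, weight in RULES if pattern.search(text))

clumsy = "URGENT!! Your acount is suspended. Click here imediately to recieve access!!"
polished = ("Hi Dana, following up on yesterday's board call. Could you review the "
            "updated vendor agreement before 3 pm? The portal link is in the calendar invite.")

print(spam_score(clumsy))    # 6 -- trips every rule
print(spam_score(polished))  # 0 -- no classic cues, despite being a lure
```

A filter trained on the first kind of message simply has nothing to latch onto in the second, which is exactly the gap the study measured.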
Another factor lowering detection rates is that attackers can easily A/B test subject lines and content variants, keeping those with the highest open and click-through rates. Each wave of phishing becomes more refined and, in turn, harder to detect. With AI feedback loops, these experiments can be fully automated at scale.
Outside of email phishing, AI has also opened the door to an uptick in vishing (voice phishing) and deepfake-powered video scams, which may well become the dominant form of phishing in the future.
Criminals are already putting these technologies to work: voice cloning tools can realistically impersonate executives, and attackers use them to target employees on platforms like WhatsApp, Slack, and Teams.
Even North Korean threat actors are getting in on the action, using real-time deepfake video to pass job interviews and get hired at Western companies, likely to help fund their regime.
The goal of many phishing campaigns is to trick the victim into handing over their login credentials. Enabling multi-factor authentication is an important first step in creating a wall between your digital assets and AI-empowered attackers.
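As a concrete illustration, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator apps. It uses the third-party pyotp library (an assumption; any RFC 6238 implementation works the same way), and the user name and issuer are placeholders.

```python
import pyotp  # third-party TOTP library (pip install pyotp)

# Server side: generate and store one random secret per user at enrollment.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user adds the secret to an authenticator app, e.g. via this URI/QR code.
print(totp.provisioning_uri(name="dana@example.com", issuer_name="ExampleCorp"))

# At login: even if a phishing page captures the password, the attacker still
# needs the current 6-digit code, which rotates every 30 seconds.
code = input("Enter the 6-digit code from your app: ")
print("MFA passed" if totp.verify(code, valid_window=1) else "MFA failed")
```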
More modern email defenses beyond spam filters are also necessary. These include DMARC/SPF/DKIM enforcement to validate senders, and AI-based anomaly detection that spots subtle red flags humans might miss, such as unusual sending patterns, metadata inconsistencies, and mismatched domains.
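For a sense of what the “mismatched domains” check looks like in practice, here is a minimal sketch using only Python’s standard email module. The sample message and flag names are invented; real deployments would act on the Authentication-Results header stamped by the receiving mail server (RFC 8601) rather than a hard-coded string.

```python
from email import message_from_string
from email.utils import parseaddr

def domain(addr_header: str) -> str:
    """Extract the domain from an address header like 'Name <user@host>'."""
    return parseaddr(addr_header or "")[1].rpartition("@")[2].lower()

def check_sender(raw: str) -> list[str]:
    msg = message_from_string(raw)
    flags = []
    # Mismatched From / Reply-To domains: replies go somewhere unexpected.
    if domain(msg["Reply-To"]) and domain(msg["Reply-To"]) != domain(msg["From"]):
        flags.append("From/Reply-To domain mismatch")
    # Authentication-Results is stamped by the receiving server after it
    # evaluates SPF, DKIM, and DMARC for the message (RFC 8601).
    results = msg.get("Authentication-Results", "").lower()
    for mech in ("spf", "dkim", "dmarc"):
        if f"{mech}=pass" not in results:
            flags.append(f"{mech} did not pass")
    return flags

sample = """\
From: CEO <ceo@example.com>
Reply-To: ceo-office@exarnple-mail.com
Authentication-Results: mx.example.net; spf=pass; dkim=none; dmarc=fail
Subject: Quick favor

Are you at your desk?
"""
print(check_sender(sample))
# ['From/Reply-To domain mismatch', 'dkim did not pass', 'dmarc did not pass']
```

Each flag on its own is only a signal, not proof of phishing; in practice these checks feed a scoring or quarantine pipeline rather than blocking mail outright.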
But while these technical measures are great, they won’t address the core issue: phishing is ultimately a people problem. That’s why a security awareness program centered around phishing simulation can be the real game changer.
With a hands-on approach, phishing simulation will expose the workforce to all of the AI-driven scams they are likely to encounter on the job, and build the instincts employees need to question and report even the most sophisticated AI phishing attempts.
Artificial intelligence has brought great benefits to the cybersecurity industry. But when it comes to phishing, it’s hard to find any positives. Thanks to AI, phishing attacks are now way more advanced and difficult to detect. What was once easy to spot with a bit of attention can now easily blend into the noise of everyday communication, posing a significant risk to organizations of all sizes.
But while the situation is concerning, there are certainly ways to tip the balance back in your favor, whether it’s through regular phishing training or deploying the most advanced AI defensive tools.