We’ve all seen phishing emails asking for personal information out of the blue. Historically, these were easy to spot thanks to their bad formatting and spelling mistakes. But artificial intelligence removes these traditional tells, making it harder for even the savviest among us to catch phishing attempts.
If you work in fraud, risk, or trust and safety, you’re likely concerned with:
- Being prepared: You want to make sure you can manage the damaging effects of phishing scams to protect your company and customers.
- Understanding more sophisticated phishing: With AI, phishing campaigns can sound much more credible and innocuous with much less effort on a fraudster’s part. As much as possible, you want to be able to prevent known bad actors from getting onto your platform in the first place.
- Knowing which tools to use: If fraudsters are using AI, you might need more sophisticated tools to protect your customers.
Most businesses have experienced phishing attempts. With the prevalence of AI and large language models (LLMs), such as ChatGPT, no one is immune to AI-driven attacks. A single errant click can wreak havoc on your business, so regardless of what you’ve seen on your platform, it’s important to know what to look for with AI fraud and have a plan in place.
How phishing has changed with AI
Phishing usually occurs when a fraudster impersonates a business or individual and tries to obtain personally identifiable information (PII) like passwords, credit card details, and physical addresses.
You may be adept at handling certain types of phishing, but more sophisticated bad actors might be using AI and other tools (which we’ll dive into later), which can quickly overwhelm your team.
Fraudsters might try to access this sensitive information via an open text field, such as a private message or a public post on a message board. The message is crafted to trick the user into clicking a malicious link or opening a malware attachment that will steal the user’s PII. The cybercriminal may also sell that person’s information online to other scammers or use it to manufacture a fake identity.
Phishing doesn’t just target your customers; it can also go after your employees. This is called business email compromise (BEC), where a fraudster poses as a manager and spear phishes employees at the company. They’ll usually be after corporate information (e.g., access to a business’s intranet or extranet) or access to corporate bank accounts.
In the past, the telltale signs that a message was a phishing scam were a generic greeting, unusual formatting, and/or language riddled with spelling mistakes and grammar errors. These were often paired with an urgent request purporting to come from senior leadership at the company, usually from a person whose requests are honored without scrutiny.
But AI technology is enabling more sophisticated phishing attempts that your team may not be trained to handle. With AI, fraudsters are able to communicate more clearly, scale their attacks, and send messages that appear to be legitimate.
How AI makes it harder to stop phishing attacks
AI can write more polished, personalized, and legitimate-looking copy
As mentioned above, poor grammar, spelling mistakes, and stilted language have long been considered giveaways for phishers. With AI phishing, bad actors can use LLMs to remove these idiosyncrasies and sound more like a native speaker, lulling victims into a false sense of security.
A lot of fraud detection software relies on keyword detection or filtering on exact text strings and phrases, but this tactic breaks down when the copy is free of traditional tells.
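To make that concrete, here’s a minimal sketch (in Python) of the kind of keyword filter described above, using an illustrative phrase list rather than one from any real product, and how LLM-polished copy slips right past it:

```python
# A naive filter that flags messages containing known phishing phrases.
# This mirrors the keyword/exact-string matching described above; the
# phrase list is purely illustrative.
PHISHING_PHRASES = [
    "verify you account",        # the kind of typo older scams were known for
    "kindly do the needful",
    "click here immediately",
]

def is_flagged(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in PHISHING_PHRASES)

# An older, clumsy phishing attempt is caught...
print(is_flagged("You must verify you account or it will be suspended"))  # True

# ...but an LLM-polished rewrite of the same lure sails through.
print(is_flagged(
    "Hi Dana, we noticed unusual activity on your account. "
    "Please confirm your details at your earliest convenience."
))  # False
```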
Fraudsters can also pair generative AI with tools that crawl social media platforms and the wider internet for user-generated content and other public information. The result is phishing emails that are highly personalized and hard to distinguish from genuine correspondence.
As a company looking to protect its users and employees, you’ll want everyone to remain vigilant against phishing scams and ensure security awareness. Etsy, Apple, Verizon, and others offer guidance on how to identify phishing emails and protect against them.
AI can keep tweaking and improving messages
LLMs make it easier for scammers to test and adjust a message. If one suspicious email gets flagged as spam, they can simply go back to the AI writing tool and ask it to make changes, such as shortening the text, making it sound more professional, or shifting the emphasis.
Before AI, continuously tweaking a phishing message took hours. Now, fraudsters can use LLMs and automation to adjust a message and resend it in moments. If your fraud system is tuned to block or react to certain messages, bad actors get instant feedback and can keep refining their copy until they’re successful.
For a trust and safety team, that means you can no longer just block the specific strings of words and sentences phishers are using to scam users. Since the copy can change continuously, you’ll need a more advanced fraud prevention tool that looks at signals tied to the fraudster themselves, such as IP address, geolocation, and user ID.
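As a rough illustration, here’s a minimal sketch of that kind of signal-based scoring. The signals, weights, and thresholds are hypothetical; a production system would use far richer features and typically a trained model rather than hand-set weights:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    ip_on_known_proxy: bool       # IP belongs to a known proxy/VPN range
    geolocation_mismatch: bool    # IP geolocation differs from stated address
    accounts_sharing_device: int  # other user IDs seen on the same device

def risk_score(s: AccountSignals) -> float:
    """Combine signals into a 0-1 score; weights here are hypothetical."""
    score = 0.0
    if s.ip_on_known_proxy:
        score += 0.4
    if s.geolocation_mismatch:
        score += 0.3
    score += min(s.accounts_sharing_device, 5) * 0.1
    return round(min(score, 1.0), 2)

suspect = AccountSignals(ip_on_known_proxy=True,
                         geolocation_mismatch=False,
                         accounts_sharing_device=3)
print(risk_score(suspect))  # 0.7 -- enough to warrant extra friction or review
```

Notice that nothing in this score depends on the wording of the message, which is exactly why it keeps working when the copy is AI-polished.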
AI can enable scammers to scale exponentially
Fraudsters know they need to send phishing messages en masse to find a victim. It used to take hours or days to spin up different messages with varying targets. Now, with AI, it takes minutes.
With an LLM, fraudsters can scrape hundreds of social accounts, manufacture a realistic message, and then contact users at scale. They can also use AI bots to survey and analyze thousands of devices, giving them huge amounts of data to craft realistic-sounding messages. Phishers can even use these AI tools to write new code and launch more complex scams that used to require far more work.
As a company, this means you need to be able to spot large-scale fraud rings and fight fraud with security solutions that scale just as easily.
How to catch AI-powered phishing attacks with Persona
Persona is a unified identity verification platform that helps companies fight sophisticated fraud with a variety of products and solutions, including verifications, link analysis, Know Your Customer (KYC), and Know Your Business (KYB).
We understand there is no silver bullet to fighting fraud, which is why our platform consists of product building blocks that allow you to customize workflows to best suit your team and business needs. While you can never predict the future, it’s smart to have multiple lines of defense against known cyber threats exacerbated by AI.
Here’s how we’re helping companies protect their users from phishing scams.
Stop fraud at the source with KYC/KYB onboarding
While it's always important to assess the text of a phishing message, it’s equally important to leverage data from the accounts that sent these messages. That’s why a key way to minimize fraud and phishing is to catch a fraudster before they’re able to create an account or contact users on your platform.
If your company operates in a regulated space, such as fintech or certain marketplaces, you may be obligated to have KYC or KYB in place. But even when it’s not a regulatory requirement, KYC can help keep out bad actors. For example, dating apps aren’t required to have KYC, but a comprehensive process can prevent fraudsters from phishing other users and keep your platform safe.
At Persona, we help companies implement KYC and KYB onboarding that’s designed to prevent fraud and meet compliance needs without harming conversion. When someone creates an account, there are multiple ways to evaluate the level of risk this user represents.
Our Verifications product can compare user details against authoritative databases such as the Department of Motor Vehicles and global telecommunication databases. With Verifications, you can also customize what you want to check. You can choose to only check government ID, set up multi-factor authentication, or also ask the user to submit a selfie, for example.
Our Dynamic Flow product allows you to orchestrate user journeys, from collecting information through to decision-making. This lets you add friction for higher-risk users: for example, you can require users to submit a specific type of document to access a certain part of your platform, or ask for more personal information if they fail a check.
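As a simplified sketch of what that risk-based branching amounts to, the snippet below maps a risk score to an escalating set of verification steps. The step names and thresholds are hypothetical, and in Persona this logic is configured visually in Dynamic Flow rather than written as code:

```python
def next_verification_steps(risk_score: float) -> list[str]:
    """Escalate friction as risk rises; names and thresholds are hypothetical."""
    steps = ["government_id"]             # every user submits an ID
    if risk_score >= 0.4:
        steps.append("selfie_match")      # medium risk: add a selfie check
    if risk_score >= 0.7:
        steps.append("proof_of_address")  # high risk: request another document
    return steps

print(next_verification_steps(0.2))   # ['government_id']
print(next_verification_steps(0.75))  # all three steps
```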
Find fraud patterns and block entire fraud rings with link analysis
A dynamic onboarding process can keep certain fraudsters at bay, but it won’t stop them all. For example, what happens when a fraudster joins a marketplace and tries to take over a legitimate seller’s account to commit a phishing scam? Anywhere good users congregate online — dating apps, job sites, social media platforms, and games with message boards or chat functions — is where phishers try to gain entry. For that reason, it’s important to have the right measures in place to continuously analyze fraud happening on your platform.
That’s where our link analysis tool, Graph, comes in. When a fraudster creates an account, they’ll supply identifying information such as a name, address, location, and contact details. In Graph, we can also look at “passive signals” they may not explicitly share, such as their IP address, device ID, or geolocation, and see whether those signals are shared across many accounts, which could indicate a fraud ring.
Persona can detect when passive signals such as IP address, hesitation time, or device ID indicate suspicious behavior while a user is inputting information during verification. Our Workflows tool will flag the account and can decline it outright or send it to Cases for manual review. Graph can then take this data and find potential links with other accounts.
By looking at both passive and active signals (the information users choose to share), we can uncover patterns and relationships that aren’t visible in the raw data alone. That’s also how we can uncover fraud even when a fraudster is using AI to create fake messages, improve their copy, and scale. Instead of relying on grammar errors and spelling mistakes, we use multiple reliable signals from the fraudsters themselves that are much harder to fake.
With Graph, you’ll be able to analyze account data and risk signals to quickly expose and block entire fraud rings.
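As a simplified illustration of the underlying idea, the sketch below links accounts that share a passive signal and surfaces the connected groups. The data is toy data with only two signal types; Graph itself operates at scale across many more signals:

```python
from collections import defaultdict

# Toy account data; in practice these signals are collected passively.
accounts = {
    "acct_1": {"device": "dev_A", "ip": "203.0.113.7"},
    "acct_2": {"device": "dev_A", "ip": "198.51.100.2"},
    "acct_3": {"device": "dev_B", "ip": "203.0.113.7"},
    "acct_4": {"device": "dev_C", "ip": "192.0.2.88"},
}

# Index accounts by each (signal type, value) pair they exhibit.
by_signal = defaultdict(set)
for acct, signals in accounts.items():
    for key, value in signals.items():
        by_signal[(key, value)].add(acct)

# Union-find: accounts sharing any signal end up in the same group.
parent = {a: a for a in accounts}

def find(a: str) -> str:
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path compression
        a = parent[a]
    return a

def union(a: str, b: str) -> None:
    parent[find(a)] = find(b)

for group in by_signal.values():
    first, *rest = group
    for other in rest:
        union(first, other)

# Collect connected groups; clusters larger than one account are suspicious.
rings = defaultdict(set)
for a in accounts:
    rings[find(a)].add(a)

print([sorted(r) for r in rings.values() if len(r) > 1])
# [['acct_1', 'acct_2', 'acct_3']] -- linked via a shared device and IP
```

Note how acct_2 and acct_3 never share a signal directly, yet both are pulled into the same ring through acct_1. That transitive linking is what makes graph-based analysis more powerful than checking accounts one at a time.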
Adopt a multilayered approach to verification to make it harder for fraudsters to sign up
Authenticating users is key to fighting fraud, but add too much friction to your onboarding process and you run the risk of turning away good users. That’s why it’s important to strike a balance. The best way to do that is to set up custom onboarding flows that adjust based on each user’s risk level.
You can use Persona’s Dynamic Flow and Workflows tools to create a custom onboarding process. You can drag and drop various functions and use if/then/else paths to further authenticate based on specific events, all while keeping your logo, colors, and voice consistent so customers feel confident in the brand and process.
For example, Coursera, an e-learning platform, used Dynamic Flow to create a custom onboarding process based on a student’s use case. Students who want to complete a course offered by a university should be assessed differently than someone who just wants to learn about a subject they’re interested in.
With Persona, Coursera was able to create a custom verification experience for each use case. This allowed them to collect just enough information for certain students to be verified in seconds, while others went through multiple authentication layers. With each flow, Coursera offered a better user experience and maintained high pass rates for good users without compromising on fraud detection.
Read the full case study here: Coursera scales its global user base and ensures strong academic integrity with Persona
By adding multiple verification layers based on a user’s needs and behavior during onboarding, you can make it a lot harder for fraudsters to get on your platform, send phishing messages, and scam you and your customers.
Block entire fraud rings all at once and set up block lists
Once you have confirmation that a user could be sending phishing attacks and committing fraud, you need an effective way to block them so they can’t easily return.
A lot of phishing scams will attempt to take a user off-platform, such as by getting them to donate to a fake charity campaign or send money via an electronic payment platform. That’s why it’s not enough to block someone based on a single attribute like their name or email address, since they can easily return with new credentials. It’s important to make sure bad actors never make it back onto your platform, or you’ll be back to square one trying to stop them.
With our Graph tool, once a fraud ring is blocked, its accounts can automatically be routed to a block list. Should the fraudster try to get back in or open a new account, they won’t be able to, since they’ll be caught by a block list triggered through Workflows. Graph creates these block lists based on passive signals like IP address, device ID, or geolocation, which means that no matter how much a fraudster tweaks their AI-generated phishing emails, they won’t be able to create an account with your company.
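Here’s a minimal sketch of why blocking on passive signals holds up, using hypothetical signal values. Because the match is on the device and network rather than the name or email, a fresh set of credentials doesn’t help the fraudster:

```python
# Signals captured when a fraud ring was blocked; values are hypothetical.
blocked_signals = {
    ("device_id", "dev_A"),
    ("ip", "203.0.113.7"),
}

def is_blocked(signup_signals: dict[str, str]) -> bool:
    """Block a signup if any of its passive signals is on the block list."""
    return any((k, v) in blocked_signals for k, v in signup_signals.items())

# Same fraudster returns with a new name and email, but the same device:
print(is_blocked({"device_id": "dev_A", "ip": "198.51.100.9"}))  # True
```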
Protect your users from AI-generated phishing attacks with Persona
As AI phishing attacks become more sophisticated, the need for more advanced technology and automation will only grow. The days of dead giveaways are gone, and even experienced fraud analysts won’t be able to spot AI-generated fraud every time. Catching fraudsters at the identity verification stage is crucial, as is having the tools in place to monitor fraud on your platform.
With Persona, we’re helping companies defend the credibility of their platforms with onboarding, continuous monitoring, and full-cycle fraud prevention. Reach out to us to learn more.