Recently, Persona’s trust and safety architect, Jeff, chatted with Brian Killeen, director of financial crime, fraud, and investigations at Guidehouse, and Ahmed Siddiqui, CPO of Branch, about why synthetic fraud continues to be such a threat and how to minimize this type of fraud both immediately and going forward.
We recapped the main takeaways here, but we also wanted to share highlights from the Q&A portion of the discussion. To get the most out of the event, you can watch the full recording.
What data attributes or data types are important to highlight when mitigating synthetic ID fraud use cases?
Brian recommends identifying the data attributes that matter for each product line and enriching that data with additional context. For example, if someone has no address history or their phone account is only a year old, they might be a recent immigrant, so you may want to look at other signals, such as their social media presence. Do they have a social or digital footprint?
He also shares that velocity checks, like the number of accounts opened by a device or the number of accounts per phone or email, can also help — while fraudsters are getting more sophisticated, they’re using the same phones. “They'll have a lot of burner phones, don't get me wrong, but they'll be opening up multiple different accounts.”
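The velocity checks Brian describes can be sketched as a simple counting rule. This is a minimal illustration, not a production system; the field names (`device_id`, `phone`) and thresholds are hypothetical:

```python
from collections import Counter

def velocity_flags(signups, max_per_device=3, max_per_phone=2):
    """Flag devices and phone numbers tied to an unusual number of new accounts.

    `signups` is a list of dicts with hypothetical keys 'device_id' and
    'phone'; the thresholds are illustrative and would be tuned per product.
    """
    device_counts = Counter(s["device_id"] for s in signups)
    phone_counts = Counter(s["phone"] for s in signups)
    flagged_devices = {d for d, n in device_counts.items() if n > max_per_device}
    flagged_phones = {p for p, n in phone_counts.items() if n > max_per_phone}
    return flagged_devices, flagged_phones
```

In practice, these counts would be computed over a rolling time window and combined with other signals rather than used as a hard block.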
Additionally, you can consider evaluating device characteristics. For example, if someone’s IP address says they’re signing on from Dallas when they usually log in from Miami, it may be due to fraud.
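The Dallas-versus-Miami example boils down to checking whether a login location is consistent with a user's history. A minimal sketch, assuming we already have a list of cities from past logins (a real system would derive these from IP geolocation and apply distance and travel-speed rules):

```python
from collections import Counter

def is_unusual_location(past_cities, login_city, min_seen=2):
    """Return True if the login city hasn't been seen often enough
    in the user's history to count as a usual location.

    `min_seen` is an illustrative threshold.
    """
    counts = Counter(past_cities)
    return counts.get(login_city, 0) < min_seen
```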
That said, while looking at these different data points can help you make a more informed decision, you don’t want to ask users for too much information — you have to find the right balance and tailor the amount of friction based on each individual’s risk signals and use case.
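Tailoring friction to risk, as described above, often takes the form of mapping a risk score to a verification step. The tier names and cutoffs below are purely illustrative assumptions:

```python
def friction_level(risk_score):
    """Map a 0-1 risk score to a verification step.

    Tiers and thresholds are hypothetical; a real program would
    calibrate them against observed fraud rates and conversion impact.
    """
    if risk_score < 0.3:
        return "passive_checks_only"      # low risk: no added friction
    if risk_score < 0.7:
        return "document_verification"    # medium risk: step up once
    return "document_plus_selfie"         # high risk: strongest check
```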
What role do biometrics play in mitigating payment fraud and verifying identities?
Ahmed explains that there are both benefits and drawbacks to using biometrics to mitigate fraud and verify identities. On one hand, they’re more common and can introduce less friction than other verification methods. “Historically, biometrics were hard to get. But with most of our mobile devices, we can do a Touch ID or Face ID pretty quickly. And what's great about it is it doesn't create that much friction. It's pretty fast on the devices we have today.”
On the other hand, there are still a lot of devices that don’t use biometrics. As such, Ahmed says, “It's a good signal, but it's not something you can 100% rely on simply because there might be instances where people don't have it turned on — or the device might not even be theirs. That's why you need to use it in conjunction with other mechanisms.”
Jeff agrees, stressing the importance of employing multiple checks to ensure your fraud system doesn’t have a single point of failure. “We have government systems and databases you can check against. They may not be the most avant-garde, but they’re often still worth doing to see what passes and what fails. And then you can add intelligence along the way. Biometrics can be one of these things. And again, you can step these checks up or down based on your risk segmentation and ability to discern whether this is probabilistically more legitimate or fraudulent, and really alter the experience there.”
To sum things up, Jeff explains, “You can use biometrics, and you can use selfies. But you may not want to apply these universally across the platform unless required or needed. It’s important to put bookends around what percentage of your user population this might hit and make sure it's not overly security-focused.”
What solutions can help reduce synthetic identity fraud? Is there a way to assess and monitor for synthetic identity fraud patterns?
Our marketplace partner, SentiLink, has synthetic identity fraud intelligence that can help organizations catch synthetics faster.
However, you can also leverage internal signals. Jeff explains, “If you have particular signals or data elements that are generally reflective of true identities, is the absence of those things perhaps a synthetic identity? You expect a real live breathing person to be able to provide certain information — why doesn’t this exist for this other onboarding account?”
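Jeff's absence-of-signals idea can be expressed as a simple coverage score: the more expected identity signals are missing, the more suspicious the profile. The signal names below are hypothetical placeholders for whatever attributes an organization normally sees on genuine customers:

```python
# Hypothetical signals a genuine applicant usually has.
EXPECTED_SIGNALS = ["address_history", "phone_tenure", "email_age", "social_footprint"]

def missing_signal_score(profile):
    """Return the fraction of expected identity signals absent from
    an onboarding profile (0.0 = all present, 1.0 = all missing).

    A high score doesn't prove a synthetic identity; it's one input
    to combine with other checks.
    """
    missing = [s for s in EXPECTED_SIGNALS if not profile.get(s)]
    return len(missing) / len(EXPECTED_SIGNALS)
```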
Do synthetic IDs typically have a profile with the IRS and pass the TIN verification?
Unfortunately, this situation isn’t unusual. Jeff points out that this may be partially due to the fact that you don’t need a ton of paperwork to create a TIN. “If you go to the IRS website and check out the criteria required to create a TIN, I think you may be surprised by what is asked for and what isn't asked for. Don’t automatically assume that because it's associated with the IRS, it's got to involve a lot of paperwork. This may be why so many records came back as true positive responses.”
Because it’s relatively easy to corrupt data, Brian stresses the importance of introducing other data sources into your decisions. “There's no golden record. That's why it's so important to make sure you start with understanding what you’re seeing, what the market is seeing with synthetics, and what are the correct data sources you need to bring together to make better decisions. Because there's not a one-stop shop to get that single answer, unfortunately.”
With ChatGPT and artificial intelligence technology starting to be used for fraud, e.g., increasingly accurate deepfakes, how do you think the new wave of generative AI will affect synthetic fraud risk, and how should we prepare for this?
Unfortunately, Jeff notes that it’s reasonable to expect the quality of generative AI to increase rapidly — whether it’s used to create fake documents, selfies, or other information uploaded to your platform. In fact, even though he’s spent years working in fraud prevention, when he saw the photo of Pope Francis in Balenciaga, he found himself making a mental argument for how it could be plausible.
Jeff used to put fraud into two general buckets: highly sophisticated fraud and high-volume, less sophisticated fraud. Now he worries these buckets are commingling and that high-volume fraud will grow increasingly sophisticated, a concerning prospect because these tools are accessible to everyone.
As Ahmed puts it, “If I'm using ChatGPT to improve my day or make my life easier, I bet fraudsters are also using it to make their lives easier as well. That's why I think we'll probably start to see more variety of types of attacks — because generative AI is actually generating different types of fraud attacks.”
Brian concurs, saying it’s easy to weaponize these types of tools. However, he ends by pointing out that there are also new technologies being developed to counter these attacks. “There are more advancements in technology with document verification that are coming. And now, some verification places, instead of just a static photo, will do a video image where you have to turn or repeat a number. The more you have the deepfake do various different things, it allows for the technology to identify some of the nefarious behavior. So it's definitely going to weaponize the bad actors, but I've read that we're making some strides to try to control those things as well.”
Interested in learning more about how to identify and mitigate synthetic fraud? Check out the main discussion recap here.