Industry
Published June 09, 2025
Last updated June 09, 2025

(Annotated transcript) Deepfakes and AI-based fraud: Strategies to protect your business

The expanded and annotated transcript for the Deepfakes and AI-based fraud: Strategies to protect your business webinar
Louis DeNicola

What’s in this annotated transcript?

We took the webinar’s transcript and spruced it up a bit to create a more helpful and readable document. As a result, this isn’t a word-for-word transcript. Instead, you’ll find: 

  • Extra insights: Pat Hall and David Maimon shared some additional thoughts with us after the webinar. We added and color-coded them in the transcript below. 

  • Fewer fillers: We lightly edited the transcript and removed filler words (um, ah, so, I mean), repetitions, and the like.

  • Links and visuals: We added links to related resources, a few of the slides from the webinar, and some images that weren’t shared during the webinar.

You can also watch the webinar and follow along with the slide deck.

[0:00] Deepfakes and AI-based fraud: Strategies to protect your business

Kerwell Liao: Hi everyone, thanks for joining. We're really excited to have you in this webinar on deepfakes and AI-based fraud, as well as strategies that you can use to protect your business against that kind of fraud. We're going to get started right now with a couple of housekeeping items so that you're aware of how things will go today. We're going to record this session and share it via email after the fact. If you can't stay for the entire session, don't worry, you'll get that in your email after the fact. The one exception is the live Q&A: we're not going to record that, so anyone submitting a question can feel free to ask any type of question they would like.

Persona
Notes:

Although we had planned on saving the Q&A until the end, the webinar turned into a lively chat. Pat and David answered most of the questions throughout the hour, and everything is available in the recording. There were a few questions we didn’t have time to answer live. We’re adding them to the end of the transcript in a bonus Q&A section. We’re also keeping all the questioners anonymous.

On the note of Q&A, please submit those questions through the chat in the webinar, and feel free to submit them at any time. We'll try to answer them if they're relevant, and if we can work them into the conversation live. 

But we also do have time, as I mentioned, at the end of the webinar to address any Q&A that we are not able to address live during the webinar. Feel free to keep those questions coming, or even if you have any comments, if you want to react to any of the content that we're sharing, any of the topics that we're sharing, feel free to submit those in the chat as well. And then hopefully there won't be any technical difficulties for any folks today, but we know that can happen sometimes, please feel free to use the chat for any of those technical difficulties as well.

As a bit of a preview for how we're going to run the session today. We'll start off by introducing our great panelists that we have on the next slide, and then we'll run through a case study on an example of a common type of fraud that may plague a lot of businesses. 

From there, we're going to talk about some trending fraud techniques that we're seeing across Persona and SentiLink. And then we're going to talk about how businesses can use signals strategically to address the kinds of new fraud techniques that we're seeing that we're going to talk about in section number three today. 

We'll wrap up with some key takeaways to zoom out from a lot of these specific details that we'll run through, and then go through Q&A. With that, I'm really happy to introduce today's speakers. First, I'll hand it over to Pat. Can you please introduce yourself?

[2:12] Introductions

Pat Hall: Hey everyone. My name is Pat Hall, I'm a Product Architect here at Persona. I've only been with Persona for about five months now. What product architects do here is help build solutions for customers in the very unique, bespoke cases we see in the identity world. Prior to this, for the last six years, I was at both Uber and DoorDash, helping to build out their identity programs and background checks as well.

Pat
Notes:

We monitor a combination of inbound customer reports about emerging fraud vectors and our own macro-level insights into fraud trends. We do both because customer feedback gives us an ear-to-the-ground pulse and our internal research helps us catch fraud patterns customers might not see yet.

Kerwell Liao: Awesome. Thanks Pat. Great to have you on board. And now I'll pass it over to Dr. David Maimon. Can you introduce yourself?

David Maimon: Thanks, Kerwell. Yeah, David Maimon, I'm the head of Fraud Insights at SentiLink. I've been with the company for the last year and a half or so. In addition to my role at SentiLink, I'm also a professor at Georgia State University. And my professional development and area of research really focus on online crime, with an emphasis on fraud.

So during the last seven years or so, I've been heavily immersed in darknet environments, swimming there with criminals, bringing all this really cool data back home to SentiLink and the university, and sort of trying to make sense of the online fraud ecosystem.

David
Notes:

There are groups operating on the darknet and on Telegram, WhatsApp, Signal, and other places. We started getting into these communities a while ago, when this ecosystem just emerged, so the vetting process was less stringent. We were able to establish sock puppet identities, talk to the right people, and then get access to the right communities. Now, we get access to more and more communities because we're vetted and people essentially trust our identities.

Kerwell Liao: Fantastic. Thank you so much, David. Great to have you on board as well, and thanks for joining us here on this webinar and sharing your insights from your research. And then I'm Kerwell, a product marketing manager at Persona, I'll be moderating this webinar. And on that note, we would like to share a little bit about Persona for anyone who isn't familiar.

So in a nutshell at Persona, we help businesses and individuals engage in trusted online interactions. And we do that by providing the building blocks that businesses and other types of organizations need so they can collect and verify identity information, as well as orchestrate and automate the entire process from one end to the other. We've created a unified identity platform where businesses can configure these building blocks in any number of ways to suit any use case that they might have related to digital identity. Things like AML compliance, trust and safety, and of course, the topic of today's webinar, deepfake and fraud prevention. That's a little bit about Persona, but I do want to also pass it over to David to share a bit about SentiLink.

David Maimon: Sure. SentiLink is a leading provider of innovative identity and risk solutions. And we're on a mission to increase trust by helping our partners prevent bad actors from engaging in financial fraud using stolen or synthetic identities. What we do on a daily basis is process over 3 million identities. Our risk scores help determine whether the identities are legitimate, synthetic, or stolen. And by doing this on a daily basis, we help prevent around 60,000 identity theft attempts per day. The company was founded in 2017, and since then we've prided ourselves on being able to help the financial ecosystem here in the United States grow.

[5:17] Defining deepfakes and replay attacks

Kerwell Liao: Fantastic. Thanks for that overview, David. And before we dive into the case study itself, I know there's a wide range of backgrounds represented in our audience today. I wanted to establish some common ground with a few definitions for some concepts that we're going to cover throughout the webinar today. And I'm sure a lot of folks have heard some of these terms before, but a lot of people kind of use them to refer to different things. I just want to set some groundwork for how we're going to be referring to them in this content today.

In terms of how we're going to be referring to:

  • Deepfakes: We talk about them in terms of people using them to animate a photo of someone's face that they might have gotten through various means, making it seem like it's a video, or taking one person's face and swapping it onto another person's head. 

  • This is in contrast to things like synthetic faces, which we would consider to be completely net new faces generated by giving an AI tool a text prompt. 

Persona
Notes:

We’ve identified 50 distinct classes of AI-based face spoofs and use these classes to better understand how fraudsters use GenAI.

And then the last concept that I want to talk about, and we actually have some examples to share to bring this to life, is the concept of a replay attack. And the basic idea is that a replay attack involves reusing an image or a video of someone to go through an identity verification flow using that person's face. And if that's a little bit abstract right now, don't worry, we're going to cover some examples with some visuals as well for what that would look like. Stay tuned for that.

So those are some definitions to keep in mind, but actually on that note, we're going to get into the next part of our content and turn it over to this case study. 

[6:57] A case study from a financial institution 

Kerwell Liao: To kick off the case study, I'm going to start off with this pretty standard flow that I'm sure everyone here has seen before, while signing up for various types of services. This is from a company that had built their own onboarding flow, and they were collecting pretty standard pieces of information such as the user's name, email, phone number, their address, Social Security number, along with images of their government ID and a selfie. And in the background they were also running liveness checks to make sure that it was not just an image of someone's face, for example. Pretty common things that financial institutions and marketplaces might collect in their onboarding flows.

I should also note that this example was from before they were working with Persona or SentiLink, but at this point they had about 5 million users already. They had built a lot of their own internal tooling, including this onboarding flow, as well as their internal fraud tooling to alert them to suspicious transactions and other types of suspicious account activity. Based on that tooling, initially things at this point seemed to be working fine. 

The Social Security numbers they were checking against databases checked out, seemed fine. The addresses appeared to be real, and the photos and IDs they were gathering also looked quite real; they also passed the liveness checks that they were running. But then at a certain point in time, suddenly their risk model started flagging more suspicious transactions on new accounts, which was kind of strange. They had never seen this before.

And as they looked into some of these accounts, they discovered a couple of interesting things that definitely seemed suspicious, but didn't necessarily point to fraud one way or another while looking at individual submissions. One of the things that was pretty puzzling for them was that different selfies from the accounts had backgrounds that looked suspiciously similar, but the faces, as you can see here, were very noticeably different. 

[Slide images: three selfie submissions with noticeably different faces but nearly identical backgrounds]
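One way a fraud team could surface this pattern programmatically is with perceptual hashing: near-identical scenes produce near-identical hashes even when the faces differ. Here's a minimal Python sketch using only Pillow; the file names and distance threshold are illustrative assumptions, and a production system would more likely hash background regions or compare learned embeddings rather than whole images.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to an 8x8 grayscale thumbnail and hash each pixel against the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, px in enumerate(pixels):
        if px > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Compare every pair of recent selfie submissions; a small distance on a 64-bit
# hash means the scenes are nearly identical even if the faces differ.
selfies = ["selfie_account_1.jpg", "selfie_account_2.jpg", "selfie_account_3.jpg"]
hashes = {path: average_hash(path) for path in selfies}
for i, a in enumerate(selfies):
    for b in selfies[i + 1:]:
        if hamming_distance(hashes[a], hashes[b]) <= 10:  # threshold is illustrative
            print(f"Review: {a} and {b} appear to share a scene")
```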

At this point, actually, I'll pause and want to open it up to folks in the audience for a quick moment.

[9:02] Guess what was happening

Kerwell Liao: Are there any guesses that you have for what was happening here? And if you were at this business and you wanted to investigate further, what would you do? Feel free to drop some ideas in the chat. We also have some poll questions for you, some choices for what might have been happening. 

One possibility is that maybe different individuals were creating accounts that were initially legitimate, and then maybe using them for fraud later on. Another option is maybe there was an organized fraud ring that was targeting the business, or potentially the same bad actor using realistic masks and just in the same physical location. Could be one of these things. Let's see what folks have mentioned in the chat as well.

Persona
Notes:

Here are some of the attendees’ guesses:

  • Virtual camera injection with deepfake software
  • AI-generated images being submitted as selfies
  • Organized IDT ring, AI use
  • Single individual using filters to produce variations of his face
  • Face swap
  • AI for the photo and virtual camera, location change
  • Hacked accounts
  • AI-generated images used to test and locate weakness in the system
  • Multiple individuals using the same software to create AI-generated images making the output of the selfie look similar

I'm seeing something about virtual camera injection with deepfake software. It's a pretty good guess. AI-generated images being submitted as selfies. Okay, some folks have definitely seen some fraud before, I'm seeing some veterans here in the chat. An organized ring using AI. Individuals using filters. Uh-huh. Face swap. AI for the photo and virtual camera. These are all fantastic guesses, and also thanks for engaging with me and not leaving me hanging here. Keep the guesses coming, but I'm going to move us through the rest of the content as well for the sake of time.

So at this point, the business's guess was as good as any of the ones here, including some of the things being mentioned in the chat as well. 

Persona
Notes:

In this case, it turned out to be a fraudster, or fraud ring, using face swaps.

And the point I'm trying to make is that really, any of the options that we're seeing on the slide here could have been true based on the data that the business was collecting. 

And without access to more types of data or signals, it was very difficult for them to pinpoint the source of the fraud. That brings us to the next section: if you're wondering what kind of data would have been useful for this business, that's what we're going to cover in the rest of the webinar.

Pat
Notes:

A lot of solutioning around identity is the data that you can collect — the depth of it and the fidelity of it — because so much is trying to be spoofed today. It's about gaining the holistic picture of what's going on and making sure you're doing it in a way that’s not susceptible to spoofing or manipulation.

To set the foundation actually for the types of data and signals that can help catch these kinds of examples, I want to first turn it over to our fraud experts here to cover some of the latest techniques that fraudsters are using to commit this kind of fraud. Pat, let's start with you. Can you share some of the latest fraud techniques that you've been paying attention to?

[11:18] The latest fraud techniques

Pat Hall: Yeah. And everybody that's in the chat feel free as David and I keep going on this as well, to keep putting stuff into the chat, we can see everything and log it. It's a pretty common story that we see across the board today in the identity and verification space overall, in trying to protect accounts potentially from ATOs, or just onboarding new accounts as we look to grow the businesses out. 

We all kind of know the mental image historically of the fraudster sitting in the basement with the hoodie over their heads. I think as I can see in chat right now, a lot of us know that's no longer the case. 

  • There's widely proliferated technology: organized fraud rings are leveraging AI, VPNs, and other means today to get into systems.

  • They know the ins and outs of the account flows, they're always testing, and they're looking at how they can spoof different pieces of information. 

  • They know PII on individuals, anything from a driver's license number and the information surrounding it, all the way up to SSN. 

  • They're very good at having that info that they've collected. 

Just a plug for David: if you look at his LinkedIn profile, you'll see some good videos teaching us exactly how they do that. They've become so advanced, it's hard for us to understand how to stop them in a lot of cases. And what we're going to talk about today are some of the techniques that we can use, and the signals around them.

Kerwell Liao: Awesome. Great. Pat, thanks for that overview. One of the examples that I know you and I have talked about that I wanted to kind of share with the audience here is, this example of how fraudsters are collecting some of those signals and repurposing them. Can you talk us through how fraudsters are doing that with collecting selfies in, I would say, kind of clever ways that we haven't necessarily seen before?

Pat Hall: Yeah. It's a great question and I think a common theme, I don't think malls exist too much anymore, but I was at a mall the other weekend with my child and he wanted to pop into a photo booth, so we did that. What you can imagine out there today is, we have a lot of means with mobile devices, desktops to capture our images. We're being asked for that quite frequently. What we will see out there typically is the ability for individuals to capture and intercept those types of signals. 

So, you can imagine I might be at the mall photo booth, but that's not really how it works, it's probably on your mobile device. You're asked to do a verification. The question is, are you on a secure network, or is the verification that you're doing being intercepted? Is it even real, from a verification company?

Pat
Notes:

One of the things that we’ve seen and are concerned about are the fake websites that get set up to collect information. People believe they’re signing up for a dating website and that the dating site is verifying them, but it’s not actually a dating site. We’re seeing things like that a lot.

You can imagine that out there, just using a small example, a fraudster might actually have the ability to capture an image of an individual, you, and log it. So, this is one example. 

Then what that would look like is a potential playback into a verification system, like a Persona or SentiLink. We might see something like this where the data that's captured, and David will go through a few great examples soon, actually comes back into the systems, not from you, but from the individual that has intercepted this or grabbed this data out there overall. And I'll pass it over to David who will show a little bit of what that looks like.

David Maimon: Everything that Pat mentioned is spot on, so I appreciate that. But there's a lot going on in the context of those images. A lot of companies nowadays actually require you to go through a verification process where you take a selfie and pass a liveness test. And unfortunately during the last several years or so, some of those companies have been breached. And this specific data became available in some of the darknet markets we spent a lot of time in. 

So we've seen some of the images, some of the verification which came from those data breaches available out there for reasonable prices. 

Today, a combination of a selfie picture, a driver's license front and back, and the full PII of the individual, including Social Security number, date of birth, and address, goes for $12 per piece of identity. If you buy in bulk, you can get a better price, which is unfortunate. This is one important source for that.

[Image: a listing offering selfie, driver's license, and full PII packages for sale]
David
Notes:

I think this one was for sale on Telegram, but the sellers also have a presence on WhatsApp.

David Maimon: We see, and Pat mentioned this as well, a lot of criminals now launching smishing and phishing attacks in order to collect all this information. We have a lot of really interesting examples of people going to those websites thinking it's their bank's website, or the IRS website, or any other website they normally go to, believing their account has been blocked. And then after providing the password to the scam page, they're asked to provide their PII along with those selfie images, driver's licenses, passports, and so on. Unfortunately, we're seeing a lot of the information coming from those websites as well, and criminals are selling that information too.

It's really a combination of different ways the fraudsters are collecting the data, and then offering the data for sale on the online fraud ecosystem, which is fairly vibrant at this point. And criminals, from novices to more professional ones, have access to all this data, and they can just start using it in the context of onboarding, ATO, and so on.

Kerwell Liao: Thanks for sharing those examples, David. Yeah, I'm a fan of Costco, so buying in bulk definitely resonates with me. But it's frightening to see that that business model has also moved to the fraud landscape.

David Maimon: Yeah, smart consumer, right?

Kerwell Liao: Right. Exactly. Economics works one way or another I suppose. And I think, seeing a lot of the chat, people are talking about how there's a lot of ways to use models to generate these kinds of images. But I think one thing that your example here and the examples that Pat talked about earlier really struck a different chord with me because it's not just using AI to generate selfies. I think we all know that it's gotten so sophisticated that selfies can look really realistic, and it's hard if not impossible for humans and frankly AI models to detect using visual signals whether something is real or fake. But in this case it is real and they're actually finding ways to repurpose that real PII and real faces in these verification flows and impersonate folks. Actually, did you have more you wanted to add on that?

David Maimon: Those selfies are important, right? We see them, but unfortunately in many places around the globe, we're actually seeing people selling their faces as well. Along with the data breaches, you have a lot of people in Russia, for example, and in Colombia, we've seen people selling not only their faces in a picture, but in a video as well. A lot of videos out there are of people turning their heads left and right, up and down, and then they sell them for $25. It's not only GenAI, as you mentioned. It's people selling their faces, and criminals using those faces in order to try and bypass those tests.

Persona
Notes:

404 Media covered the collection and sale of real images and videos in their story, Inside the Face Fraud Factory.

Kerwell Liao: Right. That's a good call out. Well, with all of these examples of how this data gets collected and aggregated, I wanted to turn it over to Pat again to share a little bit about how fraudsters are actually deploying this information in these IDV flows.

Pat Hall: And I'll pick up a few questions as I go through the slide too. One of the questions that we got in chat is, can you actually detect it with the human eye? The answer is no. 

You can imagine where I sit, I see a lot of selfies daily. I see video selfies, right? The UI is no different, but you record in the background what's going on. You can actually replay that selfie a few times and not be able to tell anything. But then, as in the image that Kerwell was showing earlier, you'll see those different individuals with the same exact background come in across five accounts in 10 minutes, and you'll know that something's off. So no, it's very hard visually in certain instances to notice it. There are less sophisticated groups and more sophisticated groups. Good laugh in the chat, the toll scam is getting out of hand too. Yes, I think everyone's getting the text messages on the tolls that come in.

There are a few product questions coming in that we'll answer, but I wanted to show this, and David's got a really good example on the next slide, if you can watch the screen. Imagine that info has been captured, right? Whether they're producing the content with AI to replay it in, or they've actually collected the image of a real person and recorded it. And they've placed that on a fake printed DL for, let's just say, the US, but they're clearly not in the US. Then they're going to try to inject it. 

[Image: a captured face placed on a fake printed driver's license]

When you hear this term of video injection attack, if you're not familiar, we are typically seeing [the created or stolen PII] being input into systems that rely on visuals, and on other checks, in order to beat them and look like a unique individual. In a lot of cases too, with ATOs, they're trying to replicate the likeness of the account owner themselves. They've targeted them, they know the account owner's information, they've gotten in.

Now, the business is saying, "Hey, this looks risky. There's a device change, there's something else going on. Let's send a selfie or a gov ID verification out." That's pretty common with what happens. 

What fraudsters have done is figured out this mechanism to replicate that, whether via AI or, if it's a new account creation, just making info that looks very real. They have systems, which David will show in a quick second, that actually put that into the verifications. And you can see, it looks very real, nothing looks off on the surface, but what we're going to talk through later are the signals that you can pick up when that's actually occurring.

[21:50] Examples of injection attacks

Kerwell Liao: Perfect. Thanks for walking through this illustration conceptually. And yeah, Pat, as you mentioned, David has a great example of this. Let's go to that slide. And then David, do you want to talk through what we're seeing here?

David Maimon: Sure. Maybe before we play it, I just want to emphasize that what we're seeing here is just one of the ways that folks are bypassing the liveness test. 

There are a couple of approaches where you can attack the camera, and essentially inject an image, which the camera will think that it's looking at a live image. That's one way to go, the other way to go is to do what folks are doing in this example, and now you can play it.

So they take an image, or a video, I'm sorry, of this young lady, which could have been GenAI or a real person, and then using an Android emulator, they simply feed it into the emulator, which takes over the camera, and then the camera again thinks that the person is real. And at the end of this video, which we can't really show the rest of, you see that the verification was successful.

There are several ways folks are trying to bypass the liveness test. We are seeing them using the emulators, the Android emulators, and we're seeing them using other technology and software such as OBS Studio to try to bypass the liveness test. If they use an iPhone, they AirPlay the iPhone into the computer, and then that's how they take over the computer using OBS Studio. There are several approaches that folks are using. 

David
Notes:

We also see tutorials for sale that teach people how to do this with an iPhone or Android.

But the bottom line is that once you have that setup going, you can bring in a GenAI created video, and it's fairly easy to create those videos with ChatGPT, or Krea, or a real person video and simply inject it in the camera in order to bypass those liveness tests.

Kerwell Liao: Right. That's also quite frightening, how many different ways there are to spoof the camera and spoof the device. And I think seeing this video example, seeing how easy it is in action, also paints that picture for me.

David Maimon: If I can jump in. There's so many videos that we're seeing. One of my favorite videos, we haven't really shown it here, but it's a mind-boggling video where we see on the bottom of the screen, you're seeing a real person on the right. To the left, you're seeing an image, a black and white image that the criminals are using, and then in the middle you're seeing how these guys swap faces between the real person and the image in the middle. You're seeing that, and then you're seeing in that video how they're able to attack the camera and bypass liveness tests, right? Again, mind-boggling to see the different opportunities, the different ways these guys are trying to target the solutions we currently have.

Kerwell Liao: Right, right. As we talked about-

Pat Hall: If I could-

Kerwell Liao: Go ahead, Pat.

[24:59] More/less vulnerable systems and flows

Pat Hall: If I could also jump in, because there's a lot of good stuff coming in chat that I want to make sure that we get to. Again, this is supposed to be super interactive, we can keep reading through the deck with everyone, but also just want to make sure we pick up a few things that are probably good questions for everyone. 

There was a question on what is a virtual camera before David ran through what he did. If you still have that question or any deeper dives, let us know. But I think that's a good overview of what the virtual camera looks like. We can definitely share visuals. I think David, in your example, that's something we can take as a follow-up, so we'll do that. 

Persona
Notes:

We aren’t able to share the video that David referenced above, but here's another example of an injection attack.

[Video: an example of an injection attack]

An example from Persona that wasn’t shared during the webinar

One of the really good questions we just got that I wanted to share with everyone is whether certain systems are more vulnerable than others to these types of attacks today. iOS is decently secure; nothing's infallible, right? Systems get breached. Android, for example, is a lot more susceptible. Any web-based verification is incredibly susceptible to fraud today. 

Pat
Notes:

There has always been a deficiency with web-based verifications, and it’s growing because more folks have their hands on GenAI tools. Web-based verification doesn’t let us detect a lot around the transmission mechanism itself, such as what device a person is using, which we can use to catch deepfakes. Folks usually aren’t buying multiple devices, but they can switch their browsers easily to hide their tracks with web verifications.

So you can't shut those things off, right? You run a business, you need to focus on conversion. I sat in that seat for the last six years. How do we make sure we get the entirety of the user base that you want to target, but also do it in a way that the fraudsters can't exploit those kinds of gaps? It's a really important balance, but to the question, yes.

And then I want to double down on something that David said, and if there's any follow-up questions, let us know. David, I'll point it to you. Can you walk through a little bit of what an emulator actually is, what that does?

[26:36] What is an emulator and how fraudsters buy or create PII

David Maimon: So, it essentially pretends to be your smartphone, right? Which is again really interesting. It's pretending to be a smartphone that you can control from your computer, and it has the same functionality. You can have all your apps on the emulator, and if the emulator is on your computer, you can connect it so that the default camera is the emulator's camera, which then allows you to feed in videos and pictures and so on.
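On the detection side, a server can score device metadata reported by a client SDK for common emulator tells. Here's a minimal sketch in Python; the field names and indicator lists are illustrative assumptions, not any vendor's actual schema.

```python
# Hypothetical server-side heuristic: score reported device metadata for
# common Android emulator tells. Indicator values below are well-known
# emulator identifiers, but the schema itself is an assumption.
EMULATOR_HINTS = {
    "hardware": {"goldfish", "ranchu", "vbox86"},   # common emulator hardware IDs
    "manufacturer": {"genymotion"},
    "model_substrings": ("sdk_gphone", "emulator", "android sdk built for"),
}

def emulator_risk_reasons(device: dict) -> list:
    reasons = []
    if device.get("hardware", "").lower() in EMULATOR_HINTS["hardware"]:
        reasons.append(f"hardware={device['hardware']}")
    if device.get("manufacturer", "").lower() in EMULATOR_HINTS["manufacturer"]:
        reasons.append(f"manufacturer={device['manufacturer']}")
    model = device.get("model", "").lower()
    if any(s in model for s in EMULATOR_HINTS["model_substrings"]):
        reasons.append(f"model={device['model']}")
    if not device.get("has_telephony", True):  # many emulators report no telephony radio
        reasons.append("no telephony radio")
    return reasons

print(emulator_risk_reasons({
    "hardware": "ranchu",
    "model": "sdk_gphone64_x86_64",
    "has_telephony": False,
}))
```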

Kerwell Liao: Yeah. A lot of different technologies that we're talking about here. And I think, Pat, it was a great point about different types of devices or ecosystems having their different vulnerabilities, or protections, frankly. That actually brings me to another related topic that I know you've done a lot of digging into, David. We spent a lot of time in this first portion talking about deepfakes and selfies, but obviously there's a lot of other information that folks are collecting through these types of flows. Can you talk about how that kind of information, Social Security numbers or names, for example, is getting spoofed?

David Maimon: When we talk about the verification process, right? We know that there are several layers. We have the first layer when folks provide their PIIs, and then if there's a second layer that a vendor will have out there, they will be asked to sort of provide the images, the driver licenses, the liveness test, and so on and so forth, right? But folks, when they want to onboard at a bank or a governmental agency, they need to have those important sorts of PIIs out there. And what they will do in the context of stolen identities, they will buy the identity, or they will find the identity, they will find all the PIIs. As Pat said, they will collect a lot of information around that, and then they will manufacture a lot of documentation around it. 

They will manufacture fake driver licenses, they will manufacture fake passports, they will manufacture utility bills as we're seeing on the image right here in order to go along with the identity.

[Image: a fake utility bill created to support a stolen identity]

So again, when we onboard, we need to provide all this information and prove that we are alive. Those documents are supposed to be very important in the process of the verification. The problem is that, as I mentioned earlier, those documents are very easy to spoof. 

We're seeing a lot of people offering those services for sale, those services of creating those fake documents, very high quality like we're seeing here in the image. And when you take the identities that some of these vendors disclose out there on the online fraud ecosystem, and you check them with our systems, you actually see some really interesting things. 

For example, in the context of the identity we're looking at on the slide right now, which essentially came from the utility bill, we're seeing a slew of Social Security numbers associated with it. That's an indication that the specific identity is involved in first-party fraud, probably changing Social Security numbers and phone numbers quite consistently, and using a slew of emails, right?

So there are a lot of confusing signals, so to speak, which suggest that at the end of the day there may be an issue with an application coming from these specific individuals. In addition to those fake utility bills, we also see a lot of driver's licenses, but also fake Social Security cards. 

This example came from recent research we engaged in, where we tried to figure out the markets, or the ecosystem, around synthetic identities. As part of that, we put together a list of clearnet markets where you can buy synthetic identities, and Facebook Groups where you can buy those identities. And we started communicating with some of the vendors in order to see whether they could actually prove that they can create the identities.

David
Notes:

Anyone can try to get into these fraud communities, of course. It's just a matter of finding the communities, getting into the first group, trying to find some interesting links or interesting forums. You have to be consistent and not talk like you are gathering intelligence.

But unless you develop relationships, you will always be on the surface level. If you want to get deeper and deeper, you need to build trust, talk to people, maybe purchase something here and there, and that will allow you to get to more sophisticated actors. There's a science to it.

In one of the conversations we had, we got this Social Security card you're seeing in the image; that was one piece of evidence that the guy can essentially provide you with a fake identity. 

[Image: a fabricated Social Security card provided by the vendor]

So what we've done is run this identity, the Social Security number of this identity, through our databases. And we realized that, again, this person really exists; it's a 29-year-old female who has, of course, a real Social Security number. But at the same time, she's using two other Social Security numbers, along with the one on the card here, to open new bank accounts or take out new loans. 

We see a lot of that happens, and that speaks volumes in my mind to the different signals that we need to take into consideration when we engage in fraud prevention.

[31:55] How to use different signals to spot fraudsters

Kerwell Liao: That's actually a perfect segue to our next section on signals. We've spent a lot of time with Pat and David sharing the techniques that fraudsters are using, and a lot of really great examples of how people are putting together the data they've captured from various places to try and spoof these flows. 

So, what can businesses actually do? What are the signals that they can actually use? Let's talk about those. And Pat, let's start with you. What are some ways that businesses should think about whether or not to trust a particular submission or account? What are the types of data or signals that they should be looking at?


Pat Hall: Yeah, that's a great question. And again, I see a number of questions in the chat, one on ATO, which is account takeover. Thanks somebody for clarifying that earlier. We will get to these at the end, I think we'll have plenty of time. We'll get to the emulator questions as well, and how we can get through that. So, just want to log that we'll get to those.

To Kerwell's question, one of the things I want to flag to everybody on this call: the signals that you gather don't have to mean increased friction for the users you're dealing with. You can collect passive signals in the background. It's actually the best way to do it, because the fraudster doesn't know which friction they're trying to beat. 

An example is, yes, you may insert a gov ID challenge, or a selfie challenge when you detect something weird is going on. It's also a bunch of the passive signals behind the scenes that are going on with those submissions that we can look at. And Kerwell, if we can go to the next slide, we'll talk through it together.

[Slide: signals that can be collected behind the scenes during a verification]

So, loaded visual, but it really gives you the picture of what we can look at behind the scenes when the submissions come in, right? We're all familiar with the visuals. You can see a selfie, we saw the selfies earlier with different backgrounds. You have a way to link that, there's something called velocity checks where if they're coming in too fast and you see those types of hallmarks, you can kind of link them together into a bundle and try to block it.

Pat
Notes:

Velocity checks are really about speed. We’re looking for attributes that are quickly repeating over and over again. For example, in a certain customer environment, we might see that the email risk reports show that there are usually 10 disposable emails used for user sign-ups in a week. But then we see that there are 100 in a week. The velocity check highlights that rate of change and we can flag that. It might be legitimate — maybe there was a new promotion that week — or it could be a sign that there were fraudulent sign ups.
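To make that concrete, here's a minimal Python sketch of a velocity check, using the disposable-email example from the note above. The windowing logic and the 3x-baseline threshold are illustrative assumptions, not Persona's actual implementation.

```python
from collections import deque
from datetime import datetime, timedelta

class VelocityCheck:
    """Flag when an attribute (here, disposable-email sign-ups) spikes above its baseline rate."""

    def __init__(self, window: timedelta, baseline_per_window: float, multiplier: float = 3.0):
        self.window = window
        self.threshold = baseline_per_window * multiplier
        self.events = deque()

    def record(self, ts: datetime) -> bool:
        """Log one event; return True if the windowed count now exceeds the threshold."""
        self.events.append(ts)
        while self.events and self.events[0] < ts - self.window:
            self.events.popleft()
        return len(self.events) > self.threshold

# Baseline from the note above: roughly 10 disposable-email sign-ups per week.
check = VelocityCheck(window=timedelta(days=7), baseline_per_window=10)
start = datetime(2025, 6, 1)
flagged = [check.record(start + timedelta(minutes=i)) for i in range(100)]
print(any(flagged))  # True: 100 sign-ups in under two hours blows past the 30-per-week threshold
```

A flag here isn't a verdict; as the note says, a spike might be a promotion rather than fraud, so the output feeds review rather than an automatic block.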

Really, what's important is some of the stuff at the bottom of this slide, in my opinion, which is in the background. 

A question that was asked, can you actually peer through the emulator? Can you find what they're doing to try to avoid detection? And the answer is yes, it's not perfect in a lot of cases, but you can look through that info to try to see what they're doing.

Sometimes when they use an emulator, it shows up visually. There's a certain emulator that folks use for gaming, and will try to use to beat identity verification techniques, and it leaves a very distinctive color around the edges and the border. That's one example of how you could detect an emulator visually. The other thing is, as David talked through a little bit, an emulator will let a fraudster quite literally impersonate a device. 

David
Notes:

An attendee asked whether a device possession check could defeat an emulator. It won’t necessarily, because the fraudster could still receive an SMS on their device.

Sometimes the fraudsters get lazy and they don't do that, and you can pick up, "Hey, it looks like they submitted a picture that was clearly taken in a car, but I'm seeing a Windows desktop is what submitted it. That doesn't look right. There's no way that was actually a Windows desktop." So, there are signals like that, that you can look at in the background.
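Here's a minimal sketch of that kind of cross-check: comparing what a submission claims about its capture against the device that transmitted it. Every field name below (platform, EXIF camera make, virtual camera driver) is a hypothetical stand-in for SDK-collected signals, not a real product schema.

```python
# Illustrative cross-check of capture metadata vs. the transmitting device.
def capture_mismatch_flags(submission: dict) -> list:
    flags = []
    platform = submission.get("platform")               # e.g., "windows_desktop", "ios"
    exif_make = submission.get("exif_camera_make", "")  # e.g., "Apple", "samsung"
    if platform == "windows_desktop" and exif_make in {"Apple", "samsung"}:
        flags.append("phone-camera EXIF data on a desktop-browser submission")
    if submission.get("virtual_camera_driver"):         # e.g., "obs-virtualcam"
        flags.append("virtual camera driver present: " + submission["virtual_camera_driver"])
    if submission.get("capture_source") == "file_upload" and submission.get("live_capture_required"):
        flags.append("file upload where a live camera capture was required")
    return flags

print(capture_mismatch_flags({
    "platform": "windows_desktop",
    "exif_camera_make": "Apple",
    "virtual_camera_driver": "obs-virtualcam",
}))
```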

And then there are also behavioral items on the submission itself. You can really just think of the selfie and the gov ID as a challenge, but how long did it take to complete, right? If someone's sharing an account with somebody else, they might have to physically go to that person's location to do the selfie with the individual they're borrowing it from. We're really operating in a digital world today, so was the submission almost too quick? When they typed in their PII, did it look like it wasn't a natural human doing it?

There was a world I lived in historically where, when folks were signing up, they quite literally pasted the PII in with a program, and because it was stored in all caps, they would just paste an all-caps name in, and you could tell it was fraud from that. 

So, just to give you an idea, there are all sorts of signals you can use in setups and logging. It's not simply about the verification challenge that might be presented to a customer or individual itself, it's all these things that are happening kind of behind the scenes and how we can log it.
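A minimal sketch of those behavioral heuristics, completion speed and pasted all-caps PII, in Python. The event fields and thresholds are illustrative assumptions rather than anyone's production rules.

```python
# Sketch of the behavioral heuristics described above: implausibly fast completion
# and programmatically pasted, all-caps PII. Field names and thresholds are illustrative.
def behavioral_flags(event: dict) -> list:
    flags = []
    if event.get("seconds_to_complete", 9999) < 5:
        flags.append("flow completed faster than a human plausibly could")
    name = event.get("submitted_name", "")
    if event.get("name_was_pasted") and name.isupper():
        flags.append("all-caps name pasted in, a scripted-entry pattern")
    if name and event.get("keystroke_count", 1) == 0:
        flags.append("PII fields filled with zero keystrokes")
    return flags

print(behavioral_flags({
    "seconds_to_complete": 3,
    "submitted_name": "JOHN Q SAMPLE",  # hypothetical example value
    "name_was_pasted": True,
    "keystroke_count": 0,
}))
```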

Persona
Notes:

We have a list of 50+ risk signals that fraud fighters can use to detect different types of fraud attacks.

Kerwell Liao: That's great. That's great, Pat. Thanks for walking us through all these types of signals. And I think it was a really great point that there are a lot of signals in the background that are not necessarily visible to the person going through the flow that can be very useful in these kinds of investigations.

So I guess, in a very similar way to how these criminals and fraudsters have a lot of different ways to capture PII, selfies, and government ID images and then try to deploy them, businesses can match that with this breadth of signals, whether it's visual signals, device signals, or behavioral signals, as you mentioned. 

I think the thing that would really tie this together for our audience, and I'll turn it over to you, David, for this: conceptually, you can understand how a lot of these signals could help you pinpoint something like the example Pat mentioned, where someone is taking a picture from their car but they're on a desktop. That kind of doesn't smell right. But how should a business think about gathering these types of signals and actually using them to inform how a user moves through various flows?

Pat
Notes:

It's really important to have a controlled environment where the signals will be captured and to know what signals are important for your use case.

There are certain use cases where you might not even need a selfie. You might just need a government ID and the location where it was captured. In some cases, that could be enough to prove somebody is who they say they are.

But with a lot of the use cases, you do need to collect more data: location, gov ID, selfie, etc. And you might even need to do it repeatedly to understand that the individual is actually there.

David Maimon: You need to look at consistency across the signals, and I think that's exactly what Pat is saying, right? You need to make sure that you collect as much information as possible so that when you review the case, things make sense. I can tell you that in the context of account takeover, and Pat and I love talking about account takeover in that sense, we're seeing criminals selling information about the IP addresses folks are using. In a way, they download remote desktop protocol software onto some of the computers they're hacking, and then they're getting access to victims' bank accounts from the victim's own computer.

Pat
Notes:

One of the other things we’re also seeing is people using IP addresses near stolen identities — literally within two miles — to sign up for services that aren’t available in their home country.

Criminals are already aware of the fact that we're collecting all this fingerprinting in order to provide effective fraud solutions. But what they have difficulty spoofing is consistency across the signals, and that's what we need to keep in the back of our minds. It's very, very easy to come up with a new image of a person, very easy to maybe bypass the camera or spoof an IP address. But it's very complicated to spoof someone's history, like a complete sort of history. 

And so, when we collect the signals, at least in my mind, we need to make sure that there's consistency. We review the consistency, not only in the context of the technology that folks have been using in order to create a bank account or log in to their bank account, but also the history around them, telephone numbers, and things of that nature. Hopefully that makes sense.
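A minimal sketch of that consistency idea: score a new submission against the identity's known history instead of judging it in isolation. The signal names and weights below are illustrative assumptions, not SentiLink's scoring model.

```python
# Weight how many core signals break with this identity's known history.
# Signals and weights are illustrative assumptions.
HISTORY_CHECKS = {
    "ssn": 3.0,        # a never-before-seen SSN for this identity is a strong signal
    "phone": 1.5,
    "email": 1.0,
    "device_id": 2.0,
    "geo_region": 1.0,
}

def inconsistency_score(current: dict, history: list) -> float:
    score = 0.0
    for signal, weight in HISTORY_CHECKS.items():
        seen = {h.get(signal) for h in history if h.get(signal)}
        if seen and current.get(signal) not in seen:
            score += weight
    return score

history = [{"ssn": "XXX-XX-1234", "phone": "+15550001111", "device_id": "device-A", "geo_region": "GA"}]
current = {"ssn": "XXX-XX-9999", "phone": "+15550009999", "device_id": "device-Z", "geo_region": "GA"}
print(inconsistency_score(current, history))  # 6.5: SSN, phone, and device all broke history
```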

Kerwell Liao: Yeah, that makes total sense. I think that really brings to mind zooming out and gathering a broader understanding: not just looking at these signals at a one-time submission but, as you mentioned, looking at the entire history, because it gets harder and harder to spoof an entire history. 

And at the end of the day, one of the things that we're trying to do here is increase the amount of friction that bad actors or fraudsters face, to change the ROI calculation here.

So another question I have, and David and Pat, this could be for either of you, so feel free to chime in as you see fit: how should a business think about when the right time is to collect something like a device hardware signal, or to run a Social Security number check? And how should that inform what the follow-up steps might be for the business?

[40:33] Balancing fraud prevention and friction

Pat Hall: So I think it's the biggest challenge, right? And I think folks on the call are aware of this too. 

Anytime you insert any level of friction to collect info around identity, it's going to potentially hurt growth, right? Why am I putting my Social Security number in here? Or, "Hey, I might not have my gov ID on me when I get this. I'm signing up for this application, and I left it somewhere else." Oh, we could have lost conversion there. What's tough is that it's always that balance of conversion when you're grabbing the info. And then, how much info do you actually need on an individual, and when do you need it?

Pat
Notes:

You don't want to go from noticing a fraud problem to collecting every signal under the sun. For example, for conversion, growth, and experience reasons, you wouldn’t want a user to be taking a selfie every hour. It’s about finding the balance and using a data-driven approach for your use case.

You can ask: What is your use case? What are the standard practices for identity in the area that you operate in today? Then, do you want more stringent circumstances or do you want to be more open? 

I think what's a little bit scary in the fraud space today is the velocity, how quickly fraud moves. I lived in a world with gig marketplaces for the last six years prior to this, and if you didn't insert that friction at onboarding that you needed to, that account could instantly be doing damage within minutes of it accessing the platform. And you're competing for potential hours of individuals in a gig marketplace. If your marketplace's funnel is not that efficient and someone's trying to make rent at the end of the month, they might go do work on one platform versus another. Finding that balance is incredibly important.

I think it's really important to set a baseline understanding, even if it's done passively: what device fingerprint did that account sign up with, et cetera. 

Going back to the account takeover example again, one of the things that we see daily is, individuals actually doing real verifications, it's the account owner, you can imagine it's like a financial account. And then a minute later you can see the fraudster trying to do a GenAI deepfake. 

Pat
Notes:

We’re seeing this a lot with sophisticated actors social engineering the user. If you have a setup that doesn’t require reverification when the device changes, even if a verification happened in the last hour, they might be able to get in.
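A minimal sketch of the step-up rule described in that note: a recent verification shouldn't carry over to an unknown device. The policy values and field names here are illustrative assumptions.

```python
from datetime import datetime, timedelta

# Sketch of a step-up policy: even a verification from the last hour doesn't
# cover a brand-new device. Thresholds and fields are illustrative.
def requires_reverification(session: dict, account: dict) -> bool:
    new_device = session["device_id"] not in account.get("known_devices", set())
    recently_verified = (
        datetime.utcnow() - account.get("last_verified_at", datetime.min)
        < timedelta(hours=1)
    )
    return new_device or not recently_verified

account = {"known_devices": {"device-A"}, "last_verified_at": datetime.utcnow()}
print(requires_reverification({"device_id": "device-B"}, account))  # True: unknown device
```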

It's clear that they're on the phone with the actual individual, grabbing their ID info and trying to replicate it, and sending the good user through that verification flow themselves.

Persona
Notes:

An attendee asked what device information you should look for.

So the question in the chat, it's really important to establish the baseline on an account as early as you can.

Obviously, you're measuring the growth trade-offs and other things that your business is willing to take on. But if you don't, think about the velocity of damage they can do on new accounts, and then they rinse and repeat and scale it, doing it over and over again. Or think about the existing accounts they're trying to take over, to either steal an individual's earnings or actually take the financial assets out of the account that they've breached. They get very sophisticated at socially engineering folks to give them the info they need. 

Again, David has incredible stuff on his LinkedIn profile that will show you videos of how that works. And we've all gone through it, right? We've gotten an email about an account breach or something else. It's really sophisticated how they're doing it, and it's really hard as an individual who's not in the identity space to parse when someone's trying to phish you versus not today. But that's how I'd summarize it. David, I don't know if you have anything.

David Maimon: I agree with you 100%, Pat. The only thing I will add is your ability to be in compliance with regulations, because the regulations around the type of information we were talking about, and the collection of this type of information, are quite rigorous. And as long as you can meet that level of compliance, then I believe you're in a position to collect this type of information. There are a lot of really interesting examples of how this type of information can be used for legitimate and illegitimate purposes. And we were talking about this a couple of days ago.

I'm in the middle of reading this really cool book, Your Face Belongs to Us by Kashmir Hill. And this book talks about a startup company which wanted to do identity verification for law enforcement around the globe, and which essentially scraped all the images and faces of people from Facebook and other social media, then tried to match individuals in the real world with those faces, and then pushed information to law enforcement in order to help them solve criminal cases. 

Of course, this company and the process got a lot of heat because of the type of operation that they were running. And this is a very extreme example, but this is definitely an example we should have in mind when we're talking about compliance, the type of data we're using, and the way we're using those different signals in order to verify folks' identities. Hopefully that makes sense.

Kerwell Liao: It does, it does. And actually, I'm seeing a question in the Q&A about how fraudsters can figure out what face matches what account owner in an account takeover scenario, for example. I think it's kind of similar to what you were talking about with scraping faces and then trying to match them, just a different application of that. But do either of you have any reactions to that? How would you maybe investigate what's going on there?

David
Notes:

If you have someone's account or you're buying it from the ecosystem, at the end of the day, you'll be able to get the entire PII of that on that individual. You'll get the name, you'll get the address, you'll get the Social Security number. You even get the IP address folks are using to connect to their bank account.

Oftentimes, you can also buy the remote desktop protocol access that folks establish when they took over the account. Once you have all this information, it's fairly easy to simply find the face online.

And if you have remote desktop protocol to the “client” — the victim — you can get everything on the computer. You essentially have access to the computer as well. So you'll be able to take a picture when they're working on the computer. You'll be able to harvest all the data they have on the computer and then use that. Really, the sky's the limit in terms of what you can do. You can take pictures of them and their family. It's just mind boggling what you can do.

Pat Hall: It's incredibly important to have device-based info. I would say a device today is one of the strongest-fidelity signals; it's also important that you integrate something that can actually log the device info well. 

Pat
Notes:

Also, if you don’t have a baseline to go off of, you don’t have a metric to use when someone tries to take over the account. It might be a selfie, device fingerprint, location signature, or something else. Those things can change over time, maybe they age or get a new device, or you can notice a change when there’s an attack. Then, you can step up to a higher verification level.

Frequently you'll see in verifications, somebody might start on a desktop when they're at home in the evening, move to a mobile device to verify. But we all have social profiles to a certain extent, maybe some historically, maybe some not. We all have a digital footprint that's out there sometimes with our likeness. What's very clever about it is, how they target and pick up on who you are, what you may be doing. And I wouldn't just say, think about this in the financial sense.

One of the things that's occurring most frequently today is impersonation on the social platforms, to synthetically build a reputation for something that might not be real by mimicking an actual individual. I would say that's the challenge we face today: these digital identities that we've created over the last decade are being used against us in a certain way. And that's why it's really important to use the signals around that identity. 

Hey, that IP location just really doesn't make any sense for that individual, so let's really look at this thoroughly as a risky account. Oh, they knew how to spoof the IP location. Let's look at these other attributes. Oh, it's one of these IPs that's on the risky list that David was talking about earlier where they're actually trying to buy the info out there. 

There are a number of ways that we can target that together and look into it. Even if they match up the likeness and the selfie looks good, there are five other signals that we detected that flag the account as incredibly high risk.

[47:46] Final Q&A

Kerwell Liao: Gotcha. Thanks for taking that question, Pat. I think with that, actually we are moving toward the tail end of our time here. We do have some final takeaways that we want to get to, but I think we can flip it a little bit actually since we're getting such great engagement in the chat and in the Q&A. I'm seeing a couple of questions that I think we can just move into the Q&A, and then we can wrap up with some final takeaways here.

I'm seeing a question here about the example David shared with the Social Security card that they were selling. I think the question is: are they selling physical fraudulent cards, or are they selling images of a Social Security card that they've fabricated? One that I would just toss in there is: is it possible that they created it with AI, and it's an AI-fabricated image with stolen PII, for example? David, do you want to speak to that one?

David Maimon: Yeah, of course. I encourage you guys to read through some of the blogs we've put together at SentiLink around this issue. For the example I shared with the Social Security number, I'm actually talking to the vendor almost on a daily basis at this point, just to figure out the operations, what he's doing, and how they're doing it. Essentially, what they're selling is a synthetic Social Security number: a fake Social Security number, what they call a credit privacy number. A credit privacy number is a nine-digit number similar to a Social Security number; it could be stolen, it could be spoofed, or it could be completely bogus. What they offer to do is give you that number along with an identity that either you build or they build for you.

You take my name, and you bring in the new Social Security number. They will manufacture the driver's license for you, they will manufacture the Social Security card for you. And then you can start using it to build records, public records, as Pat mentioned earlier, to go along with the identity.

And one of the interesting things these guys are offering is attaching the identity to existing credit lines. Again, I'm supposed to contribute to one of the media outlets out there and actually show how it's done, but it's kind of mind-boggling because they have different packages. For example, you can attach the synthetic identity to a tradeline that is two years old with a credit limit of $12,000 (a tradeline is essentially a credit card, right?). And the assumption is that within a week or so after you've appended the identity, you'll get a credit score of between 650 and 700.

They have other, more sophisticated packages, which allow you to add these identities to eight tradelines and get a credit limit of up to $100,000. The tradelines are older, and they help boost your credit score and create this allure of legitimacy around the identity. That is what this guy is doing. This is how we were able to get proof that these identities work: we tested it with the SentiLink cluster to see whether folks have been using it. And the answer is yes. Folks are definitely using this, and we got a lot of other evidence from this person that essentially shows how they're using it.

The market out there is flourishing, unfortunately, not just on the darknet but on the clearnet and on Facebook. Of course, folks are working on the darknet and Telegram as well. It's simply all over the place, and it's unfortunate that we're seeing it. But the good thing is that there are solutions which, at the end of the day, can help you flag the issue.

David
Notes:

There are tutorials for pretty much everything in the fraud ecosystem. It really depends on what you're trying to do, but you can find a tutorial for any type of illicit activity that focuses on fraud. Starting in 2020, during the pandemic, there were tutorials on unemployment benefit fraud and SBA loan fraud. There are tutorials focused on getting into specific companies or bypassing different KYC checks. Some of the tutorials are better than others, but they're there.

Kerwell Liao: Perfect. Those are such great examples, David. I always learn so much from chatting with you and reading your insights. Thank you for sharing that. I wanted to address another question that I'm seeing here in the Q&A about authenticators beyond what exists today. I think the spirit of this question is that, historically, a lot of what we consider to be safe has become vulnerable over time. Things like doc verification and biometrics are also dealing with vulnerabilities around deepfakes and injection attacks.

So, I think the spirit of the question here is, zooming out and looking long-term, the question asks, do you have thoughts on what's beyond the horizon? Pat, do you want to speak to that one?

Pat Hall: Yeah, I think the tough thing is we believe PII, your Social Security number or your government ID information, is very secure, right? Unfortunately, that's not the case anymore. Leaks have occurred, identities are out there. David talked through it and showed you the starter pack. And unfortunately, the unit economics just keep getting cheaper and cheaper. We used to think it was a few hundred dollars; nowadays it seems like it goes for next to nothing when you see these identities resold.

So, I think one of the movements we're seeing, in the US and even globally, is toward things like mobile driver's licenses, right? Hey Pat, we're seeing these fake images on doc verification. Cool, they're doing deepfakes. What's next, right? Is a mobile driver's license, for example, a trusted source in the US?

What I would ask you to take away from this (and again, go look at some of the resources out there) is: how are those mobile driver's licenses actually getting into the digital wallet today? What verification of your identity do you go through to get one in there?

Pat
Notes:

For example, when someone gets an mDL, does the DMV compare the selfie they submit to the existing images in the DMV’s database?

I think you'll be interested to see what that verification process looks like. Think about how deepfakes work. Does it happen in person? Does it not? Are you able to do it digitally? And think about the ramifications of that.

I think this is the toughest question that we have to answer. There's no silver bullet for solving it. It's a layered approach, it's a signal-based approach. And, without collecting too much data (to David's point on compliance), we need to figure out a way to balance ensuring your identity won't be stolen with ensuring the customer joining your platform isn't joining for abusive reasons.

There's really not one great answer, like saying it's going to be mobile driver's licenses, for example, because even that could potentially be exploited eventually, and it becomes a very centralized system once it's open. But David, anything you want to add?

David Maimon: No, I think you covered most of it, and I agree with you 100%. I just came back from RSA, where all the vendors presented their most recent technology. I couldn't really find anything that was very sophisticated on its own, so to speak.

As you mentioned, Pat, it's all about a comprehensive approach: finding signals and making sure there's consistency across the signals. In the context of gen AI, injection attacks, and doc verification, there is technology trying to figure out whether the documents are legitimate. The technology is there.

I have a lot of conversations with DMV commissioners, and they're all very excited about mobile driver's licenses. Having said that, they're not going to let go of the physical card driver's licenses, right? I don't know if you guys know that. They're simply not willing to do that. We will continue to have the card driver's licenses.

With mobile driver's licenses, there are a lot of issues there. We obviously don't have time to talk about that right now. But again, in line with Pat's point, to me it's all about a comprehensive approach, where you gather signals across the board and try to figure out whether those signals are legit or not.

Kerwell Liao: Awesome. Thank you both for sharing those great insights. As I was listening to both of you, I summarized it in my head like this: one takeaway is that the signals we have today are not going to be resistant to fraud forever. They need to adapt over time, as we've seen.

That's the natural progression of things. Another insight I would summarize is that it's also not about trying to collect every signal under the sun; certain signals are going to be more useful than others. You also want to think about the quality of the signals and the actual insights a particular signal is pointing you toward.

So, to the point of some of the questions in the chat about the accuracy of deepfake-detection indicators or a mobile network signal, for example: my takeaway is that those might be good in certain situations, but no individual signal is going to be a silver bullet.

With that, we are getting close to the end of our time. I do want to address one last question, which ties into a final point I want to share. I'll try to summarize the question for the sake of time: what would the recommended approach be for using high-risk signals, like risky devices or IP addresses, to trigger something like a document verification? Is that the correct flow?

The thing I would say there is it really depends on the particular use case and the context. As we talked about before, one signal could point you toward one conclusion, but it could also point you toward another when combined with a variety of other signals in a different context. And this is exactly what we think about at Persona and at SentiLink.

So I just want to share this one last slide for how Persona and SentiLink can work together. 


Persona can work with SentiLink to help collect a variety of these signals, whether behavioral signals or data-based signals like synthetic identity and identity theft checks, and use those to orchestrate the flow.

So, if you determine the risk is high, maybe you do want to run a step-up verification, as suggested in this question. But you can keep the friction low, as Pat and David were talking about, for cases where you're not seeing many signs of risk.
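Here’s a minimal sketch of that risk-based orchestration idea, following the flow in the question. The check names, point values, and thresholds are hypothetical, not Persona’s or SentiLink’s actual API.

```python
# A minimal sketch of risk-based orchestration. Check names and
# thresholds are hypothetical illustrations only.

def orchestrate(signals: dict) -> list:
    """Route a user to more or less friction based on aggregate risk."""
    steps = ["database_verification"]  # low-friction baseline check
    risk = 0
    if signals.get("risky_device"):
        risk += 1
    if signals.get("risky_ip"):
        risk += 1
    if signals.get("synthetic_identity_flag"):
        risk += 2  # data-based signal, weighted more heavily here
    if risk >= 2:
        steps.append("document_verification")  # step up, per the question
    if risk >= 3:
        steps.append("selfie_with_liveness")   # strongest friction, rare cases
    return steps

print(orchestrate({"risky_device": True, "risky_ip": True}))
# ['database_verification', 'document_verification']
```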

There's obviously a lot more here than we can talk about in this last minute. But I did want to wrap up by thanking Pat and David for joining us and sharing their insights today. Thanks so much for sharing your examples.

Bonus Q&A

There were a few questions that we didn’t have a chance to answer live, but that Pat and David shared answers to later. 

When and how do you see fraudsters using cryptocurrency?

David
Notes:

We see them use cryptocurrencies all the time. When you purchase something on the fraud marketplaces, you often need to use cryptocurrencies. You’ll also see folks using cryptocurrency when they're trying to launder money or send money to their friends in different countries. The use of cryptocurrencies is quite high in the context of the fraud ecosystem.

Pat
Notes:

They’re using crypto to gain access to and build reputation with financial accounts and payment services. Once they get in, they can try to move in money from illicit activity and then try to “clean” the funds. That happens with crypto, but frankly, it happens with non-crypto money as well.

How can fraud fighters make their case and get buy-in from leadership?

David
Notes:

I think about this a lot, and I talk to many of my friends about this as well. They often try to do this by showing the ROI. They show how much the organization is losing versus how much it will lose after implementing a specific tool.

There are a lot of conversations right now around retro studies. For example, we can run a retro study and figure out how much money you would have saved if you had our solution last year. That’s one way to get buy-in.

Another way to get buy-in from the C-suite is to show them some of the things your threat intelligence team is able to gather: information about how people are targeting your organization, information about your organization's accounts, etc. Then, show how, if you had a specific tool, you'd be able to prevent those accounts from being taken over.

Those are the two major ways. But often, it's really a combination of showing what you're dealing with, what the criminals are talking about, and how they're targeting your organization, and bringing evidence for the ROI.
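For a sense of what the ROI math in David’s note looks like, here’s a back-of-the-envelope version of the retro-study pitch. All numbers are made up for illustration; a real retro study would replay last year’s traffic through the candidate solution.

```python
# A back-of-the-envelope retro-study ROI estimate. All numbers are
# illustrative assumptions, not real figures.

annual_fraud_loss = 2_000_000  # what the org actually lost last year
estimated_catch_rate = 0.60    # share of loss the tool would have flagged
annual_tool_cost = 300_000     # licensing, integration, review staffing

net_savings = annual_fraud_loss * estimated_catch_rate - annual_tool_cost
print(f"Estimated net annual savings: ${net_savings:,.0f}")
# Estimated net annual savings: $900,000
```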

Pat
Notes:

I think the historical way to convince leadership was to focus on ROI. You would show how much money a specific fraud vector cost the business and try to predict how much worse it could get if you didn’t do anything. Then, you'd compare that to the impact of adding a fraud control, like how much it would reduce fraud versus how much the tool cost or how much friction a change would add.

With the generative AI and deepfake world that we live in today, it's incredibly easy for any number of mischievous individuals to create accounts and commit fraud, so you really can't take a purely ROI-based approach. By the time you calculate the loss versus the friction, the damage is already done. Even if you think you can contain one vector, fraudsters will create something new and clever to abuse your business.

What you need to do is look at what the industry standards are in your space and where they’re going. Then, build toward that as quickly as you can. You don’t want to be the one that’s behind the curve, because the fraudsters will go after the weakest link. They are lazy and want to figure out where they can extract the most dollars.

But in the same way, if your business is growing, they're going to go after you, especially if you're going to become a big player in an area that's open to this risk. You really need the foresight to think ahead about whether you're going to be a leader, how your business model could be susceptible to some sort of fraud, and what type of gating or identity techniques will keep it from scaling.

The information provided is not intended to constitute legal advice; all information provided is for general informational purposes only and may not constitute the most up-to-date information. Any links to other third-party websites are only for the convenience of the reader.
Louis DeNicola
Louis DeNicola is a content marketing manager at Persona who focuses on fraud and identity. You can often find him at the climbing gym, in the kitchen (cooking or snacking), or relaxing with his wife and cat in West Oakland.