Bot protection

Bot protection refers to the methods and strategies an online company uses to manage its exposure to bots, both good and bad, that might affect its business. While any business operating online may have a bot protection strategy, these strategies play an especially important role in the broader anti-fraud efforts of financial institutions, social media companies, and online marketplaces.

Frequently asked questions

What is a bot?

A bot is a piece of code or software designed to execute a task automatically, typically one that is repetitive in nature. Bots serve a variety of purposes but are commonly used to interact with a website or form. They can also be paired with generative AI to create content automatically.

How can bots be used for fraud?

The primary reason bad actors use bots to carry out fraud is scale. A fraudster working manually can only commit a limited number of crimes in a 24-hour period. By leveraging bots, a fraudster can dramatically scale their efforts, carrying out many more fraud attempts in the hope that at least some will succeed.

A bad actor could use bots to create dozens or hundreds of fake accounts on a social media platform, or to create a large number of fake product listings on an online marketplace. Other examples include leaving fake reviews, running phishing schemes, launching password spraying and credential stuffing attacks, and even carrying out distributed denial-of-service (DDoS) attacks.

What are examples of bot protection strategies?

How a business fights bots will depend on a number of factors, including the specific nature of those bots. Below are a few examples of strategies to deploy for specific types of bot threats:

  • Multi-factor authentication: If a business sees a lot of account takeover (ATO) attacks facilitated by automated password spraying, it can add two-factor or multi-factor authentication to the login process to significantly reduce the success rate of these attempts.
  • Identity verification: Bots are often used to generate fake accounts. Putting in simple identity verification measures during account creation can cut down on the number of fraudulent accounts and ensure a healthier platform ecosystem. 
  • Reverification: Accounts that have been taken over by bots often engage in suspicious activity, such as unusually rapid or repetitive actions. Reverifying a user’s identity when this behavior is detected, as well as during other high-risk moments, can help separate bots from legitimate users.
  • Link analysis: As noted above, bots are often used to create multiple accounts on a single platform. Through link analysis, these fraudulent accounts can be tied to one another via passive signals (such as IP addresses and device or browser fingerprints). The accounts can then be investigated together or blocked outright; a simplified sketch of this grouping approach follows this list.
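
To make the link analysis step more concrete, below is a minimal sketch in Python that groups accounts by shared passive signals and flags unusually large clusters for review. The Account fields, signal names, and cluster-size threshold are illustrative assumptions, not a description of any particular product’s implementation.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Account:
        # Hypothetical fields; a real platform would track many more signals.
        account_id: str
        ip_address: str
        device_fingerprint: str

    def cluster_by_signal(accounts, signal):
        # Group account IDs by a shared passive signal (e.g. IP address).
        clusters = defaultdict(set)
        for account in accounts:
            clusters[getattr(account, signal)].add(account.account_id)
        return clusters

    def flag_linked_accounts(accounts, min_cluster_size=5):
        # Flag accounts that share a signal with an unusually large group,
        # so they can be investigated together or blocked outright.
        flagged = set()
        for signal in ("ip_address", "device_fingerprint"):
            for shared_value, ids in cluster_by_signal(accounts, signal).items():
                if len(ids) >= min_cluster_size:
                    flagged |= ids
        return flagged

In practice, a fraud team would combine many more signals (user agents, behavioral patterns, payment instruments) and weight them appropriately, but the grouping-and-threshold pattern above captures the core of the technique.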

Ready to get started?

Get in touch or start exploring Persona today.