They’re Among Us: Malicious Bots Hide Using NLP and AI
Can you tell the difference between a human and a bot online? While it sounds easy enough, technological advancements in artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) are making this task increasingly complex.
I analyze and research cybersecurity trends to predict and protect some of the world’s largest brands from sophisticated threats. Over the course of my career, I’ve seen a shift toward attacks carried out by bad bots — software applications programmed and controlled by bot operators to perform automated tasks with malicious intent.
Research from Imperva found that bad bots accounted for over a quarter of all internet traffic in 2021. They are used by a wide range of malicious operators, including competitors who scrape websites for proprietary information and prices, scalpers who purchase entire inventories of limited-edition items, attackers looking to obtain sensitive data, and more.
Most of these bad bots mask themselves by interacting with applications the way a legitimate user would. In fact, increasingly sophisticated bots can mimic human behavior by cycling through random IPs, entering through anonymous proxies and changing identities.
Unfortunately, that means detecting bad bot activity that abuses APIs and application business logic will only get harder until defenses are equipped to identify these sophisticated threats.
How Bots Are Becoming More ‘Human’
Not all bots are bad, and there are many examples of good bots that provide beneficial services. Chatbots, for example, are ubiquitous and appear on nearly every type of website to assist with consumer-facing roles such as sales, customer service and relationship management.
Powered by advanced AI, many chatbots now recognize psychological, behavioral and social patterns to give the end user a more humanlike experience. Further, natural language processing — a machine learning technique that helps software interpret human language and conversational patterns — enables automation to adapt its responses so they read as realistic human communication.
3 Ways Bad Bots Are Committing Fraud
While innovations in ML, AI, and NLP benefit our daily lives, bad bot operators could exploit these innovations for malicious purposes. Some examples include:
Pretexting
Pretexting is a type of social engineering technique that manipulates victims into divulging personal information. A bot operator could use NLP to train a bad bot to adapt to the social and behavioral patterns of a target in order to impersonate them and assume their identity.
The bot operator could then use the bad bot to communicate with the target’s friends or coworkers via email, social media or text to obtain sensitive information that could be used for other more nefarious attacks such as account takeover, identity theft or data leakage.
Distributed Denial of Service (DDoS)
In a DDoS attack, bad actors attempt to make a server or network resource unavailable to users.
Malicious operators looking to disrupt a business’s operations or knock it offline can train an army of bad bots with NLP to learn the language patterns of a business’s customers. This army of bots could then be used to flood an organization’s social media with complaints, overwhelm customer service phone lines or chat services, or degrade website performance to the point of downtime.
Fake Account Creation
In this type of online fraud, bad actors use bots to automate account creation in order to spam messages, amplify propaganda or abuse promotions.
Using NLP, bad actors can masquerade as legitimate user accounts to sabotage a brand or its competitors.
Protecting Applications and APIs from Humanlike Bots
Recognizing the difference between good and bad bots is essential in a bot prevention solution, but that job is becoming more challenging as bad bot behaviors mirror sophisticated human actions.
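One widely used technique for separating genuine search-engine crawlers from bad bots that impersonate them is a reverse-then-forward DNS check: Google, for example, documents that real Googlebot traffic resolves to hostnames under googlebot.com or google.com, and a spoofed user agent fails the round trip. Here is a minimal sketch in Python; the suffix list and function names are illustrative, not from any particular product:

```python
import socket

# Suffixes Google documents for genuine Googlebot crawler hostnames.
GOOGLEBOT_SUFFIXES = (".googlebot.com", ".google.com")

def hostname_is_allowed(hostname, allowed_suffixes=GOOGLEBOT_SUFFIXES):
    """Check that a reverse-DNS hostname falls under a trusted domain."""
    return hostname.rstrip(".").endswith(allowed_suffixes)

def verify_good_bot(ip, allowed_suffixes=GOOGLEBOT_SUFFIXES):
    """Reverse-then-forward DNS check: the claimed crawler IP must
    resolve to a trusted hostname, and that hostname must resolve
    back to the same IP. A bad bot spoofing the user agent from an
    arbitrary IP fails one of the two lookups."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        if not hostname_is_allowed(hostname, allowed_suffixes):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]
        return ip in forward_ips
    except (socket.herror, socket.gaierror):
        # No reverse record, or the hostname does not resolve.
        return False
```

The same pattern works for other self-identifying crawlers (Bingbot, Applebot, and so on) by swapping in the suffixes their operators publish.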
It is reasonable to predict that bad actors will continue to find new ways to use sophisticated NLP technologies to turn a profit and cause disruption. In the near future, we’ll see more bad bots interacting with humans to gain their trust — adapting to the language, social and behavioral patterns of their targets.
For organizations, this will require a shift in defenses: applications and APIs must be developed with bots in mind. Some proactive steps organizations can take to manage bot traffic include:
- Implement CAPTCHA technology for traffic that comes from outdated browser versions.
- Block IPs hosted on providers and proxy services such as Host Europe GmbH, Dedibox SAS, Digital Ocean, OVH SAS and Choopa, LLC.
- Review web traffic data for unexpected traffic spikes or increases in failed login attempts, as those could be signs of bad bot traffic.
- Understand the ways your site can become a target. Does your site have credit card forms, pricing information or exposed APIs? Those are all website functionalities that can be exploited by automated attacks.
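Two of the steps above — challenging outdated browsers and watching for failed-login spikes — can be sketched in a few lines of Python. The version cutoff and spike threshold below are hypothetical placeholders; in practice you would tune them against your own baseline traffic:

```python
from collections import Counter

# Hypothetical thresholds -- tune against your own traffic baseline.
MIN_CHROME_MAJOR = 110     # Chrome builds older than this get a CAPTCHA
FAILED_LOGIN_SPIKE = 50    # failed logins per IP worth reviewing

def needs_captcha(user_agent):
    """Rough check for an outdated Chrome build in the User-Agent
    string; bad bots often advertise stale browser versions."""
    marker = "Chrome/"
    idx = user_agent.find(marker)
    if idx == -1:
        return False  # non-Chrome user agents handled by other rules
    major = user_agent[idx + len(marker):].split(".", 1)[0]
    return major.isdigit() and int(major) < MIN_CHROME_MAJOR

def flag_login_spikes(events):
    """events: iterable of (ip, succeeded) pairs parsed from access
    logs. Returns the IPs whose failed-login count crosses the
    review threshold -- a common sign of credential-stuffing bots."""
    failures = Counter(ip for ip, ok in events if not ok)
    return {ip for ip, count in failures.items()
            if count >= FAILED_LOGIN_SPIKE}
```

Note that User-Agent strings are trivially forged, so a check like `needs_captcha` is only one weak signal to combine with others, never a verdict on its own.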
In taking these proactive steps, organizations are well on their way to creating a successful bad bot management strategy that protects the customer experience, their brand reputation and the business’s bottom line.