This is part 2 of a two-part series on the evolution from human-driven to machine-driven DDoS attacks. It focuses on how to prepare for such attacks while keeping false positives and false negatives down to industry-low rates.

The Evolution of DDoS

In the early 2000s, simple shell scripts were used to take down a single web page. Usually, one attack signature was launched from a single source IP address. This was known as the classic bot-based attack, and it was effective against an individual web page. However, this type of threat needed a human to launch every single attack. For example, if you wanted to bring ten web applications to a halt, you would need to hit “enter” on the keyboard ten times.

We then started to encounter simple scripts wrapped in loops. With this improved attack, instead of hitting the keyboard every time they wanted to bring down a web page, the bad actor would simply add a loop to the script. The attack still used only one source IP address and was known as the classic denial of service (DoS).

Thus, the cat-and-mouse game continued between web application developers and bad actors. Patches were released quickly. If you patched the web application and web servers in time, and as long as a good design was in place, you could prevent these types of known attacks.

Eventually, attackers started to launch attacks from multiple distributed sources, creating a much larger attack surface. This was known as distributed denial of service (DDoS). The shift from spreading malware manually to spreading it automatically complicated matters further and was a major evolution in DDoS.

Now, attacks from multiple sources could spread themselves automatically, without human intervention. This is when we started to witness automated attacks hitting web applications across a variety of vulnerability types.

Attackers used what were known as command and control (C&C) servers to control the compromised bots. The C&C servers could change the attack vector at random, for example from a User Datagram Protocol (UDP) flood to an Internet Control Message Protocol (ICMP) flood. If that didn’t work, a human could intervene and configure the C&C servers to aim higher up the application stack.

Many volumetric attacks were combined with application-based attacks. The volumetric attacks hit the network gates, causing security teams to panic. No one is ever truly ready for a DDoS attack. However, while the heavy-hitting volumetric DDoS attack from thousands of sources fills up the network pipes, the more serious application-based attack stays under the radar, quietly compromising the precious web applications.

Volumetric attacks can be combated in a number of ways, usually in conjunction with an external scrubbing center. However, they often act merely as smoke to cover up the more dangerous application attack.

These under-the-radar application attacks became known as low-and-slow attacks. A standard firewall or intrusion detection system (IDS) is not designed to capture them; they slip past all the traditional defense mechanisms. Moreover, if the right application security is not in place, such attacks can go unnoticed for months, resulting in data breaches and data exfiltration.
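To make that concrete, here is a minimal, purely illustrative Python sketch of the kind of application-layer monitoring that can surface low-and-slow behaviour: it flags clients that keep sending traffic for a long time while transferring almost no data. The thresholds and names are assumptions for the example, not production recommendations.

import time
from collections import defaultdict

# Assumed thresholds: flag clients that have been sending data for more than
# five minutes while averaging under 10 bytes per second.
MIN_AGE_SECONDS = 300
MAX_BYTES_PER_SECOND = 10

class SlowClientMonitor:
    def __init__(self):
        # client_ip -> [first_seen_timestamp, total_bytes_received]
        self.clients = defaultdict(lambda: [None, 0])

    def record(self, client_ip, num_bytes, now=None):
        """Record that num_bytes of request data arrived from client_ip."""
        if now is None:
            now = time.time()
        entry = self.clients[client_ip]
        if entry[0] is None:
            entry[0] = now
        entry[1] += num_bytes

    def suspicious_clients(self, now=None):
        """Return clients whose traffic is long-lived but barely moving."""
        if now is None:
            now = time.time()
        flagged = []
        for ip, (first_seen, total_bytes) in self.clients.items():
            age = now - first_seen
            if age >= MIN_AGE_SECONDS and total_bytes / age < MAX_BYTES_PER_SECOND:
                flagged.append(ip)
        return flagged

if __name__ == "__main__":
    monitor = SlowClientMonitor()
    # Simulate a client trickling one byte every 30 seconds for ten minutes.
    for i in range(20):
        monitor.record("198.51.100.7", 1, now=i * 30.0)
    print(monitor.suspicious_clients(now=600.0))  # ['198.51.100.7']

A volume-based firewall rule would never trip on a single quiet connection like this; only observing the behaviour over time at the application layer makes it stand out.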

The rise of Artificial Intelligence in DDoS

Malware could already spread automatically and change vulnerability types via C&C servers, but there was still a limitation: the need for human intervention. The rise of AI in DDoS opens a new era of attacks that do not require a human presence at all. We have entered a new world of cybercrime in which only the toughest web application stacks will survive.

AI DDoS takes human involvement completely out of the picture. Now we have machines attacking applications: fully automated, changing vulnerability types and attack vectors based on the response they get from the defending side.

If one attack signature does not work, the machine thinks for itself and switches to a different one, all automatically and without human intervention.

How do we prepare?

The quickest way to prepare right now is to harden the application stack as thoroughly as possible, while keeping false positives and false negatives to the lowest possible rate. For effective web application security, you need mechanisms in place that keep the false positive count low while accurately scanning the web application for the randomized attacks a machine can throw at it.
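As one small, hedged example of such a hardening measure, the sketch below shows a toy per-IP rate limiter implemented as WSGI middleware using only the Python standard library. The bucket size and refill rate are illustrative assumptions; a real deployment would combine application-layer limits like this with upstream protections rather than rely on it alone.

import time

class RateLimitMiddleware:
    """Token-bucket rate limiter keyed on the client IP address."""

    def __init__(self, app, rate=5.0, burst=10):
        self.app = app
        self.rate = rate      # tokens added per second
        self.burst = burst    # maximum bucket size
        self.buckets = {}     # client_ip -> [tokens, last_refill_timestamp]

    def _allow(self, client_ip):
        now = time.monotonic()
        tokens, last = self.buckets.get(client_ip, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[client_ip] = [tokens, now]
            return False
        self.buckets[client_ip] = [tokens - 1, now]
        return True

    def __call__(self, environ, start_response):
        client_ip = environ.get("REMOTE_ADDR", "unknown")
        if not self._allow(client_ip):
            start_response("429 Too Many Requests", [("Content-Type", "text/plain")])
            return [b"Rate limit exceeded\n"]
        return self.app(environ, start_response)

def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello\n"]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    # Allow bursts of 10 requests per IP, refilling at 5 requests per second.
    make_server("127.0.0.1", 8000, RateLimitMiddleware(hello_app)).serve_forever()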

Acunetix – How do we prepare?

The web application presents a low barrier to entry for an AI-based attack. Traditional web application security testing based solely on black-box testing does not offer much help here. Black-box testing simulates the approach of a real attacker: the application is tested from the outside in, without any knowledge of its internal structure or architecture. However, the attacker is now not a human but a machine, more powerful and more dangerous.

Black-box testing does not monitor how the code behaves during execution, and source code analysis alone does not always capture what happens when the code actually runs. By itself, neither offers much assistance in preparing for an AI-based application attack that changes vectors and vulnerability types at random, in a snap of the fingers.

A combination of both black-box and white-box testing is required to keep false positives at a low rate. To enhance web application scanning, Acunetix uses its unique AcuSensor Technology, which enables more accurate testing than traditional black-box testing alone.
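To illustrate the general idea behind sensor-assisted (gray-box) scanning, and not Acunetix’s actual implementation, the following toy Python sketch pairs an external scanner with an in-application sensor. The sensor records which queries the application actually executed, so the scanner confirms a finding only when there is runtime evidence that the payload reached a dangerous sink, which is what keeps false positives down. All names and the deliberately flawed handler are invented for the example.

class Sensor:
    """In-application instrumentation that records executed SQL statements."""
    def __init__(self):
        self.executed_queries = []

    def report_query(self, query):
        self.executed_queries.append(query)

def vulnerable_search(user_input, sensor):
    # Deliberately unsafe string concatenation (the flaw we want to detect).
    query = "SELECT * FROM products WHERE name = '" + user_input + "'"
    sensor.report_query(query)       # instrumentation hook
    return f"<html>results for {user_input}</html>"

def safe_search(user_input, sensor):
    # Parameterised query: the payload never ends up inside the SQL text.
    sensor.report_query("SELECT * FROM products WHERE name = ?")
    return f"<html>results for {user_input}</html>"

def scan(handler):
    """Send an injection payload and confirm it only via sensor feedback."""
    sensor = Sensor()
    payload = "' OR '1'='1"
    handler(payload, sensor)
    # Black-box evidence alone (the payload echoed back in the page) is
    # ambiguous; the runtime evidence from the sensor confirms the finding.
    confirmed = any(payload in q for q in sensor.executed_queries)
    return "confirmed SQL injection" if confirmed else "no confirmed finding"

if __name__ == "__main__":
    print("vulnerable handler:", scan(vulnerable_search))  # confirmed SQL injection
    print("safe handler:", scan(safe_search))               # no confirmed finding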

While an AI-based machine hits your application with automatically changing attack parameters, Acunetix technologies can help keep your team calm by keeping both the false positive and false negative rates at the industry’s lowest.

To defend against an automated, machine-based attack, security professionals must respond with an automated defense mechanism. Acunetix scan results eliminate the need to manually confirm detected vulnerabilities, putting the web application on the right road to protection against AI-based attacks.

Considering that the attacking side has entered a fully automated era, web vulnerability testing should also become an automated process.
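As a rough sketch of what that automation could look like in a build pipeline, the snippet below triggers a scan of a staging deployment, waits for it to finish, and fails the build when high-severity findings come back. The scanner URL, endpoints, and JSON fields are hypothetical placeholders, not any real product’s API.

import sys
import time

import requests

SCANNER_API = "https://scanner.example.com/api"   # placeholder, not a real service
TARGET_URL = "https://staging.example.com"        # application under test
API_KEY = "changeme"                              # placeholder credential

def run_scan_and_gate():
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Kick off a scan of the staging deployment.
    resp = requests.post(f"{SCANNER_API}/scans",
                         json={"target": TARGET_URL}, headers=headers, timeout=30)
    resp.raise_for_status()
    scan_id = resp.json()["id"]

    # 2. Poll until the scan completes.
    while True:
        status = requests.get(f"{SCANNER_API}/scans/{scan_id}",
                              headers=headers, timeout=30).json()
        if status["state"] in ("finished", "failed"):
            break
        time.sleep(30)

    # 3. Fail the build if any high-severity vulnerability was reported.
    high_findings = [v for v in status.get("vulnerabilities", [])
                     if v.get("severity") == "high"]
    if high_findings:
        print(f"{len(high_findings)} high-severity findings; failing the build.")
        sys.exit(1)
    print("No high-severity findings.")

if __name__ == "__main__":
    run_scan_and_gate()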

Summary

Over the last decade, we have witnessed DDoS attacks evolve from simple scripts with single vectors to fully automated attacks driven by the power of machine-based AI.

The AI-based attacker will go for the web application, since it is the easiest to compromise. Keeping false negatives and false positives low, while accurately scanning for vulnerabilities, is an immediate and essential step against the unknown future of the bad actors’ use of Artificial Intelligence.



THE AUTHOR
Matt Conran
Network, Security & Cloud Specialist
Matt Conran has more than 17 years of experience in the networking industry, working with entrepreneurial start-ups, government organizations, and others. He is a lead Network Architect who has successfully delivered major global greenfield service provider and data center networks. His core skill set includes advanced data center, service provider, security, and virtualization technologies. He loves to travel and has a passion for landscape photography.