A plethora of valuable services now run as web-based applications. One could argue that web applications are at the forefront of the digital world. More importantly, we must equip them with appropriate online security tools to defend against the rising tide of web vulnerabilities. With the right tool set at hand, a website can absorb both known and unknown attacks.
Today the average volume of encrypted Internet traffic is greater than the average volume of unencrypted traffic. Hypertext Transfer Protocol Secure (HTTPS) is good, but it's not invulnerable. We saw evidence of its shortcomings in the Heartbleed bug, which made the compromise of secret keys possible. Users may assume that because they see HTTPS in the web browser, the website is secure.
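That assumption only holds if the client actually verifies the server's certificate. As a minimal sketch using Python's standard ssl module (the settings shown are library defaults, not something specific to this article), the gap between a verifying client and a careless one is just a couple of lines:

```python
# Sketch: HTTPS alone does not guarantee safety. The client must also verify
# the certificate chain and the hostname. Python's default context does both;
# disabling the checks silently downgrades "HTTPS" to encrypted but
# unauthenticated traffic.
import ssl

strict = ssl.create_default_context()
print(strict.check_hostname)    # hostname verification is on by default
print(strict.verify_mode)       # CERT_REQUIRED: the chain is validated

# What careless clients often do, defeating authentication entirely:
lax = ssl.create_default_context()
lax.check_hostname = False
lax.verify_mode = ssl.CERT_NONE  # accepts any certificate (MITM risk)
```

A connection made with the `lax` context still shows the padlock's encryption, but it will happily talk to an impostor.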
But there are a lot of moving parts in online security and its underlying infrastructure, and cyber criminals are able to jump between them, like switching tracks on a moving train, to target and compromise a valuable asset. All of these problems are compounded by the fact that web applications are built on flaky legacy protocols.
Hindering Legacy Protocols
The majority of networks are built on the Internet Protocol (IP). The initial requirement for IP networks was solely connectivity; very little thought was directed toward securing connections and the end systems they connect. The map of the Internet in years past was considerably different from what it is now. The Internet has evolved over time to adapt to new consumer requirements. There weren't any cyber criminals back in the 1960s, so online security was never much of a concern when designing a protocol or a framework. The Internet and its foundations were simply built without security in mind.
The TCP/IP protocols were born at a time when security was non-existent. IP by itself has no built-in security mechanisms. It has no way to secure individual packets or to securely validate the sender, and no mechanism to determine whether a packet was modified in transit.
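To make the lack of integrity protection concrete, here is a hedged sketch: the IPv4 header checksum (RFC 1071) is a plain ones'-complement sum that guards against accidental corruption, not tampering. Anyone who modifies a packet can simply recompute it. The header bytes below are illustrative, not a real capture.

```python
# Sketch: the IPv4 header checksum detects line noise, not attackers.
# After rewriting the source address (spoofing), the forger just
# recomputes the checksum and the packet is arithmetically valid again.
import struct

def ip_checksum(header: bytes) -> int:
    """Ones'-complement sum of 16-bit words, per RFC 1071."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Minimal 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed.
header = bytearray(
    b"\x45\x00\x00\x54\x00\x00\x40\x00\x40\x01\x00\x00"
    b"\x0a\x00\x00\x01"                     # source address 10.0.0.1
    b"\x0a\x00\x00\x02"                     # destination address 10.0.0.2
)
struct.pack_into("!H", header, 10, ip_checksum(bytes(header)))

# An attacker rewrites the source address and re-checksums:
header[12:16] = b"\xc0\xa8\x01\x63"         # now claims to be 192.168.1.99
struct.pack_into("!H", header, 10, 0)
struct.pack_into("!H", header, 10, ip_checksum(bytes(header)))
# Checksumming a header that contains its own correct checksum yields 0,
# so the forged packet verifies perfectly; IP has no way to tell.
```

Nothing in the header cryptographically binds the contents to the sender, which is exactly the gap IPsec and TLS were later bolted on to fill.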
Global Reachability and Flaws with SSL/TLS
The Internet is designed for global reachability: if you have someone's IP address, you can reach them. This shifted slightly when Network Address Translation (NAT) was introduced, but the model remains the same, and the globally reachable Internet is here to stay. The flip side of that benefit is that if you have the IP address, you can also attack them.
The whole world is reachable on ports 80 and 443. However, TLS/SSL, which protects the majority of Internet traffic, is built without an initial authentication layer: authentication happens only after the connection is established. This means that two sides can connect to each other without first authenticating one another. It is just like a stranger walking into the house without ringing the bell, because the door is open.
In today's world of TLS/SSL, clients connect to anything and only then, upon successful connection, invoke the authentication layer. TLS/SSL has a lot of moving parts that are hard to manage, which opens the connection up to man-in-the-middle (MITM) attacks.
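The connect-then-authenticate ordering is easy to demonstrate. In this self-contained sketch (plain Python sockets over loopback; the server and port are illustrative), the TCP three-way handshake completes and both endpoints are fully connected before any TLS handshake, and therefore any authentication, would even begin:

```python
# Sketch: TCP connects strangers first; authentication comes later (if ever).
import socket
import threading

def accept_one(server: socket.socket) -> None:
    conn, _ = server.accept()               # the "stranger" is already inside
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))               # ephemeral port, loopback only
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=accept_one, args=(server,), daemon=True).start()

# create_connection() returns once the TCP handshake completes.
client = socket.create_connection(("127.0.0.1", port))
peer = client.getpeername()
# Connection established; neither side has proven its identity. Only now
# would ssl.SSLContext.wrap_socket() begin the TLS handshake and, later
# still, certificate-based authentication.
client.close()
server.close()
print("connected, unauthenticated peer:", peer)
```

Everything an on-path attacker needs in order to sit in the middle happens in that pre-authentication window, which is why certificate validation afterwards matters so much.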
This is not a deliberate design fault, nor did someone simply forget to add something, but this design flaw recently hit a Brazilian bank big time. All of the bank's digital properties were replicated due to a security fissure in the bank's website. The malicious hackers were able to replicate the entire bank's website, hijack its Domain Name System (DNS) records, and host a fake website at a different location. The hackers eventually took over all of the bank's automated teller machines (ATMs).
It's evident that the underlying technologies that make up a web application run on legacy protocols built without security in mind. Advances in web technologies hover over legacy protocols that have collapsed before and will keep collapsing. If online security is not firmly fortified, it will keep serving up opportunities to malicious hackers on a silver platter.
Networking is Complex
The networking world started out uncomplicated. Initial designs consisted of standard sites and perimeter firewalling, with static point-to-point connections to other satellite sites. Over time the requirements changed, and with the introduction of high availability, network configurations became more complicated.
Some sites were designed to back up each other, while others ran hot and active, ready to take over in an instant. Active locations for high availability require complex interconnectivity configurations, supporting tailored ingress and egress traffic-engineering capabilities that are unique to each customer. All of these additions increase network complexity, especially when you need to secure web applications hosted inside the network.
Dissolved Network Perimeters
Networking used to have a very static and modular design. For example, there was an inside zone, an outside Wide Area Network (WAN) module, a Demilitarized Zone (DMZ) and other zones. Nowadays, these perimeters are completely dissolved with the introduction of new technologies such as micro-segmentation, VM NIC firewalls and other security services inserted closer to the workload.
This shift in the security paradigm means that securing valuable assets, such as a company's website, is more of a challenge. There is no point locking the barn door after the horse has been stolen, yet it is harder to scan the perimeter with traditional tools, as those tools are designed for static security perimeters.
East to West Traffic Flows & Mini Firewalls
Traffic flows within networks have also changed. Traditional traffic flows run north to south: the majority of traffic leaves the network. The advent of virtualization and Virtual Machine (VM) and container mobility results in a different type of east-to-west traffic flow, with the potential for traffic trombones across boundaries.
The change in traffic flows brings traditional firewalls to a standstill. They are designed and optimized for north-to-south flows, not east-to-west ones. New types of firewalling are now inserted closer to the workloads, which breaks the traditional security paradigms.
There is also a big debate as to whether these mini firewalls have feature parity with their big brother, the physical firewall. Inserting new mini firewalls with a limited feature set leaves many doors open to website compromise.
Can You Trust The Network To Secure Your Web Application?
With all of these changes and flaky underlying protocols, can you trust the network to securely host your web application? Traditional networks are patched together with kludges to support this new era of application and connectivity models. All you need to do is look at the kludges in Internet Protocol Security (IPsec). IPsec reminds me of a Swiss army knife: it does many things, but few of them well.
IPsec is not one protocol but a collection of protocols that authenticate and encrypt IP packets. IKEv1 has been around for a long time, and hackers have developed many tools that attack IKEv1 aggressive-mode negotiation. Implementations of IKEv2, meanwhile, are still relatively recent; we have few lessons learned from the protocol, and it has many compatibility issues.
As a result, much of the work of online security gets pushed up the application stack, to the actual web server. Is the only way to harden the network to harden the web application itself? Yet application architectures have gone through a number of transformations, making them even harder to secure.
In part 2 of this series on online security, we will explore aspects of application security testing. We live in a world of complicated application architectures compounded by poor visibility, leaving the door wide open for compromise.
Online Security: The Underlying Infrastructure