Defence in depth and how it applies to web applications

Information security generally refers to defending information from unauthorized access, use, disclosure, disruption, modification or destruction. Organizations constantly face threats, both external and internal, whether from nation states, political activists, corporate competitors or even disgruntled employees.

Defending an organization from these threats is hard because it requires a significant amount of effort, insight and investment. It’s also difficult for non-technical users to appreciate its importance; that is, until a security breach cripples or even destroys the most carefully constructed organization. For this reason, it is important to understand the concept of defence in depth when tasked with defending an organization from threats.

It is critical to understand that security is always “best effort”. No system can ever be 100% secure because factors outside of the designers’ control might introduce vulnerabilities. An example of this is the use of software that contains 0-day bugs — undisclosed and uncorrected application vulnerabilities that could be exploited by an attacker.

Defence in depth is the principle of adding security in layers in order to increase the security posture of a system as a whole. In other words, if an attack causes one security mechanism to fail, the other measures in place continue to deter, and may even prevent, the attack.

Comprehensive strategies for applying the defence in depth principle extend well beyond technology and into the physical realm. These can take the form of appropriate policies and procedures, training and awareness, physical and personnel security, as well as risk assessments and procedures to detect and respond to attacks in time. Crucial though these measures might be, they address only part of what is ultimately an information security problem. This article focuses instead on how defence in depth principles apply to web applications and the network infrastructure they operate within, and offers a number of pointers (by no means an exhaustive list) that can be used to improve the security of web applications.

The KISS principle

KISS is an acronym for “Keep it simple, stupid”. Since it’s impossible to ever achieve a system that is 100% secure (because it’s impossible to build bug-free software), simplifying the way software works is an effective strategy to reduce the number and severity of security flaws in applications.

Not overcomplicating an application’s design, or the infrastructure it runs on, makes the implementation easier and also allows for easier inspection of its security mechanisms.

Fail-safe defaults

Software is bound to fail. Try as we might to create perfect, failure-resistant software, bugs will always exist that might cause software to fail. Notwithstanding this, it is important that this potential failure does not expose an application to a security risk.

An application should feature secure defaults: deny access to resources by default, check returned values for failure, and make sure that conditional code and filters properly handle failure.

Critically, even when some part of the application is unavailable or functioning unexpectedly, it should not be possible for an attacker to compromise the confidentiality or integrity of the application.
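
As an illustration, here is a minimal sketch of a deny-by-default authorization check in Python; the role names and resource paths are hypothetical placeholders. Anything not explicitly permitted, including a resource the code knows nothing about, results in access being denied.

```python
ALLOWED_ROLES = {
    "/admin/reports": {"admin"},
    "/account": {"admin", "user"},
}

def is_authorized(role: str, resource: str) -> bool:
    """Grant access only when it is explicitly allowed."""
    # An unknown resource yields the empty set, so the check fails
    # safe: anything not explicitly permitted is denied by default.
    return role in ALLOWED_ROLES.get(resource, set())

assert is_authorized("admin", "/admin/reports")
assert not is_authorized("user", "/admin/reports")
assert not is_authorized("user", "/does-not-exist")  # unknown: deny
```
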
Security before obscurity

Security through obscurity refers to the use of obfuscation or randomization in a design or implementation to provide security. With this in mind, it becomes obvious that the security of a system relying solely on obscurity, rather than on the implementation of sound security controls, is destined for failure.

Take, for instance, an SSH daemon configured to listen on a port other than the standard port 22. While that may deter a script kiddie, this obscurity offers little protection against a financially motivated attacker, who would not only discover the SSH daemon on the unconventional port, but might also notice a series of known, exploitable, high-severity vulnerabilities in that daemon.

While obscurity can be used as a defense in depth measure (since it would increase the efforts an attacker needs in order to break into a system), it should never replace real security controls. As such, when obscurity is implemented, it should only be used to increase the cost of attack, and it should always be assumed that a savvy attacker can identify the obscurity and overcome it.

The Least Privilege Principle

An application does not need to use the root (MySQL), sa (Microsoft SQL Server), postgres (PostgreSQL) or SYSDBA (Oracle Database) account to connect to the database.

Likewise, it’s a bad idea to run daemons or services as root (Linux) or Administrator (Microsoft Windows), unless there is a specific, justifiable, and carefully considered reason to do so. An application should always be given the least privileges possible that allow it to work properly — any additional privileges should be disabled.

If an application connects to the database with a privileged user account, then in the event of an SQL injection vulnerability, an attacker would be able to run SQL queries as a database administrator and, on some database servers, even execute operating system commands. By executing operating system commands, an attacker could carry out a reconnaissance exercise on the internal network behind the firewall and escalate the attack further.

Running anything with administrative privileges defeats a tried-and-tested security model that’s been in place for years, since it allows an attacker or a rogue application to cause more damage in the event of a security breach. Applications and database connections should be run with restricted, non-administrative privileges, elevating privileges temporarily to modify the underlying system only on a per-need basis.
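
As a minimal sketch of this idea (Unix-specific, and assuming a hypothetical unprivileged service account named “appuser”), the following Python snippet performs its one genuinely privileged operation, binding a low port, and then permanently drops root privileges before handling any untrusted input.

```python
import os
import pwd
import socket

def drop_privileges(username: str = "appuser") -> None:
    """Shed root privileges once privileged setup is complete.

    "appuser" is a hypothetical dedicated service account.
    """
    if os.getuid() != 0:
        return  # Not running as root; nothing to drop.
    pw = pwd.getpwnam(username)
    os.setgroups([])       # Drop supplementary groups first...
    os.setgid(pw.pw_gid)   # ...then the group ID...
    os.setuid(pw.pw_uid)   # ...and finally the user ID.

# Bind the port while privileged (ports below 1024 require root),
# then drop privileges before touching any input.
port = 80 if os.getuid() == 0 else 8080  # fall back for unprivileged runs
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", port))
drop_privileges()
print(f"listening on port {port} as uid {os.getuid()}")
```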

Log everything, revisit often

Several defence in depth strategies help prevent breaches in the first place; however, a crucial aspect of any defence strategy is knowing when an attack is underway and what happened after an attack has occurred. Mitigating the effects of a security breach is only possible if attention is paid to early warning signs.

Logs are a crucial part of systems and applications. Through logs, one can monitor performance, uptime, resource usage and other such data. Logs are also indispensable tools for monitoring security and detecting attacks. Logging is the closest thing we have to a time machine, so having comprehensive, detailed records of what happened and when can spell the difference between noticing a breach early and letting an attacker pull off a heist.

The obvious deduction here is not to ignore early warning signs, while the less obvious conclusion is the need to log absolutely everything and revisit those logs often. Naturally, this could present some technical challenges, especially for larger organizations, but it’s far from impossible, particularly given the ever-decreasing cost of storage and the various log management tools that help filter signal from noise.

In order to respond in time to the early warnings of a security breach, an organization first needs to know when it’s under attack and what has happened (or is happening) during the attack. One of the more effective ways to do that is to log everything and act on the information those logs reveal.
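
As a small illustration, the Python sketch below uses the standard logging module to record every authentication attempt with a timestamp; the log file name and event fields are illustrative assumptions, not a prescribed format.

```python
import logging

logging.basicConfig(
    filename="app-security.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def record_login(username: str, source_ip: str, success: bool) -> None:
    """Log every authentication attempt, successful or not."""
    if success:
        logging.info("login ok user=%s ip=%s", username, source_ip)
    else:
        # A burst of these warnings from a single IP is exactly the
        # kind of early warning sign worth alerting on.
        logging.warning("login failed user=%s ip=%s", username, source_ip)

record_login("alice", "203.0.113.7", False)
```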

Trust no one, validate everything

Unfortunately, most vulnerabilities at the application layer can’t simply be patched by applying an update. In order to fix web application vulnerabilities, software engineers often need to correct mistakes within the application code. It is therefore essential for software engineers to understand the security risks associated with user input. At the end of the day, all user input should be considered unsafe.

By never trusting the user, and validating every input, an application can be built to be more secure and more robust. This applies to any injection vulnerabilities such as SQL injection and cross-site scripting, but it also applies to vulnerabilities that would allow an attacker to bypass authentication, or request a file they should never be allowed to see.
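
For example, the following Python sketch (using a hypothetical uploads directory) resolves a user-supplied file name and refuses anything that falls outside the directory the application is allowed to serve, defeating “../” traversal tricks.

```python
import os

# Hypothetical directory the application is allowed to serve from.
BASE_DIR = os.path.realpath("/var/www/uploads")

def safe_open(requested_name: str):
    """Open a file only if it resolves inside BASE_DIR."""
    full_path = os.path.realpath(os.path.join(BASE_DIR, requested_name))
    # realpath() collapses "../" sequences and symlinks, so a simple
    # prefix check is enough to reject traversal attempts.
    if not full_path.startswith(BASE_DIR + os.sep):
        raise PermissionError("path outside the permitted directory")
    return open(full_path, "rb")

# safe_open("../../etc/passwd") resolves to /etc/passwd and is rejected.
```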

Parameterize SQL queries

While encrypting database tables and restricting access to a database server are valid security measures, building an application to withstand SQL injection attacks is a crucial web application defence strategy.

SQL injection is one of the most widespread and most damaging web application vulnerabilities. Fortunately, both programming languages and the RDBMSs themselves have evolved to provide web application developers with a way to safely query the database: parameterized SQL queries.

Parameterized queries are simple to write and understand, while forcing a developer to define the entire SQL statement beforehand, using placeholders for the actual variables within that statement. The developer then passes each parameter to the query after the SQL statement is defined, allowing the database to distinguish between the SQL command and the data inputted by a user. If an attacker inputs SQL commands, a parameterized query treats the input as a string rather than as an SQL command.
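
A minimal example in Python, using the standard library’s sqlite3 module, might look like the following; the table and the injection payload are purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a typical injection attempt

# The statement is defined up front with a ? placeholder; the driver
# passes user_input as data, never as part of the SQL command.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload is matched literally and finds nothing
```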

Application developers should avoid sanitizing input by escaping or removing special characters (there are several encoding tricks an attacker could leverage to bypass such protections) and stick to parameterized queries in order to avoid SQL injection vulnerabilities.

Outbound, context-dependent input handling

HTML encoding data before it is inserted into a database (inbound input handling) in order to prevent cross-site scripting (XSS) is considered bad practice, as it could limit the use of that data. If data is HTML encoded, it can only be used inside HTML pages; perhaps that same data also needs to be consumed by a web service, not just rendered into an HTML page.

More importantly, if input is handled inbound by, for example, HTML encoding it when it is inserted into a database, there is no guarantee that this will prevent XSS. Preventing XSS is highly dependent on the context in which user input is used. If user input is used inside an HTML page, it needs to be HTML encoded; if it is used inside a <script> tag or inside a JSON object, HTML encoding might not stop an attacker from delivering an XSS payload to a user.

It is therefore important to treat user input differently based on where it is being used (context-dependent outbound input handling). Context-dependent outbound input handling, if used correctly, is a very effective technique for preventing XSS. Furthermore, it has the advantage of keeping the data in the database unmodified, while retaining the ability to treat user input according to the context in which it is used.
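
To illustrate, the Python sketch below encodes the same stored value differently depending on the output context; the comment string is an illustrative payload.

```python
import html
import json

comment = '<script>alert("xss")</script>'  # raw value, as stored

# HTML context: encode on the way out, just before rendering.
html_safe = html.escape(comment)
# -> &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;

# JavaScript/JSON context: HTML encoding is the wrong tool here.
# Serialise as JSON and escape "</" so the value cannot close an
# enclosing <script> block.
js_safe = json.dumps(comment).replace("</", "<\\/")

print(html_safe)
print(js_safe)
```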

Prefer whitelists to blacklists

Since user input can never be trusted, validating user input is a core concept of application security which, if done right, not only increases the overall security of an application, but also makes an application more robust.

Data validation strategies generally fall into two camps: whitelisting (accepting known goods), and blacklisting (rejecting known bads). Both have their use cases, however, in general, unless there is a specific, justifiable, and carefully considered reason to use a blacklist, a whitelist tends to provide much stronger resistance to attacks.

A simple example of this would be the validation of a US ZIP code. It’s far easier and safer to create a whitelist that accepts numbers consisting of five digits from 0 to 9 (or sometimes, a set of five digits followed by a hyphen and a second set of four digits) than to try to come up with a blacklist of all possible combinations that should be rejected. A blacklist, in such a context, is not only impractical; it is bound to miss a case and end up accepting input that should have been rejected.
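
In Python, such a whitelist is a one-line regular expression, as in the sketch below.

```python
import re

# Whitelist: exactly five digits, optionally followed by a hyphen
# and four more digits (the ZIP+4 format).
ZIP_RE = re.compile(r"\d{5}(-\d{4})?")

def is_valid_zip(value: str) -> bool:
    return ZIP_RE.fullmatch(value) is not None

assert is_valid_zip("90210")
assert is_valid_zip("90210-1234")
assert not is_valid_zip("90210; DROP TABLE users")  # rejected
```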

Update software and components

Whether it’s a server’s operating system, a web server, a database server or even a client-side JavaScript library, an application should not be running software with known vulnerabilities.

Updating, removing or replacing software or components with known vulnerabilities sounds obvious, but it’s a significant problem that thousands of organizations struggle to manage.

Patching a handful of servers and applications may not sound like a mammoth task. However, when scaled out to thousands of applications with different application stacks and different development and infrastructure teams, spread across geographically distributed organizations, it’s easy to see why patching software with known vulnerabilities tends to become a challenge.

Isolate services

Since the software we create can never be bug-free, a common defence in depth approach, one that goes back to the early days of UNIX (user accounts and separate process address spaces are two such examples), is to base some elements of a platform’s security on isolation. The idea is to separate a system into smaller parts in order to limit the damage a compromised or malfunctioning component can cause.

While this is not always easy to achieve, and in some cases may conflict with the principle of keeping things simple, isolation techniques such as limiting what resources can communicate with each other on a network, and forbidding everything else by default (therefore adopting a whitelist approach), could limit the damage of an attack.

Does a web server really need to be able to communicate with a domain controller or a printer on the same network? Should these devices even be on the same network? That depends; the answer might be “yes”, in which case, that communication should be allowed as long as it is carefully considered and secure.

Never roll your own (or weak) crypto

In 1998, world renowned cryptographer Bruce Schneier wrote the following.

Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.

Bruce Schneier is not alone in holding this view; countless other experts in the field agree. Just because you can’t break the crypto algorithm you created does not mean it is secure.

The same argument can be made for weak cryptography because it has the same effect — it doesn’t serve its purpose.

Cryptography is one aspect of security that should never, under any circumstance, be “homemade”. Instead, it’s not only wiser, but also far easier, to rely on proven, heavily scrutinized algorithms such as AES (Rijndael) or Twofish for encryption, SHA-3 or SHA-2 for general-purpose hashing, and bcrypt or PBKDF2 for password hashing.
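
As a brief illustration of the last point, the following Python sketch uses PBKDF2 from the standard library; the iteration count is an illustrative choice and should follow current guidance.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None,
                  iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Hash a password with PBKDF2-HMAC-SHA256 and a per-password salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
```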

To conclude

While every system is different in its own right, the points above, though by no means an exhaustive list, should serve as a general guideline for most situations. Adopting a layered approach to application security not only makes applications more secure, but also more robust and prepared for failure, hopefully enough to keep out even the most determined of attackers.