Defence in Depth – Final Part – Update software, Isolate services

Update software and components

Whether it’s a server’s operating system, a web server, a database server or even a client-side JavaScript library, an application should not be running software with known vulnerabilities.

Updating, removing or replacing software or components with known vulnerabilities sounds obvious, but it’s a significant problem that thousands of organizations struggle to manage.

Patching a handful of servers and applications may not sound like a mammoth task. However, when scaled out to thousands of applications with different application stacks, different development and infrastructure teams, spread across geographically distributed organizations, it’s easier to understand why patching software with known vulnerabilities tends to become a challenge.

Isolate services

Since the software we create can never be bug-free, a common defence in depth approach, one that goes back to the early days of UNIX (user accounts and separate process address spaces are two examples), is to base some elements of a platform’s security on isolation. The idea is to separate a system into smaller parts in order to limit the damage that a compromised or malfunctioning part can cause.

While this is not always easy to achieve, and in some cases may conflict with the principle of keeping things simple, isolation techniques such as limiting what resources can communicate with each other on a network, and forbidding everything else by default (therefore adopting a whitelist approach), could limit the damage of an attack.
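The whitelist idea can be illustrated at the application level. Below is a minimal, hypothetical sketch (the host names, port numbers and the `ALLOWED_FLOWS` table are invented for illustration, not taken from any real network): only explicitly listed flows are permitted, and everything else is rejected by default.

```python
# Deny-by-default network policy sketch: a flow is a
# (source, destination, port) triple, and only triples that
# appear in the whitelist are allowed to communicate.
ALLOWED_FLOWS = {
    ("web-server", "app-server", 8443),
    ("app-server", "db-server", 5432),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    # Anything not explicitly whitelisted is forbidden.
    return (src, dst, port) in ALLOWED_FLOWS

print(is_allowed("web-server", "app-server", 8443))          # True
print(is_allowed("web-server", "domain-controller", 445))    # False: not whitelisted
```

Real deployments enforce this with firewalls, security groups or network segmentation rather than application code, but the principle is the same: start from “deny everything” and add only the flows you have consciously decided to allow.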

Does a web server really need to be able to communicate with a domain controller or a printer on the same network? Should these devices even be on the same network? That depends; the answer might be “yes”, in which case, that communication should be allowed as long as it is carefully considered and secure.

Never roll your own (or weak) crypto

In 1998, world-renowned cryptographer Bruce Schneier wrote the following:

Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.

Bruce Schneier is not alone in holding this view; countless other experts in the field agree. Just because you can’t break the crypto algorithm you created, it does not mean it is secure.

The same argument can be made for weak cryptography because it has the same effect — it doesn’t serve its purpose.

Cryptography is one of the aspects of security which should never, under any circumstances, be “homemade”. Instead, it’s not only wiser, but also far easier to rely on proven, heavily scrutinized algorithms such as AES (Rijndael) or Twofish for encryption, SHA-3 or SHA-2 for general-purpose hashing, and bcrypt or PBKDF2 for password hashing.
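To make the “don’t roll your own” point concrete, here is a small sketch of PBKDF2-based password hashing using only Python’s standard library (`hashlib.pbkdf2_hmac` and `hmac.compare_digest`). The iteration count and salt size are illustrative choices, not fixed requirements; pick values appropriate for your hardware and current guidance.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    """Derive a PBKDF2-HMAC-SHA256 hash with a random per-password salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 200_000) -> bool:
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, expected)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Notice how little code this takes precisely because the hard parts, the key-derivation function and the constant-time comparison, come from a proven library rather than being reinvented.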


 
