All security vulnerabilities are the result of human error, and most web application vulnerabilities and API security issues are introduced by developers. Therefore, the best approach to building secure applications is to do all that is possible to avoid introducing such errors in the first place rather than fixing them later.

You can find several detailed guides on how to write secure code during application development, for example, the one provided by the Open Web Application Security Project (OWASP). They focus on such details as input validation, output encoding, access control, communication security, data protection, cryptographic practices, error handling, the principle of least privilege, and so on. Here, however, we would like you to look at software security from a strategic point of view.

Principle 1: Spread awareness and educate

In most cases, developers introduce security risks into the source code simply because they are not aware of such risks. While universities often focus on teaching such details as formal verification, many of them do not offer dedicated courses on cybersecurity and don't even mention topics such as injection attacks or cross-site scripting (XSS). This is especially true for older developers who took such courses years ago, before application security received as much attention as it does today.

Universities also teach a limited number of programming languages, so developers are in most cases self-taught, and some security problems are very specific to the programming language. For example, you won't find a risk of buffer overflows in Java or C#. Even if a course teaches a language in detail, it rarely focuses on coding best practices related to application security in that language.
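
To make the language-specific point concrete, here is a minimal Java sketch (the class, method, and table names are hypothetical) contrasting a query built by string concatenation, which is open to SQL injection, with the parameterized equivalent:

  import java.sql.Connection;
  import java.sql.PreparedStatement;
  import java.sql.ResultSet;
  import java.sql.SQLException;
  import java.sql.Statement;

  // Hypothetical data-access class used only to illustrate the difference
  // between an injectable query and a parameterized one in Java.
  public class UserLookup {

      // Vulnerable: the user-supplied value becomes part of the SQL text,
      // so input like ' OR '1'='1 changes the meaning of the query.
      public ResultSet findUserUnsafe(Connection conn, String username) throws SQLException {
          Statement stmt = conn.createStatement();
          return stmt.executeQuery(
                  "SELECT id, email FROM users WHERE username = '" + username + "'");
      }

      // Safer: PreparedStatement sends the value separately from the SQL text,
      // so the database never interprets user input as part of the query.
      public ResultSet findUserSafe(Connection conn, String username) throws SQLException {
          PreparedStatement stmt = conn.prepareStatement(
                  "SELECT id, email FROM users WHERE username = ?");
          stmt.setString(1, username);
          return stmt.executeQuery();
      }
  }

A language-focused training session would cover exactly this kind of idiom, while a generic security overview usually would not.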

To make sure that your software development teams don’t make mistakes due to lack of awareness, understanding, or gaps in education, you must approach the issue strategically:

  • Your development managers must not only be aware of security risks, they must be the driving force behind security. A developer with no security awareness can be educated, but a development manager who does not realize the importance of security will never become a security leader.
  • Don’t make any assumptions about developer knowledge. Validate it first, and if it’s not sufficient, provide in-house or external training sessions dedicated strictly to secure coding standards. Demanding security knowledge from every new hire is not the best idea, because it greatly limits your recruitment options, and developers can easily learn as they progress.
  • Do realize that no matter how well your developers understand security, new techniques and attacks appear all the time because of how fast technology is progressing. Some of them require very specific security knowledge that can only be expected from someone in a full-time security role. Expect your developers to make mistakes and don’t punish them for making them.
  • Don’t keep your development teams separated from your security teams. The two should work very closely together. Developers can learn a lot from security professionals.
  • Don’t assume that the nature of your software reduces your security requirements in any way. For example, even if your web application is not accessible publicly but only to authenticated customers, it should be just as secure as a public one. In general, don’t look for excuses.

Principle 2: Introduce multiple layers of verification

Even the most aware and best-educated developers still make mistakes, so simply trusting them to write secure code is not enough. You need automated auditing tools that work in real time during development to help them catch their mistakes and follow up with suitable mitigation.

In an ideal situation, software should be tested using the following tools and methods:

  1. A code analysis tool that is built into the development environment. Such a tool catches basic errors immediately, as the developer types the code.
  2. A SAST (static application security testing) solution that works as part of the CI/CD pipeline. Such a solution analyzes the source code before it is built and points out potential software vulnerabilities. Unfortunately, SAST has significant disadvantages, including a high rate of false positives.
  3. An SCA (software composition analysis) solution that works as part of the CI/CD pipeline. Since most code nowadays comes not from your developers directly but from the open-source libraries they use, you have to help them make sure that they are using secure versions of those libraries. Otherwise, you will have ticking time bombs just waiting to go off (a simple sketch of this kind of check follows the list).
  4. A DAST (dynamic application security testing) solution that works as part of the CI/CD pipeline. Such a solution analyzes the running application (after it is built, with no access to the source code) and points out real security vulnerabilities. For such tools, performance is very important (scans are very intensive), and so is the certainty that reported issues are real (proof-of-exploit); a simple sketch of a dynamic check also follows the list.
  5. Additional manual penetration testing for errors that cannot be discovered automatically, for example, business logic errors. However, this requires specialized security personnel and takes a lot of time, so it’s often performed only in the last stages of the software development life cycle (SDLC).
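
To illustrate the core idea behind SCA (item 3 above), here is a minimal, hypothetical Java sketch; the dependency file name and the advisory list are made up for illustration, while real SCA tools parse actual build files and query vulnerability databases:

  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.util.List;
  import java.util.Map;

  // Simplified illustration of a software composition analysis check:
  // compare declared dependencies against a list of known-vulnerable versions.
  public class NaiveScaCheck {

      // Made-up advisory data mapping "library:version" to a known issue.
      private static final Map<String, String> KNOWN_VULNERABLE = Map.of(
              "example-xml-parser:1.2.0", "XXE vulnerability, fixed in 1.2.1",
              "example-http-client:3.0.5", "request smuggling, fixed in 3.1.0");

      public static void main(String[] args) throws Exception {
          // Assumes a simple text file with one "library:version" entry per line.
          List<String> dependencies = Files.readAllLines(Path.of("dependencies.txt"));
          for (String dependency : dependencies) {
              String advisory = KNOWN_VULNERABLE.get(dependency.trim());
              if (advisory != null) {
                  System.out.println("Vulnerable dependency " + dependency.trim() + ": " + advisory);
              }
          }
      }
  }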
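
And to show what a dynamic, black-box check (item 4 above) looks like from outside the application, here is a minimal, hypothetical Java sketch that sends a marker payload to a locally running test endpoint and reports whether it is reflected back unescaped. The URL and parameter name are assumptions, and a real DAST scanner crawls the application and tests many payloads and vulnerability classes:

  import java.net.URI;
  import java.net.URLEncoder;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;
  import java.nio.charset.StandardCharsets;

  // Simplified illustration of a dynamic check: probe a running application
  // for a parameter that reflects input without output encoding.
  public class NaiveReflectionProbe {

      public static void main(String[] args) throws Exception {
          // Marker payload: if it comes back unescaped, the parameter is likely
          // vulnerable to reflected cross-site scripting.
          String payload = "<script>alert('probe')</script>";
          String url = "http://localhost:8080/search?q="
                  + URLEncoder.encode(payload, StandardCharsets.UTF_8);

          HttpClient client = HttpClient.newHttpClient();
          HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
          HttpResponse<String> response =
                  client.send(request, HttpResponse.BodyHandlers.ofString());

          if (response.body().contains(payload)) {
              System.out.println("Payload reflected unescaped - possible XSS at " + url);
          } else {
              System.out.println("No unescaped reflection detected at " + url);
          }
      }
  }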

However, such thorough early security testing takes a lot of time and resources. Therefore, a compromise is often needed between the time and effort required to perform the tests and the quality of the results. If such a compromise is necessary, selecting a fast DAST scanner that provides proof-of-exploit and comes with SCA functionality is the best choice.

Principle 3: Test as early as possible to promote responsibility

To attain top code quality, it’s not enough to have secure coding requirements and guidelines in place along with a test infrastructure. Teams should not follow secure coding principles during development only because their code will be tested; they must also feel that writing secure code is in their own best interest. Secure coding doesn’t just need rules and enforcement; it needs the right attitude.

A shift-left approach, such as the one described above, has many advantages, one of them being that developers realize that they’re an integral part of the security landscape. They feel responsible for code security and know that if they make a mistake, they will have to fix it immediately rather than counting on someone else to do it later.

Of course, you can test your application for security vulnerabilities just before it goes into production or even in production (shift right). However, it will cost you much more than it would if you shifted left. The software will have to go through all stages again, which involves resources beyond the development team. The developer may no longer remember the code they worked on, or the fix may be assigned to a different developer than the original author, and as a result it will take more time to find and remove the vulnerability. As a consequence, late testing may delay the release by several weeks.

Not just security policies

In conclusion, we would like you to realize that security policies, while necessary, are not enough if they are perceived as a limitation rather than an enhancement. Security begins with the right attitude when building applications. And even the best security tools must be used in the right way so that they are perceived as helpful, not as a burden.

Tomasz Andrzej Nidecki
Principal Cybersecurity Writer
Tomasz Andrzej Nidecki (also known as tonid) is a Primary Cybersecurity Writer at Invicti, focusing on Acunetix. A journalist, translator, and technical writer with 25 years of IT experience, Tomasz was the Managing Editor of the hakin9 IT Security magazine in its early years and used to run a major technical blog dedicated to email security.