Monday, July 18, 2011

Architectural Security Reviews

When it comes to conducting a security review of an application architecture (aka design review), there are basically two major approaches that are often referenced: first, the software-centric Threat Modeling approach by Microsoft; second, the one suggested by Gary McGraw, known as Architectural Risk Analysis.

Although both approaches seem rather different at first sight, since one uses the term "threat" and the other "risk", they are actually quite comparable. Threat Modeling is in fact often a risk review too: when it is combined with a risk assessment metric like DREAD, for instance, we end up with a list of qualitative risks.
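To make this concrete: here is a minimal sketch of how such a DREAD rating could be computed, assuming the usual 1-10 scale per category and hypothetical thresholds for the qualitative buckets (the class and method names are mine, not part of any standard):

    // Minimal DREAD sketch: each category rated on an assumed 1-10 scale;
    // the overall score is simply the mean of the five category ratings.
    public record DreadRating(int damage, int reproducibility, int exploitability,
                              int affectedUsers, int discoverability) {

        public double score() {
            return (damage + reproducibility + exploitability
                    + affectedUsers + discoverability) / 5.0;
        }

        // Hypothetical thresholds for mapping the score to a qualitative risk.
        public String qualitativeRisk() {
            double s = score();
            if (s >= 8.0) return "High";
            if (s >= 5.0) return "Medium";
            return "Low";
        }
    }

A threat rated (8, 7, 6, 9, 5), for example, would score 7.0 and end up as a "Medium" risk in this scheme.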

So, when secure SDLCs are described, you will generally find one (but only one) of these two terms used to describe a security design review.

I have experienced, however, that there is a need for another approach: less comprehensive and more focused on functional security, such as the correct use of specific security controls like authorization, authentication, validation, and cryptography. This approach could, of course, easily be integrated into or combined with an existing threat model or other kinds of architectural analysis results.

I was very much inspired when I first saw a presentation by John Steven on "Advanced Threat Modeling" a couple of years ago. He used a mix of different visualization techniques to describe characteristics and problems within an application architecture in a very perspicuous way.

Since then I have used some of his ideas extensively to visualize architectural security in various projects: in most cases not to conduct a complete threat modeling activity, but as a straightforward approach to sketch and discuss the architecture of a piece of software and to outline the basic problems and characteristics within it, so that everybody can understand it quickly. I usually call this activity an Architectural Security Review to distinguish it from "Architectural Risk Analysis".

Here is a very simple example:
This diagram outlines a very high-level three-tier architecture with an Apache frontend and some backend systems, as well as three different actors. The red dotted lines mark trust boundaries; the red boxes mark security controls or security problems. Since some aspects are very important to visualize in much greater detail, we use hybrid diagrams here, showing different levels of detail.

I usually work with a number of different hybrid diagrams to visualize different kinds of security attributes, such as identity flows or data handling (including the crypto architecture).

Especially on agile projects, this kind of activity has turned out to be really useful, since it can be adapted very quickly and gives all existing as well as new project members a quick understanding of the security architecture. And that is one of the great advantages of this approach: having something everybody can understand and perhaps agree on.

In the next couple of weeks I will introduce some of these techniques here in this blog. I will show that we can visualize even very complex architectural characteristics in a rather simple and perspicuous way.

Monday, July 11, 2011

Defining Data Handling Requirements

We often experience a gap between security requirements on the one hand and their practical implementation on the other. Often, the people defining security requirements simply do not understand how to phrase them so that they are understood and implemented in a complete and thorough way.

One vital class of security requirements are those concerning data confidentiality and privacy. Many data leaks and privacy problems result from insufficiently implemented data handling mechanisms. For instance, when we look at the log files of production systems, they often leak massive amounts of sensitive data, frequently without the knowledge of any of the stakeholders.

To reduce this risk, I discovered a while ago the need to define exact data handling requirements for all handled types of information. As a result, all stakeholders, such as project management, development, and of course the information security officer, can discuss these requirements and agree upon them. It is vital that they are detailed enough for developers to understand and follow, but also abstract enough that all other involved parties understand them and can agree on them too.

I ended up with the following sheet structure (you can download the full example from Google Docs by clicking on the image):
The first column lists all relevant information types (in real projects you would have some dozens of them) as well as compliance requirements that might need to be taken into account for each one. The following columns then define the detailed data handling requirements for data at rest (persistence in log files, databases, or caches) and data in motion (e.g. transferred in different parts of the HTTP protocol).

Each data handling requirement is expressed with one of the following abbreviations:
  • E: Data Encryption
  • HTTPS: Transport Encryption
  • H: Simple Hash
  • SH: Salted Hash
  • M: Masking (e.g. replace all but the first 4 digits of a cc number with an "x"; see the sketch after this list)
  • -: no persistence or transmission is allowed
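To illustrate two of these abbreviations, here is a small Java sketch of masking (M) and salted hashing (SH); the method names and the choice of SHA-256 with a random 16-byte salt are assumptions for the example, not a prescription:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.security.SecureRandom;
    import java.util.HexFormat;

    public final class DataHandlingExamples {

        // M: keep the first 4 digits of a cc number, replace the rest with 'x'
        static String mask(String ccNumber) {
            int keep = Math.min(4, ccNumber.length());
            return ccNumber.substring(0, keep) + "x".repeat(ccNumber.length() - keep);
        }

        // SH: salted hash, here SHA-256 over a random 16-byte salt plus the value
        static String saltedHash(String value) throws NoSuchAlgorithmException {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);
            byte[] digest = md.digest(value.getBytes(StandardCharsets.UTF_8));
            // store salt and digest together so the hash can be recomputed later
            return HexFormat.of().formatHex(salt) + ":" + HexFormat.of().formatHex(digest);
        }

        public static void main(String[] args) throws NoSuchAlgorithmException {
            System.out.println(mask("4111111111111111")); // prints 4111xxxxxxxxxxxx
            System.out.println(saltedHash("alice@example.com"));
        }
    }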
In addition, I introduced specific log level requirements (e.g. INFO, DEBUG, etc.), so that a developer knows exactly what he is allowed to log and in what way. This sheet has turned out to be really useful in a couple of projects now. Perhaps it might help others too.
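To give an idea of how such log level requirements could be made directly usable for developers, here is a hypothetical sketch; the information types and policy entries are invented for illustration:

    import java.util.Map;

    public final class LogPolicy {

        enum Handling { PLAIN, MASKED, SALTED_HASH, FORBIDDEN }

        // Hypothetical entries; a real project would mirror the full requirements sheet.
        private static final Map<String, Handling> DEBUG_LEVEL = Map.of(
                "username", Handling.PLAIN,
                "ccNumber", Handling.MASKED,
                "password", Handling.FORBIDDEN);

        // Returns how an information type must be treated before it may appear
        // in a DEBUG log entry; unknown types default to FORBIDDEN.
        static Handling atDebug(String infoType) {
            return DEBUG_LEVEL.getOrDefault(infoType, Handling.FORBIDDEN);
        }
    }

A developer, or a central logging wrapper, can then look up the required treatment for an information type instead of guessing.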

Saturday, July 2, 2011

Pushing Appsec Left in the SDLC

There are two main reasons why conceptual application security is so vital.

First, the cost factor:


There are plenty of studies comparing the costs of defect fixing in different phases of the development lifecycle. Graff and van Wyk estimate, for instance, in their book Secure Coding: Principles & Practices, that fixing a defect once an application is in production is 60 times more expensive than it would have been in the design phase:


There are other studies that calculate these costs lower (e.g. Gartner) or even higher (e.g. IBM). The reason is most likely that such costs highly depend on which projects and organizations one looks at.

However, the qualitative conclusion is always the same: it is much cheaper to fix a defect as early in the development lifecycle as possible. This means, of course, that defects are best fixed or prevented in a conceptual phase, which is in fact possible for a large number of vulnerabilities. Gary McGraw et al. estimate in their fantastic book Software Security Engineering that as many as 60% of all defects are already visible within the design phase.

Second, the achieved level of security:

With this first conclusion comes another: since fixing a defect is so expensive when done late in the development lifecycle, in practice people tend to fix only the most critical vulnerabilities. Less critical defects, which do not pose an immediate threat but reduce the level of robustness of an application, are often not fixed once the implementation phase is finished.

In addition, just like a house, an application can't be built with a high level of security when it was not meant to be secure from the start. Or, as Roger Thornton, founder of Fortify, put it: "You cannot secure something that was not built to be secure".