Tuesday, January 31, 2012

Application Security Metrics

MITRE, probably most famous for its widely adopted vulnerability reference list, the Common Vulnerabilities and Exposures (CVE), and its less well-known but certainly no less valuable project, the Common Weakness Enumeration (CWE), has directly or indirectly published three scoring models that I'd like to introduce:
The CVSS: The Common Vulnerability Scoring System (CVSS) is, to my understanding, not published directly by MITRE but by the Forum of Incident Response and Security Teams (FIRST) and its Common Vulnerability Scoring System Special Interest Group (CVSS-SIG). CVSS is based on 13 metrics that map characteristics of a vulnerability, such as whether it is reachable from the Internet or not, onto a score value. In its current version 2, CVSS is widely adopted by tool vendors such as Symantec, IBM and HP and is part of many vulnerability databases on the Web.
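To give an impression of how such a mapping works, here is a rough sketch of the CVSS v2 base score equation in Python. The metric weights come from the public CVSS v2 specification; the example vulnerability at the end is just an illustration:

```python
# Rough sketch of the CVSS v2 base score equation (weights from the public v2 spec).
ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}       # Local, Adjacent network, Network
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}     # High, Medium, Low
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}       # Multiple, Single, None
IMPACT = {"N": 0.0, "P": 0.275, "C": 0.660}               # None, Partial, Complete

def cvss2_base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Example: remotely exploitable, low complexity, no authentication, partial C/I/A impact
print(cvss2_base_score("N", "L", "N", "P", "P", "P"))  # -> 7.5
```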
CVSS works well for evaluating such vulnerabilities in products, but not so well when you want to measure the criticality of a weakness found in an application's implementation or design, or when some pieces of information are still missing. These two limitations restrict its practical use.
The CWSS: The Common Weakness Scoring System (CWSS) is another scoring model, developed directly by MITRE, that tries to solve the problems with CVSS discussed above.
Its structure is pretty similar: we have 18 instead of 13 metrics. As with CVSS, they are structured in three subgroups: the base finding metric group (captures the inherent risk of the weakness), the attack surface metric group (the barriers an attacker must cross in order to exploit the weakness) and the environmental metric group (captures the specific environment/deployment context).
The result of a CWSS scoring (for which a couple of tools are available) is a CWSS vector for each group, consisting of the individual rating of each specific metric, such as "RP:RU,0.7/RL:A,1/AV:V,0.8/AS:W,0.9/AI:S,0.8/IN:Aut,1/SC:NA,1". Here, AV stands for "Attack Vector"; a value of "L", for example, would denote a vulnerability that is only locally exploitable. In the end we get a CWSS score for each subgroup as well as a total value, which is also similar to CVSS.
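As a small illustration, such a vector can easily be processed programmatically. The sketch below assumes the "Metric:Value,Weight" layout from the example above; it is not taken from the CWSS specification itself:

```python
# Sketch: parse a CWSS-style vector of the form "METRIC:Value,Weight/..." into a dict.
# The format is assumed from the example vector above, not from the CWSS spec.
def parse_cwss_vector(vector: str) -> dict:
    metrics = {}
    for part in vector.strip().split("/"):
        name, rest = part.split(":", 1)
        value, _, weight = rest.partition(",")
        metrics[name] = (value, float(weight) if weight else None)
    return metrics

vector = "RP:RU,0.7/RL:A,1/AV:V,0.8/AS:W,0.9/AI:S,0.8/IN:Aut,1/SC:NA,1"
for metric, (value, weight) in parse_cwss_vector(vector).items():
    print(f"{metric}: value={value}, weight={weight}")
```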

So where is the difference between CVSS and CWSS? First of all, the metrics are different. For instance, CWSS takes into account whether internal or external security controls are in place, and it distinguishes between the required and the acquired privilege as well as the privilege layer (application, system or network).

CWSS also allows us to rate any metric as "unknown" or "not applicable", which is great since it addresses the situation mentioned above where we do not yet have certain information about a weakness. In addition, some metrics (like "technical impact", for instance) can be quantified with a value different from the standard one in the model. Due to this flexibility, the CWSS model fits much better into an SDL where it is reviewed and adapted over time. More information on this comparison can be found in the CWSS documentation.

After working quite a lot with both models, I found that CWSS is not just an adaptation of CVSS so that we can also measure weaknesses; it marks a clear evolutionary step beyond CVSS. It addresses CVSS's problems and can be applied both to weaknesses in our design and to vulnerabilities in our implementation.

My advice is therefore to start thinking about working with CWSS. Unfortunately there is no tooling yet, but I'm confident this will change shortly. CWSS is also still in a beta stage (version 0.8). I found a couple of possible flaws in the document and sent them to MITRE; all of them concerned typos and the like, not the model itself.

Oh, I nearly forgot: I mentioned three MITRE scoring models. Yes, in fact there is an additional one, the CMSS: the Common Misuse Scoring System. I had to work with it at a customer and found it has all the same shortcomings as CVSS. Since I can't find it officially linked on the MITRE homepage and it hardly seems to be mentioned anywhere, I think it was just some kind of research effort I would not bother too much about.

Tuesday, November 1, 2011

Functional vs. Non-Functional Security Requirements

In the non-security world, there is a lot of confusion about the definitions and meanings of the terms functional and non-functional security requirement. I often see, for instance, some people defining something like authentication as a non-functional requirement while others define it as a functional one.

Both terms become a bit easier to distinguish if we clarify what they actually describe:
A Functional Security Requirement (FSR) is a security feature: a function that a system or system component must be able to perform (IEEE), something that we can test, a yes-or-no requirement that we can define in concepts and build test cases for.
Security can be defined using functional security requirements such as authentication, authorization, input validation, etc.

We mostly define FSRs in a positive way (a positive requirement), since we describe the presence of a specific security feature. In contrast, when we describe something in a negative way (a negative requirement), for instance that a specific attack is not supposed to succeed, we are most likely talking about a non-functional requirement.

A Non-Functional Security Requirement (NFSR) is a quality property: something that we cannot test in a yes-or-no way, but that we can measure using metrics.
Examples of NFSRs are availability, integrity, robustness, fault tolerance, trustworthiness, attack resilience, attack tolerance, attack resistance and so on. We can't say that an application either has or lacks one of these attributes; we can only say that it has it to a certain degree.
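As a minimal illustration of that difference (the requirement texts and numbers below are invented): a functional requirement can be turned into a yes/no test case, while a non-functional one is evaluated against a measured metric:

```python
# Sketch: a functional requirement is testable yes/no, a non-functional one is measured.
# The concrete requirement texts and threshold values are made-up examples.

def test_fsr_authentication_required(http_status_without_login: int) -> bool:
    # FSR: "Access to /admin without a session must be denied." -> passes or fails
    return http_status_without_login in (401, 403)

def evaluate_nfsr_availability(uptime_minutes: float, total_minutes: float) -> float:
    # NFSR: "The application should be highly available." -> measured as a degree
    return 100.0 * uptime_minutes / total_minutes

print(test_fsr_authentication_required(403))          # True (requirement met: yes)
print(evaluate_nfsr_availability(525_000, 525_600))   # ~99.89 (met "to a certain degree")
```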

Unfortunately, the often-referenced quality model presented in the ISO/IEC 9126 standard for software product quality only defines the functional part of security requirements, which could be one reason for this confusion.

In the context of security requirements I also found a couple of other interesting terms.

First of all, one that was referenced as a Derived Security Requirement in a presentation given by Paco Hope from Cigital.

Since I am not aware of any rule stating that an FSR or NFSR must not be derived from other FSRs or NFSRs, I do not see why this is not just an additional attribute or subclass of either of the two. In most cases it would be a functional one, I guess; in other words, a derived functional security requirement, or derived FSR.

The OpenSecurityArchitecture Group introduces two additional terms:

Secure Functional Requirements, "this is a security related description that is integrated into each functional requirement. Typically this also says what shall not happen. This requirement artifact can for example be derived from misuse cases."

This is actually a classic example of a negative security requirement (see above), which is mostly non-functional in nature (e.g. "XSS has to be prevented").

Secure Development Requirements, "these requirements describe required activities during system development which assure that the outcome is not subject to vulnerabilities".

And this one could be either functional or non-functional. In fact, most secure coding guidelines are non-functional, because they are meant to be used in all kinds of applications.

So, in the end, it all comes down to either functional or non-functional security requirements. We can use one of the attributes discussed here to clarify them further, but they remain functional or non-functional.

Saturday, October 8, 2011

Threat Profiling

Having worked with threat modeling for quite a while now, I was recently tasked by one of my customers with developing a threat modeling concept. This is perhaps nothing new, but the interesting aspect here was to integrate a kind of simplified approach ("threat modeling light", if you will) that could be executed by internal staff with little threat modeling experience.

I started by evaluating the different threat modeling methodologies that are out there: Microsoft's, of course, Trike, P.A.S.T.A. and the Application Threat Modeling methodology found on the OWASP web site. I discovered that none of them really provides a set of clear tasks that could be executed by staff with little security know-how, except maybe creating an application overview, but that's about it.

The reason for this lies in the fact that threat modeling is mostly an experience-driven activity: identifying trust relationships and boundaries, possible misuse cases and so on. This experience is the result of years of practical work in this industry.

So how can we capture this experience? The answer was already given years ago by people like Gary McGraw with his Attack Patterns. Those were also integrated, as Attack Models, as one of the twelve practices in Cigital's Building Security In Maturity Model (BSIMM). The problem I've got with attack patterns, though, is that they do not really provide a clear procedure for using them in practice, for instance for deriving a threat within a threat modeling activity.

I call this missing step between application decomposition and threat analysis threat profiling.

A threat profile starts with someone selecting a set of technical and/or business properties of the system under evaluation. This could be done as part of the application overview/decomposition phase.

Here are a couple of examples of what those properties could look like:
Property            | Type      | Description
TA.TYPE.WEB         | Technical | Is a web application
TA.TYPE.WINDOWS     | Technical | Is a Windows application
TA.TYPE.EXTERNAL    | Technical | Is an external application
TA.FUNC.UPLOAD      | Technical | Has an upload functionality
TA.LANG.C-CPP       | Technical | Code written in C/C++
TA.LANG.PHP         | Technical | Code written in PHP
TA.SRC.SP2010       | Technical | Based on SharePoint 2010
TA.ATN.KERBEROS     | Technical | Uses Kerberos authentication
BA.DATA.TYPE2       | Business  | Contains classified data
BA.USERS.ANONYMOUS  | Business  | Anonymous users
BA.DEV.EXT          | Business  | Developed externally

Now, these properties can be matched against a custom threat library, which is managed by experienced security staff. This threat library is where the knowledge lives. It maps properties to concrete, but still only possible, threats - just a few relevant ones, of course, not all of them. For instance, XSS for a general web-based application (a technical property), buffer overflows for C/C++ code (a technical property), or the risk that untrusted developers include backdoors in our code (a business property).

Here is an example of what such a threat library could look like:
Threat ID   | Threat Description                 | Properties
T.XSS1      | General cross-site scripting       | +TA.TYPE.WEB
T.BFO       | Generic buffer overflow            | +TA.LANG.C-CPP
T.MALWARE   | Malicious files could be uploaded  | +TA.TYPE.WEB, +TA.FUNC.UPLOAD
T.BACKDOORS | Developers could include backdoors | +BA.DEV.EXT

The basis for such a threat library could be CAPEC or WASC. More important, however, are the threats that are organization- or application-specific. Think of a specific platform like SharePoint 2010 that you are using: wouldn't it be nice to have all possible SharePoint threats pop up when someone starts to work on a new application for it?

So, now we have the input (technical and business properties) and we have the knowledge (properties mapped to specific threats). What we need now is a tool that matches those two sources against each other. I used a simple Excel sheet for a start, although I doubt that this would be the best choice for a professional project. At this point, however, project managers only need to select a couple of application properties, and a simple macro automatically generates a list of possible threats for that system.
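The matching logic itself is simple. Here is a minimal sketch in Python rather than an Excel macro, using the property and threat IDs from the example tables above; the data structures are just one possible layout, not part of any standard:

```python
# Sketch: match selected application properties against a threat library.
# Property and threat IDs are taken from the example tables above.
THREAT_LIBRARY = {
    "T.XSS1":      {"description": "General cross-site scripting",
                    "properties": {"TA.TYPE.WEB"}},
    "T.BFO":       {"description": "Generic buffer overflow",
                    "properties": {"TA.LANG.C-CPP"}},
    "T.MALWARE":   {"description": "Malicious files could be uploaded",
                    "properties": {"TA.TYPE.WEB", "TA.FUNC.UPLOAD"}},
    "T.BACKDOORS": {"description": "Developers could include backdoors",
                    "properties": {"BA.DEV.EXT"}},
}

def build_threat_profile(selected_properties: set) -> list:
    """Return all threats whose required properties are all selected."""
    return [f"{tid}: {t['description']}"
            for tid, t in THREAT_LIBRARY.items()
            if t["properties"] <= selected_properties]

# Example: an externally developed web application with an upload function
print(build_threat_profile({"TA.TYPE.WEB", "TA.FUNC.UPLOAD", "BA.DEV.EXT"}))
```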

For each threat in the resulting list we link a more detailed threat description outlining specific attacks, countermeasures, a criticality rating and links to external resources such as CAPEC.

This could be, for instance, a couple of SharePoint threats, like activated web services, that you have to consider when starting a SharePoint project. The result is a threat profile - not a threat model!

A threat profile gives an impression of the threats to a system and can be the basis for further threat analysis steps leading to a threat model. Alternatively, it could form the basis of a later penetration test of that specific system.

Monday, July 18, 2011

Architectural Security Reviews

When it comes to conducting a security review of an application architecture (aka a design review), there are basically two major terms that are often referenced: first, the software-centric Threat Modeling approach by Microsoft; second, the one suggested by Gary McGraw, known as Architectural Risk Analysis.

Although both approaches seem rather different at first sight, since one uses the term "threat" and the other "risk", they are actually quite comparable: threat modeling is in fact often a risk review too. When it is combined with risk assessment metrics like DREAD, for instance, we end up with a list of qualitative risks.

So, when secure SDLCs are described, you will generally find one (but only one) of the two terms used to describe a security design review.

In my experience, however, there is a need for another approach: less comprehensive and more focused on functional security, such as the correct use of specific security controls like authorization, authentication, validation and cryptography. This approach could, of course, easily be integrated into or combined with an existing threat model or other architectural analysis results.

I was very much inspired here when I first saw a presentation by John Steven on "Advanced Threat Modeling" a couple of years ago. He used a mix of different visualization techniques to describe characteristics and problems within an application architecture in a very perspicuous way.

Since then I have used some of his ideas extensively to visualize architectural security in various projects - in most cases not to conduct a complete threat modeling activity, but as a straightforward approach to sketch and discuss the architecture of a piece of software and to outline its basic problems and characteristics so that everybody can understand them quickly. I usually call this activity an Architectural Security Review to distinguish it from "Architectural Risk Analysis".

Here is a very simple example:
The diagram above outlines a very high-level three-tier architecture with an Apache frontend and some backend systems, as well as three different actors. The red dotted lines mark trust boundaries, the red boxes security controls or security problems. Since some aspects are very important to visualize in much greater detail, we use hybrid diagrams here, showing different levels of detail.

I usually work with a number of different hybrid diagrams to visualize different kinds of security attributes, like identity flows or data handling (including the crypto architecture), or different views of them.

Especially on agile projects, this kind of activity turned out to be really useful since it can be adapted very quickly and gives existing as well as new project members a quick understanding of the security architecture. And that is one of the great advantages of this approach: having something everybody can understand and perhaps agree on.

In the next couple of weeks I will introduce some of these techniques here in this blog. I will show that we can visualize even very complex architectural characteristics in a rather simple and perspicuous way.

Monday, July 11, 2011

Defining Data Handling Requirements

We often experience a gap between security requirements on the one hand and their practical implementation on the other. The people defining security requirements often simply do not know how to do so in a way that ensures the requirements are understood and implemented completely and thoroughly.

One vital class of security requirements are those concerning data confidentiality and privacy. Many data leaks and privacy problems result from insufficiently implemented data handling mechanisms. For instance, when we look at the log files of production systems, they often leak massive amounts of sensitive data, frequently without the knowledge of any stakeholder.

To reduce this risk, I discovered a while ago the need to define exact data handling requirements for all types of information an application handles. As a result, all stakeholders - project management, development and, of course, the information security officer - can discuss and agree upon them. It is vital here that these requirements are detailed enough for developers to understand and follow, but also abstract enough that all other involved parties understand them and can agree on them too.

I ended up with the following sheet structure (you can download the full example from Google Docs by clicking on the image):
The first column lists all relevant information types (in real projects you would have a few dozen of them) as well as compliance requirements that might have to be kept in mind for each one. The following columns then define the detailed data handling requirements for data at rest (persisted in log files, databases or caches) and data in motion (e.g. transferred in different parts of the HTTP protocol).

Each data handling requirement is expressed with one of the following codes:
  • E: Data Encryption
  • HTTPS: Transport Encryption
  • H: Simple Hash
  • SH: Salted Hash
  • M: Masking (e.g. replace all but the first 4 digits of a credit card number with an "x")
  • -: no persistence or transmission is allowed
In addition, I introduced specific log level requirements (e.g. INFO, DEBUG, etc.), so that a developer knows exactly what he is allowed to log and in what way. This sheet has turned out to be really useful in a couple of projects now; perhaps it might help others too.
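To illustrate how such a requirement translates into code, here is a minimal sketch of the masking rule ("M") combined with a log statement that honors it. The function name, the sample card number and the concrete log-level rule are made-up examples, not part of the sheet itself:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payment")

def mask_cc(cc_number: str) -> str:
    """Masking rule 'M': keep the first 4 digits, replace the rest with 'x'."""
    digits = cc_number.replace(" ", "")
    return digits[:4] + "x" * (len(digits) - 4)

# Assumed data handling requirement for card numbers: may only be logged masked,
# and only at INFO level or above (never in DEBUG traces of raw requests).
cc = "4111 1111 1111 1111"  # well-known test number, not real data
log.info("Processing payment for card %s", mask_cc(cc))  # -> 4111xxxxxxxxxxxx
```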

Saturday, July 2, 2011

Pushing Appsec Left in the SDLC

There are basically two main reasons why conceptual application security is so vital.

First, the cost factor:


There are plenty of studies comparing the cost of fixing defects in different development lifecycle phases. Graff and van Wyk, for instance, estimate in their book Secure Coding: Principles & Practices that fixing a defect when an application is in production is 60 times more expensive than it would have been in the design phase:


There are other studies that put these costs lower (e.g. Gartner) or even higher (e.g. IBM). The reason is most likely that these costs depend heavily on which projects and organizations one looks at.

However, the qualitative conclusion is always the same: it is much cheaper to fix a defect as early in the development lifecycle as possible. This means, of course, that defects are best fixed or prevented in a conceptual phase, which is in fact possible for a large number of vulnerabilities. Gary McGraw et al. estimate in the fantastic book Software Security Engineering that as many as 60% of all defects are already visible in the design phase.

Second, the achieved level of security:

With this first conclusion comes another: since fixing a defect late in the development lifecycle is so expensive, in practice people tend to fix only the most critical vulnerabilities. Less critical defects, which do not pose an immediate threat but reduce the robustness of an application, are often not fixed once the implementation phase is finished.

In addition, just like a house with strong security requirements, an application can't achieve a high level of security if it was never meant to be secure. Or, as Roger Thornton, founder of Fortify, put it: "You cannot secure something that was not built to be secure".

Thursday, June 30, 2011

Conceptual Application Security

IT security can be seen as a crosscutting concern, meaning it affects an IT system throughout its complete lifecycle. There are basically two different stages of such a system we can look at: the pre-implementation phase and everything after it. We could call the latter the post-pre-implementation phase, or simply everything beginning with the implementation.

Before implementation begins we have conceptual security tasks; starting with the implementation we have those concerning the realization. I will call these "meta security phases". This distinction is vital since it describes two relatively separate phases in which completely different views and approaches exist.

The following diagram depicts the development phases where conceptual application security is mainly practiced. It has, of course, a huge impact on the later realization phases, though.

Within the field of IT security in general, both meta phases are pretty well understood: we have a lot of security consultants working on security concepts on the one hand, and quite a lot of others doing security assessments, pentests and operational security on the other.

Within application security, which is of course highly influenced by web application security, the conceptual meta phase is, in my experience, often not as widely understood as an essential part of security projects as it is within the IT security field in general. In fact, you will often find that the only such activities practiced in projects are those concerning identity management.

Granted, identity management (or IdM) surely has a great impact on application security. It is, though, just one of many functional security aspects - others being validation or secure data handling, for instance - not to mention non-functional security aspects like robustness, fault tolerance or code correctness.

Activities concerning conceptual application security include secure coding standards, architectural analysis and threat modeling, as well as security requirements engineering and review. Many of these are often referenced but little understood from a methodological point of view.

Since, in my impression, discussions (and also blogs) around appsec mostly relate to programmatic or operational security measures, I decided to set up this blog to share some of my conceptual ideas and thoughts on application security - or, as you might say, the very foundations of secure applications.