MITRE, probably most famous for its widely adopted vulnerability reference list, the Common Vulnerabilities and Exposures (or CVE), and its less well-known but certainly no less valuable other project, the Common Weakness Enumeration (or CWE), directly or indirectly published three such scoring models that I'd like to introduce:
The CVSS: The Common Vulnerability Scoring System (CVSS) is, from my understanding, not directly published by MITRE but by the Forum of Incident Response and Security Teams (FIRST) through its Common Vulnerability Scoring System Special Interest Group (CVSS-SIG). The CVSS is based on 13 metrics that map characteristics of a vulnerability, such as whether or not it is reachable from the Internet, onto a score value. In its current version 2, the CVSS is widely adopted by many tool vendors like Symantec, IBM or HP and is part of many vulnerability databases on the Web.
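To give an impression of how such a metric-to-score mapping works, here is a minimal Python sketch of the CVSS v2 base equation as published by FIRST. It only covers the six base metrics with their constant weights from the specification; temporal and environmental metrics are left out.

```python
# Minimal sketch of the CVSS v2 *base* equation as published by FIRST.
# Only the base metric group is covered; temporal/environmental metrics are omitted.

ACCESS_VECTOR     = {"L": 0.395, "A": 0.646, "N": 1.0}    # Local / Adjacent / Network
ACCESS_COMPLEXITY = {"H": 0.35,  "M": 0.61,  "L": 0.71}   # High / Medium / Low
AUTHENTICATION    = {"M": 0.45,  "S": 0.56,  "N": 0.704}  # Multiple / Single / None
IMPACT            = {"N": 0.0,   "P": 0.275, "C": 0.660}  # None / Partial / Complete


def cvss2_base_score(av, ac, au, c, i, a):
    """Return the CVSS v2 base score (0.0 - 10.0) for the given metric values."""
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)


# Example: a remotely exploitable vulnerability with partial C/I/A impact
# (vector AV:N/AC:L/Au:N/C:P/I:P/A:P) yields the well-known base score of 7.5.
print(cvss2_base_score("N", "L", "N", "P", "P", "P"))  # 7.5
```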
The CVSS is good for evaluating such vulnerabilities in products, but it does not work really well if you want to measure the criticality of a weakness found in an application's implementation or design, or if some pieces of information are missing. These two reasons restrict its practical use.
The CWSS: The Common Weakness Scoring System (CWSS) is another scoring model, this one developed directly by MITRE, that tries to solve the problems with CVSS discussed above.
Its structure is pretty similar. We have 18 instead of 13 metrics. Similar to the CVSS, they are structured in three subgroups: the base finding metric group (captures the inherent risk of the weakness), the attack surface metric group (the barriers that an attacker must cross in order to exploit the weakness) and the environmental metric group (captures the specific environment/deployment context).
The result of a CWSS scoring (for which a couple of tools are available) is a CWSS vector for each group that consists of the individual rating of each specific metric, such as "RP:RU,0.7/RL:A,1/AV:V,0.8/AS:W,0.9/AI:S,0.8/IN:Aut,1/SC:NA,1". Here, for example, "AV" means "Attack Vector"; a value of "L" would stand for "Local", i.e. a weakness that is only locally exploitable. In the end we get a CWSS score for each subgroup as well as a total value. This is also similar to the CVSS.
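To make the vector notation a bit more tangible, here is a small Python sketch of my own (not an official CWSS tool) that splits such a vector into its factors, value codes and weights:

```python
# Illustrative sketch (not an official CWSS tool): split a CWSS vector into
# (factor, value, weight) triples so the individual ratings become readable.

def parse_cwss_vector(vector: str):
    factors = {}
    for entry in vector.strip().split("/"):
        factor, rest = entry.split(":", 1)      # e.g. "AV" and "V,0.8"
        parts = rest.split(",")
        value = parts[0]                        # the chosen value code, e.g. "V"
        weight = float(parts[1]) if len(parts) > 1 else 1.0
        factors[factor] = (value, weight)
    return factors


vector = "RP:RU,0.7/RL:A,1/AV:V,0.8/AS:W,0.9/AI:S,0.8/IN:Aut,1/SC:NA,1"
for factor, (value, weight) in parse_cwss_vector(vector).items():
    print(f"{factor}: value={value}, weight={weight}")
```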
So where is the difference between CVSS and CWSS? First of all, the metrics are different. For instance, the CWSS takes into account whether internal or external security controls are in place, and it distinguishes between the required and the acquired privilege and privilege layer (application, system or network).
The CWSS also allows us to rate any metric as "unknown" or "not applicable", which is great since this addresses the situation mentioned above where we do not yet have certain information about a weakness. In addition, some metrics (like the "technical impact", for instance) can be quantified with a value different from the standard one in the model. Due to this flexibility, the CWSS model fits much better into an SDL, where it is reviewed and adapted over time. More information on this comparison can be found in the CWSS documentation.
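To sketch what that flexibility looks like in practice, here is a small hypothetical example; the value codes and weights below are my own assumptions for illustration, not taken from the CWSS document:

```python
# Hypothetical illustration of CWSS's flexibility: factors rated as "unknown"
# or "not applicable", plus a quantified custom weight for Technical Impact.
# The concrete value codes and weights here are assumptions for illustration -
# check the CWSS document for the authoritative defaults.

ratings = {
    "TI": ("Q", 0.85),   # Technical Impact: quantified with our own weight
    "AP": ("UK", 0.5),   # Acquired Privilege: unknown for now, revisit later
    "IC": ("NA", 1.0),   # Internal Control Effectiveness: not applicable
}

# Later in the SDL, once more information is available, the rating is refined:
ratings["AP"] = ("A", 0.7)   # assumed example: now known to require an admin account

for factor, (value, weight) in ratings.items():
    print(f"{factor}: {value} (weight {weight})")
```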
After working quite a lot with both models, I found that CWSS is not just an adaptation of the CVSS that also lets us measure weaknesses; it marks a clear evolutionary step beyond CVSS. It addresses CVSS's problems and can be applied to both weaknesses in our design and vulnerabilities in our implementation.
My advice is therefore to start thinking about working with CWSS. Unfortunately, mature tool support is not there yet, but I'm confident this will change shortly. CWSS is also still in a beta stage (version 0.8). I found a couple of possible errors in the document, which I sent to MITRE. All of them concerned typos and that kind of thing, not the model itself.
Oh, I nearly forgot: I mentioned three MITRE scoring models. Yes, in fact there is an additional one, the CMSS: the Common Misuse Scoring System. I had to work with it at a customer and found it has all the same shortcomings the CVSS has. Since I can't find it officially linked on the MITRE homepage and it seems not to be mentioned anywhere else, I think it was just some kind of research effort I would not bother too much about.