Since posting my previous blog entry a couple of weeks ago, I have been researching how I could use the openPERT tool to improve the rigor of my selection of the metrics used in the FAIR methodology. This comes after listening to episode 9 of the RiskHose podcast, which discussed how to incorporate "expert" opinion into your metrics.
While I am no statistician, and I have only just started reading about PERT, beta distributions and Monte Carlo simulations, I think I have gained a basic understanding of how they are used, especially in the usual context of project management. However, I am grappling with how to make use of these tools in the context of the FAIR risk methodology. So, as a way to draw out my thinking, I thought I would jot down my current thoughts and hope for some feedback/direction to determine whether I am heading in the right direction.
To ensure this post doesn't turn into War and Peace, I will limit my thoughts to the initial FAIR metric selection of Threat Community capability. FAIR suggests five categories of Threat Community capability:-
| Capability | Definition |
|---|---|
| Very High (VH) | Top 2% when compared against the overall threat population |
| High (H) | Top 16% when compared against the overall threat population |
| Moderate (M) | Average skill and resources (between bottom 16% and top 16%) |
| Low (L) | Bottom 16% when compared against the overall threat population |
| Very Low (VL) | Bottom 2% when compared against the overall threat population |
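These percentile bands can be expressed as a simple lookup. The helper below is my own illustration (it is not part of FAIR or openPERT): it maps a threat agent's percentile rank within the overall threat population to the category names in the table above.

```python
# Hypothetical helper for illustration only: map a percentile rank
# (0-100, within the overall threat population) to a FAIR capability category.
def capability_category(percentile):
    if percentile >= 98:
        return "Very High (VH)"  # top 2%
    if percentile >= 84:
        return "High (H)"        # top 16%
    if percentile > 16:
        return "Moderate (M)"    # between bottom 16% and top 16%
    if percentile > 2:
        return "Low (L)"         # bottom 16%
    return "Very Low (VL)"       # bottom 2%

print(capability_category(90))  # High (H)
```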
The problem is that a particular generic Threat Community will be made up of many smaller Threat Communities, each of which has its own capabilities. So it would be hard to define a Threat Community such as "Organised Crime" as having just a High (H) threat capability.
I believe that you need to be able to justify any metric you use, and it has to pass the "smell" test for the lay reader. That is, if a metric doesn't "smell" right, it will create doubt in the reader for the rest of the assessment. Given the diversity of a Threat Community such as "Organised Crime", I believed that its capabilities would range from Moderate (M) to Very High (VH).
Before going any further, it is probably best to define the example Threat Community of "Organised Crime" more precisely. This Threat Community comprises those criminals working as a group to misuse computer systems for monetary or material gain. To better understand which potential Threat Vectors this Threat Community may use, we can define the motivation further to include:
- Illegal Information Disclosure
- Unauthorised Data Alteration
- Supporting Traditional Crime
When I wrote my original risk assessment, I tried to visualise a Threat Community's capability for the reader. To do this I displayed the metric as a chart with a coloured normal distribution curve, such as:-
Other communities with more narrowly defined capabilities would be shown as below.
It was when I visualised a broad capability such as "Organised Crime" that I became uncomfortable with my capability choice. Basically, it didn't "smell" right to me. While it is probably true that "Organised Crime" as a Threat Community has such a broad range of capability, from a Medium to Large Enterprise point of view a large part of that Threat Community would not be relevant. This is both because such Enterprises do not match the target or motivation of specific sub-threat communities, and because the techniques used can be expected to be ineffective in an Enterprise environment (assuming firewalls, proxies with authentication, patching, anti-malware software, etc.). Such a sub-threat community would be targeting "consumer" end users, and would only need to apply a much lower "force" given the low control strength of a typical home personal computer.
This is the point I had reached before I wrote my previous entry. While I was really satisfied with the results of using the FAIR methodology for a generic risk assessment, there were some areas where I wanted to introduce more rigor, especially around metrics that had a strong dependence on my "expert" opinion.
At this point I will attempt to use the openPERT Excel add-in to better display how a particular generic Threat Community may apply in a context such as an Enterprise environment (NB: I am using the tool for the first time as I write this post). First, some assumptions:
- While we may see the less capable “consumerware” threats, because of the Enterprise environment we can assume they will be mitigated (and therefore will have little impact on the risk assessment).
- As we move towards the High category, the force applied by the Threat Community will be more successful in the initial stages, but is less likely to cause a significant impact to the business. This may be because the community ignores Enterprise environments, as they are not its target, or because the additional stages needed to complete the community's motivations are ineffective. So we can set the minimum capability for our betaPERT calculation at 67%, the midpoint between 50% and 84% (the lower boundary for High).
- The upper boundary will still be 100%, because it is likely that only the most capable members of the "Organised Crime" community would target larger Enterprise environments.
- Given the context of a Medium to Large Enterprise, the most likely capability to be successful would be 88%. This is just an initial estimate, so that we can run a simulation and produce a distribution histogram from the openPERT tool.
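Outside of Excel, the assumptions above can be sketched as a betaPERT Monte Carlo simulation in a few lines of Python. This is my own minimal sketch of what I understand the openPERT add-in to be doing, using the classic PERT formulas for the beta shape parameters (with the standard shape constant of 4); it is not the add-in's actual code.

```python
import random
import statistics

# Assumed inputs from the assumptions above: min 67%, most likely 88%, max 100%.
minimum, most_likely, maximum = 0.67, 0.88, 1.00
LAMBDA = 4  # standard PERT shape constant

# Classic betaPERT shape parameters derived from min / most likely / max.
alpha = 1 + LAMBDA * (most_likely - minimum) / (maximum - minimum)
beta = 1 + LAMBDA * (maximum - most_likely) / (maximum - minimum)

random.seed(42)  # reproducible run
# Sample a standard beta distribution and rescale it to [minimum, maximum].
samples = [minimum + (maximum - minimum) * random.betavariate(alpha, beta)
           for _ in range(10_000)]

# The simulated mean should sit close to the analytic PERT mean,
# (min + 4 * most_likely + max) / 6, which is about 0.865 here.
print(round(statistics.mean(samples), 3))
```

Binning `samples` would give the same kind of distribution histogram that the add-in produces.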
To make use of openPERT, we run Excel and select the betaPERT Simulation option from the Add-In menu.
The betaPERT Simulation asks for a minimum, maximum and most likely estimate for your values. Entering the values from the assumptions above produces the following output.