My Experience using FAIR for Risk

Recently I decided to use the FAIR risk model to document how the threats have dramatically changed over the first 6 months of 2011. Besides having always liked the concepts behind the FAIR model, I thought the risk assessment paper I had to write would benefit from a focus on Threat Populations/Communities rather than using technology as a driver.

I don’t intend to try to explain the FAIR methodology here, as there is plenty of documentation on the model already, but I thought I would share some comments from my experience in using it. I am also sure that I am misusing some of the nomenclature used in FAIR, so I apologise for that up front.

The largest benefit in using the FAIR model was being able to show non-technical managers that the increase in risk in 2011 was being driven by changes in the Threat Population profiles. That is, it was the increased activity by “assumed” Nation State, Hacker and Hacktivist Threat Populations that was driving the increased risk, and this could be shown without having to focus on the specific techniques or vulnerabilities that were being utilised. By describing each Threat Population, their capabilities and their potential motivations, I believe the outcomes gained greater acceptance: the non-technical managers could better grasp the drivers, and were therefore more accepting of the assessment. Because the Threat Population profiles are unlikely to change significantly (i.e. who they are, and what their capabilities and motivations are), I spent a lot of time carefully presenting each profile as an appendix to the assessment. This appendix can be reused in future risk assessments and will allow continuity between assessments.

To better determine where changes may need to occur in the organisation, it was not enough to rely on a high-level assessment of how well we could resist the force applied by Threat Populations (sorry, I am trying to use FAIR nomenclature). Therefore, each Threat Population was linked to common “groups” of Threat Vectors. It is the force applied through these Threat Vectors, for each Threat Population, that allowed for a better overall picture of where changes to Control Strength would have to be applied. I also used these common Threat Vectors to individually show the Threat Event Frequency expected based on the motivation and capability of each Threat Population. This also means that some Threat Vectors are not applicable, as they are not normally used to achieve a Threat Population’s motivation. Some of the vectors used were:

  • Targeted Email
  • Spam
  • Hosted Website
  • Website based (e.g. drive-by-download)
  • …
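To make the linkage concrete, the Threat Population / Threat Vector pairing described above can be sketched as a simple lookup table. All population names, vector names and ratings below are hypothetical placeholders, not values from my actual assessment:

```python
# Hypothetical sketch: mapping Threat Populations to Threat Vectors, each
# with an ordinal Threat Event Frequency (TEF) rating. All names and
# ratings here are illustrative only.

TEF_SCALE = ["Very Low", "Low", "Moderate", "High", "Very High"]

# None marks a vector that is not applicable to that population's motivation.
tef_matrix = {
    "Nation State": {
        "Targeted Email": "High",
        "Spam": None,
        "Hosted Website": "Moderate",
        "Drive-by Download": "Low",
    },
    "Hacktivist": {
        "Targeted Email": "Low",
        "Spam": None,
        "Hosted Website": "High",
        "Drive-by Download": "Moderate",
    },
}

def tef(population: str, vector: str):
    """Return the TEF rating, or None if the vector is not applicable."""
    return tef_matrix[population][vector]
```

A table like this gives each population its own Threat Event Frequency per vector, and makes the "not applicable" cases explicit rather than silently omitted.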

It is in the assessment of the Threat Vector’s Control Strength where I first deviated from the FAIR model. I found the use of a single overall Control Strength assignment to be too generic; it didn’t show where potential weaknesses were. Therefore I used Control Strength to represent the strength of the technology controls, along with the strength of the configuration and policies applied to those technologies. I then created a new assessment category for the “Time to Detect” an event. This second category shows the impact of the available people resources and their ability to deter the force applied by an event. It also reflects the capabilities of those resources, as the more capable they are, the more likely they are to recognise an event occurring.

While the introduction of the new assessment category worked well to show weaknesses split between technology and people (processes could be spread across both categories depending on the context of the assessment), how “Time to Detect” integrates into the final risk calculation posed a problem. In the end I used it twice, which I am still not comfortable with; however, my use was based on where I thought it affected the risk calculation. The first time, I combined it with Control Strength to create a “Mitigation Strength” assessment, which was used in the FAIR model in place of the traditional Control Strength. The second time, I used it to influence the final Severity impact, on the basis that a shorter “Time to Detect” should work to minimise the Severity of an event.
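The two uses of “Time to Detect” could be expressed as something like the following. The ordinal scale, the weakest-link combination rule and the one-notch severity step-down are my own assumptions for the sketch, not part of the published FAIR model:

```python
# Hypothetical sketch of the two uses of "Time to Detect" (TtD) described
# above. The scale and combination rules are assumptions, not FAIR itself.

SCALE = ["Very Low", "Low", "Moderate", "High", "Very High"]

def mitigation_strength(control_strength: str, time_to_detect: str) -> str:
    """First use: combine Control Strength with TtD into a single
    Mitigation Strength rating (here, conservatively, the weaker of the
    two), used in place of the traditional Control Strength."""
    idx = min(SCALE.index(control_strength), SCALE.index(time_to_detect))
    return SCALE[idx]

def adjusted_severity(severity: str, time_to_detect: str) -> str:
    """Second use: a fast Time to Detect ("High" or better) steps the
    Severity of an event down one notch, reflecting quicker containment."""
    idx = SCALE.index(severity)
    if SCALE.index(time_to_detect) >= SCALE.index("High") and idx > 0:
        idx -= 1
    return SCALE[idx]
```

Using the weaker of the two ratings for Mitigation Strength is deliberately pessimistic; a weighted combination would be an equally defensible choice.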

Investing the time to learn how to use the FAIR model was very beneficial, and I will be using it again in the future, especially when I need to determine what strategic changes are needed to address changes in risk. Do I need to add more technology controls (or modify the policies of existing controls), or is the problem a lack of people resources, such that adding technology wouldn’t be sound without the resources to make use of it?

Besides defining how the “Time to Detect” attribute should be used in the risk calculation, I believe that defining capability levels and descriptions for each Threat Vector would strengthen the assessment of a Threat Population’s capability to use a Threat Vector to overcome the applied mitigations. In other words, there is a need to provide examples of what types of actions would imply a Threat Population’s capability is in the top 2% of the general Threat Population. For the Threat Vector of targeted email, a Very High capability may have these attributes:

  • Makes use of undocumented and unknown vulnerabilities in attachments or linked resources
  • Makes use of single-use malware to avoid detection by any security controls protecting the target
  • Has invested time in documenting the background of their target so as to create a believable social engineering engagement to trigger the exploit
  • May have compromised a target’s “trusted” relationships (e.g. family or friends) to send the email, increasing the probability of the exploit action being triggered.
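Such capability anchors lend themselves to a simple per-vector lookup structure, so a rating like “Very High” always resolves to the same observable behaviours. Everything below is an illustrative example, not an authoritative taxonomy:

```python
# Hypothetical sketch: capability-level definitions per Threat Vector,
# anchoring each ordinal rating to concrete, observable behaviours.
# All entries are illustrative examples only.

capability_definitions = {
    "Targeted Email": {
        "Very High": [
            "uses undocumented/unknown vulnerabilities (zero-days)",
            "uses single-use malware to evade the target's controls",
            "researches the target for believable social engineering",
            "may abuse a compromised trusted relationship for delivery",
        ],
        "Moderate": [
            "uses publicly known vulnerabilities with commodity malware",
            "generic pretext with little target-specific research",
        ],
    },
}

def describe(vector: str, level: str) -> list:
    """Look up the example behaviours that anchor a capability rating."""
    return capability_definitions[vector][level]
```

Writing the anchors down once, per vector, keeps successive assessments consistent in the same way the Threat Population appendix does.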

In the last few days since starting to draft this blog, I have started to listen to the Risk Hose podcast. From some of the discussions on Monte Carlo simulations and PERT, and the openPERT project, I believe these can also be of great benefit in strengthening the definition of both what a Threat Population’s capability is and what the Threat Event Frequency of a Threat Vector would be, especially since, due to the frequent lack of data, these assignments are based on “expert” opinion.
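As a minimal sketch of that idea: a PERT distribution can be built from an expert’s minimum / most likely / maximum estimates and sampled with a Monte Carlo loop. This uses only the Python standard library (not openPERT itself), and the estimates are made-up numbers:

```python
import random

def pert_sample(low: float, mode: float, high: float,
                lamb: float = 4.0) -> float:
    """Draw one sample from a (modified) PERT distribution built from an
    expert's minimum / most likely / maximum estimates. PERT is a scaled
    Beta distribution with these shape parameters."""
    alpha = 1 + lamb * (mode - low) / (high - low)
    beta = 1 + lamb * (high - mode) / (high - low)
    return low + random.betavariate(alpha, beta) * (high - low)

def simulate_tef(low, mode, high, trials=10_000):
    """Monte Carlo over an expert's annual Threat Event Frequency range."""
    samples = sorted(pert_sample(low, mode, high) for _ in range(trials))
    return {
        "mean": sum(samples) / trials,
        "p90": samples[int(trials * 0.90)],  # 90th percentile
    }

# Made-up example: an expert estimates 2 to 30 targeted-email events per
# year, most likely 8.
result = simulate_tef(2, 8, 30)
```

Reporting a mean plus a high percentile, rather than a single point estimate, makes the uncertainty in the “expert” opinion visible to the reader of the assessment.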

3 comments
  1. Chris Hayes said:

    Thank you for listening to the Risk Hose podcast and trying out OpenPert. FAIR is a great methodology whose value and potential the broader information security and IT risk management community is beginning to understand and embrace. I would also encourage you to check out the Society of Information Risk Analysts (SIRA); http://societyinforisk.org/. SIRA has quite a few FAIR advocates and practitioners as well as a practical mailing list. Finally, if you have a few questions you want to pose to the Risk Hose crew, let us know!

    Chris Hayes (@risktical, Risk Hose Podcast, and SIRA)

  2. Jack said:

    Very pleased to hear that you found FAIR to be useful. Your application of the model — particularly given that the published information on FAIR is pretty light — is impressive. Your grasp of carving up the larger scenario into different vectors (sub-scenarios) was spot-on, and is something that some people who go through formal training struggle with.

    Actually, your “deviation” from the model regarding control strength isn’t a deviation at all. That’s exactly how it’s supposed to be done. The reason I didn’t get into that level of detail in the white paper is because the paper would have quickly become a book — something I wasn’t prepared to undertake at the time. As you recognized, “Time to detect” can play two roles as a control — potentially limiting the amount of time an attacker has to perform their deed (a “resistive” Loss Event Frequency limiting function), and enabling more rapid containment and reputation damage management even if the attacker is successful (a “responsive” Loss Magnitude limiting function). There’s actually an entire controls taxonomy (not yet published) that’s devoted to defining where different controls play a role in risk management. In fact, many of the controls we use in our industry can have an effect at more than one point in the risk taxonomy.

    Bottom line — thanks for taking the time to kick FAIR’s tires. You seem to be a natural at it. Drop me a line if you want to chat further about FAIR.

    Cheers,
    Jack
