
Reflections on the vulnerability factor (notes on RRA, part 3)

Today I’m going to discuss what is presumably the most complex and difficult-to-handle of the three parameters contributing to a risk (as per the RRA), namely the “vulnerability [factor]”.
First it should be noted that “likelihood” and “vulnerability” must (mentally) be clearly separated, which means that “likelihood” denotes the likelihood of a threat showing up _without_ consideration of existing controls. Security controls already in place will affect the vulnerability factor (in particular if they are effective ;-), but _not_ the likelihood.
First reflect on “how often will somebody stand at the door of our data center intending to enter?” or “how often will a piece of malware show up at our perimeter?” or “how often will an operator make a mistake?” and assign an associated value to the likelihood.
Then, _in a separate_ step, think about: “will that person be able to enter my data center?” (maybe it’s an external support engineer and, given their high workload, your admins are willing to violate the external_people_only_allowed_to_access_dc_when_attended policy, which – of course – is purely fictional and will never happen in your organization ;-)) or “how effective are our perimeter controls when it comes to malware?” (are they? ;-)) or “hmm… what’s the maturity of our change management processes?” and assign an associated value to the vulnerability factor.
As stated in an earlier post, this allows for identifying the areas where to act and thus for efficient overall steering of infosec resources.
Mixing up likelihood and vulnerability might lead to self-complacent statements like “oh, evidently the likelihood of unauthorized access to the data center is ‘1’ as we have that brand-new shiny access control system” …

Now, how to rate the “vulnerability” (on a scale from 1 to 5)?
In my experience a “rough descriptive scale” like

1: Extensive controls in place; the threat can only materialize if multiple failures coincide.
2: Multiple controls in place, but a highly skilled and motivated attacker might overcome them.
3: Some control(s) in place, but a highly skilled and motivated attacker will overcome them. Overall exposure might play a role.
4: Controls in place, but they have limitations. High exposure given and/or only a medium-skilled attacker required.
5: Controls, if any, with limitations. High exposure given and/or only low skills required.
works quite well for exercises with participants somewhat experienced with the approach. If there are people in the room (or call) who’ve already performed RRAs (or, for that matter, similar exercises), a joint understanding of what a “2” or a “4” means in the context of determining the vulnerability [factor] can be attained quickly. This might even work when most of the participants are complete newcomers to risk assessments. In this case, discussing some examples is the key to gaining that joint understanding.
That’s why in RRAs – where getting results in a timely manner is crucial (see introductory notes in part 1 on the complexity vs. efficiency trade-off) – we usually use some scale like the one outlined above.

Still, this – perfectly legitimately – may seem a bit “unscientific” to some people. For example, I currently do a lot of work in an organization where relevant people involved in the (risk assessment) process struggle heavily with the above scale, feeling it does not permit “a justified evaluation of the vulnerability factor due to being too vague”. In such environments, going with a “weighted summation method” is generally a good idea. More or less, this works as follows:

a) identify some factors contributing to “vulnerability” (or “attack potential”), like the “overall (network connectivity) exposure of the system” or the skills/time needed by an attacker, and so on.
b) assign individual values to those factors, perform some mathematical operation on them (usually simply adding them up), map the result to a 1-5 scale and voilà, here’s the – “justified and calculated” – vulnerability factor (see the sketch below).
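
To make the mechanics tangible, here’s a minimal sketch in Python (the function name, factor names and thresholds are mine and purely illustrative; a concrete set of factors and thresholds follows below):

```python
# Minimal sketch of the weighted summation pattern: sum up the individual
# factor ratings and map the sum to an overall 1-5 value via a threshold
# table. Factor names and thresholds are placeholders, not part of the RRA.

def map_sum_to_factor(ratings: dict[str, int],
                      thresholds: list[tuple[int, int]]) -> int:
    """Return the value whose upper bound first covers the summed ratings."""
    total = sum(ratings.values())
    for upper_bound, value in sorted(thresholds):
        if total <= upper_bound:
            return value
    # The sum exceeds all upper bounds -> return the highest defined value.
    return max(value for _, value in thresholds)

# Example: two placeholder factors, sums 2..10 split into five buckets.
print(map_sum_to_factor({"factor_a": 2, "factor_b": 5},
                        [(2, 1), (4, 2), (6, 3), (8, 4), (10, 5)]))  # -> 4
```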

Again, an example might help. Let’s assume the threats-to-be-discussed are different types of attacks (as opposed to all the other classes of threats like acts of God, hardware failures etc.). One might look at three vulnerability-contributing factors, namely “type of attacker”, “exposure of system” and “extent of current controls”, and assign values to each of them as shown in the following sample table:

| Attacker (knowledge) | Exposure | Extent of controls | Value |
|---|---|---|---|
| Script kid | Internet facing | None | 5 |
| Bot | Business partners | Some, but insufficient | 4 |
| Skilled + motivated attacker | Only own organization | Some, but not resistant to a skilled and motivated attacker | 3 |
| Organized crime | Own organization, restricted | Multiple controls, but a single failure might lead to the attack succeeding | 2 |
| Nation state/agency | Very restricted or mgmt access only | Multiple controls; multiple failures needed at the same time for the attack to succeed | 1 |

with the following “mapping scale”:
A sum of values in the range 1-3 gives an overall value of 1,
4-6 gives a 2,
7-9 a 3,
10-12 a 4,
and 13-15 a 5.

Now, suppose we discuss the vulnerability with regard to a certain type of attack against a certain asset (here: some system). In case an attacker shows up (the likelihood of this event would be expressed by the respective likelihood value, which is not included in this example), a skilled and motivated attacker is assumed to perform the attack (leading to a “3” in the first column). Furthermore, the system is exposed to business partners (=> “4” in the second column) and, due to the high sensitivity of the data processed, it has multiple layers of security controls (=> “1” for “extent of controls”). Adding these values gives an overall “8”, which in turn means a vulnerability [factor] of “3”.

For an internet-facing system (“5”) which we expect to be attacked/attackable by script kids (“5”) and which does not have good controls in place (thus a “4”), adding the respective values gives a 14 and subsequently an overall vulnerability [factor] of “5”.
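
For readers who prefer to see this end to end, below is a small Python sketch (the dictionary keys and the function name are mine) that encodes the sample table and the mapping scale above and reproduces both examples:

```python
# Encode the sample table as lookup dicts (one value per column entry)
# and the mapping scale as simple integer arithmetic on the sum.

ATTACKER = {"script kid": 5, "bot": 4, "skilled + motivated attacker": 3,
            "organized crime": 2, "nation state/agency": 1}

EXPOSURE = {"internet facing": 5, "business partners": 4,
            "only own organization": 3, "own organization, restricted": 2,
            "very restricted or mgmt access only": 1}

CONTROLS = {"none": 5, "some, but insufficient": 4,
            "some, not resistant to skilled attacker": 3,
            "multiple, but single failure may suffice": 2,
            "multiple, multiple failures needed": 1}

def vulnerability(attacker: str, exposure: str, controls: str) -> int:
    total = ATTACKER[attacker] + EXPOSURE[exposure] + CONTROLS[controls]
    # Mapping scale: 1-3 -> 1, 4-6 -> 2, 7-9 -> 3, 10-12 -> 4, 13-15 -> 5
    return min(5, (total + 2) // 3)

# First example: skilled attacker (3) + business partners (4) + strong controls (1) = 8
print(vulnerability("skilled + motivated attacker", "business partners",
                    "multiple, multiple failures needed"))   # -> 3

# Second example: script kid (5) + internet facing (5) + insufficient controls (4) = 14
print(vulnerability("script kid", "internet facing",
                    "some, but insufficient"))                # -> 5
```

Whether the mapping is written as an explicit threshold table (as in the sketch further above) or as integer arithmetic is a matter of taste; the point is merely that the “calculation” is trivial once the individual values have been agreed upon.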

An extensive presentation of this approach can be found in clause B.4 of the Common Criteria Evaluation Methodology (CEM). There, the respective factors “characterising [the] attack potential” are:

a) Time taken to identify and exploit (Elapsed Time);
b) Specialist technical expertise required (Specialist Expertise);
c) Knowledge of the TOE design and operation (Knowledge of the TOE);
d) Window of opportunity;
e) IT hardware/software or other equipment required for exploitation.

with a fairly detailed description of the different variants of these factors and an elaborate point scale.
To the best of my knowledge (and I might be biased given that I’m a “BSI-certified Common Criteria evaluator”), this is the best description of the “weighted summation method” in the infosec space. Feel free to let me know if there’s a better (or older) source for this.

Back to the topic: I’d like to state that while I certainly have quite some sympathy for this approach, using it for risk assessments obviously requires (depending on the type of people involved and their “discussion culture” 😉 potentially much) more time for the actual exercise, which in turn might endanger the overall goal of delivering timely results. That’s why the latter way of rating the vulnerability is not used in the RRA.

====

Stay tuned for the next part to follow soon,

thanks

Enno
