Insinuator


Some outright rants from a bunch of infosec practitioners.

I’m currently involved in creating an up-to-date approach to handling external connections (read: temporary or permanent connections with external parties like business partners) for a very large enterprise. Currently they have something along the lines of: “there are two types of external connections, trusted and untrusted. The untrusted ones have to be connected by means of a double-staged firewall”.

Which – of course – doesn’t work at all in a VUCA world, for a number of reasons: the demarcation between trusted and untrusted is quite unclear (just think of mergers & acquisitions); “business doesn’t like implementing two-staged firewalls in some part of the world where they just signed the memorandum for a joint venture to build windmills in the desert”; firewalls might not be the appropriate control for quite a few threats anyway (see for example slide 46 of this presentation), and so on. Not to mention that I personally think the “double-staged firewall” thing is based on an outdated threat model, in particular when implemented with two different vendors, for the simple reason that the added operational effort usually is not worth the added security benefit (see this post for some discussion of the concept of “operational feasibility” …).

Back to the initial point: the approach to be developed is meant to work on the basis of several types of remote connections, each of which determines the associated security controls and other parameters. Which, at first glance, does not seem overly complicated, but – as always – the devil is in the details.

What to base those categories on? The trust or security level of the other party (called “$OTHER_ORG” in the following) – or just assume they’re all untrusted? The protection needs of the data accessed by $OTHER_ORG? The (network) type of connection, the number & type of users (unauthenticated vs. authenticated, many vs. few), the technical characteristics of the services involved (is an outbound Bloomberg link to be handled differently from an inbound connection to some published application by means of a Citrix Access Gateway? If so, in what way?), etc.?

As a start we put together a comprehensive list of questions regarding the business partner, the characteristics of the connection, the data accessed and other aspects. These have to be answered by the (“business side”) requestor of an external connection. To give you an idea of the nature of the questions, here are the first of those (~40 overall) questions (a quick sketch of how their answers might be captured in a structured way follows after the list):

  • Please provide details regarding the company type and ownership of $OTHER_ORG.
  • More specifically: does $COMPANY hold shares of $OTHER_ORG?
  • Who currently manages the IT infrastructure of $OTHER_ORG?
  • Does $OTHER_ORG hold security-relevant certifications (e.g. ISO 27001), or are they willing to provide SAS 70/ISAE 3402/SSAE 16 (“Type 2”) reports?
  • What is – from your perspective – $OTHER_ORG’s maturity level with regard to information security management, processes and overall posture?
  • How long will the connection be needed?
  • Which $COMPANY resources does $OTHER_ORG need to access?
  • Does a risk assessment for the mentioned ($COMPANY) resources exist?
  • What is the highest (data) classification level that $OTHER_ORG needs access to?
  • What is the highest (data) classification of data stored on systems that $OTHER_ORG accesses by some means (even if this data is not part of the planned access)?
  • Will data be accessed/processed that is covered by regulatory frameworks [e.g. Data Protection, PCI, SOX]?
  • What would – from your perspective – be the impact for $COMPANY in case the data in question was disclosed to unauthorized 3rd parties?
  • What would – from your perspective – be the impact for $COMPANY in case the data in question was irreversibly destroyed?
  • What would – from your perspective – be the impact for $COMPANY in case the service(s) in question was/were rendered unavailable for a certain time?
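
Just to give an idea of how such a questionnaire might be represented in a structured way (rather than only as rows in a sheet), here’s a minimal sketch in Python – the field names and answer options are purely illustrative and not the ones from our actual sheet:

```python
# Purely illustrative sketch of a structured questionnaire entry. The real
# thing lives in an Excel sheet with ~40 questions, drop-down response fields
# and comment columns; field names and answer options below are made up.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Question:
    text: str                      # question as presented to the business-side requestor
    options: List[str]             # allowed drop-down answers
    answer: Optional[str] = None   # filled in by the requestor
    comment: str = ""              # optional free-text remark

questionnaire = [
    Question("Does $COMPANY hold shares of $OTHER_ORG?",
             ["none", "minority", "majority", "100%"]),
    Question("How long will the connection be needed?",
             ["< 6 months", "6-24 months", "permanent"]),
]
```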

We then defined an initial set of “types of connections” that have different characteristics and might be handled with different measures (security controls being a subset of these). These connection types/categories included:

  • “trusted business partners”/TBP (think of outsourcing partners, with strong mutual contractual controls in place etc.).
  • “external business partner”/EBP (this is the kind-of default, “traditional” case of an external connection).
  • “mergers & acquisitions [heritage]”/MA (including all those scenarios deriving from M & A, like “we legally own them but don’t really know the security posture of their IT landscape” or “somebody else now legally owns them, but they still need heavy access to our central systems, for the next 24-36 months”).
  • “business applications”/BusApp (think of Bloomberg access in finance or chemical databases in certain industry sectors).
  • “external associates”/ExtAss (“those three developers from that other organization we collaborate with on developing a new portal for some service, who need access to the project’s subversion system which happens to sit in our network”).

Next we tried to assign a category by analyzing the answers in a “point-based” manner (roughly along the lines of: “in case we own 100% of them, give a point for TBP”, “in case the connection is just outbound to a limited set of systems, give a point to BusApp”, “if it’s an inbound connection from fewer than 10 users, here’s a point for ExtAss”, etc.), in an MS Excel sheet containing the questions together with drop-down response fields (plus comments where needed) and some calculation logic embedded in the sheet. This seemed a feasible approach, but reflecting on the actual points and assignment system, we realized that, at the end of the day, all these scenarios can be broken down to three relevant parameters which in turn determine the handling options. These parameters are

  • the trustworthiness of some entity (e.g. an organization, a network [segment], some users). Please note that _their trustworthiness_ is the basis for _our trust_, so both terms express two sides of the same coin.
  • the (threat) exposure of systems and data contained in certain parts of some (own|external) network.
  • the protection needs of systems and data contained in certain parts of (usually the “own”/$COMPANY’s) network.

Interestingly enough, every complex discussion about isolating/segmenting or – the other side of the story – connecting/consolidating (aka “virtualizing”) systems and networks can be reduced to those three fundamentals; see for example this presentation (and I might discuss, in another post, a datacenter project we’re currently involved in where this “reduction” turned out to be useful as well).

From this perspective a total of eight categories can be defined, with each of the three parameters being either “high” or “low”.
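
Spelled out, those are simply the 2^3 = 8 combinations – here’s a quick sketch that enumerates them (the ordering is arbitrary and no official category names are implied):

```python
# Enumerate the eight possible categories: each of the three parameters
# (trust[worthiness], exposure, protection need) is rated "high" or "low".
from itertools import product

for trust, exposure, protection_need in product(("high", "low"), repeat=3):
    print(f"trust: {trust:4} | exposure: {exposure:4} | protection need: {protection_need}")
```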

Taking this route greatly facilitates the assignment of both individual connections to a category and sets of potential (generic) controls to the connection type categories, as each answer (to one of those questions) directly influences one of those three parameters (e.g. “we hold more than 50% of their shares” => increase trust; “$OTHER_ORG needs to access some of our systems with high privileges” => increase exposure; “data included that is subject to breach laws” => increase protection need etc.).

Which in turn allows a (potentially weighted) points-based approach to identify those connections with many vs. few (trust|exposure|protection need) contributing factors.
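
To make this a bit more tangible, here’s a rough sketch of how such a (weighted) points-based calculation could work. All question IDs, weights and the threshold are invented for illustration and are not the values from our actual sheet:

```python
# Rough, illustrative sketch of the (weighted) points-based approach: each
# answer contributes points to exactly one of the three parameters. All
# question IDs, weights and the threshold are invented for illustration.

ANSWER_WEIGHTS = {
    # (question id, answer) -> (parameter, points)
    ("shares_held", "100%"):            ("trust", 2),
    ("shares_held", "majority"):        ("trust", 1),
    ("access_direction", "inbound"):    ("exposure", 2),
    ("privilege_level", "high"):        ("exposure", 2),
    ("breach_law_data", "yes"):         ("protection_need", 2),
    ("classification", "confidential"): ("protection_need", 1),
}

def rate_connection(answers, threshold=2):
    """Sum the contributing factors per parameter and map each to high/low."""
    totals = {"trust": 0, "exposure": 0, "protection_need": 0}
    for question_and_answer in answers.items():
        parameter, points = ANSWER_WEIGHTS.get(question_and_answer, (None, 0))
        if parameter:
            totals[parameter] += points
    return {p: ("high" if total >= threshold else "low") for p, total in totals.items()}

# Example: an inbound, high-privilege connection to a majority-owned partner
print(rate_connection({
    "shares_held": "majority",
    "access_direction": "inbound",
    "privilege_level": "high",
    "breach_law_data": "yes",
}))
# -> {'trust': 'low', 'exposure': 'high', 'protection_need': 'high'}
```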

 

More on this, including details on the actual calculation approach and the final assignment of a category, in the next part of this series, which is to be published soon…

Have a great weekend

Enno

 

 


Dec 13, 2010

The OSSTMM 3 – What I like about it

Given the upcoming public release of ISECOM’s Open Source Security Testing Methodology Manual (OSSTMM) version 3, I took the opportunity to have a closer look at it. While we at ERNW never adopted the OSSTMM for our own way of performing security assessments (mostly due to the fact that performing assessments has been our main business since 2001 and our approach has been developed and constantly honed since then, so that we’re simply used to doing it “our way”), I’ve followed parts of ISECOM’s work quite closely, as some of the brightest minds in the security space are contributing to it and they come up with innovative ideas regularly.
So I was eager to get an early copy of it to spend some weekend time going through it (where I live we have about 40 cm of snow currently so there’s “plenty of occasions for a cosy reading session” ;-))
One can read the OSSTMM (at least) two ways: as a manual for performing security testing or as a “whole philosophy of approaching [information] security”. I did the latter and will comment on it in a two-part post, covering the things I liked first and taking a more critical perspective on some portions in the second. Here we go with the first, in an unordered manner:

a) The OSSTMM (way of performing tests) is structured. There are not many disciplines out there where a heavily structured approach is so much needed & desirable (and, depending on “the circumstances”, so rarely found), so this absolutely is a good thing.

b) The OSSTMM has a metrics-based approach. We think that reasonable decision taking in the infosec space is greatly facilitated by “reducing complexity to meaningful numbers” so this again is quite valuable.

c) One of the core numbers makes it possible to display “waste” (see this post on why this is helpful).

d) It makes you think (which, btw, is exactly why I invited Pete to give the keynote at this year’s Troopers). Reading it will certainly advance your infosec understanding. There’s lots of wisdom in it…
In many aspects, the OSSTMM is another “step in the right direction” provided by ISECOM. Stay tuned for another post on the parts where we think it could be sharpened.

thanks

Enno


Some days ago my old friend Pete Herzog from ISECOM posted a blog entry titled “Hackers May Be Giants with Sharp Teeth” here, which – along with some quite insightful reflections on the way kids perceive “bad people” – contains his usual rant on (the uselessness of) risk assessment.
Given that this debate (whether taking a risk-based infosec approach is a wise thing or not) is a constant element of our – Pete’s and my – long-lasting relationship, I somehow feel enticed to respond ;-)

Pete writes about a 9-yr-old girl who – when asked about “bad people” – stated that those “look like everybody else”, and, referring to risk assessment, he concludes “that you can’t predict the threat reliably and therefore you can’t determine it.”
I fully agree here. I mean, if you _could_ predict the threat reliably, why perform risk assessment at all? Taking decisions would then simply be a matter of following a path of math. Unfortunately most infosec practitioners have neither a crystal ball – at least not a dependable one – nor the future-prediction capabilities such a device is supposed to provide …
So… as long as we can’t “determine reliably” we have to use … risk assessments. “Risk” deals with uncertainty. Otherwise it would be called “matter of fact” ;-)
Here’s how the “official vocabulary of risk management” (ISO Guide 73:2009) defines risk: “effect of uncertainty on objectives”.
Note that central term, “uncertainty”? That’s what risk is about. And that’s, again, what risk assessments deal with: uncertainty. In situations where – despite that uncertainty – well-informed decisions have to be taken. Which is quite a common situation in an ISO’s professional life ;-)
Effective risk assessment helps answer questions in scenarios characterized by some degree of uncertainty. Questions like “In which areas do we have to improve in the next 12 months?” in (risk assessment) inventory mode, or “Regarding some technology change in our organization, what are the main risks in this context and how do they change?” in governance mode. [See this presentation for an initial discussion of these terms/RA modes.]
So expecting “reliable threat prediction/determination” from risk assessments is simply the wrong expectation. In contrast, structured RA can certainly be regarded as the best way to “take the right decisions in complex environments and thereby get the optimal [increase of] security posture, while being limited by time/resource/political constraints and, at the same time, facing some uncertainty”.

Btw: the definition from ISO 73:2009 – which is also used by the recently published ISO 31000 (Risk management – Principles and guidelines) – nicely shows the transformation the term “risk” has undergone in the last decade: from “risk = combination of probability and consequence of an event [threat]” in ISO 73:2002, through ISO 27005:2008’s inclusion of a “vulnerability element” (called “ease of exploitation” or “level of vulnerability” in the appendix), to the one in ISO 73:2009 cited above (which, for the first time, does not only focus on negative outcomes of events but considers positive outcomes as well – which in turn reflects the concept of “risk & reward” increasingly used in some advanced/innovative infosec circles, to be discussed on this blog on another occasion).
Most (mature) approaches used amongst infosec professionals currently follow the “risk = likelihood * vulnerability * impact” line. We, at ERNW, use this one as well.
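
Just to illustrate that line of thinking (the 1–5 scales and the numbers below are entirely made up and not part of any ERNW methodology), here’s a toy calculation showing how the same threat ends up as a different risk once the vulnerability factor differs:

```python
# Toy example of the "risk = likelihood * vulnerability * impact" line of
# thinking. The 1-5 scales and the example values are made up for illustration.

def risk(likelihood, vulnerability, impact):
    return likelihood * vulnerability * impact

# Same threat ("malware is around"), same potential impact, but different
# vulnerability factors due to different controls being in place:
print(risk(likelihood=4, vulnerability=4, impact=3))  # weak controls   -> 48
print(risk(likelihood=4, vulnerability=1, impact=3))  # strong controls -> 12
```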

Which brings me to the next inaccuracy in Pete’s blog entry. He writes: “Threats are not the same for everyone nor do they actually effect us all the same. So why do we put up with risk assessments?”.

Indeed, (most) threats _are_ the same for everyone. Malware is around, hardware fails from time to time and humans make errors. The point is that all this does _not_ affect everyone “the same way” (those with the right controls will not get hit as hard by malware, those with server clusters will survive failing hardware more easily, those with evolved change control processes might have a better posture when it comes to the consequences of human error). And all this is reflected by the – context-specific – “vulnerability factor” in risk assessments (and, for that matter, sometimes by the “impact factor” as well). So while the threats might be the same, the _associated risks_ might/will not be the same.
Which, again, is the exact reason for performing risk assessments ;-))
If they _were_ the same one just would have to look up some table distributed by $SOME_INDUSTRY_ASSOCIATION.

So, overall, I’m not sure that I can follow Pete’s line of arguments here. Maybe we should have a panel discussion on this at next year’s Troopers ;-)

have a good one everybody,

thanks

Enno

