
Broken Trust, Part 1: Definitions & Fundamentals + Some More Reflections on RSA

This again is going to be a little series of posts. Their main topic – next to the usual digressions & ranting I tend to include in blogposts šŸ˜‰ – is a discussion of “trust” and putting this discussion into the context of recent events and future developments in the infosec space. The title originates from a conversation between Angus Blitter and me in a nice Thai restaurant in Zurich where we pondered the consequences of the latest RSA revelations. While we both expect that – unfortunately – not much is really going to happen (surprisingly many people, including some CSOs we know, are still trying to somehow downplay this or sweep it under the carpet, shying away from the obvious consequence of having to accept that for a number of environments RSA SecurID is potentially reduced to single-factor authentication nowadays…), the long-term impact on our understanding of 3rd party (e.g. vendor) trust might be more interesting. Furthermore “Broken Trust” seems a promising title for a talk at the upcoming Day-Con V… šŸ˜‰

Before getting into too much detail too early I’d like to outline the model of trust, control and confidence we use at ERNW. This model was originally based on Piotr Cofta’s seminal book “Trust, Complexity and Control: Confidence in a Convergent World” and Rino Falcone’s & Cristiano Castelfranchi’s paper “Trust and Control: A Dialectic Link”, and it has evolved a bit over the last two years. Let’s start with some

Definitions


Despite (or maybe due to) being an apparently essential part of human nature and a fundamental factor in our relationships and societies, there is no single, concise definition of “trust”. Quite a few researchers discussing trust-related topics do not define the term at all and presume some “natural language” understanding. This is even more true of papers in the area of computer science (CS) and related fields, the most prominent example probably being Ken Thompson’s “Reflections on Trusting Trust”, where no definition is provided at all. Given the character and purpose of RFCs in general and their relevance for computer networks, it seems an obvious course of action to look for an RFC providing a clarification. In fact, RFC 2828 defines trust as follows:

“Trust […] Information system usage: The extent to which someone who relies on a system can have confidence that the system meets its specifications, i.e., that the system does what it claims to do and does not perform unwanted functions.”

which is usually shortened to statements like “trust = system[s] perform[s] as expected”. We don’t like this definition for a number of reasons (outside the scope of a blogpost) and prefer the definition the Italian sociologist Diego Gambetta published in his 1988 paper “Can we trust trust?” (and which has – while originating from another discipline – gained quite some adoption in CS), which states:

“trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.”

With Falcone & Castelfranchi, we define control as

“a (meta) action:

a) Aimed at ascertaining whether another action has been successfully executed or if a given state of the world has been realized or maintained […]

b) aimed at dealing with the possible deviations and unforeseen events in order to positively cope with them (intervention).”

It should be noted that the term “control” is used here in a much broader sense (hence the attribute “meta” in the above definition) than in some information security standard documents (e.g. the ISO 27000 family), where a control is defined as a “means of managing risk, including policies, procedures, guidelines, practices or organizational structures, which can be of administrative, technical, management, or legal nature.” [ISO27002, section 2.2]

Following Cofta’s model, both trust and control constitute ways to build confidence, which he defines as

“one’s subjective probability of expectation that a certain desired event will happen (or that the undesired one will not happen), if the course of action is believed to depend on another agent”.

[I know this sounds quite similar to Gambetta’s definition of trust, but I will skip discussing such subtleties in this post. ;-)]

Putting the elements together & bringing the whole stuff to the infosec world

Let’s face it: at the end of the day, all the efforts we as security practitioners take ultimately serve a simple goal, namely somebody (be it yourself, your boss or some CxO of your organization) reaching a point where she states: “considering the relevant risks and our limited resources, we’ve done what is humanly possible”. Or just: “it’s ok the way it is. I can sleep well now”.

I’m well aware that this may sound strange to some readers still believing in a “concrete, concise and measurable approach to information security”, but this is the reality in most organizations. And the mentioned point of “it’s ok” reflects fairly precisely the above definition of confidence (with the whole set of events threatening the security of an environment being the “another agent”).

Now, this state of confidence can be attained on two roads, that of “control” (applying security controls and, having done so, subsequently sleeping well) or that of “trust” (reflecting on some elements of the overall picture and then refraining from the application of certain security controls, still sleeping well).

A simple example might help to understand the difference: imagine you move to a new house in another part of the world. Your family is set to arrive one week later, so you have some days left to create an environment you consider “safe enough” for them.

What would you do (to reach the state of confidence)? I regularly ask this question in workshops I give, and the most common answers go like: “install [or check] the doors & locks”, “buy a dog”, “install an alarm system”. These are typical responses from “technology driven people” and the last one, sorry guys, is – in my humble opinion – a particularly dull one, given this is a detective/reactive type of control requiring lots of the most expensive operational resource, that is human brainpower (namely for following up on alarms = incident response). Which, btw, is the very reason why it pretty much never works in a satisfying manner, in pretty much any organization.

And yes, I understand that naming this regrettable reality is against the current vogue of “you’ll get owned anyway – uh, uh APT is around – so you need elaborate detection capabilities. just sign here for our new fancy SIEM/deep packet inspection appliance/deep inspection packet appliance/revolutionary network monitoring platform” BS.

Back to our initial topic (I announced some ranting, didn’t I? ;-)): all this stuff (doors & locks, the dog, the alarm system) follows the “control approach” and, more importantly and often overlooked, all of it might require quite some operational effort (key management for the doors – don’t underestimate this, especially if you have a household full of kids šŸ˜‰ –, walking the dog, tuning the alarm system &, as stated above, resolving the incidents once it goes off, etc.).

Another approach – at least for some threats, to some assets – could be to find out which parties are involved in the overall picture (the neighbors, the utility providers, the legal authorities and so on) and then, maybe, to decide: “in this environment we trust (some of) those parties, and that’s why we don’t need the full set of potential controls”. Living in a hamlet with about 40 inhabitants in the Bavarian countryside, I can tell you that the handling of doors and locks there certainly is a different one than in most European metropolises…

Some of you might argue here: “nice for you, Enno, but what’s the application in the corporate information security space?”. Actually that’s an easy one. Just think about it:

– Do you encrypt the MPLS links connecting your organization’s main sites? [Most of you don’t, because “we trust our carriers”. Which can be an entirely reasonable security decision, depending on your carriers… and their partners… and the partners of their partners…]

– Do you perform full database encryption for your ERP systems hosted & operated by some outsourcing provider? [Most of you don’t, trusting that provider, which again might be a fully legitimate and reasonable approach].

– Did you ever ask the company providing essential parts of your authentication infrastructure if they kept copies of your key material and, more importantly, if they did so in a sufficiently secure way? [Most of you didn’t, relying on reputation-based trust: “everybody uses them, and aren’t they the inventors of that algorithm widely used for banking transactions? and isn’t ‘the military’ using this stuff, too?” or so].

So, in short: trust is a confidence-contributing element and a common security instrument in all environments and – here comes the relevant message – this is fully ok, as efficient information security work (leading to confidence) relies on both approaches: trust (where justified) and control (where needed). Alas, most infosec people still have a control-driven mindset, not recognizing the value of trust. [This will have to change radically in “the age of the cloud”, more on this in a later part of this series.]

Unfortunately, both approaches (trust & control) have their own respective shortcomings:

– following the control road usually leads to increased operational cost & complexity and might have severe business impact.

– trust, by its very nature (see the “Gambetta definition” above), is something “subjective” and thereby might not be suited to base corporate security decisions on šŸ˜‰

BUT – and I’m finally getting to the main point of this post šŸ˜‰ – if we could transform “subjective trust” into sth documented and justified, it might become a valuable and accepted element in an organization’s infosec governance process. [And, again, this will have to happen anyway, as in the age of the cloud the control approach is doomed. And pls don’t try to tell me the “we’ll audit our cloud providers then” story. Ever tried to negotiate with $YOUR_FAVORITE_IAAS_PROVIDER on data center visits or just very basic security reporting or sth?]

Still, the question then is: what could those “reasons for trust” be?

Evaluating trust(worthiness) in a structured way

There are various approaches to evaluating the factors that contribute to the trustworthiness of another party (called the “trustee” in the following) and hence to the own (the “trustor’s”) trust towards that party. For example, Cofta lists three main elements (the associated questions in brackets are my own paraphrases):

  • Continuity (“How long will we work together?”)
  • Competence (“Can $TRUSTEE provide what we expect?”)
  • Motivation (“What’s $TRUSTEE’s motivation?”)

We, however, usually prefer the avenue ISECOM uses for their Certified Trust Analyst training, which is roughly laid out here. It’s based on ten trust properties, two of which are not aligned with our/Gambetta’s definition of trust and are therefore omitted (these two are “control” and “offsets”, for obvious reasons: negotiating a compensation to be paid when the trust is broken constitutes the exact opposite of trust… it can contribute to confidence, but not to trust in the above sense). So there’s eight left – see also the little scoring sketch at the end of this section – and these are:

  • Size (“Who exactly are you going to trust?”. In quite a few cases this might be an interesting question. Think of carriers partnering with others in areas of the world called “emerging markets”, or just think of RSA and its shareholders. And this is why background checks are performed when you apply for a job at some agency; they want to find out who you interact with in your daily life and who/what might influence your decisions.).
  • Symmetry (“Do they trust us?”. This again is an interesting, yet often neglected, point. I first stumbled across this when performing an MPLS carrier evaluation back in 2007).
  • Transparency (“How much do we know about $TRUSTEE?”).
  • Consistency (“What happened in the past?”. This is the exact reason why prospective employers ask for criminal records of prospective employees.).
  • Integrity (“[How] Do we notice if $TRUSTEE changes?”).
  • Value of Reward (“What do we gain by trusting?”. If this one has enough weight, all the others might become irrelevant. Which is exactly the mechanism Ponzi schemes are based upon. Or your CIO’s decision “to go to the cloud within the next six months” – overlooking that the departments are already extensively using AWS, “for demo systems” only, of course šŸ˜‰ – or, for that matter, her (your CIO’s) decision to virtualize highly classified systems by means of VMware products ;-). See also this post by Chris Hoff on “the CxO part in the game”.).
  • Components (“Which resources does $TRUSTEE rely on?”).
  • Porosity (“How separated is $TRUSTEE from its environment?”).

Asking all these questions might either help to get a better understanding of who to trust & why, and thereby contribute to well-informed decision making, or might at least help to identify the areas where additional controls are needed (e.g. asking for enhanced reporting to be put into the contracts).
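To make this a bit more tangible, here’s a minimal sketch of how such an assessment could be operationalized. To be clear: the property names come from the list above, but the 0–5 scale, the equal weighting and the example numbers are entirely my own illustrative assumptions, not part of the ISECOM methodology.

```python
# Hypothetical trust(worthiness) scorecard. The eight properties stem
# from the ISECOM-derived list above; scale and weighting are my own
# assumptions, for illustration purposes only.
PROPERTIES = [
    "size", "symmetry", "transparency", "consistency",
    "integrity", "value_of_reward", "components", "porosity",
]

def trust_score(ratings: dict) -> float:
    """Average the per-property ratings (0 = no basis for trust,
    5 = strong basis). Unanswered properties count as 0, which
    conveniently flags the areas where additional controls (or
    further questions) are needed."""
    return sum(ratings.get(p, 0) for p in PROPERTIES) / len(PROPERTIES)

# Example: a (fictitious) MPLS carrier evaluation
carrier = {"size": 2, "symmetry": 1, "transparency": 3, "consistency": 4}
print(f"carrier trust score: {trust_score(carrier):.2f} / 5")
```

Whether you aggregate to a single number at all (instead of just documenting the per-property answers) is a design choice; the point is that writing the answers down turns “subjective trust” into sth documented and justified, as demanded above.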


Applying this stuff to the RSA case

So, what does all this mean when reflecting on the RSA break-in? Why exactly is RSA’s trustworthiness potentially so heavily damaged?

As a little exercise, let’s just pick some of the above questions and try to figure out the respective responses. As I did in this post three days after RSA filed the 8-K report, I will leave potential conclusions to the valued reader…

Here we go:

  • “Size”, so who exactly are you trusting when trusting “RSA, The Security Division of EMC”? Honestly, I do not know much about RSA’s share- and stakeholders. Still, even though not regarding myself as particularly prone to conspiracy theories, I think that Sachar Paulus, the ex-CSO of SAP and now a professor for Corporate Security and Risk Management at the University of Applied Sciences Brandenburg, made some interesting observations in this blogpost.
  • “Symmetry” (do they trust us?): everybody who had the dubious pleasure of participating in one of those – hilarious – conference calls RSA held with large customers after the initial announcement in late March, going like

“no customer data was compromised.”

“what do you mean, do you mean no seed files were compromised?”

“as we stated, no customer data, that is PII, was compromised.”

“So what about the seeds?”

“as we just said, no customer data was compromised.”

“and what about the seeds?”

“we can’t comment on this further due to the ongoing investigation. but we can assure you no customer data was compromised.”


might think of an answer on his/her own…


  • “Transparency”: well, see above. One might add: “did they ever tell you they kept a copy of your seed files?” but, hey, you never asked them, did you? I mean, even the (US …) defense contractors use this stuff, so why should one have asked such silly questions…
  • “Integrity”, which the ISECOM defines as “the amount and timely notice of change within the target”. Well… do I really have to comment on “the amount [of information]” and the “timely notice” RSA delivered in the last weeks & months? Some people assume we might never have known of the break-in if they’d not been obliged to file an 8-K form (as standard breach laws might not have kicked in, given – remember – “no customer data was exposed”…), and there’s speculation we might never have known that the seeds were actually compromised if the Lockheed Martin break-in hadn’t happened. I mean, most of us were pretty sure it _was_ about the seed files, but of course, it’s easy to say so in hindsight šŸ˜‰
  • “Components” and “porosity”: see above.


Conclusions

If you have been wondering “why do my guts tell me we shouldn’t trust these guys anymore?”, this post might serve as a little contribution to answering this question in a structured way. Furthermore, the intent was to provide some introduction to the wonderful world of trust, control and confidence and its application in the infosec world. Stay tuned for more stuff to come in this series.
Have a great Sunday everybody, thanks

Enno



RSA: Anatomy of an Attack

Lots of stuff has been written about this blog post from RSA describing the (potential) details of the attack, so I will refrain from detailed comments on this piece, which Marsh Ray nicely called “some of the most egregious hyperbole I’ve read in infosec”.

Just one short note. Presumably the attack, in an early stage, used a “spreadsheet [that] contained a zero-day exploit that installs a backdoor through an Adobe Flash vulnerability (CVE-2011-0609)”.

I’ve written about Flash here.

nuff said, thanks


Enno


Reflections on the RSA Break-in

Some of you may have heard of the break-in at RSA and may now be wondering “what does this mean to us?” and “what can be done?”. Not being an expert on RSA SecurID at all – I’ve been involved in some projects, however not on the technical implementation side but on the architecture or overall [risk] management side – I’ll still try to contribute to the debate šŸ˜‰

Feel free to correct me either by comment or by personal email in case the following contains factual errors.

Fundamentals

My understanding of the way RSA SecurID tokens work is roughly this:

a) The authentication capabilities provided by the system (as part of an overall infrastructure where authentication plays a role) are based on two factors: a one-time password (OTP) generated at regular intervals by both a token and some (backend) authentication server, and a PIN known by the user.

b) The OTP generation process takes some initialization value called the “seed” and the current time as input and calculates – by means of some algorithm at whose core probably sits a hash function – the OTP itself.

c) The algorithm seems publicly known (there are some cryptanalytic papers listed in the Wikipedia article on RSA SecurID, and a generator – needing the seed as input – has been available for some time now). Even if it wasn’t public, we should assume that Kerckhoffs’ principle exists for some reason šŸ˜‰

d) So, at the end of the day, the OTP of a given token at a given point in time can be calculated once the seed of this specific token is known.

This means: to some (large) degree, the whole security of the OTP relies on the secrecy of the seed, which, obviously, must be maintained. [For the overall authentication process there’s still the PIN, but this one can be assumed to be the “weaker part” of the whole thing.]
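To illustrate the principle – and only the principle; the actual SecurID algorithm is a different, proprietary construction, so the following is merely an RFC 6238-style (TOTP) sketch of the general idea – consider this little piece of Python:

```python
import hashlib
import hmac
import struct
import time

def generate_otp(seed: bytes, interval: int = 60, digits: int = 6) -> str:
    """Toy time-based OTP: derive a short code from a secret seed and
    the current time slot. Token and server share the seed, so both
    compute the same value -- and so does anyone else who obtained
    the seed, as the algorithm itself is assumed to be public
    (Kerckhoffs' principle)."""
    time_slot = int(time.time()) // interval
    msg = struct.pack(">Q", time_slot)
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    # dynamic truncation as specified in RFC 4226/6238
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

seed = b"this-is-the-part-that-must-stay-secret"
print(generate_otp(seed))
```

The sketch makes the core point visible: possession of the seed equals possession of the token, time being the only other input.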

Flavors

RSA SecurID tokens, and those of other vendors as well, are sold in two main variants:

– as hardware devices (in different sizes, colors etc.). Here the seed is encoded as part of the manufacturing process, and there must be some import process of token serial numbers and their associated seeds into the authentication server (located at the organization using the product for authentication), plus some subsequent mapping of a user + PIN to a certain token (identified by serial number, I assume; see the little sketch at the end of this section). The seeds are then generated on the product vendor’s (e.g. RSA’s) side in an early stage of the manufacturing process and distributed as part of the product delivery process. I’m not sure why a vendor (like RSA) should keep those associations of (token) serial numbers and their seeds once the product delivery process is completed (as I said, I’m not an expert in this area so I might overlook sth here, even sth fairly obvious ;-)), but I assume this nevertheless happens to some extent. And I assume this is part of the potential impact of the current incident, see below.
– as so-called “soft tokens”, that is, software instances running on a PC or mobile device and generating the OTP. For this purpose, again, the seed is needed, and to the best of my knowledge there are, in the RSA space at least, two ways the seed gets onto the device:

  • generate it as part of the “user creation” process on the authentication server, with subsequent distribution to the users (by email or download link) for import. For obvious reasons not all people like this, security-wise.
  • generate it, by means of an RSA-proprietary scheme called Cryptographic Token Key Initialization Protocol (CT-KIP), in parallel on the token and the server, thereby avoiding the (seed’s) transmission over the network.

Btw: in both cases importing the seed into a TPM would be nice, but – as of mid-2010 when I did some research – this was still in a quite immature state. So I’m not sure if this currently is a viable option.
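For the hardware-token case, here’s my own guess at what the server-side data model conceptually boils down to – a hypothetical illustration, explicitly not RSA’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class TokenRecord:
    serial: str   # serial number printed on the hardware token
    seed: bytes   # per-token secret, set during manufacturing

# state of the authentication server after importing the
# vendor-supplied seed records:
token_db = {"000123456789": TokenRecord("000123456789", b"\x8f\x1a\x2b\x3c")}

# provisioning maps a user (the PIN is handled separately) to a serial:
user_tokens = {"enno": "000123456789"}

def seed_for_user(user: str) -> bytes:
    """The server needs this lookup to recompute and verify OTPs --
    and so does anyone else holding a copy of the serial->seed
    mapping, e.g. a vendor that kept the records after delivery."""
    return token_db[user_tokens[user]].seed

print(seed_for_user("enno").hex())
```

Which is exactly why the question of who else disposes of that serial→seed mapping matters so much.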

Attacks

For an attacker going after the seed I see three main vectors:
  • compromise of an organization’s authentication server. From audits in the past I know these systems often reside in network segments not-too-easily accessible and they are – sometimes – reasonably well protected (hardening etc.). Furthermore I have no idea how easy it would be to extract the seeds from such a system once compromised. Getting them might allow for subsequent attacks on remote users (logging into VPN gateways, OWA servers etc.), but only against this specific organization. And if the attacker already managed to compromise the organization’s authentication server this effort might not even be necessary anymore.
  • compromise of the (mobile) devices of some users of a given organization using soft tokens, and copying/stealing their seeds. This could potentially be done by a piece of malware (provided it manages to access the seed at all, which might be difficult – protected storage and stuff comes to mind – or not. I just don’t know ;-)).
    This is the vector that infosec people opposing the replacement of hard tokens by soft tokens (e.g. for usability reasons) usually warn about. There are people who do not regard this as a very relevant risk, as it requires initial compromise of the device in question. Which, of course, can happen. But why “spend energy” on getting the seed then, as the box is compromised anyway (and any data processed on it)? I’m well aware of the “attacker can use the seed for future attacks from other endpoints” argument. One might just wonder about the incentive for an attacker to go after the seeds…
    It should be noted that binding the (soft) token to a specific device, identified by serial number, unique device identifier (like in the case of iPhones), harddisk ID or sth – which has been possible in the RSA SecurID space for some time now, I believe since Authentication Manager 7.1 – might to some degree serve as a mitigating control against this type of attack.
  • attack the vendor (RSA) and hope to get access to the seeds of many organizations, which can then be used in subsequent targeted attacks. I have the vague impression that this is exactly what happened here. Art Coviello writes in his letter:
    “[the information gained by the attackers] could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack.”
    I interpret this as follows: “dear customers, face the fact that some attackers might dispose of your seeds and the OTPs calculated on those, so you’re left with the PIN as the last resort for the security of the overall authentication process”.
I leave the conclusions to the valued reader (and, evidently, the estimation whether my interpretation holds or not) and proceed with the next section.

Mitigating Controls & Steps

First, let’s have a quick look at the recommendations RSA gives (in this document). There we find stuff like “We recommend customers follow the rule of least privilege when assigning roles and responsibilities to security administrators.” – yes, thanks RSA for reminding us, this is always a good idea šŸ˜‰ – and equally conventional wisdom, including pieces like “We recommend customers update their security products and the operating systems hosting them with the latest patches”. And, of course, it’s pure coincidence that they mention the use of SIEM systems twice… being a SIEM vendor themselves ;-))

For my part I’d like to add:

– In case you use RSA SecurID soft tokens, binding individual tokens to specific devices seems a good idea to me; see the little sketch after this list. (Yes, this might mean that users using several devices have several, different, instances then. And, yes, I understand that in the shiny new age of user-owned funky smartphone gadgets used for corporate information processing, this might be a heavy burden to ask your users to bear šŸ˜‰)

– Some of you might re-think their (sceptical) position regarding hard tokens: in the RSA SecurID space soft tokens can be “seeded” by means of CT-KIP, so no 3rd party is involved or disposes of the seeds. I’m not aware of such a feature for hard tokens.

– Whatever you do, think about the supply chain of security components, which parties are involved and which knowledge they might accumulate.

– Replacing proprietary stuff by standards-based approaches (like X.509 certificates) should always be considered.

– Whatever you do authentication-wise, you should always have a plan for revocation and credential replacement. This should be one of the overall lessons learned from this incident and from the current trend that well-organized attackers go after authentication providers and infrastructures (see, for example, this presentation from the recent NSA Information Assurance Symposium).
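To make the device-binding idea concrete – as a purely hypothetical illustration of the concept, explicitly not RSA’s actual mechanism – one could derive the effective token secret from both the seed and a device identifier, so that a seed copied off one device is useless on another:

```python
import hashlib

def bound_seed(seed: bytes, device_id: str) -> bytes:
    """Hypothetical device binding: mix a device identifier (serial
    number, UDID, harddisk ID or sth) into the token secret. The raw
    seed alone then no longer suffices to clone the token onto
    another device -- the attacker needs the device identifier too."""
    return hashlib.sha256(seed + device_id.encode("utf-8")).digest()

# the OTP generation would then use bound_seed(...) instead of the raw seed
print(bound_seed(b"raw-seed-material", "iPhone-UDID-0xdeadbeef").hex())
```

Of course this only raises the bar (device identifiers are not secrets), but it changes the economics of the “steal seeds once, use everywhere” attack.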
Last but not least I’d like to draw your attention to this upcoming presentation on the current state of authentication at Troopers. I’d be surprised if Steve and Marsh didn’t include the RSA incident in their talk šŸ˜‰

thanks,

Enno