
If they had used DLP…

this would not have happened. At least this is what $SOME_DLP_VENDOR might tell you.
Maybe, maybe not. It wouldn’t have happened if they’d followed “common security best practices” either, like “not processing sensitive data on (presumably) private laptops” or “not running file sharing apps on organizational ones” or “not connecting to organizational VPNs and home networks simultaneously”. Yadda yadda yadda.

Don’t get us wrong here. We’re well aware that these practices are not consistently followed in most organizations anyway. That’s part of human (and corporate) reality. And part of our daily challenge as infosec practitioners.

This incident just proves once more that quite a few security problems have their origins in “inappropriate processes”, which in turn are the result of “business needs”.
(All of which, of course, is a well-known platitude to you, dear reader ;-).

The problem of data leakage through file sharing apps is not new (see e.g. this paper), and neither is criticism of DLP (at least on our part).

Did you notice how quiet it has become around DLP recently? Even Rich Mogull – whom we still regard as _the authority_ on the subject – no longer seems to blog about it extensively.
Possibly (hopefully), we are witnessing the silent death of another overhyped, unneeded “security technology”…


Series on “Outdated Threat Models” – Part 1

Yesterday I went for a long run (actually I did the full distance here), and such exercises are usually good opportunities to “reflect on the world in general and the infosec dimension of it in particular”… at least as long as your blood sugar is still at a level that supports somewhat reasonable brain activity 😉

Anyhow, one of the outcomes of the various strange mental states I went through was the idea of a series of blog posts on architectural or technological approaches that are widely regarded as “good security practice” but may – when looked at with a bit more scrutiny – turn out to be based on what I’d call “outdated threat models”.

This series is intended to be a quite provocative one, but hey, that’s what blogs are for: providing food for thought…

The first part is a rant on “Multi-factor authentication”.

In practically all large organizations’ policies, you can find sections mandating MFA/2FA in various scenarios (not always formulated very precisely, but that’s another story). Common examples include “for remote access” – I’m going to tackle this one in a future post – or “for access to high value servers” [most organizations do not follow this one too consistently anyway, to say the least ;-)] or “for privileged access to infrastructure devices”.
Let’s think about the latter one for a second. What’s the rationale behind the mandate for 2FA here?
It is, as so often, risk reduction. Remember that risk = likelihood * vulnerability * impact, and remember that quite frequently, for infosec professionals, the “vulnerability factor” is the one to work on (as likelihood and impact might not be modifiable much, depending on the threat in question and the environment).
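To make that arithmetic tangible, here is a minimal sketch in Python; all numbers are invented and only serve to show that shrinking the vulnerability factor alone shrinks the overall risk, with likelihood and impact held constant:

```python
# Toy illustration of risk = likelihood * vulnerability * impact.
# All numbers are invented; the only point is that shrinking the
# "vulnerability" factor shrinks the overall risk, everything else equal.

def risk(likelihood: float, vulnerability: float, impact: float) -> float:
    return likelihood * vulnerability * impact

likelihood = 0.3    # assumed chance of an attack attempt per year
impact = 100_000    # assumed damage (in EUR) if the attack succeeds

# Plain password auth, no further controls: high vulnerability factor.
print(risk(likelihood, 0.8, impact))   # 24000.0

# An additional layer of control (e.g. a second authentication factor)
# that reduces the vulnerability factor: much lower residual risk.
print(risk(likelihood, 0.1, impact))   # 3000.0
```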

At the time most organizations’ initial “information security policy” documents were written (at least 5-7 years ago), many companies still had large, flat networks, password schemes for network devices were not aligned with “other corporate password schemes”, and management access to devices was often performed via Telnet.
As “simple password authentication” (very understandably) was not regarded as sufficiently vulnerability-reducing back then, people saw a need for “a second layer of control”… which happened to be “another layer of authentication”… leading to the aforementioned policy mandate.

So, at the end of the day, the demand for two-factor auth here is essentially a demand for “two layers of control”.

Now, if – in the interim – other layers of control have been put in place, like “encrypted connections” [eliminating the threat of eavesdropping, but not that of password brute-forcing] or “ACLs restricting which endpoints can connect at all” [very common practice in many networks nowadays, heavily reducing the vulnerability to password brute-forcing attacks], using those, combined with single-factor auth, might achieve the same level of vulnerability reduction and thus the same level of overall risk.
That in turn would make the need for 2FA (in this specific scenario) obsolete. Which shows that some security controls needed at some point in time might no longer be reasonable once threat models have changed (e.g. once the threat of “eavesdropping on unencrypted mgmt traffic from a network segment populated by desktop computers” has mostly disappeared).
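As a back-of-the-envelope illustration of that argument (again with entirely made-up numbers, naively modelling each control as a multiplicative reduction of the vulnerability factor), the two control stacks can end up in the same ballpark:

```python
# Back-of-the-envelope comparison of two control stacks for management
# access to network devices. Every value is invented; the point is only
# that different combinations of controls can push the vulnerability
# factor (and thus the overall risk) into the same ballpark.

LIKELIHOOD = 0.3            # assumed chance of an attack attempt per year
IMPACT = 100_000            # assumed damage in EUR
BASE_VULNERABILITY = 0.8    # plain password auth, Telnet, flat network

def residual_risk(reduction_factors: list[float]) -> float:
    vulnerability = BASE_VULNERABILITY
    for factor in reduction_factors:
        vulnerability *= factor
    return LIKELIHOOD * vulnerability * IMPACT

# "Old" answer: keep the flat-network/Telnet assumptions, add a second
# authentication factor.
two_factor_stack = [0.1]        # 2FA assumed to cut vulnerability to 10 %

# "New" answer: encrypted management (SSH) plus ACLs restricting which
# endpoints may connect, with single-factor password auth.
layered_stack = [0.5, 0.25]     # encryption and ACLs each cut vulnerability

print(residual_risk(two_factor_stack))   # 2400.0
print(residual_risk(layered_stack))      # 3000.0 -> roughly the same level
```

Obviously, real-world vulnerability does not decompose into neat multiplicative factors; the sketch merely mirrors the qualitative argument above.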

Still, you might ask: what’s so bad about this? Why does this “additional layer of authentication” hurt? Simple answer: added complexity and operational cost. Why do you think that 2-factor auth for network devices can _rarely_ be found in large carrier/service provider networks? For exactly these reasons… and those organizations have a _large_ interest in protecting the integrity of their network devices. Think about it…


Welcome to insinuator.net

Welcome to insinuator.net, the semi-official blog of ERNW GmbH.
You may ask: Why yet another infosec blog? Aren’t there already just too many around? Well, possibly. But that abundance is part of blogging in general, isn’t it? 😉
Given that we are trying to contribute to “public space & opinion” in a number of ways anyway [e.g. through our presentations or our newsletter], it seemed just too logical – and we’ve been asked by various people as well – to add another element to the global blogosphere. Voilà, here we go!
What can you, dear reader, expect? Of course, all kinds of shameless self-references, maybe occasionally a little bit of insight or even wisdom (yes, you’re right: at times, modesty is not amongst our key virtues) and – hopefully – some entertainment.

Enjoy!
