Archive for December 2010
Posted by Enno Rey
The British Standards Institution recently published “Cloud Computing. A Practical Introduction to the Legal Issues”. I ordered an electronic copy yesterday (I did that here, for GBP 30) and after a first glance I can say there’s lots of valuable information in it.
Merry Christmas to everybody, have some peaceful and relaxing days.
Given the upcoming public release of ISECOM‘s Open Source Security Testing Methodology Manual (OSSTMM) version 3, I took the opportunity to have a closer look at it. While we at ERNW never adopted the OSSTMM for our own way of performing security assessments (mostly because performing assessments has been our main business since 2001 and our approach has been developed and constantly honed since then, so we’re simply used to doing it “our way”), I’ve followed parts of ISECOM’s work quite closely, as some of the brightest minds in the security space contribute to it and they regularly come up with innovative ideas.
So I was eager to get an early copy and spend some weekend time going through it (where I live we currently have about 40 cm of snow, so there’s “plenty of occasion for a cosy reading session” ;-)).
One can read the OSSTMM in (at least) two ways: as a manual for performing security testing or as a “whole philosophy of approaching [information] security”. I did the latter and will comment on it in a two-part post, covering the things I liked first and taking a more critical perspective on some portions in the second. Here we go with the first, in no particular order:
a) The OSSTMM (way of performing tests) is structured. There are not many disciplines out there where a heavily structured approach is so needed and desirable (and, depending on “the circumstances”, so rarely found), so this absolutely is a good thing.
b) The OSSTMM has a metrics-based approach. We think that reasonable decision making in the infosec space is greatly facilitated by “reducing complexity to meaningful numbers”, so this again is quite valuable.
c) One of the core numbers makes it possible to display “waste” (see this post for why this is helpful).
d) It makes you think (which, btw, is exactly why I invited Pete to give the keynote at this year’s Troopers). Reading it will certainly advance your infosec understanding. There’s lots of wisdom in it…
In many aspects, the OSSTMM is another “step in the right direction” provided by ISECOM. Stay tuned for another post on the parts where we think it could be sharpened.
Posted by Enno Rey
When taking security decisions of whatever kind (e.g. for/against a certain control), one should always consider two main parameters: the security benefit of the action (“how much do we gain with regard to security/risk reduction?”) and the operational impact or effort (“how much does it cost us opex-wise?”).
While this may seem fairly obvious, it is often overlooked. One reason is that people think “doing more can’t hurt”. Which, unfortunately, might be plain wrong in many cases. There is _always_ an operational cost to an additional measure. And the security benefit _must_ be worth this cost.
If it’s not, implementing a certain control might just be… waste.
Before giving two examples I’d like to note that this is one aspect I particularly like in the ISECOM OSSTMM, where one of the main metrics, the “rav”, can be higher than 100%, which in turn can be used “to prove when money is being overspent on the wrong types of controls or redundant controls”.
[It should be noted that I’m in heavy disagreement with quite a few other parts of the OSSTMM; more on this in a post to follow in a few days. Still, the “rav” as a potential representation for showing waste is a really nice thing.]
Back on topic, here are two real-world examples to illustrate my point:
a) Some months ago we performed a network audit in a financial institution. They relied heavily on network-based security controls, namely 802.1X (the best implementation I’ve ever seen so far, with a deployment rate > 98% in a 12K-port network – impressive stuff) and ACLs on the layer 3 switches at the demarcation between the access and the distribution layer. One of those ACLs was of special interest. It contained about 120 lines which could be split into three pieces:
– The first 118 lines allowed all types of actions from a “quarantine VLAN” to some central systems, amongst those their domain controllers. To enable the automated installation of new systems (not disposing of a cert => put into the quarantine VLAN), with subsequent joining of a domain, there were all sorts of rules allowing UDP port 53, TCP ports 88, 135, 445, 389 and 636, ephemeral TCP ports etc. to each of the domain controllers (distributed across two data centers).
– Line 119 went like: “deny quarantine_vlan any”.
– Line 120 went like: “allow all_other_vlans any”.
After figuring out the overall approach, we asked them: “What’s the threat you want to protect against with this ACL?”
The answer was something like: “Malware infection from the systems in the quarantine VLAN”.
Now, ask yourself: with regard to the domain controllers (which can certainly be rated “crown jewels” in this network, as in many others), does this ACL provide this protection?
[Hint: which ports did Blaster, Sasser and Conficker use?]
We then suggested: “You could heavily simplify this ACL by allowing any IP traffic to the domain controllers and – security-wise – you won’t lose much with regard to the main threat you’re trying to protect against.”
Their answer was something like: “While we understand your point, we already have it, and what’s wrong with having a kind-of-enhanced control, even if it does not provide much additional value?”
And here we have what I call the “illusion of infinite resources”!
Imagine a new domain controller is added (or some other network change affecting this ACL occurs). Some person will have to spend (precious) manpower on modifying the ACL, testing it, deploying it etc. And this will cost more operational resources in the case of an extensive ACL compared with a much simpler one (delivering more or less the same level of security). Those “wasted” resources could and should be much better spent on some other security optimization in their environment.
b) I just had a discussion with the CISO of another global organization. In that environment one of the national subsidiaries is going to have their own SSL VPN gateway shortly (those units can act rather autonomously) and there are mainly two design variants currently under discussion:
– put the internal interface of that SSL VPN gateway directly into the subsidiary’s LAN (with placement of the SSL VPN gateway in a DMZ at the firewall).
– put the internal interface of the SSL VPN gateway into the DMZ as well and route all the incoming VPN traffic through the firewall (placement in DMZ again, evidently).
He said to me: “Enno, obviously I prefer the second option”.
I replied: “Why? What’s the additional security benefit? Given you’ll have an ‘all traffic from VPN -> all internal networks: allow’ rule anyway, what’s the benefit of the extra hop [the firewall] and the extra filtering instance?”
I mean, there is practically none. So it’s just an additional rule (set) to be administered, to be looked at when troubleshooting etc.
Without any added security (visible to me at least).
This, again, may serve as an example of why one should always carefully consider the two parameters mentioned at the beginning of this post. Think about it.
have a good one,
We’re delighted to announce the first speakers of next year’s Troopers edition. Looks like it’s going to be a great event again ;-).
Here we go:
Ravishankar Borgaonkar & Kevin Redon: Femtocell: Femtostep to the Holy Grail (Attacks & Research Track)
Abstract: Femtocells are now being rolled out across the world to enhance third generation (3G) coverage and to provide assurance of always-best connectivity in 3G telecommunication networks. A femtocell acts as an access point that securely connects standard mobile handsets to the mobile network operator’s core network using an existing wired broadband connection.
In this talk, we will evaluate security mechanisms used in femtocells and discuss practical & potential misuse scenarios of the same. In particular, our talk will cover:
# Femtocell and Telecom business model
# Security architecture of the femtocell
# Location verification techniques and how to beat them for free roaming calls
# Hacking of the device
- accessing confidential information stored on the device
- installing malicious applications on the device
- accessing mobile network operator’s infrastructural elements
# Possible countermeasures
Bios: Ravi received his joint master’s degree in Security and Mobile Computing from the Royal Institute of Technology (KTH) and the Helsinki University of Technology (TKK). Since finishing his master’s degree, he has been working as a researcher in the Security in Telecommunications department at Deutsche Telekom Laboratories (T-Labs) while pursuing his PhD. His research themes are related to data security challenges in new telecommunication technologies. His research interests include wireless networking security (in particular, security in 2G/3G networks), M2M security, and malware & botnet analysis.
Kevin received a Bachelor of Computing from Napier University in Edinburgh, Scotland. He is now finishing his master’s degree in Computing with a specialization in Communication Systems at the Technical University of Berlin, where he joined the Security in Telecommunications work group in cooperation with Deutsche Telekom Laboratories (T-Labs). His research interests include network security, in particular telecommunication networks such as GSM/UMTS, peer-to-peer networks, and smart cards.
Mariano Nuñez Di Croce: Your crown jewels online – Attacks to SAP Web Applications (Defense & Management Track)
Abstract: “SAP platforms are only accessible internally”. You may have heard that several times. While that was true in many organizations more than a decade ago, the current situation is completely different: driven by modern business requirements, SAP systems are getting more and more connected to the Internet. This scenario drastically increases the universe of possible attackers, as remote malicious parties can try to compromise the organization’s SAP platform in order to perform espionage, sabotage and fraud attacks.
SAP provides different Web interfaces, such as the Enterprise Portal, the Internet Communication Manager (ICM) and the Internet Transaction Server (ITS). These components feature their own security models and technical infrastructures, which may be prone to specific security vulnerabilities. If exploited, your business crown jewels can end up in the hands of cyber criminals.
Through many live demos, this talk will explain how remote attackers may compromise the security of different SAP Web components and what you can do to avoid it. In particular, an authentication-bypass vulnerability affecting “hardened” SAP Enterprise Portal implementations will be detailed.
Bio: Mariano Nuñez Di Croce is the Director of Research and Development at Onapsis. Mariano has long experience as a senior security consultant, mainly involved in security assessments and vulnerability research. He has discovered critical vulnerabilities in SAP, Microsoft, Oracle and IBM applications.
Mariano leads the SAP Security Team at Onapsis, where he works on hardening and assessing the security of critical SAP implementations in worldwide organizations. He is the author and developer of the first open-source SAP & ERP penetration testing frameworks and has discovered more than 50 vulnerabilities in SAP applications. Mariano is also the lead author of the “SAP Security In-Depth” publication and a founding member of BIZEC, the Business Security community.
Mariano has been invited to hold presentations and trainings at many international security conferences such as BlackHat USA/EU, HITB Dubai/EU, DeepSec, Sec-T, Hack.lu, Ekoparty and Seacure.it, as well as to host private trainings for Fortune-100 companies and defense contractors. He has also been interviewed and quoted in mainstream media such as Reuters, IDG, NY Times, PCWorld and others.
Friedwart Kuhn & Michael Thumann: Integration of the New German ID Card (nPA) in Enterprise Environments – Prospects, Costs & Threats (Defense & Management Track)
Abstract: The talk will cover the new nPA and related software like the AusweisApp with a special focus on possible use cases in the enterprise (“have the government run your corporate PKI” ;-)). Besides outlining prerequisites for an integration of the nPA within an organization, it will also answer questions about legal aspects that have to be considered and threats and risks that must be controlled and mitigated. Furthermore we will give a short overview about our own security research of the AusweisApp.
Bios: Friedwart Kuhn is a senior security consultant, head of the ERNW PKI team and co-owner of ERNW. He is a frequent speaker at conferences and has published a number of whitepapers and articles. Besides the daily consulting and assessment work, Windows enterprise security and aspects of technical and organizational PKI-related topics are areas of special interest for him. In his (sparse) free time Friedwart likes to play music and loves literature.
Michael Thumann is Chief Security Officer and head of the ERNW “Research” and “Pen-Test” teams. He has published security advisories regarding topics like ‘Cracking IKE Preshared Keys’ and buffer overflows in web servers/VPN software/VoIP software. Michael enjoys sharing his self-written security tools (e.g. ‘tomas’ – a Cisco password cracker, ‘ikeprobe’ – an IKE PSK vulnerability scanner, or ‘dnsdigger’ – a DNS information gathering tool) and his experience with the community. Besides numerous articles and papers, he wrote the first German pen-test book, which has become recommended reading at German universities. In addition to his daily pentesting tasks he is a regular conference speaker and has also contributed exploit code to the Metasploit Framework. With more than 10 years of experience in computer security, Michael’s main interest is to uncover vulnerabilities and security design flaws from the network to the application level.
Chema Alonso: I FOCA a .mil domain (Attacks & Research Track)
Abstract: FOCA is a tool that helps you in the fingerprinting phase of a pentest. It helps you find lost data and hidden information in public documents, and fingerprint servers, workstations, etc.
This talk will provide an extensive demo as a good example of the results which can be obtained using FOCA. The target domain? You’ll see in Troopers…
Bio: Chema is a Computer Engineer from Rey Juan Carlos University and a System Engineer from the Politécnica University of Madrid. He has been working as a security consultant for the last ten years and has been awarded Microsoft Most Valuable Professional every year since 2005. He is a frequent speaker at security conferences and is currently working on his PhD thesis about Blind Techniques.
Graeme Neilson: Tales from the Crypt0 (Defense & Management Track)
Abstract: Does the thought of SSL, HTTPS and S/MIME make you squeamish? Does PKI make you want to scream? Does encrypting data at rest make you want to bury yourself alive?
Cryptography is an important part of most web applications these days, and developers and admins need to understand how, why and when to employ the best and appropriate techniques to secure their servers, applications, data and the livelihoods of their users. Join Graeme Neilson (Aura Software Security) for a series of scary stories of real-world crypto failures and to learn how to do it the right way (with lots of code samples).
Bio: Graeme Neilson is lead security researcher at Aura Software Security based in Wellington, New Zealand. Originally from Scotland, he has 10 years of security experience. Graeme specialises in secure networks, network infrastructure, reverse engineering and cryptanalysis. Graeme is a regular presenter at international security conferences and has spoken at conferences in Australia, Europe and the US including Black Hat.
More talks to follow soon. See you in Heidelberg next year,
Posted by Enno Rey
Today I’m going to discuss the (presumably) most complex and most difficult-to-handle of the three parameters contributing to a risk (as per the RRA), that is, the “vulnerability [factor]”.
First it should be noted that “likelihood” and “vulnerability” must be clearly separated (“mentally”), which means that “likelihood” denotes the likelihood of a threat showing up _without_ consideration of existing controls. Security controls already present will affect the vulnerability factor (in particular if they are effective ;-), but _not_ the likelihood.
First reflect on “how often will somebody stand at the door of our data center with the will to enter?” or “how often will a piece of malware show up at our perimeter?” or “how often will it happen that an operator commits a mistake?” and assign an associated value to the likelihood.
Then, _in a separate_ step, think about: “will that person be able to enter my data center?” (maybe it’s an external support engineer and, given their high workload, your admins are willing to violate the external_people_only_allowed_to_access_dc_when_attended policy – which, of course, is purely fictional and will never happen in your organization ;-)) or “how effective are our perimeter controls with regard to malware?” (are they? ;-)) or “hmm… what’s the maturity of our change management processes?” and assign an associated value to the vulnerability factor.
As stated in an earlier post: this will allow for identifying areas where to act and thus allow for efficient overall steering of infosec resources.
Mixing likelihood and vulnerability might lead to self-complacent statements like “oh, evidently the likelihood of unauthorized access to the datacenter is ‘1’ as we have that brand-new shiny access control system” …
Now, how to rate the “vulnerability” (on a scale from 1 to 5)?
In my experience a “rough descriptive scale” like
1: Extensive controls in place, threat can only materialize if multiple failures coincide.
2: Multiple controls, but highly skilled+motivated attacker might overcome those.
3: Some control(s) in place, but highly skilled+motivated attacker will overcome those. Overall exposure might play a role.
4: Controls in place, but they have limitations. High exposure given and/or only a medium-skilled attacker required.
5: Maybe some controls, but with limitations, if any at all. High exposure given and/or only low skills required.
works quite well for exercises with participants who are somewhat experienced with the approach. If there are people in the room (or on the call) who’ve already performed an RRA (or, for that matter, similar exercises), a joint understanding of what a “2” or a “4” means in the context of figuring out “the vulnerability [factor]” can be attained quickly. This might even work when most of the participants are complete newcomers to risk assessments. In this case discussing some examples is the key to gaining that joint understanding.
That’s why in RRAs – where getting results in a timely manner is crucial (see introductory notes in part 1 on the complexity vs. efficiency trade-off) – we usually use some scale like the one outlined above.
Still, this – perfectly legitimately – may seem a bit “unscientific” to some people. For example, I currently do a lot of work in an organization where relevant people involved in the (risk assessment) process heavily struggle with the above scale, feeling it does not permit “a justified evaluation of the vulnerability factor due to being too vague”. In general, in such environments going with a “weighted summation method” is a good idea. More or less this works as follows:
a) identify some factors contributing to “vulnerability” (or “attack potential”) like “overall (network connectivity) exposure of system” or skills/time needed by an attacker and so on.
b) assign individual values to those factors, perform some mathematical operation on them (usually simply adding them), map the result to a 1–5 scale and voilà, here’s the – “justified and calculated” – vulnerability factor.
Again, an example might help. Let’s assume the threats-to-be-discussed are different types of attacks (as opposed to all the other classes of threats like acts of god, hardware failures etc.). One might look at three vulnerability-contributing factors, namely “type of attacker”, “exposure of system” and “extent of current controls”, and assign values to each of them as shown in the following sample table:
| Attacker (knowledge) | Exposure | Extent of controls | Value |
|---|---|---|---|
| Script kid | Internet facing | None | 5 |
| Bot | Business partners | Some, but insufficient | 4 |
| Skilled + motivated attacker | Only own organization | Some, but not resistant to a skilled and motivated attacker | 3 |
| Organized crime | Own organization, restricted | Multiple controls, but a single failure might lead to the attack succeeding | 2 |
| Nation state/agency | Very restricted or mgmt access only | Multiple controls, multiple failures needed at the same time for the attack to succeed | 1 |
with the following “mapping scale”:
A sum of values in the range 1–3 gives an overall value of 1, a sum of 4–6 gives a 2, 7–9 a 3, 10–12 a 4, and 13–15 a 5.
Now, suppose we discuss the vulnerability of a certain type of attack against a certain asset (here: some system). In case an attacker shows up (the likelihood of this event would be expressed by the respective value, which is not included in this example), a skilled and motivated attacker is assumed to perform the attack (leading to a “3” in the first column). Furthermore the system is exposed to business partners (=> “4” in the second column) and, due to the high sensitivity of the data processed, it has multiple layers of security controls (=> “1” for “extent of controls”). Adding these values gives an overall “8”, which in turn means a vulnerability [factor] of “3”.
For an Internet-facing system (“5”) which we expect to be attacked/attackable by script kids (“5”) and which does not dispose of good controls (thus “4”), adding the respective values gives a 14 and subsequently an overall vulnerability of “5”.
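For the programmatically inclined, here’s a minimal sketch of this calculation; the factor tables and mapping bands mirror the sample table and scale above, and all identifiers are purely illustrative, not part of any standard:

```python
# Illustrative sketch of the "weighted summation method" described above.
# Factor values and mapping bands are taken from the sample table; the
# names are mine, purely for illustration.

ATTACKER = {"script kid": 5, "bot": 4, "skilled + motivated": 3,
            "organized crime": 2, "nation state/agency": 1}
EXPOSURE = {"internet facing": 5, "business partners": 4,
            "only own organization": 3, "own organization, restricted": 2,
            "very restricted or mgmt access only": 1}
CONTROLS = {"none": 5, "some, but insufficient": 4,
            "not resistant to skilled attacker": 3,
            "single failure suffices": 2, "multiple failures needed": 1}

def vulnerability_factor(attacker: str, exposure: str, controls: str) -> int:
    """Map the sum of the three sub-factors (3..15) to the 1-5 scale."""
    total = ATTACKER[attacker] + EXPOSURE[exposure] + CONTROLS[controls]
    # Bands of three: 1-3 -> 1, 4-6 -> 2, 7-9 -> 3, 10-12 -> 4, 13-15 -> 5
    return min(5, (total + 2) // 3)

# The two worked examples from the text:
assert vulnerability_factor("skilled + motivated", "business partners",
                            "multiple failures needed") == 3  # 3+4+1 = 8
assert vulnerability_factor("script kid", "internet facing",
                            "some, but insufficient") == 5    # 5+5+4 = 14
```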
An extensive presentation of this approach can be found in clause B.4 of the Common Criteria Evaluation Methodology (CEM). There the respective values “characterising [the] attack potential” are:
a) Time taken to identify and exploit (Elapsed Time);
b) Specialist technical expertise required (Specialist Expertise);
c) Knowledge of the TOE design and operation (Knowledge of the TOE);
d) Window of opportunity;
e) IT hardware/software or other equipment required for exploitation.
with a fairly advanced description of the different variants for these values and an elaborate point scale.
To the best of my knowledge (and I might be biased given I’m a “BSI certified Common Criteria evaluator”) this is the best description of the “weighted summation method” in the infosec space. Feel free to let me know if there’s a better (or older) source for this.
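To give an idea of how such a point-scale calculation might look in practice, here is a rough sketch along the lines of the five CEM factors listed above. Note that the point values and rating bands below are invented for illustration and do not reproduce the CEM’s actual tables:

```python
# Illustrative sketch of a CEM-style "attack potential" calculation.
# The five factors are those listed above; the point values and rating
# bands are invented for illustration and are NOT the CEM's actual tables.

ELAPSED_TIME = {"<= one day": 0, "<= one week": 1, "<= one month": 4,
                "> one month": 10}
EXPERTISE = {"layman": 0, "proficient": 2, "expert": 5}
TOE_KNOWLEDGE = {"public": 0, "restricted": 2, "sensitive": 4}
OPPORTUNITY = {"unlimited access": 0, "easy": 1, "moderate": 4, "difficult": 10}
EQUIPMENT = {"standard": 0, "specialised": 3, "bespoke": 7}

def attack_potential(time: str, expertise: str, knowledge: str,
                     window: str, equipment: str) -> str:
    """Sum the five factor scores and map the total to a qualitative rating."""
    points = (ELAPSED_TIME[time] + EXPERTISE[expertise]
              + TOE_KNOWLEDGE[knowledge] + OPPORTUNITY[window]
              + EQUIPMENT[equipment])
    for limit, rating in ((4, "basic"), (12, "moderate"), (24, "high")):
        if points <= limit:
            return rating
    return "beyond high"

# Example: a week of effort by an expert with restricted design knowledge,
# a moderate window of opportunity and specialised equipment -> 15 points.
print(attack_potential("<= one week", "expert", "restricted",
                       "moderate", "specialised"))  # -> "high"
```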
Back to the topic, I’d like to state that while I certainly have quite some sympathy for this approach, using it for risk assessments – obviously – requires (depending on the type of people involved and their “discussion culture”, potentially much) more time for the actual exercise. This in turn might endanger the overall goal of delivering timely results. That’s why this latter way of rating the vulnerability is not used in the RRA.
Stay tuned for the next part to follow soon,
Posted by Enno Rey
This is the second part of the series (part 1 here) providing some background on the way we perform risk assessments. It can be seen as a direct continuation of the last post; today I cover the method of estimation and the scale & calculation formula used.
1.1 Method of Estimation
Again, two main approaches exist:
- Qualitative estimation, which uses a scale of qualifying attributes (e.g. Low, Medium, High) to describe the magnitude of each of the contributing factors listed above. [ISO 27005, p. 14] states that qualitative estimation may be used
- As an initial screening activity to identify risks that require more detailed analysis.
- Where this kind of analysis is appropriate for decisions.
- Where the numerical data or resources are inadequate for a quantitative estimation.
As the latter is pretty much always the case for information security risks, qualitative estimation is what is usually found in the infosec space. A sample qualitative scale (1–5, mapping to “very low” through “very high”) for the vulnerability factor will be provided in the next part of this series.
- Quantitative estimation, which uses a scale with numerical values (rather than the descriptive scales used in qualitative estimation) for impact and likelihood, using data from a variety of sources. [ISO 27005, p. 14] states that “quantitative estimation in most cases uses historical incident data, providing the advantage that it can be related directly to the information security objectives and concerns of the organization. A disadvantage is the lack of such data on new risks or information security weaknesses. A disadvantage of the quantitative approach may occur where factual, auditable data is not available thus creating an illusion of worth and accuracy of the risk assessment.”
1.2 Scale & Calculation Formula Used
Each of the contributing factors (that is, likelihood, vulnerability [factor] and impact) will be rated on a scale from 1 (“very low”) to 5 (“very high”). Experience shows that other scales are either not granular enough (as is the case for a “1–3” scale) or lead to endless discussions if too granular (as is the case for a “1–10” scale).
Most (qualitative) approaches use a “1–5” scale.
It should be noted that usually all values (for likelihood, vulnerability and impact) are mapped to concrete definitions; examples to be provided in the next part. Furthermore the “impact value” will not be split into “subvalues” for different security objectives (like individual values for “impact on availability”, “impact on confidentiality” and so on) in order to preserve the efficiency of the overall approach.
To get the resulting risk, all values will be multiplied (which is the most common way of calculating risks anyway). It should be noted that there are some objections with regard to this approach which, again, will be discussed in the next part.
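As a minimal sketch of this calculation (the function name and value checks are mine, not ISO 27005’s):

```python
# Minimal sketch of the risk calculation described above: three factors,
# each rated on the 1 ("very low") to 5 ("very high") scale, multiplied.

def risk(likelihood: int, vulnerability: int, impact: int) -> int:
    for value in (likelihood, vulnerability, impact):
        if not 1 <= value <= 5:
            raise ValueError("each factor must be rated on the 1-5 scale")
    return likelihood * vulnerability * impact  # resulting range: 1..125

# Example: likelihood 4, vulnerability factor 3, impact 4 -> risk value 48.
print(risk(4, 3, 4))
```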
The values themselves should be discussed by a group of experts (appropriate to the exercise-to-be-performed) and should not be assigned by a single person. The usefulness and credibility of the whole approach heavily depends on the credibility and expertise of the people participating in an exercise!
Wherever possible some lines of reasoning should be given for each value assigned, for documentation and future use purposes.
 [ISO 27000] provides a good overview.
 Models with quantitative estimation don’t use a “vulnerability factor” (as this one usually can’t be expressed in a quantitative way).
 And so does example 2 (section E.2.2) of [ISO 27005], which can be compared to the methodology described here.
Feel free to get back to us with any comments, criticism, case studies, whatever. Thanks,