
The Key to your Datacenter

 

During our ongoing research on the security of cloud service providers and cloud-based applications, we performed a regular audit of our AWS account password. Thinking of popular incidents and evergreen attack vectors, we wondered what consequences an online brute-force attack on our AWS password would have, so we decided to perform such an attack against our own account. Analyzing the AWS login process, we derived the following requirements for the brute-force tool to be used:

  • Cookie handling
  • HTTPS support
  • HTTP 3xx (redirect) support

It turned out that it was pretty hard to find a password testing tool which fulfilled these requirements and was actually able to handle the complex AWS login process; in the end, there was none. Since we use and like Burp Suite quite a lot, the Intruder suggested itself as an alternative which is straightforward to configure, even though it might lack the speed and efficiency of specialized brute-forcing tools. Using Burp's history, we were able to identify the request which triggers the login process.

After the request is sent to the Intruder, the password field is marked as the payload position and the payloads to be used are configured.

Using exemplary payloads, it is possible to identify a successful login attempt, since it results in a redirect to the authenticated area/SSO server/whatever, whereas a wrong password results in an HTTP 200 response presenting the AWS login page again.
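Expressed outside of Burp, the success condition boils down to a few lines of Python. The following is only a minimal sketch of the idea; the URL and the form field names are placeholders, not the exact AWS parameters:

    import requests

    LOGIN_URL = "https://signin.example.com/login"   # placeholder, not the real AWS sign-in endpoint

    def try_password(email, password):
        session = requests.Session()   # handles cookies and HTTPS for us
        resp = session.post(
            LOGIN_URL,
            data={"email": email, "password": password},   # placeholder form field names
            allow_redirects=False,                          # keep the 302 visible instead of following it
        )
        return resp.status_code == 302   # 302 = successful login, 200 = login page served again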

Having established this basic brute-forcing process, the wordlist to be used must be generated. To decide which complexity should be covered, the Amazon password policy must be analyzed (if the restrictions in place deserve to be called a policy at all). The only restriction is that the password must be between 6 and 20 characters long; the upper limit is not documented anywhere, we derived it from the maxlength parameter of the password field when changing the password via the web frontend. From a business perspective this leniency is understandable, since Amazon loses "end users", and therefore money, if its password policy is too strict. So we decided to use a wordlist containing all 6-character passwords consisting only of digits, which can be generated pretty easily by reactivating some old perl scripting skills: perl -le 'printf "%06d\n", $_ foreach(0..999999)' 😉. Such passwords might even be pretty common when thinking of "birthday passwords".
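For those who prefer Python over perl, an equivalent sketch (the file name wordlist.txt is just an example):

    # Write all 6-digit numeric passwords, one per line.
    with open("wordlist.txt", "w") as wordlist:
        for candidate in range(1000000):
            wordlist.write(f"{candidate:06d}\n")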

After performing about 400k requests, we paused the attack and searched for requests which resulted in an HTTP 302 response, just as the baseline request did.

And indeed, it was possible to brute-force the password, which is not such a big surprise. The bigger, and worse, surprise is that it was still possible to log in to our Amazon account after performing about 2 million requests (including some dry runs) within two days, originating from one single IP address, without the account being locked, throttled down, or us being notified in any way. And we were performing about 80k requests per hour.

 

Coming back to the title of the blog post: at the time of our investigation, there were no protection mechanisms against brute-force attacks on the key to your datacenter, which your AWS credentials actually can be if you are hosting a large share of your services in EC2. Following a responsible disclosure policy, we contacted the AWS Security Team and got a very comprehensive answer. As we expected, they pointed us to their MFA solution, which, even though there was a major incident recently, is basically a viable security control for authenticating users for data center access. In addition, we had a long and beneficial dialog about potential mechanisms such as connection throttling and account locking. The outcome of this discussion is a CAPTCHA mechanism which kicks in once a brute-force attempt is detected, and which we re-tested several times with our brute-forcing setup. It was quite impressive to see that Amazon was able to implement additional security measures in such a short time frame, given the huge size and complexity of the AWS environment. So we were really glad to get in touch with the committed AWS Security Team and happy to see that these guys take security seriously and actively communicate with their customers.
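To illustrate what such a mechanism can look like conceptually (purely an illustration, not how AWS implemented it), a per-source counter that demands a CAPTCHA once too many failed logins pile up within a time window might be sketched like this:

    import time
    from collections import defaultdict

    WINDOW = 3600      # remember failed attempts for one hour (values chosen arbitrarily)
    THRESHOLD = 25     # failed attempts per source before a CAPTCHA is required

    failures = defaultdict(list)   # source IP -> timestamps of failed login attempts

    def record_failed_login(source_ip):
        now = time.time()
        recent = [t for t in failures[source_ip] if now - t < WINDOW]
        recent.append(now)
        failures[source_ip] = recent

    def captcha_required(source_ip):
        now = time.time()
        return sum(1 for t in failures[source_ip] if now - t < WINDOW) >= THRESHOLD

A real-world implementation obviously also has to deal with distributed attack sources and with legitimate users sitting behind shared IP addresses.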


iOS Hardening Configuration Guide

Hi everybody,
eye-catching title of this post, huh?

Actually there is some justification for it ;-), namely bringing this excellent document covering exactly that topic to your attention.
Other than that, this post contains some unordered reflections which arose in a recent meeting in a quite large organization on the "common current iPad topic" (executives would like to have/use an iPad, infosec doesn't like the idea, business – as we all know – wins, so bring external expertise in "to help us find a way of doing this securely", yadda yadda yadda).
Which – given those nifty little boxes are _consumer_ devices which were probably never meant to process sensitive corporate data – might be a next-to-impossible task… at least in a way that satisfies business expectations regarding "usability"… [btw: can anybody confirm my observation that there's a correlation between the "rigor of the restriction approach" and the "number of corporate emails forwarded to private webmail accounts"?]

Anyway, in that meeting – due to my usual endeavor to look at things in a structured way – I started categorizing flavors of data wiping. I came up with:
a) device-induced (call it "automatic" if you want) wipe. Here the trigger (to wipe) comes from the device itself, usually after some particular condition is met, which might be:

  • number of failed passcode entries. This is supposed to help against an opportunistic attacker who "has found an iPad somewhere" and then tries to get access. Still, assuming a 4-digit passcode, based on the real-world distribution of passcodes the attacker might have a one-in-seven chance to succeed when the number of passcodes-to-fail is set to ten (isn't this the default setting? I don't use such a device so I really don't know ;-)). See the quick calculation after this list.
  • check some system parameter ("am I jailbroken?") and then perform a wipe. This somehow raises a – let's call it – "matrix problem": "judge the world's trustworthy state from my own perspective and then delete my memory if I find it untrustworthy". But how can I know my decision is a correct one if my own overall ("consciousness") state might heavily depend on the USB port I'm connected to…
  • phone home ("Find My iPhone" et al.), find out "I'm lost or stolen", then quickly wipe myself. This one requires a network connection, so a skilled+motivated attacker going after the data on the device will prevent this exact (network) connection. As most of you probably already knew ;-).
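A quick back-of-the-envelope check of that one-in-seven figure. The 14% share of the ten most popular passcodes used below is an assumption drawn from published PIN-frequency analyses, not a measured number:

    pin_space = 10 ** 4    # all possible 4-digit passcodes
    guesses = 10           # attempts before the device wipes itself
    top10_share = 0.14     # assumed share of users picking one of the ten most common PINs

    p_uniform = guesses / pin_space   # chance if passcodes were chosen uniformly at random
    p_common = top10_share            # chance if the attacker simply tries the ten most common PINs
    print(f"uniform: {p_uniform:.2%}, common-PIN guessing: {p_common:.0%} (roughly 1 in {1 / p_common:.0f})")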

b) remote-wipe. That largely overhyped feature going like “if we learn that one of our devices is lost or stolen, we’ll just push the button and, boom, all the data on the device is wiped remotely”.

Unfortunately this one requires that the organization is able to react once the state of the device changes from "trustworthy environment" to "untrustworthy environment". Which in turn usually relies on processes involving humans, e.g. it might require people to call the organization's service desk to inform them "I just lost my iPad"… which, depending on various circumstances that I leave the reader to imagine, might happen "in close temporal proximity to the event" or not…
And, of course, a skilled+motivated attacker will prevent the network connection needed for this one, as stated above.

So, all these flavors of wiping have their own share of shortcomings or pitfalls. At some point during that discussion I silently asked myself:

“How crazy is this? Why do we spend all these cycles and resources and life hours of smart people on a detective+reactive type of control?”

Why not spend all this energy on avoiding the threat in the first place by just not putting the data on those devices (which lack fundamental security properties and are highly exposed to untrustworthy human behavior and environments)?
Which directly leads to the plea expressed in my Troopers keynote “Do not process sensitive data on smartphones!” (but use those just as display terminals to applications and data hosted in secure environments).

Yes, I know that "but then we depend on network connectivity and Ms. CxO can't read her emails while on a plane" argument. And I'm soo tired of it. Spending so much operational effort for those few offline minutes (by pursuing the "we must have the data on the device" approach) seems like a bit of a waste to me [and, btw, I'm a CxO of a "company driven by innovation" myself ;-)]. Which might even be acceptable if it didn't expose the organization to severe risks at the same time. And if all the effort wasn't doomed anyway in six months… when your organization's executives have found yet another fancy gadget they'd like to use…

Think about it & have a great Sunday,

Enno

PS: As we're a company with quite diverse mindsets and a high degree of freedom to conduct an individual lifestyle and express individual opinions, some of my colleagues actually think data processing on those devices can be done in a reasonably secure way. See for example this workshop or wait for our upcoming newsletter on "Certificate based authentication with iPads".


Week of releases – apnbf

Another day, another tool 😉

Today I'm proudly releasing the first version of apnbf, a small Python script designed for enumerating valid APNs (Access Point Names) on a GTP-C speaking device. It tries to establish a new PDP session with the endpoint by sending a createPDPContextRequest. This request needs to include a valid APN, so one can easily distinguish between a valid APN (which will be answered with a createPDPContextResponse) and an invalid one (which will be answered with an error indication message). In addition, the tool parses the error indication and displays the reason (which should be "Missing or unknown APN" in case of an invalid APN).
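To illustrate the enumeration logic, here is a rough sketch of the idea. This is not apnbf's actual code; in particular, the helper functions build_create_pdp_context_request() and parse_gtp_reply() are hypothetical and would have to construct and parse real GTP-C messages:

    import socket

    GTP_C_PORT = 2123

    def probe_apn(target, apn, timeout=2):
        """Classify an APN by sending a createPDPContextRequest and inspecting the answer."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        sock.sendto(build_create_pdp_context_request(apn), (target, GTP_C_PORT))
        try:
            data, _ = sock.recvfrom(4096)
        except socket.timeout:
            return "no answer"
        reply = parse_gtp_reply(data)
        if reply.msg_type == "createPDPContextResponse":
            return "valid APN"
        if reply.msg_type == "errorIndication":
            return "invalid APN ({})".format(reply.cause)   # e.g. "Missing or unknown APN"
        return "unexpected reply"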

Don’t waste time, get the source here (5a122f198ea35b1501bc3859fd7e87aa57ef853a)

cheers

/daniel


Week of releases – gtp_scan-0.7

So, after a completely new release yesterday, we will stay with already known but updated software today. You might have heard of gtp_scan before, a small Python script for scanning mainly 3G and 4G devices and detecting GTP (GPRS Tunneling Protocol) enabled ports. As GTP is transported via UDP and, as we all know, UDP scanning is a pain, the tool uses GTP's built-in echo mechanism to detect GTP-speaking ports (a minimal sketch of this echo-based probing follows the feature list below). Since the last version I've implemented some new features:

  • Support of complete GTP spectrum (GTP-C, GTP-U, GTP’)
  • Support for scanning on SCTP
  • Improved result output, including a validity check of response packets
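As promised above, here is a minimal sketch of the echo-based probing, limited to UDP and the GTPv1 header format (so neither gtp_scan's SCTP support nor the slightly different GTP' header are covered); the target address 192.0.2.1 is just a placeholder:

    import socket

    # GTPv1 header: flags 0x32 (version 1, PT=1, sequence number present), message type 0x01
    # (Echo Request), length 4 (the optional octets), TEID 0, sequence number 1, N-PDU 0, next ext 0.
    ECHO_REQUEST = bytes.fromhex("320100040000000000010000")

    def speaks_gtp(host, port, timeout=2):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        sock.sendto(ECHO_REQUEST, (host, port))
        try:
            data, _ = sock.recvfrom(1024)
        except socket.timeout:
            return False
        return len(data) >= 2 and data[1] == 0x02   # 0x02 = Echo Response

    for port in (2123, 2152):   # well-known GTP-C and GTP-U ports
        print(port, "GTP detected" if speaks_gtp("192.0.2.1", port) else "no answer")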

Find the sources here (bbdcc8888ebb4739025395f8c1c253fa5fd2bb15).

 

have a nice one.

/daniel


Week of releases – dizzy

I'm proud to announce that today a new fuzzing framework sees the light of day. It's called dizzy and was written because the tools we used for fuzzing in the past didn't match our requirements. Some (unique) features are:

  • Python based
  • Fast!
  • Can send to L2 as well as to upper layers (TCP/UDP/SCTP)
  • Ability to work with odd-length packet fields (no need to match byte boundaries, so even single flags or 7-bit-long fields can be represented and fuzzed)
  • Very easy protocol definition syntax
  • Ability to do multi-packet, stateful fuzzing, with the ability to use data received from the target in subsequent packets.

We have already had a lot of success using it; now you will be able to see for yourself what it can really do.
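To give an idea of what the "odd length packet fields" item above means in practice, here is a small illustration of the underlying concept (explicitly not dizzy's protocol definition syntax): a packet described as a list of fields with arbitrary bit widths, assembled bit by bit and mutated one field at a time.

    # Describe a packet as (name, bit width, value) tuples; fields do not need to be byte aligned.
    packet = [
        ("version",  4, 0b0100),
        ("flags",    7, 0b0000000),          # a 7-bit field, impossible to express byte-wise
        ("length",  16, 0x0010),
        ("payload", 64, 0xdeadbeefcafebabe),
    ]

    def assemble(fields):
        """Concatenate all fields bit by bit and pad the result to full bytes."""
        value, nbits = 0, 0
        for _, width, val in fields:
            value = (value << width) | (val & ((1 << width) - 1))
            nbits += width
        pad = (-nbits) % 8
        return (value << pad).to_bytes((nbits + pad) // 8, "big")

    def mutations(fields, index):
        """Yield assembled packets with one field replaced by simple boundary values."""
        name, width, _ = fields[index]
        for val in (0, 1, (1 << width) - 1, 1 << (width - 1)):
            mutated = list(fields)
            mutated[index] = (name, width, val)
            yield assemble(mutated)

    for pkt in mutations(packet, 1):   # fuzz the 7-bit "flags" field
        print(pkt.hex())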

Find the source here (c715a7ba894b44497b98659242fce52128696a17).

/daniel


Week of releases – loki-0.2.7

Today I'm going to open up the 'Week of releases', which means there will be some new software over the next few days.

Let's start with a new version of loki. The version number goes up to 0.2.7 and there are a lot of new features:

  • SCTP support in the base.
  • Invalid option and invalid header scan in the ICMP6 module.
  • Online message updates for neighbor messages in the RIP module.
  • New module for rewriting 802.1Q labels.
  • Lots of small improvements and bug fixes.
  • Some new features I won't reveal right now; get the source and find them yourself 😉

There are also new packages for gentoo, ubuntu-11.04 and fedora-15, and for the first time packages for amd64 systems are available.

Downloads:

  • Package for gentoo – c29a6cca7a1f7394a473d4b50a1766e9f13fd5a5

    Dependencies:

    • Manifest – 9338ebcc6a3cb58478671f00cac3114efe5df337
  • Package for ubuntu 11.04 i386 – bf9fa05aa20677ac209126b78c3829940daaa8ee

    Dependencies:

    • pylibpcap – e30c9c8ab1a8e1ee3ddedd05475767dc9f85b526
  • Package for ubuntu 11.04 amd64 – 50f5c784f039a15613affd52e304e61fd2a16a58

    Dependencies:

    • pylibpcap – 9457644ef52fd6bfdb0da8790eee759cc4f76c8b
  • Package for fedora 15 i686 – 06398d9c8ca5fd0d80b0da65756b01bfe07652b4

    Dependencies:

    • pylibpcap – d7e2a9249cba4362d4e435643257ee6a89a412cf
    • libdnet-python – 83bbe3895a58d264190afaef586aba8c2bd921f4
  • Package for fedora 15 amd64 – 06c1fca3f8390cbe00e8e5c427327379c30222d6

    Dependencies:

    • pylibpcap – 62d8cc32ef42211584df439ace8f453a3822d5b1
    • libdnet-python – d8e969b35b2b5613f364525f21c8e0738a42e061

enjoy!

/daniel
