Thursday, April 6, 2017

vCenter REST API and the Content Library

Overview

In this post I will demonstrate using the REST API to place an ISO image within the content library of vCenter,
create a new VM, and finally boot that VM from the ISO image.

I built these examples by following the documentation included with the REST SDK.

Getting Started

The examples assume the availability of a bash shell and curl; however, the specific tools matter less than
the overall process.

Set Environment Variables

Store the base URL of the vCenter host. Example:

VCENTER=https://vc1

Generate a unique client token for use later on.

CLIENT_TOKEN=$(uuidgen)

Authenticate

POST using HTTP basic auth and capture the session id that is returned.

curl -k -X POST -H 'Accept: application/json' \
--basic -u administrator@spinhirne.com:VMware1!  \
$VCENTER/rest/com/vmware/cis/session  | python -m json.tool

Store the session id within an environment variable. Example:

SESSION_ID=e37ebe2698fcf1b91af36cfbfcfd2acf
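
If you’d rather capture the session id programmatically, something along these lines should work, assuming the response wraps the token in a "value" field:

# capture the session id directly (assumes the JSON response is {"value": "<id>"})
SESSION_ID=$(curl -s -k -X POST -H 'Accept: application/json' \
--basic -u administrator@spinhirne.com:VMware1! \
$VCENTER/rest/com/vmware/cis/session | \
python -c 'import sys, json; print(json.load(sys.stdin)["value"])')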

Creating a New Content Library

Get supported content library item types

Optional step, but informative.

vCenter has built-in support for a couple of content types. Query the supported types. For this
guide we’ll make use of the “iso” type.

curl -k -X GET -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
$VCENTER/rest/com/vmware/content/type  | python -m json.tool

Get a list of datastore IDs

Datastore IDs are needed in order to define a backend for local content libraries. Capture the
ID (the “datastore” attribute of the returned JSON) of each datastore you wish to use.

curl -k -X GET -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
$VCENTER/rest/vcenter/datastore | python -m json.tool

Store the datastore IDs within environment variables. Example:

DATASTORE0="datastore-12"
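
To pull the ID out programmatically instead, a filter along these lines should work; the name "datastore1" is just a placeholder, and the response is assumed to be a "value" list of objects with "datastore" and "name" fields, matching the output above. The same pattern applies to the folder and host lookups later in this post.

# capture the ID of the datastore named "datastore1" (hypothetical name)
DATASTORE0=$(curl -s -k -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
$VCENTER/rest/vcenter/datastore | \
python -c 'import sys, json; print([d["datastore"] for d in json.load(sys.stdin)["value"] if d["name"] == "datastore1"][0])')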

Create a new local library

Create a library and capture its UUID.

(curl -k -X POST \
-H 'Accept: application/json' -H "Content-Type: application/json" -H "vmware-api-session-id: $SESSION_ID" \
-d @- \
$VCENTER/rest/com/vmware/content/local-library \
<<- JSON
{
    "client_token": "$CLIENT_TOKEN",
    "create_spec": {
        "description": "test lib",
        "name": "testLib1",
        "storage_backings": [
            {
                "datastore_id": "$DATASTORE0",
                "type": "DATASTORE"
            }
        ],
        "type": "LOCAL"
    }
}
JSON
) | python -m json.tool

Store the UUID within an environment variable. Example:

LIBRARY=d38b7d74-70c5-4fa5-b4b6-5cd9001457ba

Add an ISO item to the library

Locate an ISO file and capture its path in an environment variable. Example:

ISO_FILE=/tmp/test.iso

Capture its size and md5sum as well:

ISO_MD5=$(md5sum $ISO_FILE | gawk '{print $1}')
ISO_SIZE=$(ls -l $ISO_FILE | gawk '{print $5}')
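
Parsing ls output works, but on Linux, stat is a slightly more robust way to get the size if you prefer:

# alternative: query the file size in bytes directly (GNU coreutils)
ISO_SIZE=$(stat -c %s $ISO_FILE)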

Create a library item for this iso.

(curl -k -X POST \
-H 'Accept: application/json' -H "Content-Type: application/json" -H "vmware-api-session-id: $SESSION_ID" \
-d @- \
$VCENTER/rest/com/vmware/content/library/item \
<<- JSON
{
        "client_token": "$CLIENT_TOKEN",
        "create_spec": {
                "description": "a custom iso",
                "library_id": "$LIBRARY",
                "name": "$(basename $ISO_FILE)",
                "type": "iso"
        }
}
JSON
) | python -m json.tool

Capture the library item ID within an environment variable. Example:

LIBRARY_ITEM=b568e170-4702-42d4-804a-93b35f678e30

Create an update session

An update session is needed in order to upload the actual iso file.

(curl -k -X POST \
-H 'Accept: application/json' -H "Content-Type: application/json" -H "vmware-api-session-id: $SESSION_ID" \
-d @- \
$VCENTER/rest/com/vmware/content/library/item/update-session \
<<- JSON
{
        "client_token": "$CLIENT_TOKEN",
        "create_spec": {
            "library_item_id": "$LIBRARY_ITEM"
        }
}
JSON
) | python -m json.tool

Capture the update session ID within an environment variable. Example:

UPDATE_SESSION=39bf1ccb-d7b9-4aff-b250-83b3c6c3620a:8bed2a31-ce25-4d9d-b0e8-dc1a861ebc1e

Request URI for file upload

We’re going to use a PUSH session to push the file to vCenter. For this, vCenter will give us an
endpoint for the upload.

(curl -k -X POST \
-H 'Accept: application/json' -H "Content-Type: application/json" -H "vmware-api-session-id: $SESSION_ID" \
-d @- \
$VCENTER/rest/com/vmware/content/library/item/updatesession/file/id:$UPDATE_SESSION?~action=add \
<<- JSON
{
    "file_spec": {
        "checksum_info": {
            "algorithm": "MD5",
            "checksum": "$ISO_MD5"
        },
        "name": "$(basename $ISO_FILE)",
        "size": $ISO_SIZE,
        "source_type": "PUSH"
    }
}
JSON
) | python -m json.tool

Capture the upload endpoint in an environment variable. Example:

UPLOAD_ENDPOINT=$VCENTER/cls/data/75be4d03-dcaa-451a-b946-1ba0aa38f947/dsl-4.4.10.iso
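
If you’d rather not copy the endpoint by hand, you can swap the json.tool pipe on the previous command for a small extractor. This is a sketch that assumes the add-file response nests the URI under "value" -> "upload_endpoint" -> "uri":

# extract the upload URI from the ~action=add response (assumed response shape)
python -c 'import sys, json; print(json.load(sys.stdin)["value"]["upload_endpoint"]["uri"])'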

Upload the file

Upload the file using a PUT request, sending it as multipart form data.

curl -v -k -X PUT \
-F "$(basename $ISO_FILE)=@$ISO_FILE" \
-H 'Accept: application/json' -H "vmware-api-session-id: $SESSION_ID" \
$UPLOAD_ENDPOINT

Validate the upload

Check for any upload errors.

curl -k -X POST \
-H 'Accept: application/json' -H "Content-Type: application/json" -H "vmware-api-session-id: $SESSION_ID" \
$VCENTER/rest/com/vmware/content/library/item/updatesession/file/id:$UPDATE_SESSION?~action=validate \
| python -m json.tool

Complete or fail the session

If there are any validation errors then fail the session:

curl -k -X POST \
-H 'Accept: application/json' -H "Content-Type: application/json" -H "vmware-api-session-id: $SESSION_ID" \
-d '{"client_error_message": "upload failed"}' \
-w "%{http_code}\n" \
$VCENTER/rest/com/vmware/content/library/item/update-session/id:$UPDATE_SESSION?~action=fail

Otherwise mark it as complete. Completing the session is what causes the library item to actually update with the uploaded content.

curl -k -X POST \
-H 'Accept: application/json' -H "Content-Type: application/json" -H "vmware-api-session-id: $SESSION_ID" \
-w "%{http_code}\n" \
$VCENTER/rest/com/vmware/content/library/item/update-session/id:$UPDATE_SESSION?~action=complete

Get the status of the update session

Optional step, but informative.

curl -k -X GET -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
$VCENTER/rest/com/vmware/content/library/item/update-session/id:$UPDATE_SESSION \
| python -m json.tool

Delete the update session

Clean up.

curl -k -X DELETE -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
-w "%{http_code}\n" \
$VCENTER/rest/com/vmware/content/library/item/update-session/id:$UPDATE_SESSION 

Creating a New VM

Get a list of folder IDs

We need to find a VM folder for our VM. Get a list of VM folders from the system and
capture the ID (the “folder” field) of one of them.

curl -k -X GET -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
$VCENTER/rest/vcenter/folder?filter.type=VIRTUAL_MACHINE | python -m json.tool

Store the folder id within an environment variable. Example:

VMFOLDER="group-v3"

Get a list of hosts

Get a list of hosts from the system and capture the ID (the “host” field) of one of them.

curl -k -X GET -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
$VCENTER/rest/vcenter/host | python -m json.tool

Store the host id within an environment variable. Example:

VCHOST="host-9"

Create a test VM

Now we’ll create a VM. We’ll deploy to the host and folder gathered previously, and we’ll use the same
datastore that backs our content library for the VM’s storage.

(curl -k -X POST \
-H 'Accept: application/json' -H "Content-Type: application/json" -H "vmware-api-session-id: $SESSION_ID" \
-d @- \
$VCENTER/rest/vcenter/vm \
<<- JSON
{
    "spec": {
        "name": "testVM",
        "guest_OS": "OTHER_LINUX_64",
        "placement": {
            "host": "$VCHOST",
            "datastore": "$DATASTORE0",
            "folder":"$VMFOLDER"
        }
    }
}
JSON
) | python -m json.tool

Store the VM id within an environment variable. Example:

VM="vm-59"

Mount the iso from the library on the VM

Mounting the ISO from the content library will create a new cdrom device on the VM, backed
by the library item. The call will return the id of the newly created cdrom.

(curl -k -X POST \
-H 'Accept: application/json' -H "Content-Type: application/json" -H "vmware-api-session-id: $SESSION_ID" \
-d @- \
$VCENTER/rest/com/vmware/vcenter/iso/image/id:$LIBRARY_ITEM?~action=mount \
<<- JSON
{
    "vm": "$VM"
}
JSON
) | python -m json.tool

Capture the id of the cdrom within an environment variable. Example:

VMCDROM=3000

Power on the VM

Power on the VM and then verify from the console that it has booted the iso image.

curl -k -X POST -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
-w "%{http_code}\n" \
$VCENTER/rest/vcenter/vm/$VM/power/start

Power off the VM

Optionally power off the VM if you’re finished with it.

curl -k -X POST -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
-w "%{http_code}\n" \
$VCENTER/rest/vcenter/vm/$VM/power/stop

Unmount the cdrom

curl -k -X POST \
-H 'Accept: application/json' -H "Content-Type: application/json" -H "vmware-api-session-id: $SESSION_ID" \
-w "%{http_code}\n" \
-d @- \
$VCENTER/rest/com/vmware/vcenter/iso/image/id:$VM?~action=unmount \
<<- JSON
{
    "cdrom": "$VMCDROM"
}
JSON

Final Clean Up

Delete the VM

curl -k -X DELETE -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
-w "%{http_code}\n" \
$VCENTER/rest/vcenter/vm/$VM

Delete the library item

curl -k -X DELETE -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
-w "%{http_code}\n" \
$VCENTER/rest/com/vmware/content/library/item/id:$LIBRARY_ITEM

Delete the library

curl -k -X DELETE -H 'Accept: application/json' \
-H "vmware-api-session-id: $SESSION_ID" \
-w "%{http_code}\n" \
$VCENTER/rest/com/vmware/content/local-library/id:$LIBRARY

Tuesday, October 4, 2016

OVN and Containers

Overview

Following up on my previous post, the subject of this discussion is OVN integration with containers. By the end of this lab we will have created a container host “VM” which houses a pair of containers. These containers will be tied directly into an OVN logical switch and will be reachable from all VMs within the logical network.

The OVN Container Networking Model

According to the man page for ovn-architecture, OVN’s container networking strategy of choice is to use a VLAN trunk connection to the container host VM and require that traffic from each container be isolated within a unique VLAN. This, of course, means that some level of coordination must take place between OVN and the container host to ensure that they are in sync regarding which VLAN tag is being used for a given container. It also places a certain level of responsibility on the container host to make sure that containers are properly isolated from one another internally.

Going into a bit more detail, the basic idea is that within OVN you will create a logical port representing the connection to the host VM. You will then define logical ports for your containers, mapping them to the “parent” VM logical port, and defining a VLAN tag to use. OVN will then configure OVS flows which will map VLAN tagged traffic from the parent VM’s logical port to the appropriate container logical port. The diagram below illustrates this design.

[Figure: the OVN container networking model]

The Existing Setup

Take a moment to review the current setup prior to continuing.

The lab network:

[Figure: the lab network]

The OVN logical network:

[Figure: the OVN logical network]

Defining the Logical Network

For this lab we are going to create a new fake “VM”, vm5, which will host our fake “containers”. The new VM will plug into the existing DMZ switch alongside vm1 and vm2. We’re going to use DHCP for both the new VM and its containers.

Before creating the logical port for vm5 we need to locate the DHCP options that we created for the DMZ network during the previous lab. We’ll query the OVN northbound DB directly for this information. Here is the output on my system.

root@ubuntu1:~# ovn-nbctl list DHCP_Options
_uuid               : 7e32cec4-957e-46fa-a4cc-34218e1e17c8
cidr                : "172.16.255.192/26"
external_ids        : {}
options             : {lease_time="3600", router="172.16.255.193", server_id="172.16.255.193", server_mac="02:ac:10:ff:01:93"}

_uuid               : c0c29381-c945-4507-922a-cb87f76c4581
cidr                : "172.16.255.128/26"
external_ids        : {}
options             : {lease_time="3600", router="172.16.255.129", server_id="172.16.255.129", server_mac="02:ac:10:ff:01:29"}

We want the UUID for the “172.16.255.128/26” network (c0c29381-c945-4507-922a-cb87f76c4581 in my case). Capture this UUID for use in later commands.
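
If you’d like to capture it straight into a shell variable, here is a sketch using ovn-nbctl’s query options (the inner quotes are required because the cidr column is a string):

# capture the UUID of the DHCP_Options row for the 172.16.255.128/26 network
DHCP_UUID=$(ovn-nbctl --bare --columns=_uuid find DHCP_Options cidr='"172.16.255.128/26"')

With that set, $DHCP_UUID can stand in for {uuid} in the commands below.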

Let’s create the logical port for vm5. This should look pretty familiar. Be sure to replace {uuid} with the UUID from the DHCP options entry above. From ubuntu1:

ovn-nbctl lsp-add dmz dmz-vm5
ovn-nbctl lsp-set-addresses dmz-vm5 "02:ac:10:ff:01:32 172.16.255.132"
ovn-nbctl lsp-set-port-security dmz-vm5 "02:ac:10:ff:01:32 172.16.255.132"
ovn-nbctl lsp-set-dhcpv4-options dmz-vm5 {uuid}

Now we will create the logical ports for the containers which live on vm5. This process is nearly identical to creating a normal logical port but with a couple of additional settings. From ubuntu1:

# create the logical port for c51
ovn-nbctl lsp-add dmz dmz-c51
ovn-nbctl lsp-set-addresses dmz-c51 "02:ac:10:ff:01:33 172.16.255.133"
ovn-nbctl lsp-set-port-security dmz-c51 "02:ac:10:ff:01:33 172.16.255.133"
ovn-nbctl lsp-set-dhcpv4-options dmz-c51 {uuid}

# set the parent logical port and vlan tag for c51
ovn-nbctl set Logical_Switch_Port dmz-c51 parent_name=dmz-vm5
ovn-nbctl set Logical_Switch_Port dmz-c51 tag=51

# create the logical port for c52
ovn-nbctl lsp-add dmz dmz-c52
ovn-nbctl lsp-set-addresses dmz-c52 "02:ac:10:ff:01:34 172.16.255.134"
ovn-nbctl lsp-set-port-security dmz-c52 "02:ac:10:ff:01:34 172.16.255.134"
ovn-nbctl lsp-set-dhcpv4-options dmz-c52 {uuid}

# set the parent logical port and vlan tag for c52
ovn-nbctl set Logical_Switch_Port dmz-c52 parent_name=dmz-vm5
ovn-nbctl set Logical_Switch_Port dmz-c52 tag=52

So the only real difference here is that we’ve set a parent_name and tag for the container logical ports. You can validate these by looking at the database entries. For example, the output on my system:

root@ubuntu1:~# ovn-nbctl find Logical_Switch_Port name="dmz-c51"
_uuid               : ea604369-14a9-4e25-998f-ec99c2e7e47e
addresses           : ["02:ac:10:ff:01:33 172.16.255.133"]
dhcpv4_options      : c0c29381-c945-4507-922a-cb87f76c4581
dhcpv6_options      : []
dynamic_addresses   : []
enabled             : []
external_ids        : {}
name                : "dmz-c51"
options             : {}
parent_name         : "dmz-vm5"
port_security       : ["02:ac:10:ff:01:33 172.16.255.133"]
tag                 : 51
tag_request         : []
type                : ""
up                  : false
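
A quicker sanity check is to list all of the ports on the dmz switch and confirm that the container ports appear alongside the VM ports:

# list the logical ports on the dmz logical switch
ovn-nbctl lsp-list dmz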

Configuring vm5

The first thing to remember about this lab is that we’re not using real VMs, but rather simulating them as ovs internal ports directly on the Ubuntu hosts. For vm1-vm4 we’ve been creating these internal ports directly on br-int but for vm5 our requirements are a bit different so we’ll be using a dedicated ovs bridge. This bridge, called br-vm5, will not be managed by OVN and will simulate how you might actually configure an ovs bridge internal to a real container host VM. This bridge will provide local networking to both the VM and its containers and will be configured to perform VLAN tagging. Below is a diagram illustrating how things will look when we’re done.

[Figure: the vm5 container networking setup]

My lab setup is simple in that I’m placing the containers all on the same logical switch. However, there is no requirement to do this since I can place the container logical port on any logical switch that I wish.

Let’s get started. The first step is to create the setup for vm5. We’re going to do this on the ubuntu2 host. From ubuntu2:

# create the bridge for vm5
ovs-vsctl add-br br-vm5

# create patch port on br-vm5 to br-int
ovs-vsctl add-port br-vm5 brvm5-brint -- set Interface brvm5-brint type=patch options:peer=brint-brvm5

# create patch port on br-int to br-vm5. set external id to dmz-vm5 since this is our connection to vm5 
ovs-vsctl add-port br-int brint-brvm5 -- set Interface brint-brvm5 type=patch options:peer=brvm5-brint
ovs-vsctl set Interface brint-brvm5 external_ids:iface-id=dmz-vm5

# create vm5 within a namespace. vm5 traffic will be untagged
ovs-vsctl add-port br-vm5 vm5 -- set interface vm5 type=internal
ip link set vm5 address 02:ac:10:ff:01:32
ip netns add vm5
ip link set vm5 netns vm5
ip netns exec vm5 dhclient vm5

Verify connectivity from vm5 by pinging its default gateway.

root@ubuntu2:~# ip netns exec vm5 ping 172.16.255.129
PING 172.16.255.129 (172.16.255.129) 56(84) bytes of data.
64 bytes from 172.16.255.129: icmp_seq=1 ttl=254 time=0.797 ms
64 bytes from 172.16.255.129: icmp_seq=2 ttl=254 time=0.509 ms
64 bytes from 172.16.255.129: icmp_seq=3 ttl=254 time=0.404 ms

Configuring the vm5 “Containers”

Now that vm5 is up and working we can configure its fake “containers”. These are going to look almost exactly like our fake “VMs” with the exception that we’re configuring them for VLAN tagging.

# create c51 within a namespace. c51 traffic will be tagged with vlan 51
ip netns add c51
ovs-vsctl add-port br-vm5 c51 tag=51 -- set interface c51 type=internal
ip link set c51 address 02:ac:10:ff:01:33
ip link set c51 netns c51
ip netns exec c51 dhclient c51

# create c52 within a namespace. c52 traffic will be tagged with vlan 52
ip netns add c52
ovs-vsctl add-port br-vm5 c52 tag=52 -- set interface c52 type=internal
ip link set c52 address 02:ac:10:ff:01:34
ip link set c52 netns c52
ip netns exec c52 dhclient c52

Verify connectivity.

root@ubuntu2:~# ip netns exec c51 ping 172.16.255.129
PING 172.16.255.129 (172.16.255.129) 56(84) bytes of data.
64 bytes from 172.16.255.129: icmp_seq=1 ttl=254 time=1.33 ms
64 bytes from 172.16.255.129: icmp_seq=2 ttl=254 time=0.420 ms
64 bytes from 172.16.255.129: icmp_seq=3 ttl=254 time=0.371 ms

root@ubuntu2:~# ip netns exec c52 ping 172.16.255.129
PING 172.16.255.129 (172.16.255.129) 56(84) bytes of data.
64 bytes from 172.16.255.129: icmp_seq=1 ttl=254 time=1.53 ms
64 bytes from 172.16.255.129: icmp_seq=2 ttl=254 time=0.533 ms
64 bytes from 172.16.255.129: icmp_seq=3 ttl=254 time=0.355 ms
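
As one more check, the containers should be able to reach each other through the OVN logical switch: traffic from c51 leaves br-vm5 tagged with VLAN 51, is mapped to the dmz-c51 logical port, and is then switched to dmz-c52 like any other dmz traffic.

# from ubuntu2: ping c52 from c51 across the logical switch
ip netns exec c51 ping -c 3 172.16.255.134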

Final Words

As per the ovn-architecture guide, if you run containers directly on a hypervisor or otherwise patch them directly into the integration bridge, they have the potential to completely bog down an OVN system, depending on the scale of your setup. The “nested” network solution is nice in that it greatly reduces the number of VIFs on the integration bridge and thus minimizes the performance hit that containers would otherwise entail. Again, the point of this exercise was not to simulate a real-world container setup, but rather to demonstrate the built-in container networking feature set of OVN.