
vCloud Director not generating discrete BIOS UUID by default

    November 14th, 2012

vCloud Director in the 1.x series does not generate a discrete BIOS UUID by default.  This behavior is documented in two VMware KB articles that I was able to find.

Prior to running the SQL statement, which must be run against the vCloud database's dbo.config table (a detail the KBs do not make clear), VMs deployed in a vApp do not get a discrete BIOS UUID, and at least in the case of Windows 2008, all machines will then generate the same OS GUID.  This may or may not be a problem depending on your circumstances.

    You can look at the .vmx of each machine to determine if the BIOS UUID is the same, or run two quick PowerShell commands:

To show the BIOS UUID from inside the Windows 2008 guest OS:

Get-WmiObject Win32_ComputerSystemProduct

To show just the UUID, run:

Get-WmiObject Win32_ComputerSystemProduct | Select-Object -ExpandProperty UUID

    When we conducted the change, I shut down the cell to guarantee there were no writes to the database from vCD’s perspective.  We ran the SQL statement, started the cell and deployed new machines.  Each one had a new BIOS UUID and Windows GUID.
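From the vCenter side, a PowerCLI sketch like the following (assuming an active Connect-VIServer session) can flag duplicate BIOS UUIDs across all VMs without opening each .vmx:

```powershell
# Group VMs by the BIOS UUID stored in the .vmx (ExtensionData.Config.Uuid);
# any group with a count greater than 1 shares a UUID and is affected.
Get-VM |
    Select-Object Name, @{N='BiosUuid'; E={$_.ExtensionData.Config.Uuid}} |
    Group-Object BiosUuid |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object { Write-Host "Duplicate UUID $($_.Name):" ($_.Group.Name -join ', ') }
```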

PowerCLI: Add VMs in a vApp, within vCloud Director, to Security Groups within vShield App

    October 3rd, 2012

    The title says it all!  The use case:

    You are using vCloud Director, and want to add Virtual Machines from deployed vApps to specific Security Groups within vShield App.  In my case, there were three Security Groups created to make a 3-tier environment.  Web, App and Database.

    Once again Alan Renouf came through by creating a vShield module for PowerCLI.  Follow the directions in his video to install it.  It’s actually quite easy.

    The script I am going to list below requires valid connections to three sources in order to do the work:

    1. The vCenter that manages the compute nodes in your vCloud
    2. The vCloud Director cell.
    3. The vShield Manager for the vCloud stack.

    (You also need to be licensed for vShield App.)

Prior to connecting to vShield Manager, you will need to import the module Alan created.  That _should_ have been done while watching his video, but if not, run:

    import-module vshield

    within PowerCLI.

    At this point you can connect to your three services:

    • connect-viserver <for vCenter>
    • connect-ciserver <for vCloud Director>
    • connect-vshieldserver <for vShield Manager>

Ok, so now hopefully our connections are set up.  Let’s describe the script a little more.  As I said before, the use case was to create a 3-tier environment via vShield App: Web, App and DB.  Our VMs in the vApp are conveniently named “WWW,” “APP” or “DB.”  We are sort of cheating, keying off that naming convention to identify the VMs.

    We have three hardcoded security groups in the script: Web, App and DB.  Their variables are $SGWeb, $SGApp and $SGDb.  I know I am clever.

We are going to provide the name of a vApp in vCloud Director from the command line.  This script will then walk the contents of the vApp, which are our three servers.  For those who are heavily involved in vCloud Director, you know that each VM in vCenter is identified as <VMNAME> (vCloud UUID).  In order for us to add a VM to vShield App, which is tied to vCenter, we must actually use that naming nomenclature.  I’m frankly not the best at coding, so I had to cheat and use the Trim() function twice in order to pull the UUID out of the urn:vcloud:vm:uuid string (Trim() strips a set of characters from the ends, not a literal substring, which is why a single call cuts off too much).

At that point, we use PowerShell’s -like operator to do the string comparison, and then run Mr. Renouf’s Set-vShieldSecurityGroup to place the VM into the appropriate vShield App Security Group.  That command is covered in his video.  I hope you find it useful!
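As a side note, the over-trimming is easy to reproduce in plain PowerShell; here is a sketch with a made-up ID:

```powershell
$id = "urn:vcloud:vm:1234"   # made-up ID for illustration

# Trim() treats its argument as a *set* of characters to strip from both
# ends, not a literal substring, so it chews through "urn:vcloud:v" and
# stops only at the 'm':
$id.Trim("urn:vcloud:")               # "m:1234" - hence the second Trim("m:")

# A regex replace removes the literal prefix in one step:
$id -replace '^urn:vcloud:vm:', ''    # "1234"
```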

    Usage: ./<scriptname>.ps1 -vapp <vAPP name in vCD> -datacenter <the datacenter object where your vCD and vShield are attached>

param (
    [string]$vApp,
    [string]$dataCenter
)

# Hardcode Security Groups, for now
$SGWeb = "Web"
$SGApp = "App"
$SGDb  = "DB"

foreach ($VM in (Get-CIVM -VApp $vApp)) {
    $vCloudVM = $VM.Name
    Write-Host "VM name: " $vCloudVM
    $vCloudID = $VM.Id
    Write-Host "vCloud ID: " $vCloudID
    # Trim() strips a set of characters, not a substring, so a single call
    # cuts off too much; trimming twice works around it.
    $vCloudIDtrim = ($vCloudID).Trim("urn:vcloud:")
    $vCloudIDtrim = ($vCloudIDtrim).Trim("m:")
    Write-Host "Trimmed vCloud ID: " $vCloudIDtrim
    if ($vCloudVM -like '*www*') {
        Write-Host "Adding $vCloudVM to Security Group $SGWeb..."
        # add VM to SecurityGroup
        Set-vShieldSecurityGroup -Add -Datacenter (Get-Datacenter $dataCenter) -SecurityGroup $SGWeb -VM (Get-VM "$vCloudVM ($vCloudIDtrim)")
    }
    elseif ($vCloudVM -like '*app*') {
        Write-Host "Adding $vCloudVM to Security Group $SGApp..."
        # add VM to SecurityGroup
        Set-vShieldSecurityGroup -Add -Datacenter (Get-Datacenter $dataCenter) -SecurityGroup $SGApp -VM (Get-VM "$vCloudVM ($vCloudIDtrim)")
    }
    elseif ($vCloudVM -like '*db*') {
        Write-Host "Adding $vCloudVM to Security Group $SGDb..."
        # add VM to SecurityGroup
        Set-vShieldSecurityGroup -Add -Datacenter (Get-Datacenter $dataCenter) -SecurityGroup $SGDb -VM (Get-VM "$vCloudVM ($vCloudIDtrim)")
    }
}

    The output will be of the form:

    VM Name: www001
    vCloudID: urn:vcloud:vm:<UUID>
    Trimmed vCloudID: <UUID>
    Adding www001 to Security Group Web …

    ID : securitygroup-nn
    Datacenter : datacenter
Member : @{name=www001 (<UUID>); objectTypeName=VirtualMachine; objectId=<moref>}
    Description :
    Name : Web

    PowerCLI – Disable Host in vCloud Director and place host in Maintenance mode

    September 24th, 2012

Since I am heavily involved in a vCloud deployment, I have asked many, many VMware employees how we can make it easier for our operations staff to conduct maintenance on an ESXi server.  As you may or may not know, an ESXi host that is prepared and being used by vCloud Director should be disabled and all virtual machines migrated off prior to maintenance.  In order to accomplish this, a host must be disabled in vCloud Director, and then placed into maintenance mode in vCenter.  Two separate interfaces.

    I met Alan Renouf after his PowerCLI session at VMworld 2012, and asked him if he knew of a way to disable a host via PowerCLI.  And he did!  Alan has created a function to conduct enable and disable operations.  He gave me permission to include it in the following code I built as a wrapper to conduct the operation from a command line via PowerCLI.


You will need:

1. PowerCLI installed with the vCloud Director cmdlets.  This is an option at install time, and is disabled by default (for whatever reason).
2. vCloud Director (naturally)
3. My script.

First start PowerCLI.

    In order to connect to vCloud Director, first instantiate a connection via

    Connect-CIServer <vCloud Director cell>

    Start a session to the vCenter server that manages the vCloud pod via

    connect-VIServer <vCenter server>

Now run the script.  There are two options from the command line: -server <ESXi server name> and -state <enable/disable>.

    An example run would be: ./conductMaintenanceVCD.ps1 -server esxi001 -state disable

    Watch vCD and vCenter and be wowed.  Thanks again to Alan for creating the Disable-CIHost and Enable-CIHost functions!

param (
    [string]$server,
    [string]$state
)

## Enable/Disable-CIHost functions provided by Alan Renouf
Function Disable-CIHost {
    Param (
        $CIHost
    )
    Process {
        $Search = Search-Cloud -QueryType Host -Name $CIHost
        $HostEXT = $Search | Get-CIView
        # Disable the host in vCloud Director via the CIView's Disable method
        if ($HostEXT.Enable) {
            $HostEXT.Disable()
        }
    }
}

Function Enable-CIHost {
    Param (
        $CIHost
    )
    Process {
        $Search = Search-Cloud -QueryType Host -Name $CIHost
        $HostEXT = $Search | Get-CIView
        # Enable the host in vCloud Director via the CIView's Enable method
        if ($HostEXT.Disable) {
            $HostEXT.Enable()
        }
    }
}

# conduct work on input
Write-Host "Conducting $state operation on $server..."
if ($state -eq "enable") {
    $serverState = Get-VMHost $server
    if ($serverState.ConnectionState -eq "Maintenance") {
        Write-Host "Taking $server out of maintenance mode"
        $returnCode = Set-VMHost -VMHost $server -State Connected
        # sleep for 45 seconds for the host to exit maintenance mode
        Start-Sleep -s 45
    }
    Write-Host "Enabling host in vCloud Director"
    Enable-CIHost -CIHost $server
}
elseif ($state -eq "disable") {
    Write-Host "Disabling host in vCloud Director"
    Disable-CIHost -CIHost $server
    # sleep for 5 seconds for the host to disable in vCD
    Start-Sleep -s 5
    Write-Host "$server entering maintenance mode"
    $returnCode = Set-VMHost -VMHost $server -State Maintenance -Evacuate
}

    VMworld 2012 round-up: INF-VSP1196 What’s new with vCloud Director Networking

    August 29th, 2012

    VMware 2012 presentation INF-VSP1196: What’s new with vCloud Director Networking

This session discussed the new networking features of vCloud Director 5.1 (VMware decided to sync the version with the release of vSphere 5.1, jumping from 1.5.1 all the way to 5.1).

From the presentation content, the bulk of the changes focus on vShield Edge and VXLAN.  vShield is now bundled in two ways, Security and Advanced, and sold as Standard or Enterprise.  More will be discussed below about the changes, but in short the actual Edge VM is deployed in two sizes, with different supported features.

    New features of vShield Edge:

• Multiple interfaces, up to 10, are now supported with the Advanced bundle, up from 2.
• The appliance virtual hardware is now version 7.
• The appliance, as stated before, can be deployed as the compact or full version of Edge.  The major difference, according to the presentation, is support for higher throughput and an active/standby pair of Edge appliances.  I for one welcome the change, since the current instantiation of Edge only allowed a respawn of the device, which required an outage.
    • The Edge appliance can act as a DNS relay for internal clients.
    • External address space can be increased on the fly.
    • Non-contiguous networks can be applied to the external interface of the vShield Edge.
    • Ability to sub-allocate IP addresses to Organization vDCs.

With vCloud Director version 5.1, a new network object is available for use by Organizations: Organization vDC (virtual datacenter) Networks.  Whereas an Organization Network (OrgNet) is mapped to a single Organization, the new Org vDC Network can span multiple Org vDCs within an Organization.  The fellow glossed over the use case for this situation, and one does not easily come to mind at the moment.

VMware is also debuting something they call Service Insertion.  This is basically a new security API for 3rd party vendors to integrate their products directly into the networking stack.  Profiles can now be created based on services, and these profiles can then be applied to a Port Group of a Distributed Switch.  I do believe VMware is attempting to allow providers to create billing and a-la-carte models to generate income from their clients.  It will be interesting to see whether it is used only in Public offerings, or if private clouds offer it in a charge-back model.

    Edge can provide a DHCP service, available on isolated networks.  You now can use:

    • Multiple DHCP pools per edge device (necessary with 10 supported interfaces).
    • Single pool per interface.
    • No option for advanced features such as lease times.


NAT

• Rules can be applied to an interface.
• Rules can be arranged via a drag-and-drop interface, but they are evaluated top down.  The first matching rule causes an exit.
• Source NAT (SNAT) and Destination NAT (DNAT) support: TCP, UDP, TCP and UDP, ICMP or any.
• There are predefined ICMP types.


Firewall

• VMware is still trumpeting their Edge firewall as 5-tuple (5 different options for filtering, but it still isn’t all that great).
• Rules can be arranged via drag and drop.
• Logging per rule.
• Support for TCP, UDP, TCP and UDP.
• Cannot filter on ICMP types (ping versus traceroute).  I do believe it is all or nothing.

    Static Routing

    • VMware stated it is useful for routing between Org networks.  I think this use-case would be for far more advanced configurations.
    • Can be used for deep reach in vApp networks.  The current Edge device does support static routing even when using vCDNI, but the MAC in MAC encapsulation adds some serious latency to the connections.  I suspect VxLAN is to thank for this configuration to be better supported.


VPN

• IPsec or SSL site-to-site configuration, not for user remote access.
• Compatible with 3rd party software and hardware VPNs, since Edge is doing standard IPsec or SSL.  Nothing proprietary there.

    Load Balancer

    • Load Balance on HTTP, HTTPS or any old TCP port.
    • Can conduct a basic health check of the back-end servers with either a URI (except for HTTPS) or tcp port.
    • Configure pool servers and VIP.
    • Balance on IP Hash, URI or least connections.
    • NOTE:  The current version uses nginx.  I saw it not work even close to correctly with certain network configurations based around VCDNI.  Let’s hope it works better in this version.

    Virtual Service (Load balancing)

    • HTTP persistence can be configured to use cookies with insert feature.
    • HTTPS can use session IDs.
    • There is no persistence option for regular TCP ports.

    And now for the queen mother of all session topics: VXLAN.  Boiling it down, VXLAN allows for a layer 2 network, say, to exist live in two places at once.  Think 2 datacenters, or in this case, the Cloud.

    • Layer 2 overlay on a Layer 3 network .
    • Each overlay network is known as a VXLAN segment.
    • VXLAN identified by 24 bit segment ID, known as a VNI.
    • Traffic carried by VXLAN tunnel endpoints, known as VTEP.
      • ESXi hosts or Cisco Nexus 1000v can act as VTEP.
    • Virtual machines have no idea of the existence of VXLAN transporting their traffic.
    • VM to VM traffic is encapsulated in a VXLAN header.
    • Traffic on same portgroup is not encapsulated.
    • Here is the big kicker: multicast is required
      • Used for VM broadcast and multicast messages
      • In essence, a dedicated virtual Distributed Switch
      • Available vNIC and IP address per switch
  • Multicast addresses
      • Multicast configured on the physical network
    • Requires multicast end to end (all networking points between the VTEP).
    • Minimum MTU of 1600 (in the network).

    The technology sounds cool, is hopefully better than VCDNI, but the requirement of multicast may be a show-stopper to some people.

    VMware vCloud Networking Options

    May 7th, 2012

Having worked with VMware vCloud-based technologies for a few months, I’ve come to the conclusion that networking, and the automation glue required to make the magic happen, are the most important pieces of the stack.

    To get started, I’ll list out some terms, and then we’ll build from there.

    • VXLAN
    • External Network(s)
    • Organization Network(s)
    • Network Pools
    • VLAN-backed
    • vSphere port group-backed
• vApp

    Let’s start from the bottom and work our way up.

A vApp is not a networking technology, but a way to encapsulate an environment.  With it, we can create a three-tier stack, encapsulate it in a vApp, and then roll it out N times, all looking exactly the same.  One can also set start-up precedence (database VM starts first, app second, web third).  It’s great stuff.

    vSphere port group-backed networks are what you would traditionally use in a vSphere environment.  Create a Distributed Virtual Switch, and then create a port group.  vCloud Director can use port group-backed in many scenarios.  It is a simple way to get started by using known methods.

    VLAN-backed networks are a fun little way of defining a pool of VLAN’s (something like VLAN IDs 100-200).  Of course, it is necessary that the network team actually configure the VLAN ID’s on the network, and then assign them to the trunks for your ESXi servers.

vCloud Director Networking Infrastructure (VCDNI) is a method of creating private networks backed by a single physical VLAN on your network.  Once you get more involved in vCloud, it is one way to create vApp sandboxes in your environment.  In short, VCDNI uses MAC-in-MAC encapsulation.  Basically, it works by creating private VLANs (you will actually see the port groups attached to your vDS) and then stuffing that data inside a packet that can be carried on the physical VLAN.  Is the data private and secure?  From my experience, the answer is: sorta.  If your vApps are using VCDNI-backed networking and are attached to the same broadcast domain (the Org network), the machines can be hit by any host in that broadcast domain (and then, with the use of vShield Edge, you can ACL that; to be clear, the default rule on a vShield Edge device is deny ingress).  If you have vApps in different broadcast domains, they are protected from one another (at layer 2).  One kicker: your virtual Distributed Switch must have its MTU set to 1524 (if it was at the default of 1500) to allow for the larger header due to encapsulation.
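For reference, if you are on a PowerCLI build that includes the distributed switch cmdlets, bumping the vDS MTU is a one-liner; this is a sketch, and “dvSwitch01” is a made-up switch name:

```powershell
# Raise the vDS MTU to accommodate the VCDNI encapsulation header
Get-VDSwitch -Name "dvSwitch01" | Set-VDSwitch -Mtu 1524
```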

Is VCDNI good?  Yes.  Is VCDNI bad?  It probably could be argued by networking folks, since they technically do not control the allocation of networks, other than the physical VLAN VCDNI uses.  Is it the future?  Allegedly that is something else, called VXLAN.  Update: my opinion is that VCDNI is a path to create private networks in a rapid fashion with minimal interaction from the network team.  It works for now, but hopefully VXLAN will be better.

    Now that we have defined methods to transport the data, we will get in to the nomenclature of vCloud.

    Network Pools can either be defined by VLAN-backed, Network isolation-backed (VCDNI) or Port group-backed.  These pools are consumed by virtual datacenters to create vAPP networks.

    Organization Networks are assigned to an Organization virtual DataCenter.  There are multiple ways to define an OrgNetwork:

    • Direct connection:  This network is akin to a traditional port group-backed network in vSphere.  In short, it provides connectivity to LAN, WAN or Internet traffic.  It is tied to an External network and usually sits on internally routable RFC-1918 address space (most likely for private cloud) or Internet-routable address space for providers.
    • NAT-routed connection:  This connection allows for Network Address Translation (NAT) of External IP space to internal private networks.  The NAT-routed OrgNet is typically in RFC-1918 address space, however there are other cases.
    • Internal Organization network: This is strictly an internal network for the vApps to communicate with each other, but have no external network access.

    External Networks are port group-backed networks (defined in vCenter) that provide ingress and egress to the Cloud environment.  They should be routable networks, either RFC-1918 for private, or Internet routable for providers.

    vCloud – vShield Edge Deployment Failure

    February 21st, 2012

If you get errors during deployment of vApps in vCloud Director, specifically that vShield Edge (vse) devices cannot be deployed, with any of the following errors: “the host type is not supported” in vCenter, messages regarding port group UUIDs that do not actually exist in the environment, or similar activity details, then reboot your vShield Manager appliance.  I have found no VMware KB articles about the subject, but it has helped to clear any issues between vCenter, vCloud Director and vShield Manager.

    Place ESXi in to Maintenance Mode from vCloud Director

    February 21st, 2012

    So you have your handy dandy cloud built on top of VMware vSphere and vCloud Director. And then you find out you need to conduct maintenance on the host.  What to do?

    Easy!  Browse to:

    • System-> Manage & Monitor
    • vSphere Resources -> Hosts
1. Find the host you need to place into maintenance mode, right-click and select Disable Host.
2. At that point, the status will turn from a green circle with a check to a red circle.
3. Right-click on the host again and select Redeploy All VMs.
4. The ESXi host will go into maintenance mode in the vCenter server and evacuate all virtual machines as usual.
5. (Optional!) If you see vsla errors (such as the screenshot), or issues with deleting vApps, Unprepare the host, which removes the vCloud agent from ESXi.
6. (Optional!) Prepare the host for vCloud by pushing the vCloud agent back to ESXi.
7. When maintenance is complete, right-click and Enable Host.
8. And your work is complete!

    The Great Road Trip to the Cloud

    December 22nd, 2010

Cloud computing is one of the new buzzwords of the tech industry.  Everyone is jumping on the bandwagon.  The adoption of virtualization in the Enterprise has led to the rise of Cloud.  Cloud has even gone mainstream with Microsoft’s “To the Cloud” ad campaign.

I became interested in Cloud when I worked at a SaaS company.  At the time we had to graft three different environments together due to acquisition.  I started to think of a better way to standardize on an application server, operating system and platform.  In effect we were dealing with a 3x3x3x3x3x3x3 syndrome.  We had 3 different web servers, 3 application servers, 3 operating systems, 3 database platforms, 3 SANs, 3 networks and 3 sets of hardware.  It was painful.

I stumbled upon the blog of Don MacAskill from a service called SmugMug (http://www.smugmug.com).  He wrote about his version of SkyNet to elastically extend his environment to Amazon AWS.  Needless to say it was a turning point, sort of like when I heard Led Zeppelin I for the first time.

    A Short History

Virtualization is not new technology.  In fact, it has its roots in Mainframes.  The tech industry is a circular beast.  Central computing with dumb terminals gave way to distributed computing, client/server, and now a hybrid where data can be found on multiple hubs, with a combination of smart and dumb spokes.  The industry also realized that running a data center is not an easy task, and running multiple data centers incurs huge expense.  Thus, the rise of co-location.  Business realized it could be a cheaper proposition to pay someone else to do some of the dirty work (space, power, cooling, physical security), all the way up to a managed service.

Business then realized it was still booking Capital Expense (CapEx) and Operational Expense (OpEx) in dealing with the co-lo.  Servers were not being used as much as expected.  When growth hit unexpectedly, giant road blocks presented themselves: acquiring gear fast enough, finding space, and still staying within the original co-lo agreement.

    Virtualization nudges itself in to the equation because people realized that everything shouldn’t be focused on the application and an infrastructure that is a) expensive and b) underutilized.  If you want to focus solely on your application stack, you can now do that.  If you don’t want to go through CapEx to buy infrastructure, you can easily lease CPU time, in effect, from the cloud.


    So now you may ask yourself “what is cloud computing?”  Good question.  A good answer: It all depends on who you ask.

    I’ll give you my opinion on the state of Cloud.

    • Public
    • Private
    • Hybrid
    • SaaS
    • IaaS
    • PaaS
    • AaaS

    Public: The cloud is hosted by a third-party, somewhere on the Internet.

    Private: The cloud is hosted inside the firewalls of the business.

    Hybrid: A grafting of resources from Public and Private clouds, used to augment the infrastructure.  In short, if Public and Private are two circles in a venn diagram, their intersection is Hybrid.

SaaS: It could be argued that Software as a Service (SaaS) was the first of the new generation of infrastructure that begat cloud.  A person or business consumes a resource that is hosted, and possibly sold, by a third-party.  Twitter, Facebook and World of Warcraft all fall into this category.  The SaaS provider usually built their own web, application and database servers, storage and network.  Most likely at great cost.  The environment may have been self-hosted, or in a co-lo.

    IaaS: I believe technology developed by VMware has led to Infrastructure as a Service (IaaS).  I know IBM, Sun and HP have been doing virtualization for years, but only on high-end gear.  VMware was the mainstream player that rammed it down everyone’s throats.  Turning cheap x86 based servers in to powerhouses.  Servers went from scale out, to scale up/scale out configurations.  We need bigger, but less.  Short provision cycles, and chargeback models all help to turn IaaS in to a business generator, and less a budget black hole.  Amazon AWS is probably the biggest player in Public Cloud IaaS.

    PaaS: PaaS provides an infrastructure as a bundled stack, where infrastructure is abstracted and is presented as a consumable resource.  It seems to me that VMware’s vCloud Director is going to allow business to provision the private cloud, and sell resources to its internal, and external customers.

AaaS: I consider App as a Service to be a power-play by vendors.  They give application developers a fully abstracted platform, and expose certain pieces via API calls.  The users and developers on top of this platform do not care at all how the plumbing works, only that it does.  Google App Engine, Microsoft Azure and Salesforce are big players in this arena.  VMware and Red Hat are making in-roads with their latest purchases.


    The race to the cloud includes a tipping-point for business when consuming public-cloud resources becomes more expensive than building a private-cloud.  There are always use-cases for all the current cloud types I have listed.  Industry is trying to build partnerships to allow private cloud application stacks to migrate to public, and vice-versa.  The technology is not ready as of the end of 2010, but by mid-2011 I do believe we will see the beginnings of true migration paths to create Hybrid clouds to create active-active infrastructure.

    This blog post will be a living document as things change.  Stay tuned!