
    PowerCLI – Disable Host in vCloud Director and place host in Maintenance mode

    September 24th, 2012

    Since I am heavily involved in a vCloud deployment, I have asked many VMware employees how we can make it easier for our operations staff to conduct maintenance on an ESXi server. As you may or may not know, an ESXi host that is prepared and in use by vCloud Director should be disabled, with all virtual machines migrated off, prior to maintenance. To accomplish this, the host must be disabled in vCloud Director and then placed into maintenance mode in vCenter. Two separate interfaces.

    I met Alan Renouf after his PowerCLI session at VMworld 2012 and asked him if he knew of a way to disable a host via PowerCLI. And he did! Alan has created functions to conduct the enable and disable operations, and he gave me permission to include them in the following code, which I built as a wrapper to conduct the whole operation from the command line via PowerCLI.

    Requirements:

    1. PowerCLI installed with the vCloud Director cmdlets.  This is an option at install time, and is disabled by default (for whatever reason).
    2. vCloud Director (naturally)
    3. My script.

    First, start PowerCLI.

    In order to connect to vCloud Director, first instantiate a connection via

    Connect-CIServer <vCloud Director cell>

    Start a session to the vCenter server that manages the vCloud pod via

    Connect-VIServer <vCenter server>

    Now run the script. There are two command-line options: -server <ESXi server name> and -state <enable|disable>.

    An example run would be: ./conductMaintenanceVCD.ps1 -server esxi001 -state disable
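
    Putting the pieces together, a complete session would look something like this (the cell, vCenter, and host names here are hypothetical):

     # Connect to the vCloud Director cell and the vCenter server managing the pod
     Connect-CIServer vcd-cell01.example.com
     Connect-VIServer vcenter01.example.com

     # Disable the host in vCD and put it into maintenance mode
     ./conductMaintenanceVCD.ps1 -server esxi001 -state disable

     # ...conduct maintenance, then bring the host back...
     ./conductMaintenanceVCD.ps1 -server esxi001 -state enable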

    Watch vCD and vCenter and be wowed.  Thanks again to Alan for creating the Disable-CIHost and Enable-CIHost functions!

    param (
     [string]$server,
     [ValidateSet("enable","disable")]
     [string]$state
    )
    # Enable-CIHost and Disable-CIHost functions provided by Alan Renouf
    Function Disable-CIHost {
     Param (
      $CIHost
     )
     Process {
      # Look up the host in vCloud Director and get its admin view
      $Search = Search-Cloud -QueryType Host -Name $CIHost
      $HostEXT = $Search | Get-CIView

      # Disable the host in vCloud Director if it is currently enabled
      if ($HostEXT.Enabled) {
       $HostEXT.Disable()
      }
     }
    }
    
    Function Enable-CIHost {
     Param (
      $CIHost
     )
     Process {
      # Look up the host in vCloud Director and get its admin view
      $Search = Search-Cloud -QueryType Host -Name $CIHost
      $HostEXT = $Search | Get-CIView

      # Enable the host in vCloud Director if it is currently disabled
      if (-not $HostEXT.Enabled) {
       $HostEXT.Enable()
      }
     }
    }
    
    # Conduct work on input
    Write-Host "Conducting $state operation on $server..."

    if ($state -eq "enable") {

     $serverState = Get-VMHost $server
     if ($serverState.ConnectionState -eq "Maintenance") {
      Write-Host "Taking $server out of maintenance mode"

      $returnCode = Set-VMHost -VMHost $server -State Connected

      # Sleep for 45 seconds for the host to exit maintenance mode
      Start-Sleep -s 45
     }

     Write-Host "Enabling host in vCloud Director"
     Enable-CIHost -CIHost $server
    }
    elseif ($state -eq "disable") {
     Write-Host "Disabling host in vCloud Director"
     Disable-CIHost -CIHost $server

     # Sleep for 5 seconds for the host to disable in vCD
     Start-Sleep -s 5

     Write-Host "$server entering maintenance mode"
     $returnCode = Set-VMHost -VMHost $server -State Maintenance -Evacuate
    }
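
    To sanity-check the result from the same PowerCLI session, you can query both sides afterwards. A minimal sketch, assuming the vCD host query records expose an IsEnabled property (property names may differ across PowerCLI versions):

     # Confirm the vCD side shows the host disabled (IsEnabled property name assumed)
     Search-Cloud -QueryType Host -Name esxi001 | Select-Object Name, IsEnabled

     # Confirm the vCenter side shows the host in maintenance mode
     Get-VMHost esxi001 | Select-Object Name, ConnectionState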
    

    VMworld 2012 round-up: INF-VSP1196 What’s new with vCloud Director Networking

    August 29th, 2012

    VMworld 2012 presentation INF-VSP1196: What’s new with vCloud Director Networking

    This session discussed the new networking features of vCloud Director 5.1 (VMware decided to sync the version with the release of vSphere 5.1, jumping from 1.5.1 all the way to 5.1).

    From the presentation content, the bulk of the changes focus on vShield Edge and VXLAN. vShield is now bundled in two ways (Security and Advanced) and sold as Standard or Enterprise. More will be discussed below about the changes, but in short, the actual Edge VM is deployed in two sizes, with different supported features.

    New features of vShield Edge:

    • Multiple interfaces, up to 10, are now supported with the Advanced bundle, up from the previous limit of 2.
    • The appliance’s virtual hardware version is now 7.
    • The appliance, as stated before, can be deployed as the compact or full version of Edge. The major difference, according to the presentation, is support for higher throughput and an active/standby Edge pair. I for one welcome the change, since the current incarnation of Edge only allowed for a respawn of the device, which required an outage.
    • The Edge appliance can act as a DNS relay for internal clients.
    • External address space can be increased on the fly.
    • Non-contiguous networks can be applied to the external interface of the vShield Edge.
    • Ability to sub-allocate IP addresses to Organization vDCs.

    With vCloud Director version 5.1, a new network object is available for use by Organizations: Organization vDC (virtual datacenter) Networks. Whereas an Organization Network (OrgNet) is mapped to a single Organization, the new Org vDC Network can span multiple Org vDCs within an Organization. The presenter glossed over the use-case for this situation, and one does not easily come to mind at the moment.

    VMware is also debuting something they call Service Insertion. This is basically a new security API for 3rd-party vendors to integrate their products directly into the networking stack. Profiles can now be created based on services, and these profiles can then be applied to a port group of a Distributed Switch. I do believe VMware is attempting to allow providers to create billing and a-la-carte models to generate income from their clients. It will be interesting to see whether it is really used only in public offerings, or whether private clouds offer it in a charge-back model.

    Edge can provide a DHCP service, available on isolated networks. You can now use:

    • Multiple DHCP pools per edge device (necessary with 10 supported interfaces).
    • Single pool per interface.
    • No option for advanced features such as lease times.

    NAT

    • Rules can be applied to an interface.
    • Rules can be arranged via a drag-and-drop interface, but they are evaluated top-down. The first match wins.
    • Source NAT (SNAT) and Destination NAT (DNAT) supports: TCP, UDP, TCP and UDP, ICMP or any.
    • There are predefined ICMP types.

    Firewall

    • VMware is still trumpeting their Edge firewall as 5-tuple (filtering on source/destination IP, source/destination port, and protocol, but it still isn’t all that great).
    • Rules can be arranged via drag and drop.
    • Logging per rule.
    • Support for TCP, UDP, TCP and UDP.
    • Cannot filter on ICMP types (ping versus traceroute). I do believe it is all or nothing.

    Static Routing

    • VMware stated it is useful for routing between Org networks.  I think this use-case would be for far more advanced configurations.
    • Can be used for deeper reach into vApp networks. The current Edge device does support static routing even when using vCDNI, but the MAC-in-MAC encapsulation adds some serious latency to the connections. I suspect VXLAN is to thank for this configuration being better supported.

    VPN

    • IPsec or SSL site-to-site configuration, not for user remote access.
    • Compatible with 3rd party software and hardware VPN, since Edge is doing IPsec or SSL.  Nothing proprietary there.

    Load Balancer

    • Load Balance on HTTP, HTTPS or any old TCP port.
    • Can conduct a basic health check of the back-end servers with either a URI (except for HTTPS) or a TCP port.
    • Configure pool servers and VIP.
    • Balance on IP Hash, URI or least connections.
    • NOTE: The current version uses nginx. I have seen it fail to work even remotely correctly with certain network configurations based around vCDNI. Let’s hope it works better in this version.

    Virtual Service (Load balancing)

    • HTTP persistence can be configured to use cookies with the insert feature.
    • HTTPS can use session IDs.
    • There is no persistence option for regular TCP ports.

    And now for the queen mother of all session topics: VXLAN. Boiling it down, VXLAN allows a layer 2 network, say 192.168.100.0/24, to live in two places at once. Think two datacenters, or in this case, the Cloud.

    • Layer 2 overlay on a Layer 3 network.
    • Each overlay network is known as a VXLAN segment.
    • Each VXLAN is identified by a 24-bit segment ID, known as a VNI (roughly 16 million possible segments, versus 4,096 VLAN IDs).
    • Traffic is carried by VXLAN tunnel endpoints, known as VTEPs.
      • ESXi hosts or Cisco Nexus 1000v can act as VTEP.
    • Virtual machines have no idea of the existence of VXLAN transporting their traffic.
    • VM to VM traffic is encapsulated in a VXLAN header.
    • Traffic on same portgroup is not encapsulated.
    • Here is the big kicker: multicast is required
      • Used for VM broadcast and multicast messages
      • In essence, a dedicated virtual Distributed Switch
      • Available vNIC and IP address per switch
      • Multicast addresses
      • Multicast configured on the physical network
    • Requires multicast end to end (all networking points between the VTEP).
    • Minimum MTU of 1600 in the transport network (VXLAN encapsulation adds roughly 50 bytes of outer Ethernet, IP, UDP, and VXLAN headers to each frame, so standard 1500-byte guest frames no longer fit within a 1500-byte MTU).

    The technology sounds cool and is hopefully better than vCDNI, but the multicast requirement may be a show-stopper for some people.


    Quick and dirty method to mount BCV/Snapshot

    November 25th, 2011

    There are many blog posts about mounting BCVs (Business Continuity Volumes) or SAN snapshots; however, here is my method. It is a quick shell script to run on each ESXi server. Add it to your business operations manager, and create an ad-hoc method of mounting BCVs/snaps for a DR exercise.

    NOTE: Commands are in bold.

    Verify from the storage team that they have assigned the BCV/snap to your hosts

    SSH in to ESXi server (assumes you have all of the buttons pressed and knobs turned)

    Search for the BCV/snap volumes. Do: esxcfg-volume -l  Note: Be patient, this may take a few minutes.

    The output will be as follows:

    VMFS3 UUID/label: <Datastore UUID>/<Datastore label>
    Can mount: Yes
    Can resignature: Yes
    Extent name: naa.<device ID>     range: <size in MB>

    It is now possible to mount the volume manually via the Datastore UUID or Datastore Label. Do: esxcfg-volume -m <Datastore UUID -OR- Datastore Label>
    Note: This will conduct a force mount of the volume.
    If there are no powered-on VMs on that datastore, you can unmount it. Do: esxcfg-volume -u <Datastore UUID -OR- Datastore Label>

    To wrap this up into something that scans for any assigned BCVs/snaps and mounts them automagically, run the following:

    for volume in `esxcfg-volume -l | grep VMFS3 | awk 'BEGIN {FS="/"} ; {print $2}' | awk '{print $2}'` ; do echo "Mounting volume with UUID $volume" ; esxcfg-volume -m $volume ; done
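
    If you prefer something more readable than a one-liner, here is the same loop expanded into a short script. This is a minimal sketch that assumes the exact esxcfg-volume -l output format shown above:

     #!/bin/sh
     # Mount every unresolved VMFS3 volume (BCV/snap) presented to this host.
     # Pull the datastore UUID out of each "VMFS3 UUID/label:" line, then
     # force-mount it, exactly as the one-liner above does.
     esxcfg-volume -l | grep VMFS3 | awk 'BEGIN {FS="/"} {print $2}' | awk '{print $2}' |
     while read volume ; do
      echo "Mounting volume with UUID $volume"
      esxcfg-volume -m "$volume"
     done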


    VMworld session – VSP3305 – Upgrading to VMware ESXi 5.0

    August 29th, 2011

    Kyle Gleed presented a session Monday morning to cover tips, hints, and gotchas for the migration to ESXi 5.0. 5.0 will be the first version of vSphere to ship only with the stripped-down ESXi. No more service console. At work we conducted the migration from ESX 3.5 to ESXi 4.1 in preparation for the release of 5.0.

    Kyle stressed that scripted or command-line management of ESXi will be conducted via the esxcli command set, the vicfg commands, and PowerCLI; the old esxcfg commands have been deprecated. It is important to note that esxcli commands can be run either locally on the ESXi instance or via the remote CLI suite.
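
    For example, the same query can be run from the ESXi shell or from a vCLI workstation (the host name below is hypothetical):

     # Locally, from the ESXi shell
     esxcli system version get

     # Remotely, via the vCLI suite (it will prompt for the password)
     esxcli --server=esxi001.example.com --username=root system version get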

    If you are planning on upgrading your hosts in place to 5.0 (as opposed to a full install), you must be running ESX 4.x. NOTE: You can upgrade from ESX 4.x to ESXi 5.0 easily! You cannot upgrade from ESX 3.5 directly to 5.0.

    One important thing to note is that you must have 5 GB allocated for your boot device, be it SAN or local disk. The ESXi installer partitions off the first 1 GB for the OS, and the next 4 GB for scratch space.

    The usual upgrade path, vCenter to 5.0, followed by ESXi to 5.0, and then VMware Tools and/or virtual hardware, matches the steps taken going from 3.5 to 4.x. ESXi 5.0 also introduces VMFS-5, which according to VMware is a hot upgrade that does not affect running virtual machines.