I Really Need to be Scared More

In college I was always the guy in the group who would volunteer to do more of the “real work” so that I wouldn’t have to speak in front of the class. The fear of speaking in front of people was so pervasive, I would take a lesser grade just to get out of it. Although I’m closing in on my forties and have learned much, that fear has never really left me…

A new pin to add to the collection

Yet when I think back to some of the most amazing events in my life, there has often been an element of fear to them. I’m not talking primal, afraid-for-my-life scared (in most instances), but the fear of the unknown. The one where doubt and uncertainty seep into your thoughts. The one that’s not quite terror, but something that gets the fight-or-flight adrenaline going. Like…

  • That time when a very good friend was visiting VT and she, my wife, and I came up with the (adult beverage enhanced) idea to go skydiving. When the next day came I thought for sure that we’d all bail. I would have if not for two reasons. A- The ladies went first. B- I got shoved out of the plane. Until the day I die, I will never forget the image as I rolled onto my back and watched that plane fly away from me. There is no doubt in my mind that I’ve never experienced a physical event that was as exhilarating as that one.
  • That time I decided to change jobs…  More than once I’ve been told that I’m an enigma. I love comfort, but if I feel that I’m getting complacent, I get an itch to move. That doesn’t mean these changes come without a lot of sleepless nights and self-doubt. Each and every time has been an enlightening and enriching experience.
  • That time I became a Dad… I’m not sure this one requires an explanation. It’s the biggest, toughest, scariest job I’ve ever taken on. The day we left the hospital, I’m pretty sure I drove home at about 15 MPH. My dad said that the train of cars behind us was epic. I didn’t notice because I was staring straight forward, hands white-knuckled at 10&2 on the steering wheel.

Beyond fear, what all of these moments have in common is that they’ve shown me something about myself. They’ve reinforced my self-worth and they’ve invigorated me to do more. In every case I’ve found myself inspired by our humanity and our ability to help each other. Overcoming that fear can force you to acknowledge, sometimes against all that you hold true, that you’re capable of incredible feats.

Which brings us back to the present. For some unknown reason (perhaps it was the KBS, eh Callahan?) I thought that I could impart something of myself to my peers by presenting at a major technology conference. For some reason I thought that I would be able to bring some value to individuals by giving a touch of myself. I’m pretty sure that I knew I’d get rejected, which to be 100% honest was part of what convinced me to actually go through with submitting my proposal. Imagine my chagrin when we actually got accepted to speak… to present… to stand up in front of my peers… to expose myself to their critiques. Hundreds of them. HOLY $H!T!

Susan could tell you, I poured myself into preparing for VMworld 2017, but the doubts persisted. Even after I did OK in my first vBrownBag, I wasn’t sure what was going to happen. I arrived in the hall with plenty of time to spare. There was no other way, as I’m someone who needs to be prepared, but… being there and watching people continue to file in… Did I mention HOLY $H!T!

And then it was time. When there are hundreds of people watching you, not to mention that big TV camera, well, there’s nothing you can do but get on with it. I’ll let you be the judge of my performance, but that’s not what this post is about. It’s about the experience, and how you learn and grow from it. In the end my experience was amazing. It certainly was fear-inducing, but like many things, once you get past the first few minutes the situation normalizes and you have no choice but to try and do your best. Focusing not on the pitfalls, but on the job at hand, can be a key element to overcoming that fear.

I think we did pretty well, but even more important, we helped some people. My favorite part of our presentation was answering questions and talking with folks afterwards. Being able to share and connect with other people was pretty humbling. I’ve worked on some decent projects and I’ve had my fair share of success professionally. However this experience was more like skydiving: I can assure you that I have never been so jacked up after a professional experience as I was after our presentation.

So despite the fear and trepidation, it seems that scaring myself has been a pretty solid way to grow as a person, and now as a professional. Our natural instinct is to run from threatening situations, but if you stay and face them, the possibility for growth can be mind-blowing.

With that, it’s time to sign off. I’ve got another presentation to prepare for…


VM life-cycle with PowerCLI

I’ve heard many authors talk about how, through the writing process, the story or the characters change. I guess this is one reason titles aren’t decided on until the very end of the writing process. When I submitted my session proposal for “No Money, No Problem!” I had originally planned on writing about using PowerShell/PowerCLI as an automation/orchestration engine. I’ve learned over the years that it “may” not be the best idea to fight the muse. During the writing process for this presentation I followed where the muse led, and in the end it became much more about the potential ways that you can automate the life-cycle of a VM. I’m hopeful that if you attended the talk, it was still useful for you despite the slight pivot. Fifteen minutes is not a lot of time for a technical talk, so this post is a deeper dive into the content from my VMTN presentation. So without further ado….

Day 1

The activities on Day 1 are all about configuring the environment. The example I chose to use in my session was setting up a vDS. You can just as easily apply the same logic to something like setting up iSCSI datastores, clusters or the like. The overall premise is that by leveraging PowerCLI you can speed up the delivery of your environments while delivering higher-quality infrastructure. Let’s take a quick peek at how this is done in the context of delivering the elements necessary to run a Distributed Switch. Stripping out extraneous code and comments, what you’re left with is a five-liner that will give you a Distributed Switch.

$DC=Get-Datacenter -Name "NoProblem"
New-VDSwitch -Name "vds_iscsi" -Location $DC -LinkDiscoveryProtocol LLDP -LinkDiscoveryProtocolOperation "Both" -Mtu 9000 -MaxPorts 256

New-VDPortgroup -Name "pg_iscsi_vlan5" -VDSwitch $(Get-VDSwitch -Name "vds_iscsi") -VlanId 5 -NumPorts 16

Add-VDSwitchVMHost -VMHost esx-06a.corp.local -VDSwitch vds_iscsi
$pNic=Get-VMHost esx-06a.corp.local|Get-VMHostNetworkAdapter -Physical -Name vmnic1

Get-VDSwitch -Name "vds_iscsi"|Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic $pNic -Confirm:$false

So we pretty immediately get to the heart of why I’m such a PowerCLI advocate with this example. When we look at a command like “New-VDSwitch” it’s pretty intuitive what’s going on; we are creating a new vDS. It kind of doesn’t make sense to go through the plethora of switches/options as they are highly dependent on the situation. That being said there are a couple of items I’d like to call out in this example.

  1. PowerShell allows you to run a command in-line and use the resulting object directly as a parameter value via the subexpression operator. That’s what you see happening here, where the Get-VDSwitch call is wrapped by $(…):
    $(Get-VDSwitch -Name "vds_iscsi")
  2. The power of the pipeline. By using this powerful tool you can string multiple commands together to create complex actions in a very small amount of real estate.
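Both ideas are easy to play with outside of vSphere; here’s a tiny standalone sketch in plain PowerShell (no PowerCLI connection required):

```powershell
# Subexpression: $() evaluates a command in-line and hands its result to the string
Write-Output "Today is $((Get-Date).DayOfWeek)"

# Pipeline: each cmdlet's output feeds the next, composing a complex action on one line
Get-Process | Sort-Object WorkingSet -Descending | Select-Object -First 5 Name, WorkingSet
```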

Day 2

New VMs

It’s my belief that if folks do nothing else but automate the provisioning of VMs, then they can deliver immense value to their organizations, and do it quite quickly. In the code below we create an OSCustomizationSpec, leverage that within a temporary spec with static IP addressing, which is then leveraged in the New-VM example. This is a pretty basic example, but you can take it as far as you’d like. In a previous role this simple VM deployment evolved and became the 1700-line basis for our automated deployments.

$DomainCred=Get-Credential -Message "Enter Domain Admin credentials" -UserName "corp.local\Administrator"
New-OSCustomizationSpec -Name Win2k12 -Domain "corp.local" -DomainCredentials $DomainCred -Description "Windows 2012 App Server" -FullName "PatG" -OrgName "VMware" -ChangeSid
Get-OSCustomizationSpec "Win2k12" | New-OSCustomizationSpec -Name "Win2k12_temp" -Type NonPersistent
for ($i = 1; $i -le 4; $i++){
  Get-OSCustomizationNicMapping -OSCustomizationSpec "Win2k12_temp" | Set-OSCustomizationNicMapping -IpMode UseStaticIP -IpAddress "192.168.10.10$i" -SubnetMask "255.255.255.0" -DefaultGateway "192.168.10.1" -Dns "192.168.10.10"
  New-VM -Name "WinApplication0$i" -Template "base-w12-01" -OSCustomizationSpec "Win2k12_temp" -ResourcePool $(Get-Cluster "MyCluster")
}

Walking through this example line by line

Line 1: We enter domain credentials and store them in a PSCredential object for use later on.

Line 2: Using the New-OSCustomizationSpec we create a base OS Customization spec, which is used in …

Line 3: We create a temporary OS spec which we’ll leverage in the customization and deployment of our VMs. All of this, however, is just laying the groundwork for…

Line 5: Within the loop we take the previously created temporary OS Spec and we customize it for use with the …

Line 6: We get to the meat of the matter, where we deploy a VM using the New-VM cmdlet and our newly created and updated temp OS spec to configure Windows for us.

New VMs – Linux from JSON

While the previous example will simplify matters, it’s not exactly the prettiest code, not to mention the fact that values are hard-coded. If you want to start taking your automation to the next level, you have to be able to accept inputs in order for the code to be more portable. Thanks to PowerShell’s ability to interpret JSON (as well as XML and a host of other formats) we can simply read in the desired configuration and somewhat dynamically create the VM. If you want to include splatting you can go even further with your abstractions, but that’s a post for another day.

$InputFile = "c:\temp\linux_servers.json"
$Servers = $(Get-Content $InputFile -Raw | ConvertFrom-Json).Servers

foreach ($Server in $Servers)
{
    New-VM -Name $Server.Name -ResourcePool $Server.Cluster -NumCpu $Server.CPU -MemoryGB $Server.Mem
}
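For reference, the loop above expects a file shaped something like this. The server names and sizes here are purely hypothetical; only the Name, Cluster, CPU, and Mem properties are actually consumed by the script:

```json
{
  "Servers": [
    { "Name": "LinWeb01", "Cluster": "MyCluster", "CPU": 2, "Mem": 4 },
    { "Name": "LinWeb02", "Cluster": "MyCluster", "CPU": 2, "Mem": 4 },
    { "Name": "LinDb01",  "Cluster": "MyCluster", "CPU": 4, "Mem": 16 }
  ]
}
```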

The other Day 2 activity I chose to highlight is reporting. After all, how will you know about the performance and capacity of the environment if you aren’t taking its pulse? Thanks to the kind folks at VMware, statistics are exposed for use via the Get-Stat cmdlet, which is the star of this example.

# Reporting window and results collection (the 7-day window is an example; adjust to taste)
$start  = (Get-Date).AddDays(-7)
$finish = Get-Date
$myCol  = @()

$objServers = Get-Cluster Demo | Get-VM
foreach ($server in $objServers) {
    if ($server.guest.osfullname -ne $NULL){
        if ($server.guest.osfullname.contains("Windows")){
            $stats = Get-Stat -Entity $server -Stat "cpu.usage.average","mem.usage.average" -Start $start -Finish $finish

            $ServerInfo = "" | Select-Object vName, OS, Mem, AvgMem, MaxMem, CPU, AvgCPU, MaxCPU, pDisk, Host
            $ServerInfo.vName  = $server.name
            $ServerInfo.OS     = $server.guest.osfullname
            $ServerInfo.Host   = $server.vmhost.name
            $ServerInfo.Mem    = $server.memoryGB
            $ServerInfo.AvgMem = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "mem.usage.average"} | Measure-Object -Property Value -Average).Average)
            $ServerInfo.MaxMem = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "mem.usage.average"} | Measure-Object -Property Value -Maximum).Maximum)
            $ServerInfo.CPU    = $server.numcpu
            $ServerInfo.AvgCPU = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "cpu.usage.average"} | Measure-Object -Property Value -Average).Average)
            $ServerInfo.MaxCPU = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "cpu.usage.average"} | Measure-Object -Property Value -Maximum).Maximum)
            $ServerInfo.pDisk  = [Math]::Round($server.ProvisionedSpaceGB,2)

            $myCol += $ServerInfo
        }
    }
}

$myCol | Sort-Object vName | Export-Csv "VM_report.csv" -NoTypeInformation

In the example above we simply iterate through the cluster and obtain statistics on our Windows VMs via the aforementioned Get-Stat cmdlet. Next we store all of the information we care about in the $ServerInfo custom object and add it to the $myCol array, which is ultimately what’s used for output at the end of the code snip.

I do want to take a moment to break down what’s happening in the calculation lines, as they can be a little off-putting if you don’t know what’s happening there. So let’s take the following line and break it down piece by piece.

$("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "mem.usage.average"} | Measure-Object -Property Value -Average).Average)

$(…) The subexpression operator. PowerShell evaluates whatever is inside the $() first and lets us use the result in-line, just as if it were stored in a variable.
“{0:N2}” The format operator. In this case we’re formatting the value that results from the command that follows “-f”. In this specific instance I chose to keep two digits to the right of the decimal place, which is what the N2 indicates.

The command yielding our number just shows the amount of fun you can get into with pipelines. Starting just to the right of the “-f” operator, we take our $stats object and pare it down using the Where-Object cmdlet. Those values are then piped into Measure-Object, which in this particular case simply calculates the average of the desired values in the object.
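If the format operator is new to you, it’s easy to experiment with on its own at a PowerShell prompt:

```powershell
# {0} refers to the first value after -f; the N2 specifier keeps two decimal places
"{0:N2}" -f 87.6543             # → 87.65
"Average CPU: {0:N2}%" -f 12.3  # → Average CPU: 12.30%
```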

After all that is said and done, we can use the ever-handy Export-Csv to come up with a pretty CSV for the powers that be, showing just how efficiently your environment is humming along!

D-day

Just like this blog post, and the presentation it supports, all good things must come to an end. And so it is with infrastructure as well. In our final example, we use the metrics from the all-powerful and omniscient vRealize Operations Manager. It should probably come as no surprise to you that the metrics stored in the all-knowing vROps server can be exposed to you via PowerCLI. If you’ve used or tested vROps, I’m guessing one of the first things you checked out was the “Oversized VMs” report. We’ll use one of the statistics that make up this report in this last code snip:

$cred = Get-Credential
Connect-OMServer -Server "Demo" -Credential $cred -AuthSource "ADdomain"

$when = $(Get-Date).AddDays(-20)
$vkey = "Summary|Oversized"
$threshold = 0.75

foreach ($vm in $(Get-VM | Select-Object -First 33)){
    $vrstat = $vm | Get-OMResource
    $avg = $vrstat | Get-OMStat -Key $vkey -From $when | Select-Object -ExpandProperty Value | Measure-Object -Average

    Write-Host $vm.name, $avg.average
    if($avg.Average -gt $threshold){
        Write-Host $vm.name, $avg.average
        if($vm.PowerState -eq "PoweredOn"){
            Stop-VM -VM $vm -Confirm:$true -WhatIf:$true
        }
        Start-Sleep 3
        Remove-VM -VM $vm -DeletePermanently -RunAsync -Confirm:$true -WhatIf:$true
    }
}

Starting to work with the vROps cmdlets (contained within the VMware.VimAutomation.vROps module) has a little bit of a learning curve associated with it, so we’ll break down some of the key elements on a line-by-line basis again.

Line 2: vROps is a separate entity from vCenter, so we need to use the Connect-OMServer cmdlet to connect. One thing that is pretty poorly documented, and which may trip you up, is the authentication model this cmdlet uses. If you are using domain credentials, you want to use your short name plus the display name that you set up in vROps as the domain authentication source.

Line 9: In this case I chose to pass a VM object into Get-OMResource, but you can just as easily use the -Name parameter. Get-OMResource simply returns the vROps object that we’ll use with…

Line 10: Get-OMStat. The Get-OMStat cmdlet is where you actually start returning metrics out of vROps. In this case I’m using the “Summary|Oversized” statistics key. There are literally thousands of keys that you can leverage; I’d suggest perusing Mr. Kyle Ruddy’s post on this subject here. For the purposes of this very simple example I figured I’d use an average of the data returned over time to see if this machine is oversized and therefore a candidate for removal. Obviously in a real situation you’d want a lot more logic around an action like this.

 $vrstat|Get-OMStat -Key $vkey -from $when|Select-Object -ExpandProperty Value|Measure-Object -Average 

Breaking down the command piece by piece: we take the $vrstat object and pipeline it into Get-OMStat, where we narrow down the results by key using the previously defined $vkey variable, as well as the 20-day window defined in the $when variable. I’m only interested in the actual values stored within the results, so we pipeline through Select-Object -ExpandProperty Value to pull just the data we want. Lastly we use Measure-Object to get an average for use in our simple example.
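If you’d rather browse the available keys yourself, the vROps module also ships a Get-OMStatKey cmdlet. This sketch (the VM name is a hypothetical placeholder) lists a sample of the keys attached to a resource:

```powershell
# Look up the vROps resource for a (hypothetical) VM and sample its stat keys
$resource = Get-OMResource -Name "WinApplication01"
Get-OMStatKey -Resource $resource | Select-Object -First 20
```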

Line 13: A simple if statement checks whether we’ve crossed our pre-defined threshold. If we have, we move on to our D-day operations; otherwise the script moves on to the next VM.

Line 16: You can’t delete a VM that’s powered on. Since we are deleting this VM anyway, there’s no need to gracefully shut down, so we just power it off.

Line 19: So sorry to see you go, VM. Remove-VM does exactly what it sounds like. If we omit the -DeletePermanently parameter we’ll simply remove the VM from inventory; in this case we want it removed from disk as well, so the parameter is included. Lastly, we don’t want to wait for the remove operation before moving on to our next victim, so the -RunAsync parameter tells our script not to wait for a return before moving on.
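One side effect of -RunAsync worth knowing: the cmdlet returns a task object, so if you ever need to know when a removal actually finished, you can hold onto the handle and wait on it later (a small sketch; $vm is the loop variable from above):

```powershell
# Kick off the removal without blocking, keeping the task handle
$task = Remove-VM -VM $vm -DeletePermanently -RunAsync -Confirm:$false

# ...do other work, then block until the removal completes
Wait-Task -Task $task
```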

NOTE: I don’t know if anyone will use this code or not (I surely hope it helps someone), however just in case you do, I’ve set -Confirm and -WhatIf to $true so that you don’t have any OOPS moments. Once you’re comfortable with what’s happening and how it’ll affect your environment, you can set these to fit your needs.

As I said at the outset, I hope you found this talk and post useful. I plan on doing a couple of deeper dives into some of the above topics, so if you’re still reading I appreciate your follows.

Lastly, I’d like to offer up a huge thanks to the good folks at VMTN and vBrownBag for the opportunities they offer people like me and you. If you find this interesting or feel that you have a voice to contribute, please do! A vibrant community depends on engaged people like you.

Thanks for reading, see you soon.

SD

T-minus…

In 10 days I board a jet plane for VMworld (HOLY CRAP!), which means the excitement is starting to ramp up. There are meetings, demos, events and of course parties to plan for, but how you approach a major conference is something that is very particular to the individual, and that’s what I’d like to spend a few minutes discussing today. There are about as many ways to approach a major conference as there are attendees. Really? No, but let’s just pretend so that I can share a few lessons learned from my conference experiences.

The Lab Guy

My first major conference, I spent the majority of my time sitting in the lab soaking up as much hands-on experience as I could. I would go hit the expo floor, grab a snack and an adult beverage, hide said adult beverage and hit the lab for hours. Not to say it wasn’t valuable, but it damaged my spirit a little when, late in the conference, I learned that all of the labs would be available online after the conference ended… Oof.

After hearing this bit of spirit-breaking news, I learned that there is a really valuable reason to be in the labs: the guided sessions. Any time you can sit down with an expert and pick their brain while gaining hands-on experience, well… that’s just a win right there.

The pros to this are pretty obvious: you get to spend dedicated time learning, which is never a bad thing. But the fact that HOL will have the labs available on-demand after VMworld Europe diminishes the value somewhat.

The Expo/Party Guy

These strange beasts are very closely related to the Party People. Tech conferences are fun. There is a lot of beer and a lot of free stuff. But there’s always the person who devotes themselves strictly to these endeavors. A lot of really great information can be gleaned off the floor, but at a price… the dreaded badge scanners! If you’re ok with that, then you have a really great opportunity to learn about emerging tech.

Now the expo floor is great, and I have my fair share of headaches/swag to show for it. However, some folks make the floor the primary objective of their conference. The expo floor should be one tool in your conference bat-belt, but if it’s your bat-mobile… maybe it’s being overdone a little bit.

Breakdancers

Yeah ok, that sub-title is horrible, but I couldn’t think of a fun (and appropriate) way to label the folks who are all breakouts, all the time. This one is usually me. Attending conferences isn’t cheap, even if it isn’t coming directly out of your pocket. I usually want to return some value to my sponsor and that historically has taken the form of trying to take away as much directly applicable knowledge as possible. Next to the labs, this might be the most effective way to soak up as much technological knowledge as possible. In my mind, Breakouts are the meat and potatoes of a conference, so it’s hard for me to find a downside here. But like all things, including meat and potatoes, take it in moderation.

Contributors

“Active Participators” may be another way to frame this and it’s a new one for me. At Dell/EMC world 2017, I made it my mission to blog as much as possible. Going into VMworld 2017 I’m really making it my mission to get involved as much as I can. There are so many events that you can get involved with, I’d urge you to get out there and broaden your horizons a bit. On top of the parties located on the gatherings page, you can find opportunities to play games, get into the hackathon, blog in the village and a whole host of other activities.

Whatever your approach, I hope you find the right balance of activities to make your conference amazing. See you in Sin City!

PS: If you need more things to do, come check out my sessions.

Dell EMC World – Day 2 VMware day!

General Session – Realize Transformation

End User transformation

The focus seems like it’s going to be on the end-user space: how are we going to enable (and secure) our workforce in 2017? It looks like we are going to have some solid insights into where Dell is looking to go in the personal-device space.

New product announcement: wireless laptop charging! I’ll take two. Coming June 1st.

95% of all breaches start at the endpoint. OOF.

Nike and Dell are working together on some really amazing tech. Dell Canvas allows users to have a much more tactile experience when designing. It’s going to be a very niche product, but really, really cool.

Dell is projecting AR/VR to be a $45B business by 2025. It’s pretty obvious they’re going to go after this space. AR/VR is also a big focus on the solution floor. Daqri & Dell are partnering to come up with some interesting solutions in this space and hopefully using their scale to drive cost downward.

IoT and grocery. I know some people who might be interested in this part of the presentation. Grocery and supermarkets have a lot of capabilities with how they store products, but they typically just set it and forget it with their thermostats in the freezer & cold cases. Using IoT to track where your products are allows you to fine-tune thermostat controllers and realize real energy & waste savings. Grocery is just one use case, but the idea translates to other verticals. Dell has created a new Open initiative called EdgeXFoundry to start setting standards for the various IoT functions that happen at the edge.

VMware – Realize What’s Possible

My favorite part of the general session. It’s fanboy time. Here comes Pat Gelsinger.

Where are we headed… Technology is magic, or has the ability to create magic. We’ve seen this from mainframes to client/server to cloud, and IoT and the edge are the next frontier, but it’s happening now.

LAUNCH ALERT: VMware Pulse IoT Center. Centralized management/security/operation of the network of IoT. Built on AirWatch/vROps/NSX.

 

Just like yesterday it appears that VMware has finally realized that their public cloud offerings … let’s just say they haven’t gone well. They are skipping to next gen of managing the devices at the edge and looking forward to Mobile Cloud.

Workspace ONE: make it simple for the consumer, but secure the enterprise. Seems like an overlap in the portfolio. How do ThinApp & App Volumes play into this? Regardless, VMware is taking a stronger focus on EUC this year.

Announcement time: VMware VDI Complete. Client devices from Dell, converged infrastructure, and vSphere. It’s VDI in a box. Super sweet! Oh, and here comes Sakac running on stage hooting. Awesome.

Cross-cloud architecture. Finally we are getting somewhere. Don’t do the cloud, enable it! At last we get to see VMware Cloud on AWS! vRA is up next. Please just start giving vRA away! To go faster and compete with the public cloud, we need the tools. It’s a loss leader!

Announcement: VMware and Pivotal are announcing a collaboration to come up with a developer ready app platform with a focus on cloud native/serverless/micro-services/function.

Pivotal Cloud Foundry works with the most powerful cloud providers, enabling Dev and IT to get to market faster, delivering value and time back to the business. It’s taken a couple of years to get there, but it seems like VMware has finally got a good handle on its micro-services & cloud portfolio. Today’s presentations make it really exciting to see where we’re going.

What a week!

What a crazy week it’s been. It all started off with a little swim to help some awesome folks…

And they did it!!!!! So proud of our plungers! #penguinplunge #specialolympicsVT

A post shared by NorthCountry FCU (@northcountryfcu)

Followed by a climb to one of Vermont’s highest peaks with some friends


And *I thought* bookended by one of the greatest football games of all time.

BUT then today, I was honored to find out that I’ve been awarded #vExpert status from VMware.


With all the uncertainty in the world these days I thought that I was going to sit down and hammer out some “Deep Thoughts by Jack Handey” type proverbs. But perhaps what we (and I really mean me) need these days is a little more appreciation and gratitude.

I’m so happy that I got to help the Special Olympics in some meager way. If you’d like to help as well, please visit https://specialolympicsvermont.org/. Despite getting my derriere kicked climbing up Camel’s Hump, I’m appreciative for the friendship that brought me there and the beauty that we experienced. I am happy for the simple joys in life, like rooting on my favorite team and celebrating being an underdog. I’m thankful for my career and the opportunities it’s brought me. And last but almost certainly not least, I am appreciative for my family who’ve supported and encouraged me through all these endeavors.

I hope that you find the same joy and appreciation in those things that matter most to you.

Be well,
Scott

Another day, another PowerCLI report

Another day another reason to love PowerShell.

I have to come up with a list of all of my Windows machines, their OS versions and editions. My first thought, being nearly 100% virtualized, is “WooHoo, thank you PowerCLI”…

Except that they don’t include the edition for each VM… Sad face.


However, one of my favorite elements of the PowerCLI tool is the Invoke-VMScript cmdlet contained within the VMware.VimAutomation.Core module. For more about modules, see my post Getting Started with PowerCLI. This cmdlet does exactly what it sounds like: it allows you to run a script in the guest OS. Now there are obviously a number of pre-requisites to leveraging this tool. The big ones are as follows:

  • VMtools must be running within the VM
  • You must have access to vCenter or the host where the machine resides
  • You must have Virtual Machine.Interaction.Console Interaction privilege
  • And of course you must have the necessary privileges within the VM.

There could also be some security concerns with allowing your VMware administrators the ability to run scripts within the virtual Operating System Environment, but that opens a whole other can of worms that we’ll put aside for another conversation.

Once you’re comfortable with the pre-reqs and any potential security elements, you can get started.

Get-VM vm-vm |
Invoke-VMScript -ScriptType Powershell -ScriptText "gwmi win32_Operatingsystem"

So what are we doing here? We get the VM object and pipe it to the Invoke-VMScript cmdlet, where we run the PowerShell script “gwmi win32_Operatingsystem” within the context of the virtual OSE! What you get back is another PowerShell object containing the ExitCode of the script and the output within the ScriptOutput property.

Now just a quick side note. If you write PowerShell scripts, then inevitably you know about Get-Member (aliased to gm), but that only shows you methods and properties, not the values. If you’re not sure what you’re looking for and you’d like to see all the property elements of the object, you can just use $ObjectName | Select-Object -Property * to output them.
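A quick way to see the difference between the two at the prompt:

```powershell
# Get-Member shows the shape of the object: property names and types, but no values
Get-Date | Get-Member -MemberType Property

# Select-Object -Property * dumps every property along with its current value
Get-Date | Select-Object -Property *
```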

Back to the task at hand, I know I need a count of each OS type. I’d also ideally like that broken down by cluster. It would also be nice to know the machines that weren’t counted, so I can go and investigate them manually. So here we go.

$daCred=$host.ui.PromptForCredential("Please enter DA credentials","Enter credentials in the format of domainname\username","","")
foreach($objCluster in Get-Cluster){
    write-host "~~~Getting Window OS stats for $objCluster~~~"
    $arrOS=@()
    foreach ($objvm in $($objCluster|get-vm)){
        if($objvm.guestid.contains("windows")){
            $status=$objvm.extensiondata.Guest.ToolsStatus
            if ($status -eq "toolsOk" -or $status -eq "toolsOld"){
                $arrOS+=$(Invoke-VMScript -VM $objvm -ScriptType Powershell -ScriptText '$(gwmi win32_operatingsystem).caption' -GuestCredential $daCred -WarningAction SilentlyContinue).ScriptOutput
            }else{
                Write-Host "Investigate VMtools status on $($objvm)   Status = $status" -BackgroundColor Red
            }
        }
    }
    $arrOS|group |select count, name |ft -AutoSize -Wrap
    Write-Host
}

You may say, “What’s happening here?” Let me tell you.

After we enter credentials that we know will work, we iterate through each cluster, building an array of each OS that we find along the way. As we iterate through each VM in the cluster we check the VMtools status, and if necessary flag the VMs to investigate later. We capture the output of Invoke-VMScript in a variable so that we can keep only the returned ScriptOutput property in our array. Finally, we do a little sorting and counting on the array, output to the screen, and go investigate why we have so many darn red marks dirtying up our screen!

image002

Until next time, be well!