I Really Need to be Scared More

In college I was always the guy in the group who would volunteer to do more of the “real work” so that I wouldn’t have to speak in front of the class. The fear of speaking in front of people was so pervasive, I would take a lesser grade just to get out of it. Although I’m closing in on my forties and have learned much, that fear has never really left me…

[Image: A new pin to add to the collection]

Yet when I think back to some of the most amazing events in my life, there has often been an element of fear to them. I’m not talking primal, afraid-for-my-life scared (in most instances), but the fear of the unknown. The one where doubt and uncertainty seep into your thoughts. The one that’s not quite terror, but something that gets the fight-or-flight adrenaline going. Like…

  • That time when a very good friend was visiting VT and she, my wife and I came up with the (adult beverage enhanced) idea to go sky-diving. When the next day came I thought for sure that we’d all bail. I would have if not for two reasons. A- The ladies went first. B- I got shoved out of the plane. Until the day I die, I will never forget the image as I rolled onto my back and watched that plane fly away from me. There is no doubt in my mind that I’ve never experienced a physical event that was as exhilarating as that one.
  • That time I decided to change jobs…  More than once I’ve been told that I’m an enigma. I love comfort, but if I feel that I’m getting complacent, I get an itch to move. That doesn’t mean that these changes don’t come without a lot of sleepless nights and self-doubt. Each and every time has been an enlightening and enriching experience.
  • That time I became a Dad… I’m not sure this one requires an explanation. It’s the biggest, toughest, scariest job I’ve ever taken on. The day we left the hospital, I’m pretty sure I drove home at about 15 MPH. My dad said that the train of cars behind us was epic. I didn’t notice because I was staring straight forward, hands white-knuckled at 10&2 on the steering wheel.

Beyond fear, what all of these moments have in common is that they’ve shown me something about myself. They’ve reinforced my self-worth and they’ve invigorated me to do more. In every case I’ve found myself inspired by our humanity and our ability to help each other. Overcoming that fear can force you to acknowledge, sometimes against all that you hold true, that you’re capable of incredible feats.

Which brings us back to the present. For some unknown reason (perhaps it was the KBS eh Callahan?) I thought that I could impart something of myself to my peers by presenting at a major technology conference. For some reason I thought that I would be able to bring some value to individuals by giving a touch of myself. I’m pretty sure that I knew I’d get rejected, which to be 100% honest was part of what convinced me to actually go through with submitting my proposal. Imagine my chagrin when we actually got accepted to speak… to present… to stand up in front of my peers… to expose myself to their critiques. Hundreds of them. HOLY $H!T!

Susan could tell you, I poured myself into preparing for VMworld 2017, but the doubts persisted. Even after I did OK in my first vBrownBag, I wasn’t sure what was going to happen. I arrived in the hall with plenty of time to spare. There was no other way, as I’m someone who needs to be prepared, but… being there and watching people continue to file in… Did I mention HOLY $H!T!

And then it was time. When there are hundreds of people watching you, not to mention that big TV camera, well, there’s nothing you can do but get on with it. I’ll let you be the judge of my performance, but that’s not what this post is about. It’s about the experience, and how you learn and grow from it. In the end my experience was amazing. It certainly was fear inducing, but like many things, once you get past the first few minutes the situation normalizes and you have no choice but to try to do your best. Focusing not on the pitfalls, but on the job at hand, can be a key element of overcoming that fear.

I think we did pretty well, but even more important, we helped some people. My favorite part of our presentation was answering questions and talking with folks afterwards. Being able to share and connect with other people was pretty humbling. I’ve worked on some decent projects and I’ve had my fair share of success professionally. However this experience was more like skydiving: I can assure you that I have never been so jacked up after a professional experience as I was after our presentation.

So despite the fear and trepidation, it seems that scaring myself has been a pretty solid way to grow as a person, and now as a professional. Our natural instinct is to run from threatening situations, but if you stay and face them, the possibility for growth can be mind-blowing.

With that, it’s time to sign off. I’ve got another presentation to prepare for…


VM life-cycle with PowerCLI

I’ve heard many authors talk about how, through their writing process, the story or the characters change. I guess this is one reason titles aren’t decided on until the very end of the writing process. When I submitted my session proposal for “No Money, No Problem!” I had originally planned on writing about using PowerShell/PowerCLI as an automation/orchestration engine. I’ve learned over the years that it “may” not be the best idea to fight the muse. During the writing process for this presentation I followed where the muse led, and in the end the presentation ended up being much more about the potential ways that you can automate the life-cycle of a VM. I’m hopeful that if you attended the talk, it was still useful for you despite the slight pivot. Fifteen minutes is not a lot of time for a technical talk, so this post is a deeper dive into the content from my VMTN presentation. So without further ado…

Day 1

The activities on Day 1 are all about configuring the environment. The example I chose to use in my session was setting up a vDS. You can just as easily apply the same logic to something like setting up iSCSI datastores, clusters or the like. The overall premise is that by leveraging PowerCLI you can speed up the delivery of your environments while delivering higher-quality infrastructure. Let’s take a quick peek at how this is done in the context of delivering the elements necessary to run a Distributed Switch. Stripping out extraneous code and comments, what you’re left with is a five-liner that will give you a Distributed Switch.

$DC=Get-Datacenter -Name "NoProblem"
New-VDSwitch -Name "vds_iscsi" -Location $DC -LinkDiscoveryProtocol LLDP -LinkDiscoveryProtocolOperation "Both" -Mtu 9000 -MaxPorts 256

New-VDPortgroup -Name "pg_iscsi_vlan5" -VDSwitch $(Get-VDSwitch -Name "vds_iscsi") -VlanId 5 -NumPorts 16

Add-VDSwitchVMHost -VMHost esx-06a.corp.local -VDSwitch vds_iscsi
$pNic=Get-VMHost esx-06a.corp.local|Get-VMHostNetworkAdapter -Physical -Name vmnic1

Get-VDSwitch -Name "vds_iscsi"|Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic $pNic -Confirm:$false

So we pretty immediately get to the heart of why I’m such a PowerCLI advocate with this example. When we look at a command like “New-VDSwitch” it’s pretty intuitive what’s going on; we are creating a new vDS. It kind of doesn’t make sense to go through the plethora of switches/options as they are highly dependent on the situation. That being said there are a couple of items I’d like to call out in this example.

  1. Subexpressions. PowerShell allows you to run a command in-line and pass its result directly to a parameter, without storing it in a variable first. That’s what you see happening here, where the Get-VDSwitch call is wrapped by $(…):
    $(Get-VDSwitch -Name "vds_iscsi")
  2. The power of the pipeline. By using this powerful tool you can string multiple commands together to create complex actions in a very small amount of real estate.
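To make point 1 concrete, the in-line subexpression is just shorthand for capturing the result in a variable first. A quick sketch using the same cmdlets as above (both forms assume an active PowerCLI connection to vCenter):

```powershell
# Two-step version: capture the switch object, then pass it in
$vds = Get-VDSwitch -Name "vds_iscsi"
New-VDPortgroup -Name "pg_iscsi_vlan5" -VDSwitch $vds -VlanId 5 -NumPorts 16

# One-liner version: the $() subexpression evaluates first,
# and its result is passed straight to -VDSwitch
New-VDPortgroup -Name "pg_iscsi_vlan5" -VDSwitch $(Get-VDSwitch -Name "vds_iscsi") -VlanId 5 -NumPorts 16
```

Both do exactly the same thing; the subexpression just saves you a line and a throwaway variable.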

Day 2

New VMs

It’s my belief that if folks do nothing else but automate the provisioning of VMs, they can deliver immense value to their organizations, and do it quite quickly. In the code below we create an OSCustomizationSpec, clone it into a temporary (non-persistent) spec with static IP addressing, and then leverage that spec in the New-VM call. This is a pretty basic example, but you can take it as far as you’d like. In a previous role this simple VM deployment evolved into the 1,700-line basis for our automated deployments.

$DomainCred=Get-Credential -Message "Enter Domain Admin credentials" -UserName "corp.local\Administrator"
New-OSCustomizationSpec -name Win2k12 -Domain "corp.local" -DomainCredentials $DomainCred -Description "Windows 2012 App Server" -FullName "PatG" -OrgName "VMware" -ChangeSid
Get-OSCustomizationSpec "Win2k12" |New-OSCustomizationSpec -Name "Win2k12_temp" -Type NonPersistent
for ($i = 1; $i -le 4; $i++){
  Get-OSCustomizationNicMapping -OSCustomizationSpec "Win2k12_temp" | Set-OSCustomizationNicMapping -IpMode UseStaticIP -IpAddress "192.168.10.10$i" -SubnetMask "255.255.255.0" -DefaultGateway "192.168.10.1" -Dns "192.168.10.10"
  New-VM -Name "WinApplication0$i" -Template "base-w12-01" -OSCustomizationSpec "Win2k12_temp" -ResourcePool $(Get-Cluster "MyCluster")
}

Walking through this example line by line

Line 1: We enter domain credentials and store them in a PSCredential object for use later on.

Line 2: Using the New-OSCustomizationSpec we create a base OS Customization spec, which is used in …

Line 3: We create a temporary OS spec which we’ll leverage in the customization and deployment of our VMs. All of this, however, is just laying the groundwork for…

Line 5: Within the loop we take the previously created temporary OS Spec and we customize it for use with the …

Line 6: We get to the meat of the matter where we are deploying a VM using the new-vm cmdlet and our newly created and updated temp OS spec to configure Windows for us.

New VMs – Linux from json

While the previous example will simplify matters, it’s not exactly the prettiest code, not to mention the fact that the values are hard-coded. If you want to start taking your automation to the next level, you have to be able to accept inputs in order for the code to be more portable. Thanks to PowerShell’s ability to interpret JSON (as well as XML and a host of other formats) we can simply read in the desired configuration and somewhat dynamically create the VMs. If you want to include splatting you can go even further with your abstractions, but that’s a post for another day.

$InputFile = "c:\temp\linux_servers.json"
$Servers = $(Get-Content $InputFile -Raw| ConvertFrom-Json).Servers

foreach ($Server in $Servers)
{
    New-VM -Name $Server.Name -ResourcePool $Server.Cluster -NumCpu $Server.CPU -MemoryGB $Server.Mem
}
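For reference, the script assumes linux_servers.json has a top-level Servers array whose property names match what the loop reads (Name, Cluster, CPU, Mem). The file isn’t shown in the original, so here’s a minimal sketch of what it might look like (server and cluster names are hypothetical; Mem feeds -MemoryGB, so values are in GB):

```json
{
  "Servers": [
    { "Name": "LinApp01", "Cluster": "MyCluster", "CPU": 2, "Mem": 4 },
    { "Name": "LinApp02", "Cluster": "MyCluster", "CPU": 4, "Mem": 8 }
  ]
}
```

Add a property to the JSON and a matching parameter to New-VM, and the loop picks it up with no other changes — that’s the portability payoff.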

The other Day 2 activity I chose to highlight is reporting. After all, how will you know about the performance and capacity of the environment if you aren’t taking its pulse? Thanks to the kind folks at VMware, statistics are exposed for use via the Get-Stat cmdlet, which is the star of this example.

# Define the reporting window (assumption: last 7 days) and initialize the results array
$start  = (Get-Date).AddDays(-7)
$finish = Get-Date
$myCol  = @()

$objServers = Get-Cluster Demo | Get-VM
foreach ($server in $objServers) {
    if ($server.guest.osfullname -ne $NULL){
        if ($server.guest.osfullname.contains("Windows")){
            $stats = get-stat -Entity $server -Stat "cpu.usage.average","mem.usage.average" -Start $start -Finish $finish

            $ServerInfo = "" | Select-Object vName, OS, Mem, AvgMem, MaxMem, CPU, AvgCPU, MaxCPU, pDisk, Host
            $ServerInfo.vName  = $server.name
            $ServerInfo.OS     = $server.guest.osfullname
            $ServerInfo.Host   = $server.vmhost.name
            $ServerInfo.Mem    = $server.memoryGB
            $ServerInfo.AvgMem = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "mem.usage.average"} | Measure-Object -Property Value -Average).Average)
            $ServerInfo.MaxMem = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "mem.usage.average"} | Measure-Object -Property Value -Maximum).Maximum)
            $ServerInfo.CPU    = $server.numcpu
            $ServerInfo.AvgCPU = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "cpu.usage.average"} | Measure-Object -Property Value -Average).Average)
            $ServerInfo.MaxCPU = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "cpu.usage.average"} | Measure-Object -Property Value -Maximum).Maximum)
            $ServerInfo.pDisk  = [Math]::Round($server.ProvisionedSpaceGB,2)

            $myCol += $ServerInfo
        }
    }
}

$myCol | Sort-Object vName | Export-Csv "VM_report.csv" -NoTypeInformation

In the example above we simply iterate through the cluster and obtain statistics on our Windows VMs via the aforementioned Get-Stat cmdlet. Next we store all of the information we care about in a custom object, $ServerInfo, and append it to the $myCol array. That array is ultimately what’s used for output at the end of the code snip.

I do want to take a moment to break down what’s happening in the calculation lines, as they can be a little off-putting if you don’t know what’s happening there. So let’s take the following line and break it down piece by piece.

$("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "mem.usage.average"} | Measure-Object -Property Value -Average).Average)

$(…) This is a subexpression. PowerShell evaluates everything inside the parentheses and returns the result, which lets us use the output of an entire pipeline anywhere a value is expected.
“{0:N2}” A .NET format string, applied by the -f format operator to the value produced by the expression that follows it. In this specific instance I chose to keep two digits to the right of the decimal place. This is indicated by N2.
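As a stand-alone illustration, the -f operator works outside of this script too; these run in any PowerShell session, no vSphere required:

```powershell
"{0:N2}" -f 3.14159     # -> 3.14
"{0:N2}" -f 1234.567    # -> 1,234.57 (in en-US; the N format also adds group separators)
"{0:N0}" -f 1234.567    # -> 1,235   (zero decimal places, rounded)
```

The number before the colon ({0}) is the argument index, so one format string can reference several values after -f.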

The command yielding our number just shows the amount of fun you can get into with pipelines. Starting just to the right of the “-f” option, we take our $stats object and pare it down using the Where-Object cmdlet. These values get further parsed out by piping into Measure-Object, which in this particular case is simply calculating out the average of the desired values in the object.

After all that is said and done, we can use the ever-handy Export-Csv to come up with a pretty CSV for the powers that be, showing just how efficiently your environment is humming along!

D-day

Just like this blog post, and the presentation it supports, all good things must come to an end. And so it is with infrastructure as well. In our final example, we use the metrics from the all-powerful and omniscient vRealize Operations Manager. It should probably come as no surprise to you that the metrics stored in the all-knowing vROps server can be exposed to you via PowerCLI. If you’ve used or tested vROps, I’m guessing that one of the first things you checked out was the “Oversized VMs” report. We’ll use one of the statistics that make up this report in this last code snip:

$cred=Get-Credential
Connect-OMServer -Server "Demo" -Credential $cred -AuthSource "ADdomain"

$when = $(get-date).AddDays(-20)
$vkey = "Summary|Oversized"
$threshold = 0.75

foreach ($vm in $(get-vm|Select-Object -First 33)){
    $vrstat=$vm|Get-OMResource
    $avg = $vrstat|Get-OMStat -Key $vkey -from $when|Select-Object -ExpandProperty Value|Measure-Object -Average

    write-host $vm.name, $avg.average
    if($avg.Average -gt $threshold){
        write-host $vm.name, $avg.average
        if($vm.PowerState -eq "PoweredOn"){
            Stop-VM -VM $vm -Confirm:$true -WhatIf:$true
        }
    Start-Sleep 3
    Remove-VM -vm $vm -DeletePermanently -RunAsync -Confirm:$true -WhatIf:$true
    }
}

Starting to work with the vROps cmdlets (contained within the VMware.VimAutomation.vROps module) has a bit of a learning curve associated with it, so we’ll break down some of the key elements on a line-by-line basis again.

Line 2: vROps is a separate entity from vCenter, so we need to use the Connect-OMServer cmdlet to connect. One of the things that is pretty poorly documented, and may trip you up, is the authentication model used by this cmdlet. If you are using domain credentials, you want to use your short name and, for -AuthSource, the display name that you set up in vROps for the domain authentication source.

Line 9: In this case I chose to pass a VM object into Get-OMResource, but you can just as easily use the -Name parameter. Get-OMResource simply returns the vROps object that we’ll use with…

Line 10: Get-OMStat. The Get-OMStat cmdlet is where you actually start returning metrics out of vROps. In this case I’m using the “Summary|Oversized” statistics key. There are literally thousands of keys that you can leverage; I’d suggest perusing Mr. Kyle Ruddy’s post on this subject here. For the purposes of this very simple example I figured I’d use an average of the data returned over time to see if the machine is oversized and therefore a candidate for removal. Obviously in a real situation you’d want a lot more logic around an action like this.

 $vrstat|Get-OMStat -Key $vkey -from $when|Select-Object -ExpandProperty Value|Measure-Object -Average 

Breaking the command down piece by piece. We take the

$vrstat

object and pipe it into Get-OMStat, where we narrow down the results by key, using the previously defined $vkey variable, and by date, using the 20-day window defined in the $when variable. Since I’m just interested in the actual values returned, we then pipe through the

Select-Object -ExpandProperty Value

cmdlet to pull only the data we want. Lastly we use the

Measure-Object

cmdlet to get an average for use in our simple example.

Line 13: A simple if statement checks whether we’ve crossed our pre-defined threshold. If we have, we move on to our D-day operations. Otherwise the script moves on to the next VM.

Line 16: You can’t delete a VM that’s powered on. Since we are deleting this VM anyway, there’s no need to gracefully shut down, so we just power it off.

Line 19: So sorry to see you go VM.

Remove-VM

does exactly what it sounds like. If we omit the

-DeletePermanently

parameter we’ll simply remove the VM from inventory. In this case, we want it removed from disk as well, so the parameter is included. Lastly we don’t want to wait for the Remove operation before moving on to our next victim, so the

-RunAsync

parameter tells our script not to wait for a return before moving on.

NOTE: I don’t know if anyone will use this code or not (I surely hope it helps someone), however just in case you do, I’ve set -Confirm and -WhatIf to $true so that you don’t have any OOPS moments. Once you’re comfortable with what’s happening and how it’ll affect your environment, you can set these to fit your needs.

As I said at the outset, I hope you found this talk and post useful. I plan on doing a couple of deeper dives into some of the above topics, so if you’re still reading I appreciate your follows.

Lastly, I’d like to offer up a huge thanks to the good folks at VMTN and vBrownBag for the opportunities they offer people like me and you. If you find this interesting or feel that you have a voice to contribute, please do! A vibrant community depends on engaged people like you.

Thanks for reading, see you soon.

SD

T-minus…

In 10 days I board a jet plane for VMworld (HOLY CRAP!), which means the excitement is starting to ramp up. There are meetings, demos, events and of course parties to plan for, but how you approach a major conference is something that is very particular to the individual, and that’s what I’d like to spend a few minutes discussing today. Are there really as many ways to approach a major conference as there are attendees? No, but let’s just pretend so that I can share a few lessons learned from my conference experiences.

The Lab Guy

At my first major conference I spent the majority of my time sitting in the lab, soaking up as much hands-on experience as I could. I would go hit the expo floor, grab a snack and an adult beverage, hide said adult beverage and hit the lab for hours. Not to say it wasn’t valuable, but it damaged my spirit a little when, late in the conference, I learned that all of the labs would be available online after the conference ended… Oof.

After hearing this bit of spirit-breaking news I learned that there is a really valuable reason to be in the labs: the guided sessions. Any time you can sit down with an expert and pick their brain while gaining hands-on experience, well… that’s just a win right there.

The pros to this are pretty obvious: you get to spend dedicated time learning, which is never a bad thing. But the fact that HOL will have the labs available on-demand after VMworld Europe diminishes the value somewhat.

The Expo/Party Guy

These strange beasts are very closely related to the Party People. Tech conferences are fun. There is a lot of beer and a lot of free stuff. But there’s always the person who devotes themselves strictly to these endeavors. A lot of really great information can be gleaned off the floor, but at a price… the dreaded badge scanners! If you’re ok with that, then you have a really great opportunity to learn about emerging tech.

Now the expo floor is great, and I have my fair share of headaches/swag to show for it, however there are some folks who make this the primary objective of their conference. The expo floor should be one tool in your conference bat-belt, but if it’s your bat-mobile… maybe it’s being overdone a little bit.

Breakdancers

Yeah ok, that sub-title is horrible, but I couldn’t think of a fun (and appropriate) way to label the folks who are all breakouts, all the time. This one is usually me. Attending conferences isn’t cheap, even if it isn’t coming directly out of your pocket. I usually want to return some value to my sponsor and that historically has taken the form of trying to take away as much directly applicable knowledge as possible. Next to the labs, this might be the most effective way to soak up as much technological knowledge as possible. In my mind, Breakouts are the meat and potatoes of a conference, so it’s hard for me to find a downside here. But like all things, including meat and potatoes, take it in moderation.

Contributors

“Active Participators” may be another way to frame this and it’s a new one for me. At Dell/EMC world 2017, I made it my mission to blog as much as possible. Going into VMworld 2017 I’m really making it my mission to get involved as much as I can. There are so many events that you can get involved with, I’d urge you to get out there and broaden your horizons a bit. On top of the parties located on the gatherings page, you can find opportunities to play games, get into the hackathon, blog in the village and a whole host of other activities.

Whatever your approach, I hope you find the right balance of activities to make your conference amazing. See you in Sin City!

PS: If you need more things to do, come check out my sessions.

VMware library

The entire vSphere community (myself included) seems to be in a flutter over the release of the long-awaited Host Resources Deep Dive by Frank Denneman and Niels Hagoort. For me this recently resulted in a tweet-thread to see who had bought the most copies (I lost). The upside to this whole thing is I came across Mr. Ariel Sanchez Mora’s (you should follow him) VMware book list. I love book reviews, so with Ariel’s permission I’m stealing his idea! In fairness you should really go check out his page first, but make sure to come back! Without further ado, here’s the best of my VMware library.

 

Certification

Like many people, this is where I started. I’d heard horror stories about the VCP, so after taking Install, Configure, Manage I bought my first book, coincidentally (or not) written by the instructor of my class. I immediately followed it up by adding the second piece to my library.

The Official VCP5 Certification Guide (VMware press) by Bill Ferguson

VCP5 VMware Certified Professional on vSphere 5 Study Guide: Exam VCP-510

(Sybex) by Brian Atkinson

I think that in terms of the earlier books, I’d give the edge to the Sybex version. It covers the same fundamentals as the official guide, but goes much deeper.

Just last year I was wandering around the expo floor at VMworld 2016, bumming from failing my VCP6 (it was expected, but disappointing nonetheless), when I walked straight into a book signing for the new VCP6-DCV cert guide. It was destiny, or something like that.

VCP6-DCV Official Cert Guide (Exam #2V0-621) (VMware Press) by John Davis, Steve Baca, Owen Thomas

Here’s the thing with certification guides: the majority of the material doesn’t change from version to version. DRS is DRS is DRS (not really, but the fundamentals are all there). If you’re just getting started, or able to get a hand-me-down copy of an earlier edition, you’ll still be leaps and bounds ahead of folks who haven’t read the books. They can be a good way to get a grasp on the fundamentals if all you’re looking to do is learn. To that end, if your goal is to pass the test, you can’t go wrong with picking up an official certification guide. I know the VCP6-DCV guide provided an invaluable refresher for me.

For more on the VCP-DCV, please check out my study resources.

Hands On

Learning PowerCLI (Packt publishing) by Robert van den Nieuwendijk

I didn’t realize until just right now that there was a second edition of this book released in February of this year! Regardless, this book is a great way to get started with PowerCLI, however it’s more of a recipe cookbook than a tutorial. If you need a getting started with PowerShell book, look no further than:

Learn Windows PowerShell in a Month of Lunches by Donald Jones and Jeffrey Hicks.

This is the guide to get started with PowerShell. Honestly I think the authors should give me commission for how many people I’ve told to go buy this book. It’s not a vSphere book, but if you want to be effective with PowerCLI, this book will help you on your way. It breaks the concept up into small manageable chunks that you can swallow on your daily lunch break.

DevOps for VMware Administrators (VMware press) Trevor Roberts, Josh Atwell, Egle Sigler, Yvo van Doorn

“DevOps” to me is like “the cloud”. It means different things to different people. In this case the book focuses solely on the tools that can be used to help implement the framework that is DevOps. Nonetheless, it’s a great primer into a number of the most popular tools that are implemented to support a DevOps philosophy. If you’re struggling with automation and/or tool selection for your DevOps framework, there are far worse places to start.

Mastery

Mastering VMware vSphere series

The gold standard for learning the basics of vSphere. This title could just as easily appear under the certification section, as it appears on almost every study list. A word of warning, this is not the book to learn about the latest features in a release. That’s what the official documents are for. You may notice that I linked an old version above, and that’s because the latest version was conspicuously missing nearly all of the new features in vSphere 6. That being said, it’s another standard that should be on all bookshelves.

VMware vSphere Design (Sybex) Forbes Guthrie and Scott Lowe

Age really doesn’t matter with some things, and while that rarely pertains to IT technologies, good design practices never go out of style. This thought provoking book will help you learn how to design your datacenter to be the most effective it can be. I’d recommend this book to anyone who’s in an infrastructure design role, regardless of whether they were working on vSphere or not.

IT Architect: Foundation in the Art of Infrastructure Design: A Practical Guide for IT Architects by John Yani Arrasjid, Mark Gabryjelski, Chris McCain

And then on the other hand you have a design book that’s built for a specific purpose and that’s to pass the VCDX. Much of the content is built and delivered specifically for those who are looking to attain this elite certification. This is a good book, but as someone who has no intention of ever going after a VCDX, I expected something a bit less focused on a cert, and a bit more focused on design principles. If unlike me you have VCDX aspirations, you definitely need to go grab a copy.

VMware vSphere 5.1 Clustering Deepdive (Volume 1) Duncan Epping & Frank Denneman

I really don’t care if this was written for a five-year-old OS or not. If you want to learn about how vSphere makes decisions and how to work with HA/DRS/Storage DRS/Stretched Clusters, this is an essential read. Put on your swimmies, because you’re going off the diving board and into the deep end!

I’m just going to go ahead and leave a placeholder here for the aforementioned VMware vSphere 6.5 Host Resources Deep Dive. Having heard them speak before and read their other works, I expect this book to be nothing less than mind blowing.

If you liked this, please check out my other book reviews.

Thanks for visiting!

Champlain Valley VMUG – Summer recap

Better late than never, I wanted to provide some information to all of the CVVMUG’ers out there coming out of our successful June meeting.

First off another big thanks to our friend Matt Bradford (aka VMSpot) for his vRealize Operations presentation. Here are another couple of gems from Matt that may interest you further:

The technical talks seem to be a big hit, so much so that we already have a community presenter lined up for October to talk a bit about DRS. To celebrate, we’ll be giving away a couple copies of the brand-new Host Resources Deep Dive book. Mind-blowing stuff. For a taste you should check out the recent Datanauts podcast featuring the authors, and if you make it to VMworld, their Deep Dive sessions are a must-attend.

Speaking of VMworld, as of this writing we are 62 days and counting until the kickoff. Please let us know if you’ll be attending. We’d like to do a meetup or a happy hour or something. Also stay tuned, you may be able to find us working and/or presenting at the event. 😉

Speaking of conferences, you can find my DellEMC World recaps and thoughts here. And here is a bunch of info about VMware announcements and happenings from the event. Lastly, we talked a little bit about the fracas with VMUG and the newly announced Dell Technologies User Community; you can get another voice on that matter here.

Now that the business end is behind us, we are trying to line up a BBQ social event for August, place and time TBD. While you’re penciling stuff in, circle October 12 on your calendar for our next Champlain Valley VMUG. We’re still working on a location, but we’ll have that for you soon!

Last and certainly not least, AJ, Mike and I want to thank all of you, members and vendors alike, for being involved. This is a community for all of us, and we really value all that you bring.

And just because we have one speaker lined up for October doesn’t mean that you can’t also get up there. Mules are awaiting.

What is a VMUG?

Up here in the Champlain Valley we are getting ready for our summer VMUG meeting. While planning the other day someone asked me “What is a V. M. U. G.?” To which I responded “VMUG (pronounced “v-MUG”) is…” and launched into a standard elevator, evangelist pitch. Almost immediately I regretted the canned response and started reflecting on what VMUG means to me. It really didn’t take long to reach a resolution to the question. What VMUG means to me can be summed up in a single word: Opportunity.

As I’m someone who wears everything on his sleeve, I’d like to let you know that I’ve been a VMUG leader for almost a year, and a member and proponent since long before that. Even before I was a leader, though, VMUG offered me opportunities that may not otherwise be available in Northern New England.

To start with, one of my first VMUG events was the opportunity to attend the UserCon in Boston. If you've never been to a UserCon, I suggest checking one out at your earliest opportunity. You get labs, technical presentations, fantastic keynotes, and access to vendors and your peers! All for the low, low price of free! It's essentially a miniature version of the big conferences held in Las Vegas, except regional and free.

UserCons are awesome, but to me the true lifeblood of VMUG is the local communities. As I mentioned previously, I help run the Champlain Valley VMUG community. We hold local meetings 3-4 times a year where you can hear your peers, vendors and industry leaders talk about what is happening in their industries, applications or local businesses. It's a chance to network with folks in your area and learn about cutting-edge tech. Your local meetings are also free, so there really just isn't a good excuse for missing out.

IMG_3138
Is it a VMUG? p-VMUG? v-VMUG? I’m so confused…

Speaking of networking, did you know that the VMUG Advantage membership now includes NSX? On top of all the evaluation software, you can get a discounted ticket to VMworld. Actually, it's the only discount that you can stack with the other discounts. If you're going to VMworld, or planning on taking a VMware class/exam, you should just buy an Advantage pass; it literally pays for itself. Seriously, I know this sounds like a sales pitch, but let's suppose you're going to take the vSphere: Install, Configure, Manage course, which runs around $4,000. A VMUG Advantage membership gets you 20% off right out of the gate, roughly $800 in savings on that one course. It's a pretty good ROI. I'm just saying…

Earlier I mentioned networking, and in my mind this is one of the greatest opportunities that VMUG can afford you. Thanks to my involvement with VMUG I've learned a ton, gained real awareness of the IT ecosystem, met CEOs and frankly been able to advance my career. Getting up on stage and presenting about PowerCLI, a topic I'm very passionate about, has helped me get over my fear of public speaking in addition to paying it forward.

Someone recently came up to me and said (to paraphrase) "thanks for your session on PowerShell. I really took it to heart and have been using it in my day-to-day since." In all honesty, I really couldn't ask for more than that. By being involved with VMUG I've been able to learn, grow my skills, engage with industry leaders and help others. It may sound like a sales pitch, but really this $hit just sells itself.

As always, I'm happy to hear any feedback you may have. Until then, I hope to see you getting involved on June 15 at the Champlain Valley VMUG summer meeting!

DellEMCWorld – Final Thoughts

I have received a lot of positive comments about my updates from the conference, so thank you. I'm a big believer in using critical feedback as a means to improve, so if anyone out there has any other feedback for me, please send it along.

I wanted to jot down a last few thoughts from the week before my brain cells totally recuperate. I’m not sure what I expected going into this first combined Dell EMC world, but I do know that I had a blast and learned a ton.

Just for posterity's sake, here are my first few updates from the conference:

Dell EMC World 2017 – Day 1

Dell EMC World – Day 1, the breakouts

Dell EMC World – Day 2 VMware day!

Strategy

Server Platforms

It was obvious from the start that with Dell purchasing the EMC federation, they were going to go after hardware, namely the converged and hyper-converged markets. Beyond that I don't think I really understood where this giant beast was going. After this past week, a few themes stuck out to me. The first is an affirmation that as the traditional hardware market slows down, Dell Technologies is indeed going to go even harder after the various converged plays. You could see a physical manifestation of this on the floor of the solutions expo: "traditional" servers were tucked in the back, whereas the products from the converged platform division that Captain Canada leads were large and in charge in the middle of the expo floor. Prior to the acquisition, VCE already owned the majority of the converged/hyper-converged space. I don't see how you can slow Dell EMC down now that they have the servers to integrate as well.

Security

If 2015 was the year of flash, and 2016 was the year of DevOps, then I'd like to go on the record saying that 2017 is the year of security. I work for a financial firm, so I may have a bias towards this topic, but I felt there was a much stronger message around security at this event. It makes sense: if Dell wants to own the entire datacenter, which they obviously do, they have to be able to secure the datacenter. With RSA, SecureWorks and VMware's NSX already in the portfolio, it's a pretty good start. When you then look at how security is getting integrated into each of the disparate product lines, all the way down to the new 14G servers, it looks to me like Michael Dell and team know that the products need to not just perform but be secure in order to win.

Other things

The Internet of Things (IoT) space as well as AR/VR seemed to have a sizable presence at the conference. People have been pushing these products for years, but it seems like this might be the year where mainstream adoption starts. I can't remember the precise figure off the top of my head, but I believe in one of the general sessions they were projecting what I'll call the "ancillary" space, i.e. non-traditional servers, to be a $45 billion industry by the year 2020. Just for reference's sake, the market cap of Dell when it was taken private was under $25 billion. I don't necessarily see how this plays into the long-term strategy, but it was everywhere in the sessions and on the expo floor, and it's very obviously on the minds of Dell executives.

The Golden Geese

During the opening day's general session, Michael Dell said, to paraphrase, "A few years ago we bought Alienware. They were the best at what they were doing, and we let them continue to do it." The not-so-subtle message to the community is: we bought these companies not to pillage them, but to leverage their success and make each other stronger. I was fortunate enough to ask Michael himself later that evening if that indeed was his message, especially as it pertains to VMware. I'm again paraphrasing, but his message was: "We didn't buy these companies to pillage them. We are obviously looking for opportunities to integrate across Dell Technologies, but these companies are leaders in their respective industries and we're not going to decimate them." The answer was much longer (and more nuanced), but after listening to Mr. Dell and talking to a number of folks who are way more embedded than I am, my fears have finally been (mostly) assuaged. Actually, after attending a number of sessions across server/compute/storage/security/networking/operations, I truly believe Dell Technologies has an opportunity to build something that is bigger than the sum of its parts.

Networking

As an engineer I've always felt that my job at conferences is to go to breakout sessions wall-to-wall and learn as much technical stuff as I possibly can. I decided to alter the plan a bit for this one. As many have said before me, a large part of attending conferences is the networking opportunities. If you're inclined and motivated, there are countless opportunities to get out and network with folks. Here are a couple of the events I was fortunate enough to take part in.

Monday

It was an exciting, if controversial, week in the Dell EMC communities. On Monday I attended the Converged (formerly VCE) User Group meeting. This is where I was fortunate enough to ask Mr. Michael Dell the aforementioned question about the various brands under the new Dell Technologies umbrella. Now, I'm a pretty shy guy, but I have never been to a User Group meeting where I haven't met someone interesting AND learned something AND had a bunch of fun. If you haven't yet joined one of the Dell Technologies communities, you are definitely missing out.

Tuesday

In my role as a systems engineer I have been fortunate enough to work with multiple VCE products across multiple companies. So I was honored to be afforded a chance to attend a technical advisory board meeting for the converged platforms. It was an eye-opening experience to see how the roadmaps & strategies come together and to offer some frank feedback to the people who actually influence these products. Unfortunately I can't share details from the meeting, but needless to say it was a very cool experience that I hope to repeat.

Also on Tuesday was the Dell Communities event. As a VMUG leader I was very excited to attend this meeting in order to network with some peers I'd previously only emailed with. It's always nice when you get to meet someone whom you only know by their email and make a personal connection. After all, that's really a big part of what VMUG is about. If you're lucky, these events are also very cool opportunities to get facetime with people you wouldn't normally be able to sit down with.

18301796_933219129910_8177381727013137216_n
I meant Golf you fools!
And it is Vegas after all, so I was happy to wrap the day by enjoying some of the fine dining and activities that you can only find in Sin City, all the while networking with one of our key partners and meeting some cool people.

The Event itself

This is only my second trip to Dell EMC World, so the sample size is small, but each time I've been to the event I've been very impressed. From the general sessions to the breakouts, the registration process, all the way down to how lunch is served so efficiently, it seems to be a really well-run event. I just wish they would stop using so many disposable water bottles.

One of the fears I have attending a vendor-run conference is how deep the marketing and sales pitches will run. I haven't found Dell EMC World to be any worse than other presentations I've sat through; some are worse, some are better in terms of the amount of "pitchiness". On the whole, I found the amount of selling at this event to be quite reasonable given all of the networking and educational opportunities provided.

With any luck I’ll be able to see how Dell EMC world has evolved in 2018, but until then I guess I’ll just have to wait to see you all in Vegas this August for VMworld.

Dell EMC World – Day 2 VMware day!

General Session – Realize Transformation

End User transformation

The focus seems like it's going to be on the end-user space: how are we going to enable (and secure) our workforce in 2017? It looks like we are going to get some solid insights into where Dell is headed in the personal device space.

New product announcement: wireless laptop charging! I'll take two. Coming June 1st.

95% of all breaches start at the endpoint. OOF.

Nike and Dell are working together on some really amazing tech. Dell Canvas allows users to have a much more tactile experience when designing. It's going to be a very niche product, but really cool.

Dell is projecting AR/VR to be a $45B business by 2025. It’s pretty obvious they’re going to go after this space. AR/VR is also a big focus on the solution floor. Daqri & Dell are partnering to come up with some interesting solutions in this space and hopefully using their scale to drive cost downward.

IoT and grocery. I know some people who might be interested in this part of the presentation. Grocery stores and supermarkets have a lot invested in how they store products, but they typically just set and forget the thermostats in their freezers and cold cases. Using IoT to track where your products are allows you to fine-tune thermostat controllers and realize real energy & waste savings. Grocery is just one use case, but the idea translates to other verticals. Dell has created a new open initiative called EdgeX Foundry to start setting standards for the various IoT functions that happen at the edge.

VMware – Realize What's Possible

My favorite part of the general session. It’s fanboy time. Here comes Pat Gelsinger.

Where are we headed… Technology is magic, or has the ability to create magic. We've seen this from mainframes -> client/server -> cloud, and IoT and the edge are the next frontier, but it's happening now.

LAUNCH ALERT: VMware Pulse IoT Center. Centralize management/security/operation of the network of IoT. Built on AirWatch/vRops/NSX.


Just like yesterday, it appears that VMware has finally realized that their public cloud offerings… let's just say they haven't gone well. They are skipping ahead to the next generation of managing devices at the edge and looking forward to the Mobile Cloud.

Workspace ONE: make it simple for the consumer, but secure for the enterprise. Seems like an overlap in the portfolio, though; how do ThinApp & App Volumes play into this? Regardless, VMware is taking a stronger focus on EUC this year.

Announcement time: VMware VDI Complete. Client devices from Dell, converged infrastructure, and vSphere. It's VDI in a box. Super sweet! Oh, and here comes Sakac running on stage hooting. Awesome.

Cross-cloud architecture. Finally we are getting somewhere. Don't do the cloud, enable it! At last we get to see VMware Cloud on AWS! vRA is up next. Please just start giving vRA away! To go faster and compete with the public cloud, we need the tools. It's a loss leader!

Announcement: VMware and Pivotal are announcing a collaboration to come up with a developer-ready app platform with a focus on cloud-native/serverless/microservices/functions.

Pivotal Cloud Foundry works with the most powerful cloud providers, enabling Dev and IT to get to market faster, delivering value and time back to the business. It's taken a couple of years to get there, but it seems like VMware has finally got a good handle on its microservices & cloud portfolio. Today's presentations made it really exciting to see where we're going.

It’s a bird, it’s a plane, it’s… Invoke-VMScript

I was at work today and a need came across my desk for a solution that requires SNMP. For some reason I can't fathom, SNMP is not installed as a service on the majority of our servers. Who do we turn to in tumultuous times like these? PowerShell and his mighty sidekick PowerCLI!

First things first, I wanted to know the scope of what I was dealing with. When I dove into this problem I had every intention of trying to broaden my horizons and move away from PowerCLI, but it's so easy to get sucked back into what you know. Besides, I knew I was only targeting a couple of clusters, so it only made sense to go back to PowerCLI, right? Right???

If you ignore the ugly formatting, what I did below was load all of the VMs I needed to target into an object and then iterate through each of them to make sure they were Windows machines and that they were powered on. In hindsight, I knew that I was probably going to use Invoke-VMScript to get the job done, so I probably should have checked the VMware Tools status (ExtensionData.Guest.ToolsRunningStatus) while I was at it.
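For what it's worth, the scoping step with that VMware Tools check folded in might look something like this. This is a rough sketch only: it assumes a live Connect-VIServer session, and the cluster names are placeholders for your own.

```powershell
# Collect powered-on Windows VMs from the target clusters,
# keeping only those where VMware Tools is actually running
$targets = Get-Cluster cluster1, cluster2 | Get-VM |
    Where-Object {
        $_.PowerState -eq 'PoweredOn' -and
        $_.GuestId -like '*windows*' -and
        $_.ExtensionData.Guest.ToolsRunningStatus -eq 'guestToolsRunning'
    }
```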

So now we've got a nice, neat little array of servers that need a little TLC. You'd think we could immediately get rocking, but without going into details, things unexpectedly got a little dodgy at this point. I mentioned earlier that I originally intended to try to break away from PowerCLI just to broaden my horizons. Unfortunately, as an infrastructure person you don't always have the opportunity to do things the way you'd like, and you have to sacrifice elegance for just getting things done. Luckily, as VMware admins, when we need to get $hit done we have a very handy and very powerful tool available to us, and that is Invoke-VMScript.

If you've heard me talk, reviewed my scripts or spent any time around me, you'd know that I think Invoke-VMScript is the cat's meow. It is without a doubt my favorite cmdlet, as it lets you get away with some pretty awesome stuff. At its root, Invoke-VMScript allows you to run a script via VMware Tools within the context of the local VM. This is different from PsExec or PowerShell remoting; you are actually running a script within the local OS, where VMware Tools and PowerCLI are just the mechanisms that enable this superhero activity.

Quick sidebar: with great power comes great responsibility. I said above that Invoke-VMScript "lets you get away with some pretty awesome stuff." Many people in this world just deploy VMware Tools and vCenter with default permissions and credentials. If you are a security person, you need to ensure that your roles and privileges are set up appropriately, or you could have exposure due to what you can accomplish with VMware Tools.

But I digress. We are here to get things done, and at its center this whole exercise boils down to a one-liner:

Invoke-VMScript -VM $client.Name -ScriptText "DISM /online /enable-feature /norestart /featurename:SNMP" -ScriptType Powershell


If you refer back to the original snip, we stored all of the servers in an array, which we iterate through. We invoke the script targeting $client.Name. The ScriptText parameter is where we pass in the script that we would like to run on the remote system; in this case we are using the Microsoft DISM tool to add the SNMP feature to our Windows installation. Last is the ScriptType parameter. You have three ScriptType options available as of today: Bat for you old-school Windows cats, Bash for the *nix kittens, and PowerShell for the up-and-coming cubs.
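For the *nix kittens, the same pattern holds; you just swap the payload and the ScriptType. A quick hedged example, using a hypothetical VM name and assuming VMware Tools is running inside the guest:

```powershell
# Run a shell command inside a Linux guest via VMware Tools
Invoke-VMScript -VM linux-vm01 -ScriptText "uname -a" -ScriptType Bash
```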

When you put it all together, here’s the code to get it done:

$serverset = (Get-Cluster cluster1 | Get-VM) + (Get-Cluster cluster2 | Get-VM)
$ArrRemediate = @()

foreach ($client in $serverset) {
    # Only target powered-on Windows guests
    if ($client.PowerState -eq "PoweredOn" -and $client.GuestId.Contains("windows")) {
        # Skip any machine that already has the SNMP service installed
        if (!(Get-Service -ComputerName $client.Name -Name SNMP -ErrorAction SilentlyContinue)) {
            $ArrRemediate += $client
            Invoke-VMScript -VM $client.Name -ScriptText "DISM /online /enable-feature /norestart /featurename:SNMP" -ScriptType Powershell
            # Confirm the service is now present
            Get-Service -ComputerName $client.Name -Name SNMP | Select-Object -Property Name, Status, StartType | Format-Table
        }
    }
}

# How many machines did we remediate?
$ArrRemediate.Count

I hope for today you'll excuse the formatting and the less-than-efficient code, as the mission was to get things done. We achieved our mission and escaped certain doom thanks to our friendly neighborhood hero, Invoke-VMScript. I hope to have a deeper exposé on our masked superhero soon, but until then, if you have any thoughts or would like to contribute to the conversation, please reach out.

Why you need more PowerShell

or: How I stopped worrying and learned to love the CLI

I recently gave a Tech Talk on PowerShell and PowerCLI at our spring Champlain Valley VMUG. The talk was definitely more of an introductory instructional, but one of our attendees expressed that they wanted to hear more about the value that scripting with PowerShell can deliver back to the organization. Hopefully I can give you a solid overview of the immense value of PowerShell here today.

Why?

The only constant is change, and that holds true for IT infrastructure folks as well. Terms like DevOps, disruptor, and Shadow IT have become firmly established in our lexicon, and with good reason! We are in a world that is moving faster and faster every day, and you often see that it's not the best product that corners a market, but rather the first/fastest to market that gets a stranglehold. If you come from a classical IT role, with silos and legacy processes/policies that slow your organization down… well, is it any wonder that you have disruptors changing the model?!? But what if there were a way that you could help accelerate your business, work collaboratively with the developers, combat Shadow IT and be the disruptor yourself? PowerShell can be the tool that enables this transformation by delivering Time and Consistency to your organization.

Time

This one is simple. Time is money, and by investing a little bit of effort up front scripting a solution, you will save time moving forward. Here is the no-brainer part of the value prop: do you want to take on the time-consuming task of building environments by hand? Of course you don't! You want something that's fast and easy. There's a take on the old adage that I'll paraphrase here: "Do it once, OK. Do it twice, shame on me. Do it three times, why haven't you scripted it yet?"

Let's suppose for a second that you have to install a widget dozens, hundreds or thousands of times. This activity takes hours. Once you script & automate that install, you turn it into a hands-off activity, freeing up your engineers to do more of the activities that drive value instead of just watching the progress bar. Simply by the act of writing that script, you've saved your business time/money, and honestly you've probably gained a bit of expertise and employee engagement as well. Extrapolate this out to all of the infrastructure elements you need to manage: people, policy, applications, servers, storage, network, security, and the list goes on. Even if you can only automate part of a process, you're still going to see dividends.
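To make that concrete, here's the general shape of the thing, sketched with a hypothetical server list and installer path; your real widget and install switches will differ.

```powershell
# Read the fleet from a text file, one hostname per line
$servers = Get-Content .\servers.txt

foreach ($server in $servers) {
    # Run the installer silently on each remote box;
    # PowerShell remoting must be enabled on the targets
    Invoke-Command -ComputerName $server -ScriptBlock {
        Start-Process msiexec.exe `
            -ArgumentList '/i C:\installers\widget.msi /qn' -Wait
    }
}
```

Kick it off, go do real engineering, and come back to a consistent fleet instead of a progress bar.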

A less intuitive reason for starting with PowerShell is that it has a pretty gentle learning curve, especially if you come from a Windows environment. If you have any programming/scripting background, you can likely dive right in. This means that your team can be scripting sooner, and can start driving the non-value-added operations out of your day-to-day. Many infrastructure folks don't have a background in development activities, and as such scripting can be a bit of a hard sell. PowerShell was meant to build upon and extend the foundation of tools like batch and VBScript, but in a way that is intuitive to learn and quick to become efficient with. One of my go-to guides for learning PowerShell is Learn Windows PowerShell in a Month of Lunches. This book is so successful in large part because it demonstrates just how accessible PowerShell really is.

I mentioned earlier that you can create collaborative opportunities and combat Shadow IT. PowerShell is built on top of the .NET Framework and has support for REST APIs baked in. This means that you can share code, speak the same language and have smoother hand-offs. By using PowerShell you have an opportunity to increase the amount of collaboration between your groups, and if you can harness that opportunity you're likely to suffer less finger-pointing and be able to cut out some unnecessary meetings.
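As a quick taste of that baked-in REST support, here's a one-liner against GitHub's public API, standing in for whatever service your dev team exposes:

```powershell
# Invoke-RestMethod calls the endpoint and parses the JSON
# response into PowerShell objects for you
$release = Invoke-RestMethod -Uri 'https://api.github.com/repos/PowerShell/PowerShell/releases/latest'
$release.tag_name   # the latest release tag, as a plain property
```

No JSON parsing libraries, no glue code; the response comes back as objects you can pipe anywhere.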

Consistency

Time and consistency (and money) go hand in hand in IT. Inconsistent environments result in more frequent issues and longer times to resolution. When you start scripting out your activities, you will have a much more predictable environment; outages will decrease in frequency and your time to resolution will drop. This all yields greater uptime. More uptime means happier customers and happier engineers. Your business is winning!

tom-brady-goat[1]
One is a goat. The other is the GOAT.
Speaking of winning, do you know why Tom Brady is one of the Greatest Of All Time? It's not because of his Uggs or his supermodel wife. It's because he has put in the work up front to ensure that no matter who he is working with, he will have a predictable and consistent outcome. This is what you should be aiming for with your environment: consistent and predictable.

Having a consistent, repeatable infrastructure makes that environment easier to rebuild. If you can kick off a PowerShell script that results in a fresh server in a matter of minutes, why would you spend hours troubleshooting a problem? The saying "treat your servers like cattle, not like pets" became popularized for a reason. Wikipedia states that "the term commodity is specifically used for an economic good or service when the demand for it has no qualitative differentiation across a market." Your servers SHOULD have no qualitative differences; they are therefore inherently commodities, and replaceable. Diving into PowerShell and PowerCLI can help get you there.

PowerCLI

I've mentioned it a number of times, but some of you may be asking: what is this PowerCLI thing? PowerCLI is VMware's collection of PowerShell modules, which allow you to "automate all aspects of vSphere management, including network, storage, VM, guest OS and more." In short, it's a super efficient and reliable (not to mention fun) way to manage your vSphere environments. It's also incredibly powerful: there are over 500 separate cmdlets in the modules which make up PowerCLI. By some accounts VMware has approximately 80% of the hypervisor market, which means the majority of the world's infrastructure runs on vSphere and can be managed with PowerCLI.

Using PowerCLI just lets you further expand the amount of Time and Consistency that you can deliver back to your business. With PowerCLI you can automate/manage the network, hypervisors, storage and all of the elements that encompass your "infrastructure". You can also take it one step further: thanks to the security model built into vSphere, you can let your users do it too! With a little bit of thought and design, you can give your developers the ability to spin up and spin down their own VMs. No more test/dev headaches for you, and your developers are happier! The winning doesn't stop!
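As a sketch of what that developer self-service could look like, here's the core of it. The template, resource pool and VM names are all hypothetical, and in practice a scoped vSphere role would gate who is allowed to run it:

```powershell
# Clone a dev VM from a template into a resource pool the
# developers own, then power it on
New-VM -Name 'dev-test01' -Template 'Win2016-Tmpl' `
    -ResourcePool 'DevPool' | Start-VM
```

Wrap something like that in a function with a little input validation, and your developers have a one-command test lab.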

As I said to start my VMUG presentation, I'm not an expert in PowerShell or PowerCLI, but I have used them very effectively in my day-to-day. It's also a topic that I'm passionate about; otherwise you'd never catch me voluntarily speaking in front of 100 people! I've also managed to write some fairly complex scripts that have helped my organizations reach their goals. I hope this post helps you understand some of the value of PowerShell & PowerCLI scripting. If you'd like to keep the conversation going, or if you have any questions, I'd love to hear from you.