Exchange Forward Rules via PowerShell

I’m really quite enamored with the Exchange PowerShell modules lately. Well, to be honest, I’m just enamored with PowerShell in general… For someone who is a part-time Exchange admin, the option to script out changes is really nice. I’d like to share a couple of examples of how simple managing forwarding rules is with PowerShell.

Recently we had someone go out on maternity leave and their manager wanted to keep up with their email. Reasonable request. Super easy to implement with PowerShell.

$ForwardingUser="OriginatingUsersSamAccountName"
$ForwardedToUser="DestinationUsersSamAccountName"

Set-Mailbox $ForwardingUser -ForwardingAddress $ForwardedToUser -DeliverToMailboxAndForward $true

It’s a pretty simple one-liner using Set-Mailbox. I was a little lazy on this one and used the positional parameter for -Identity to set the originating user. Bad Scott, but I’ll fix it in a moment. Since in this case I’m forwarding within the organization, I use the -ForwardingAddress parameter with another SamAccountName; however, if I needed to forward this externally I could just as easily use the -ForwardingSmtpAddress parameter. Lastly, I want both users to get copies of all messages, so we set -DeliverToMailboxAndForward to $true.

Easy peasy, right?

In another recent example, my employer merged with another entity. During our integration we wanted everyone to stay in communication. What we ended up doing in our domain was:

  • We created a contact entity in Exchange, pointed to their original domain’s SMTP address.
  • We then created their new Exchange accounts, and set the ForwardingAddress to be the aforementioned contact entity.
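If you’re curious, those two steps can be sketched with the Exchange cmdlets like so (the names and addresses here are placeholders, and exact syntax may vary by Exchange version):

```powershell
# Step 1: create a mail contact pointing at the user's address in the original domain
# (names/addresses are hypothetical)
New-MailContact -Name "JDoe-Legacy" -ExternalEmailAddress "jdoe@NotForYou.org"

# Step 2: set the new mailbox to forward to that contact entity
Set-Mailbox -Identity "jdoe" -ForwardingAddress "JDoe-Legacy"
```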

This solution worked great, but after the merger I didn’t really feel like going through and manually undoing all of that forwarding. Here’s the code I came up with to finish the job.

$MatchingDomain="NotForYou.org"

foreach($user in $(Get-Mailbox | Where-Object {$_.ForwardingAddress})){
  if(Get-Contact $user.ForwardingAddress.DistinguishedName -ErrorAction Ignore){
    if( $($(Get-Contact $user.ForwardingAddress.DistinguishedName).WindowsEmailAddress).Domain -eq $MatchingDomain ){
      Set-Mailbox $user -ForwardingAddress $null
    }
  }
}

I know that all of the entities have a ForwardingAddress attribute defined, so I use the filter in my ForEach loop to narrow down my list of users. Since I also know that all of the entities I’m looking for are configured as contacts, I leverage the Get-Contact cmdlet in the first IF statement to narrow the field further. In the final IF statement, I wrap the results in a number of subexpressions $() in order to do a comparison against $MatchingDomain. Finally, we use the Set-Mailbox cmdlet again to set the -ForwardingAddress parameter to $null. As I expressed it to my team, the pseudo-code looks something like this:

If mailbox has ForwardingAddress -> If ForwardingAddress is a contact -> if domain of ForwardingAddress matches -> set-mailbox

Hope this helps someone. I’ll have more Exchange PowerShell posts soon.


Manage Exchange Calendar Permissions with PowerShell

I’m not sure I can even stretch this one out to make the 400 words for blogtober; the Exchange PowerShell cmdlets make it that easy.

I’ve got a user who’s out of the office but has appointments scheduled. Somebody’s got to mind the calendar.

Combine this with the fact that the last time I managed an Exchange system, PowerShell was just a glimmer in Mr. Snover’s eye, and well… I’ve got some refreshing to do. And since my PowerShell is more up to date than my Exchange, this seems like the natural route.

Out of the gate I spent probably 10-15 minutes fumbling with the GUI before I said forget this and fired up PowerShell. It took me about 13 seconds to find the commands I was looking for by running help permission.


Using the set of *-MailboxFolderPermission cmdlets we can add/get/remove/set fine-grained mailbox permissions, so adding that calendar entry becomes a piece of cake.

To add the necessary rights, it’s as straight-forward as using the Add-MailboxFolderPermission like this:

 Add-MailboxFolderPermission -Identity SharedFromUserName:\calendar -User SharedTOUserName -AccessRights Reviewer

A full list of the parameters and AccessRights can be found at https://technet.microsoft.com/en-us/library/dd298062(v=exchg.160).aspx
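And to double-check the result, the matching Get cmdlet from the same family will list who currently has rights on the folder (user names here are placeholders):

```powershell
# Review the current permission entries on the shared calendar
Get-MailboxFolderPermission -Identity SharedFromUserName:\calendar
```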


After the situation that necessitated the additional calendar permissions has been resolved, we can simply use Remove-MailboxFolderPermission like so:

 Remove-MailboxFolderPermission -Identity SharedFromUserName:\calendar -User SharedTOUserName


I’m not used to writing a blog post this short, but if the tool works and the resolution is simple…

Delete Windows Protected Files

If you’ve been following along in my continuing effort to prove that I’m in the wrong field, you may have noticed that my computer/lab blew up weeks before VMworld. Literally started smoking. Just this past week while attempting to upgrade my laptop before the requisite amount of coffee had been consumed, I blew up the raid array and ensured myself a fun Friday. Needless to say, I’ve been spending time with external disk enclosures recovering partitions.

Since I’m having so much fun, I figured I’d upgrade my OTHER laptop’s hard drive. I’ve learned something of a lesson, though, and am at least backing up my data first, which brings up the point of today’s post. The SSD I’m about to slap into this laptop was formerly a system drive, so I’d like to clean up all that garbage before I copy it off to other media.

Now, the good folks at Microsoft know that 99 times out of 100 you shouldn’t be manually messing with the majority of the system, so removing the C:\Windows directory can be a bit of a challenge, even if that drive is no longer running your OS. It makes for an interesting conundrum, though, if you’re just trying to clean up a hard drive for reuse. There’s probably a more elegant way to solve this issue, but here’s how I went about it.

Now you know what happens if I try to just go in and change ownership or permissions: Windows tells me what it thinks of my storage skills. My quick way around this is to use PsExec. Hopefully everyone knows Sysinternals by now. If you don’t, stop reading, go get Sysinternals, and polish up your toolkit. So PsExec, although designed to run processes remotely, can help you out locally as well. In my case I wanted to run my command as the SYSTEM account to get this problem out of the way ASAP. We start off this fun by launching PowerShell via PsExec:

psexec -i -s powershell.exe

The -i switch says you want to run in interactive mode, and the -s says to run as the LocalSystem account.


You may ask, why did you use PowerShell? Couldn’t you just use a cmd prompt? Well, sure, but cmd is old and PowerShell is awesome, so there you go. Now we are sitting in a PowerShell session running as the SYSTEM account. We can use takeown.exe to set the local Administrator (/A) as the owner of all the files recursively (/R) without having to answer any prompts (/D Y):

takeown /F F:\Windows /A /R /D Y

I had to run this a couple of times. It appears that there may be some funkiness with the recursion switch. I would also suggest that you output the results to a text file in case you have any issues.

Anyways… At this point we’ve got ownership taken care of, but we still need to grant ourselves access to the files using icacls before we delete them. Since these are going away, we don’t need to worry about security, so we /grant the Everyone group full permission and use the /T switch to apply the permissions to all downstream files/directories:

icacls.exe F:\Windows /grant everyone:F /T /C

And since we’re already in PowerShell, let’s just take care of these bad boys:

Remove-Item F:\Windows -Recurse -Force

Blamo. No more Windows directory.

Next time I should really think of a less painful way to come up with a blog post.

Public Presenting – Lessons Learned from VMworld 2017

Not everyone can be Sakac, but you can become more comfortable on stage
When I first started this post I was on my way home from VMworld 2017. It was a pretty big week for me, in that these would be my largest public speaking engagements to date. Thanks to the good folks at VMTN and vBrownBag I had the opportunity to present two community sessions. I also joined Mr. Kyle Ruddy to present the last breakout session of the conference for nearly 300 attendees.

In baseball going 2 for 3 is a really good day. In public speaking, that still leaves room for improvement. I learned a ton going through the process of getting ready for and speaking at VMworld. Now that a little time has passed and I can look at the week a bit more rationally, I thought I’d share a few thoughts on the lessons learned on public speaking.

Believe

I recently saw a post on Twitter that I wish I’d saved. It’s kind of been my battle march for 2017. I don’t know if I can do it justice, but to paraphrase, it said:

“Repeat after me:
You do not have to be an expert to present.
You do not have to be an expert to present.

You do not…

I have been spending a lot of my “free” time over the past few years learning, developing and evolving my PowerShell/PowerCLI skills. At some point I realized that I was often dropping little nuggets of knowledge among my team, community and even occasionally online.

As one of our local VMUG leaders, I help set agendas for our meetings. You can see where this one is going… My first big presentation had some technical challenges that rattled me, but hearing things like “I’m fired up to…”, “Thanks for spreading the PowerCLI love!” and the other kudos got me fired up to do a little more. Don’t let me fool you: I am not an expert! I am someone who is passionate and who has been effective with a framework. That and a little belief in yourself is all you need!

For me this is the most important point. If you stop reading after this paragraph, I’m good with that. Believe in yourself! Recognize that you have something to offer and give it your all! This isn’t to say that presenting isn’t scary. It can be, but that can also be overcome and be turned into a really positive experience. If you believe you can, then you will.

Prep

Preparing for the conference should be pretty obvious, but the way the large conferences work is somewhat counter-intuitive. Months before the conference, you have to submit a paper proposal. Then you wait to hear if you were accepted or not. And you wait. And wait. I guess during this waiting period you could be prepping, but what happens if your session proposal doesn’t get accepted? Then you’ve expended a bunch of effort in vain… Once that acceptance does come through, the prep can start happening in earnest.

For me, I spent a lot of time figuring out the story that I wanted to convey. Once I had the story I could create an outline and really start building out the content. This isn’t anything new, but in this ever-connected world it can be tough to take a break and let your creative mind wander. I was jogging a bunch this past spring/summer (gotta get back on that horse!) and every time I came home I’d say hi to the family and run off to jot down notes. After tapping into the muse, I could start crafting the message that I hoped would allow me to connect with the audience and make it personal. The reason I choose to present is to share and hopefully help others. Creating a message that you believe in and that’s personal can go a long way towards having an engaged audience.

Practice

Now, just recently I gave a talk which seemed successful, but I waited until the last minute to hammer it out. The week prior I was up until midnight every night getting examples recorded and the presentation dialed in. That led me to have the material very fresh in my mind, but it certainly cut down on practice time. Which leads me to…

Nobody likes hearing a ton of “um”s and “uh”s when you’re presenting. The way I’ve gotten past this is by knowing the content and being comfortable with it. For me, that means basically having a script and running through the messaging until I know I’ve got it down cold. I’m an engineer, so being in front of a crowd is not my natural habitat. As such, I have been thrown off my game in the past. The way I get past the person walking out is by practicing my script well before the day of. It’s like anything: the more comfortable you are with the subject matter, the easier it is to deal with unexpected twists and turns along the way.

Phone a friend

It can be really intimidating standing up in front of people. All eyes are on you. For my big presentation I had a co-presenter, Mr. Kyle Ruddy. And when that hall started filling up, boy was I glad to have him! Having someone up there made a huge difference in my confidence level. If I fumbled or we hit on an area that maybe I didn’t feel as comfortable, I knew that I had a partner willing to step up and help out. I’d like to think that at some point I offered the same value to him.

The other part that’s great about having a buddy presenting with you is that you can make it conversational. Let’s face it, code can be dry, even to those of us who enjoy it. If you have someone you can banter with, it makes the conversation much more real and engaging for your audience.

 

These are just a few lessons I’ve been trying to learn over the past few months. I know that I’m not alone as an engineer who finds public speaking intimidating, but I know that intimidation can be overcome. Hopefully my words help, but if not, there are a bunch of other folks from our community who know you can do it too.

Take Your Next Step With PowerShell

I just got back from Boston, where I had a great time at the fall VMUG UserCon presenting on PowerShell and PowerCLI. Regardless of what the title was, the message behind the presentation was that everyone can become effective with PowerShell, and hopefully some of these building blocks help you along the way.

One thing I hadn’t fully thought through before is that not everyone out there has a development background. Talking about code with a room full of Systems Engineers finally helped me to realize this. A couple of techniques came up in conversation that I know helped me take steps with my coding, and obviously hold interest for others who are getting started, so that’s what I’d like to talk about today. I’ve got a little bit of squirrel syndrome and I see a shiny thing, so let’s go!

Expressions and sub-expressions

I’ve been using these for years, but I honestly had to go back and look up the name for this technique. Expressions harken back to mathematical expressions. Essentially, by wrapping statements or an object in parentheses, you are telling the PowerShell engine to process the commands as a group. It’s telling PowerShell about order of operations. Nothing too fancy about that.

What I find really fun with expressions is that they allow you to access properties of the group inline within your script. Take this code for example:

$InputFile = "C:\Temp\linux_servers.json"

$Servers = $(Get-Content $InputFile -Raw | ConvertFrom-Json).Servers
After reading in the contents of the JSON file, we directly reference the Servers array from the file, rather than individually parsing the elements of the file. Remember that PowerShell is all about objects: by wrapping Get-Content $InputFile -Raw | ConvertFrom-Json in a $( ), we can directly access the Servers attribute of the object represented within $( ).
Here’s another little example that takes it a bit farther. It’s long, so I could use backticks (I’ll get to those in a few) to make it a little more readable, but I want you to see this code in all its glory!
 if($($agents.data[$i].hostname.tolower()).substring($agents.data[$i].hostname.tolower().indexof("\") + 1,2) -eq $serverData.Hostname.ToLower().substring(0,2))
Super fun, right? Not all that readable, so let’s break it down:
$($agents.data[$i].hostname.tolower()).substring(...)
We use the $i variable (updated in a loop not shown here) to index through the $agents.data array. We access the hostname element and use the ToLower() method to normalize it for comparison. Finally, we use our handy dandy $() expression to take this output, and then we can call Substring() directly on that result. The point of this giant example is that you can take multiple different elements, do operations inline, and use the output directly via expressions.
If I didn’t do this justice for you, please check out the PowerShell blog, SS64, or Mr. Jeff Hicks, author of PowerShell Scripting and Toolmaking, for more information.

Functions

I’ve been known to write dirty code. What I mean is that I don’t always need the prettiest code, or the most denormalized, or code that conforms to style guides. My whole reason for writing is to make things more efficient, and that sometimes results in “dirty code”. That being said, I have a growing love affair with functions. This isn’t a coding primer, so I’ll let the SS64 & Scripting guys teach you about the nuances of a function, but essentially it’s a named piece of code that you can call from some other piece of code. Why would you want to do this? It makes your code more modular. If your code is more modular, you can reuse it in more places. If you can reuse it in more places, you can do more, faster. If you can do more faster: PROFIT!

Anyways, here’s an example I did for VMworld (with a touch of here-string help from my friends). Here-strings… sounds like another post…

$cred=Get-Credential
$Guest = Get-VM -Name "Sql01"
$DiskSize = 40
$Disk = "Hard Disk 2"
$Volume = "D"

Invoke-VMScript -vm $Guest -ScriptText "Get-PSDrive -Name $volume" -ScriptType PowerShell -GuestCredential $cred

$objDisk = Get-HardDisk -VM $Guest -Name $disk
$objDisk | Set-HardDisk -CapacityGB $DiskSize -Confirm:$false
$scriptBlock = @"
echo rescan > c:\Temp\diskpart.txt
echo select vol $Volume >> c:\Temp\diskpart.txt
echo extend >> c:\Temp\diskpart.txt
diskpart.exe /s c:\Temp\diskpart.txt
"@
Invoke-VMScript -vm $Guest -ScriptText $scriptBlock -ScriptType BAT -GuestCredential $cred

Invoke-VMScript -vm $Guest -ScriptText "Get-PSDrive -Name $volume" -ScriptType PowerShell -GuestCredential $cred
This very simple code example will use Invoke-VMScript to extend the drive on a Windows machine. Now consider the following code:

if($Guest.Status -ne "GuestToolsRunning" ){
  Write-Host "Too bad so sad, you'll have to manually extend the OS partition. Maybe you should fix VMtools..." -BackgroundColor White -ForegroundColor DarkRed
}
else {
  Set-OSvolume $Guest $Volume $Credential
}

### Function to Extend OS Volume
function Set-OSvolume{
Param(
$Guest,
[string[]]$Volume,
[System.Management.Automation.CredentialAttribute()]$Credential
)
$scriptBlock=@"
echo rescan > c:\Temp\diskpart.txt
echo select vol $Volume >> c:\Temp\diskpart.txt
echo extend >> c:\Temp\diskpart.txt
diskpart.exe /s c:\Temp\diskpart.txt
"@

Invoke-VMScript -vm $Guest -ScriptText $scriptBlock -ScriptType BAT -GuestCredential $Credential > $null
$(Invoke-VMScript -vm $Guest -ScriptText "Get-PSDrive -Name $volume" -ScriptType PowerShell -GuestCredential $Credential).ScriptOutput
}

Now that we’ve taken that same code snippet and turned it into a function, our code got cleaner, more modular, and something something profit. OHH, did someone say cleaner code?

Splatting

Formatting scripts for presentations, blogs, or community posts and actually making them readable can be a challenge, especially when you start getting long one-liners. While prepping for PowerCLI 201 (cough, top 10 session, cough) Mr. Luc and I had a bit of a debate around readability. I’ve historically been a fan of the back-tick ` method to have a command extend beyond a single line. Here’s what that looks like:

“kyle, what was I thinking using back-ticks?”   “I dunno man, but I see Luc shaking his head over there”
I’m OK admitting when I’m wrong, and while back-ticks may be good in some situations, in others they just look like crap. So enter the splat, or as my wife likes to call it, “splatting the gui”. I’m not going to try and give a deep dive on that concept, as there are already a number of great articles out there: Don Jones, author of the Month of Lunches books, Rambling Cookie Monster, the googs. Just as a real quick reminder, splatting is a way in which you can specify both the parameter and its assigned value within a hashtable. By using this splat hashtable, you can dramatically simplify your code and improve its readability… as with all things, in the appropriate situation.

With that reminder out of the way, what I wanted to talk about is the when/where of splatting, but first I think this picture shows you what I’m talking about in terms of simplified code.

“See how pretty this looks now Scott.”   “whoa”
Here’s what I’m finding with splats: while they make code more readable in a presentation or a blog post, you probably don’t want to use them while working out a problem or a new cmdlet. If you’re going to use them in your code, you’ll probably want to figure out your use case and parameters first, then back into using a splat. Likewise, I would suggest knowing your audience and their experience level. While talking about this subject in Boston the other day, I got a few blank stares. Even if it’s pretty, the code still has to be usable, and that means readable as well.
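For reference, here’s a quick side-by-side of the two styles using a hypothetical New-VM call (parameter values are made up for illustration):

```powershell
# Back-tick continuation: works, but a stray space after a tick breaks it
New-VM -Name "Web01" `
       -ResourcePool "Production" `
       -NumCpu 2 `
       -MemoryGB 8

# Splatting: parameters and their values live in a hashtable,
# then get "splatted" onto the cmdlet with @
$vmParams = @{
    Name         = "Web01"
    ResourcePool = "Production"
    NumCpu       = 2
    MemoryGB     = 8
}
New-VM @vmParams
```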

Hopefully these quick tips help. I’ll have a few more bits and bobs from my Boston talk in the next couple of days!

I Really Need to be Scared More

In college I was always the guy in the group who would volunteer to do more of the “real work” so that I wouldn’t have to speak in front of the class. The fear of speaking in front of people was so pervasive, I would take a lesser grade just to get out of it. Although I’m closing in on my forties and have learned much, that fear has never really left me…

A new pin to add to the collection

Yet when I think back to some of the most amazing events in my life, there has often been an element of fear to them. I’m not talking primal, afraid-for-my-life scared (in most instances), but the fear of the unknown. The one where doubt and uncertainty seep into your thoughts. The one that’s not quite terror, but something that gets the fight-or-flight adrenaline going. Like…

  • That time when a very good friend was visiting VT and she, my wife and I came up with the (adult beverage enhanced) idea to go sky-diving. When the next day came I thought for sure that we’d all bail. I would have if not for two reasons. A- The ladies went first. B- I got shoved out of the plane. Until the day I die, I will never forget the image as I rolled onto my back and watched that plane fly away from me. There is no doubt in my mind that I’ve never experienced a physical event that was as exhilarating as that one.
  • That time I decided to change jobs… More than once I’ve been told that I’m an enigma. I love comfort, but if I feel that I’m getting complacent, I get an itch to move. That doesn’t mean these changes come without a lot of sleepless nights and self-doubt. Each and every time has been an enlightening and enriching experience.
  • That time I became a Dad… I’m not sure this one requires an explanation. It’s the biggest, toughest, scariest job I’ve ever taken on. The day we left the hospital, I’m pretty sure I drove home at about 15 MPH. My dad said that the train of cars behind us was epic. I didn’t notice because I was staring straight forward, hands white-knuckled at 10&2 on the steering wheel.

Beyond fear, what all of these moments have in common is that they’ve shown me something about myself. They’ve reinforced my self-worth and they’ve invigorated me to do more. In every case I’ve found myself inspired by our humanity and our ability to help each other. Overcoming that fear can force you to acknowledge, sometimes against all that you hold true, that you’re capable of incredible feats.

Which brings us back to the present tense. For some unknown reason (perhaps it was the KBS eh Callahan?) I thought that I could impart something of myself onto my peers by presenting at a major technology conference. For some reason I thought that I would be able to bring some value to individuals by giving a touch of myself. I’m pretty sure that I knew I’d get rejected, which to be 100% honest was part of what convinced me to actually go through with submitting my proposal. Imagine my chagrin when we actually got accepted to speak… to present… to stand up in front of my peers… to expose myself to their critiques. Hundreds of them. HOLY $H!T!

Susan could tell you, I poured myself into preparing for VMworld 2017, but the doubts persisted. Even after I did OK in my first vBrownBag, I wasn’t sure what was going to happen. I arrived in the hall with plenty of time to spare. There was no other way, as I’m someone who needs to be prepared, but… being there and watching people continue to file in… Did I mention HOLY $H!T!

And then it was time. When there are hundreds of people watching you, not to mention that big TV camera, well, there’s nothing you can do but get on with it. I’ll let you be the judge of my performance, but that’s not what this post is about. It’s about the experience, and how you learn and grow from it. In the end my experience was amazing. It certainly was fear inducing, but like many things, once you get past the first few minutes the situation normalizes and you have no choice but to try and do your best. Focusing not on the pitfalls, but on the job at hand, can be a key element to overcoming that fear.

I think we did pretty well, but even more important, we helped some people. My favorite part of our presentation was answering questions and talking with folks afterwards. Being able to share and connect with other people was pretty humbling. I’ve worked on some decent projects and I’ve had my fair share of success professionally. However this experience was more like skydiving: I can assure you that I have never been so jacked up after a professional experience as I was after our presentation.

So despite the fear and trepidation, it seems that scaring myself has been a pretty solid way to grow as a person, and now as a professional. With all of our natural instincts to run from threatening situations, if you stay and face the situation, the possibility for growth can be mind blowing.

With that, it’s time to sign off. I’ve got another presentation to prepare for…

VM life-cycle with PowerCLI

I’ve heard many authors talk about how the story or the characters change through their writing process. I guess this is one reason titles aren’t decided on until the very end of the writing process. When I submitted my session proposal for “No Money, No Problem!” I had originally planned on writing about using PowerShell/PowerCLI as an automation/orchestration engine. I’ve learned over the years that it “may” not be the best idea to fight the muse. During the writing process for this presentation I followed where the muse led, and in the end this presentation ended up being much more about the potential ways that you can automate the life-cycle of a VM. I’m hopeful that if you attended the talk, it was still useful for you despite the slight pivot. Fifteen minutes is not a lot of time for a technical talk, so this post is a deeper dive into the content from my VMTN presentation. So without further ado…

Day 1

The activities on Day 1 are all about configuring the environment. The example I chose to use in my session was setting up a vDS. You can just as easily apply the same logic to something like setting up iSCSI datastores, clusters, or the like. The overall premise is that by leveraging PowerCLI you can speed up the delivery of your environments while delivering higher quality infrastructure. Let’s take a quick peek at how this is done in the context of delivering the elements necessary to run a Distributed Switch. Stripping out extraneous code and comments, what you’re left with is a five-liner that will give you a Distributed Switch.

$DC=Get-Datacenter -Name "NoProblem"
New-VDSwitch -Name "vds_iscsi" -Location $DC -LinkDiscoveryProtocol LLDP -LinkDiscoveryProtocolOperation "Both" -Mtu 9000 -MaxPorts 256

New-VDPortgroup -Name "pg_iscsi_vlan5" -VDSwitch $(Get-VDSwitch -Name "vds_iscsi") -VlanId 5 -NumPorts 16

Add-VDSwitchVMHost -VMHost esx-06a.corp.local -VDSwitch vds_iscsi
$pNic=Get-VMHost esx-06a.corp.local|Get-VMHostNetworkAdapter -Physical -Name vmnic1

Get-VDSwitch -Name "vds_iscsi"|Add-VDSwitchPhysicalNetworkAdapter -VMHostPhysicalNic $pNic -Confirm:$false

So we pretty immediately get to the heart of why I’m such a PowerCLI advocate with this example. When we look at a command like “New-VDSwitch”, it’s pretty intuitive what’s going on: we are creating a new vDS. It doesn’t make sense to go through the plethora of switches/options, as they are highly dependent on the situation. That being said, there are a couple of items I’d like to call out in this example.

  1. PowerShell allows you to run a command in-line and pass the resulting object directly to a parameter. That’s what you see happening here, where the Get-VDSwitch call is wrapped by $(…):
    $(Get-VDSwitch -Name "vds_iscsi")
  2. The power of the pipeline. By using this powerful tool you can string multiple commands together to create complex actions in a very small amount of real estate.

Day 2

New VMs

It’s my belief that if folks do nothing else but automate the provisioning of VMs, they can deliver immense value to their organizations, and do it quite quickly. In the code below we create an OSCustomizationSpec, leverage that within a temporary spec with static IP addressing, which is then leveraged in the New-VM example. This is a pretty basic example, but you can take it as far as you’d like. In a previous role this simple VM deployment evolved and became the 1700-line basis for our automated deployments.

$DomainCred=Get-Credential -Message "Enter Domain Admin credentials" -UserName "corp.local\Administrator"
New-OSCustomizationSpec -name Win2k12 -Domain "corp.local" -DomainCredentials $DomainCred -Description "Windows 2012 App Server" -FullName "PatG" -OrgName "VMware" -ChangeSid
Get-OSCustomizationSpec "Win2k12" | New-OSCustomizationSpec -Name "Win2k12_temp" -Type NonPersistent
for ($i = 1; $i -le 4; $i++){
  Get-OSCustomizationNicMapping -OSCustomizationSpec "Win2k12_temp" | Set-OSCustomizationNicMapping -IpMode UseStaticIP -IpAddress "192.168.10.10$i" -SubnetMask "255.255.255.0" -DefaultGateway "192.168.10.1" -Dns "192.168.10.10"
  New-VM -Name "WinApplication0$i" -Template "base-w12-01" -OSCustomizationSpec "Win2k12_temp" -ResourcePool $(Get-Cluster "MyCluster")
}

Walking through this example line by line:

Line 1: We enter domain credentials and store them in a PSCredential object for use later on.

Line 2: Using New-OSCustomizationSpec we create a base OS customization spec, which is used in…

Line 3: We create a temporary OS spec which we’ll leverage in the customization and deployment of our VMs. All of this, however, is just laying the groundwork for…

Line 5: Within the loop we take the previously created temporary OS spec and customize it for use with…

Line 6: We get to the meat of the matter, where we deploy a VM using the New-VM cmdlet and our newly created and updated temp OS spec to configure Windows for us.

New VMs – Linux from JSON

While the previous example will simplify matters, it’s not exactly the prettiest code, not to mention the fact that values are hard coded. If you want to start taking your automation to the next level you have to be able to accept inputs in order for the code to be more portable. Thanks to PowerShell’s ability to interpret JSON (as well as XML and a host of other formats) we can simply read in the desired configuration, and somewhat dynamically create the VM. If you want to include splatting you can go even further with your abstractions, but that’s a post for another day.

$InputFile = "c:\temp\linux_servers.json"
$Servers = $(Get-Content $InputFile -Raw| ConvertFrom-Json).Servers

foreach ($Server in $Servers)
{
    new-vm -name $Server.Name -ResourcePool $Server.Cluster -NumCpu $Server.CPU -MemoryGB $Server.Mem
}
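For reference, here’s the sort of structure the loop above assumes linux_servers.json contains. The property names (Servers, Name, Cluster, CPU, Mem) come straight from the loop; the values themselves are purely illustrative:

```json
{
  "Servers": [
    { "Name": "LinuxWeb01", "Cluster": "MyCluster", "CPU": 2, "Mem": 4 },
    { "Name": "LinuxWeb02", "Cluster": "MyCluster", "CPU": 4, "Mem": 8 }
  ]
}
```

Adding a server to the deployment is now just a matter of adding an object to the array, with no code changes at all.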

The other Day 2 activity I chose to highlight is reporting. After all, how will you know about the performance and capacity of the environment if you aren’t taking its pulse? Thanks to the kind folks at VMware, statistics can be exposed for use via the Get-Stat cmdlet, which is the star of this example.

# Define the reporting window (last 7 days here; adjust to taste) and initialize the results collection
$start  = (Get-Date).AddDays(-7)
$finish = Get-Date
$myCol  = @()

$objServers = Get-Cluster Demo | Get-VM
foreach ($server in $objServers) {
    if ($server.guest.osfullname -ne $NULL){
        if ($server.guest.osfullname.contains("Windows")){
            $stats = get-stat -Entity $server -Stat "cpu.usage.average","mem.usage.average" -Start $start -Finish $finish

            $ServerInfo = "" | Select-Object vName, OS, Mem, AvgMem, MaxMem, CPU, AvgCPU, MaxCPU, pDisk, Host
            $ServerInfo.vName  = $server.name
            $ServerInfo.OS     = $server.guest.osfullname
            $ServerInfo.Host   = $server.vmhost.name
            $ServerInfo.Mem    = $server.memoryGB
            $ServerInfo.AvgMem = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "mem.usage.average"} | Measure-Object -Property Value -Average).Average)
            $ServerInfo.MaxMem = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "mem.usage.average"} | Measure-Object -Property Value -Maximum).Maximum)
            $ServerInfo.CPU    = $server.numcpu
            $ServerInfo.AvgCPU = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "cpu.usage.average"} | Measure-Object -Property Value -Average).Average)
            $ServerInfo.MaxCPU = $("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "cpu.usage.average"} | Measure-Object -Property Value -Maximum).Maximum)
            $ServerInfo.pDisk  = [Math]::Round($server.ProvisionedSpaceGB,2)

            $myCol += $ServerInfo
        }
    }
}

$myCol | Sort-Object vName | Export-Csv "VM_report.csv" -NoTypeInformation

In the example above we simply iterate through the cluster and obtain statistics on our Windows VMs via the aforementioned Get-Stat cmdlet. Next we store all of the information we care about in the $ServerInfo custom object (built with Select-Object). These objects are collected in $myCol, which is ultimately what’s used for output at the end of the code snip.

I do want to take a moment to break down what’s happening in the calculation lines, as it can be a little off-putting if you don’t know what’s happening there. So let’s take the following line and break it down piece by piece.

$("{0:N2}" -f ($stats | Where-Object {$_.MetricId -eq "mem.usage.average"} | Measure-Object -Property Value -Average).Average)

$(…) This is the subexpression operator. PowerShell evaluates whatever is inside the parentheses first and returns the result, which can then be formatted, stored in a variable, or used in place.
“{0:N2}” -f This is the format operator. We’re formatting the value that results from the expression that follows “-f”; {0} is the placeholder for the first value after the operator. In this specific instance I chose to keep two digits to the right of the decimal place, which is indicated by N2 (numeric, two decimal places).

The command yielding our number just shows the amount of fun you can get into with pipelines. Starting just to the right of the “-f” option, we take our $stats object and pare it down using the Where-Object cmdlet. These values get further parsed out by piping into Measure-Object, which in this particular case is simply calculating out the average of the desired values in the object.
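To see the pieces in isolation, here’s a standalone sketch of the same pattern using made-up numbers in place of real Get-Stat output (the MetricId/Value property names mirror what Get-Stat returns):

```powershell
# Stand-in for real Get-Stat output: objects with a MetricId and a Value
$stats = @(
    [pscustomobject]@{ MetricId = "mem.usage.average"; Value = 41.2 },
    [pscustomobject]@{ MetricId = "mem.usage.average"; Value = 38.6 },
    [pscustomobject]@{ MetricId = "cpu.usage.average"; Value = 12.0 }
)

# Filter to one metric, average the values, then format to two decimals
$avg = ($stats | Where-Object { $_.MetricId -eq "mem.usage.average" } |
        Measure-Object -Property Value -Average).Average
"{0:N2}" -f $avg   # → 39.90
```

Swap -Average for -Maximum and you have the MaxMem/MaxCPU calculations from the report as well.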

After all that is said and done, we can use the ever handy Export-Csv to come up with a pretty CSV for the powers that be, which shows just how efficient your environment is humming along!

D-day

Just like this blog post, and the presentation it supports, all good things must come to an end. And so it is with infrastructure as well. In our final example, we use the metrics from the all-powerful and omniscient vRealize Operations Manager. It should probably come as no surprise to you that the metrics which are stored in the all-knowing vROps server can be exposed to you via PowerCLI. If you’ve used or tested vROps I’m guessing that one of the first things you checked out was the “Oversized VMs” report. We’ll use one of the statistics that make up this report in this last code snip:

$cred=Get-Credential
Connect-OMServer -Server "Demo" -Credential $cred -AuthSource "ADdomain"

$when = $(get-date).AddDays(-20)
$vkey = "Summary|Oversized"
$threshold = 0.75

foreach ($vm in $(get-vm|Select-Object -First 33)){
    $vrstat=$vm|Get-OMResource
    $avg = $vrstat|Get-OMStat -Key $vkey -from $when|Select-Object -ExpandProperty Value|Measure-Object -Average

    write-host $vm.name, $avg.average
    if($avg.Average -gt $threshold){
        write-host $vm.name, $avg.average
        if($vm.PowerState -eq "PoweredOn"){
            stop-vm -vm $vm -Confirm:$true -WhatIf:$true
        }
        Start-Sleep 3
        Remove-VM -vm $vm -DeletePermanently -RunAsync -Confirm:$true -WhatIf:$true
    }
}

Starting to work with the vROps cmdlets (contained within the VMware.VimAutomation.vROps module) has a little bit of a learning curve associated with it, so we’ll break down some of the key elements on a line-by-line basis again.

Line 2: vROps is a separate entity from vCenter, so we need to use the Connect-OMServer cmdlet to connect. One of the things that is pretty poorly documented, and which may trip you up, is the authentication model in use with this cmdlet. If you are using domain credentials, you want to use your short name along with the display name that you set up in vROps as the domain authentication source.

Line 9: In this case I chose to pass a VM object into Get-OMResource, but you can just as easily use the -Name parameter. Get-OMResource simply returns the vROps object that we’ll use with…

Line 10: Get-OMStat. The Get-OMStat cmdlet is where you actually start returning metrics out of vROps. In this case I’m using the “Summary|Oversized” statistics key. There are literally thousands of keys that you can leverage; I’d suggest perusing Mr. Kyle Ruddy’s post on this subject here. For the purposes of this very simple example I figured I’d use an average of the data returned over time to see if this machine is oversized and therefore a candidate for removal. Obviously in a real situation you’d want a lot more logic around an action like this.

$vrstat | Get-OMStat -Key $vkey -from $when | Select-Object -ExpandProperty Value | Measure-Object -Average

Breaking down the command piece by piece: we take the $vrstat object and pipeline it into Get-OMStat, where we narrow down the results by key using the previously defined $vkey variable, as well as the 20-day date duration defined in the $when variable. Since I’m only interested in the actual values returned, we pipeline through Select-Object -ExpandProperty Value to pull only the data we want. Lastly we use Measure-Object -Average to get an average for use in our simple example.

Line 13: A simple if statement checks whether we’ve crossed our pre-defined threshold. If we have, we move on to our D-day operations. Otherwise the script moves on to the next VM.

Line 16: You can’t delete a VM that’s powered on. Since we are deleting this VM anyway, there’s no need to gracefully shut down, so we just power it off.

Line 19: So sorry to see you go, VM. Remove-VM does exactly what it sounds like. If we omit the -DeletePermanently parameter we’ll simply remove the VM from inventory. In this case, we want it removed from disk as well, so the parameter is included. Lastly, we don’t want to wait for the remove operation before moving on to our next victim, so the -RunAsync parameter tells our script not to wait for a return before moving on.
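As a hypothetical follow-up, because -RunAsync hands control back immediately, Remove-VM returns a Task object you can poll later. A quick sketch, assuming an active PowerCLI session (note the safety switches are dropped here purely for illustration; keep them until you’re sure):

```powershell
# -RunAsync returns a Task object immediately; capture it to check progress later
$task = Remove-VM -VM $vm -DeletePermanently -RunAsync -Confirm:$false

# Later: review any tasks still running in this session
Get-Task | Where-Object { $_.State -eq "Running" } |
    Select-Object Name, PercentComplete
```

This is handy when you’re tearing down a batch of VMs and want to verify everything actually finished before logging off.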

NOTE: I don’t know if anyone will use this code or not (I surely hope it helps someone), however just in case you do, I’ve set -Confirm and -WhatIf to $true so that you don’t have any OOPS moments. Once you’re comfortable with what’s happening and how it’ll affect your environment, you can set these to fit your needs.

As I said at the outset, I hope you found this talk and post useful. I plan on doing a couple of deeper dives into some of the above topics, so if you’re still reading I appreciate your follows.

Lastly, I’d like to offer up a huge thanks to the good folks at VMTN and vBrownBag for the opportunities they offer people like me and you. If you find this interesting or feel that you have a voice to contribute, please do! A vibrant community depends on engaged people like you.

Thanks for reading, see you soon.

SD

T-minus…

In 10 days I board a jet plane for VMworld (HOLY CRAP!), which means the excitement is starting to ramp up. There are meetings, demos, events and of course parties to plan for, but how you approach a major conference is something that is very particular to the individual, and that’s what I’d like to spend a few minutes discussing today. There are about as many ways to approach a major conference as there are attendees. Really? No, but let’s just pretend so that I can share a few lessons learned from my conference experiences.

The Lab Guy

At my first major conference I spent the majority of my time sitting in the lab soaking up as much hands-on experience as I could. I would go hit the expo floor, grab a snack and an adult beverage, hide said adult beverage, and hit the lab for hours. Not to say it wasn’t valuable, but it damaged my spirit a little when, late in the conference, I learned that all of the labs would be available online after the conference ended… Oof.

After hearing this bit of spirit-breaking news I learned that there is a really valuable reason to be in the labs: the guided sessions. Any time you can sit down with an expert and pick their brains while gaining hands-on experience, well… that’s just a win right there.

The pros here are pretty obvious: you get to spend dedicated time learning, which is never a bad thing. But the fact that HOL will have the labs available on-demand after VMworld Europe diminishes the value somewhat.

The Expo/Party Guy

These strange beasts are very closely related to the Party People. Tech conferences are fun. There is a lot of beer and a lot of free stuff. But there’s always the person who devotes themselves strictly to these endeavors. A lot of really great information can be gleaned off the floor, but at a price… the dreaded badge scanners! If you’re ok with that, then you have a really great opportunity to learn about emerging tech.

Now the expo floor is great, and I have my fair share of headaches/swag to show for it, however there are some folks who make this the primary objective of their conference. The expo floor should be one tool in your conference bat-belt, but if it’s your bat-mobile… maybe it’s being overdone a little bit.

Breakdancers

Yeah ok, that sub-title is horrible, but I couldn’t think of a fun (and appropriate) way to label the folks who are all breakouts, all the time. This one is usually me. Attending conferences isn’t cheap, even if it isn’t coming directly out of your pocket. I usually want to return some value to my sponsor and that historically has taken the form of trying to take away as much directly applicable knowledge as possible. Next to the labs, this might be the most effective way to soak up as much technological knowledge as possible. In my mind, Breakouts are the meat and potatoes of a conference, so it’s hard for me to find a downside here. But like all things, including meat and potatoes, take it in moderation.

Contributors

“Active Participators” may be another way to frame this, and it’s a new one for me. At Dell EMC World 2017, I made it my mission to blog as much as possible. Going into VMworld 2017 I’m really making it my mission to get involved as much as I can. There are so many events that you can get involved with, I’d urge you to get out there and broaden your horizons a bit. On top of the parties located on the gatherings page, you can find opportunities to play games, get into the hackathon, blog in the village, and a whole host of other activities.

Whatever your approach, I hope you find the right balance of activities to make your conference amazing. See you in Sin City!

PS: If you need more things to do, come check out my sessions.

VMware library

The entire vSphere community (myself included) seems to be in a flutter over the release of the long-awaited Host Resources Deep Dive by Frank Denneman and Niels Hagoort. For me this recently resulted in a tweet-thread to see who had bought the most copies (I lost). The upside to this whole thing is I came across Mr. Ariel Sanchez Mora’s (you should follow him) VMware book list. I love book reviews, so with Ariel’s permission I’m stealing his idea! In fairness you should really go check out his page first, but make sure to come back! Without further ado, here’s the best of my VMware library.

 

Certification

Like many people, this is where I started. I’d heard horror stories about the VCP, so after taking Install, Configure, Manage I bought my first book, coincidentally (or not) written by the instructor of my class. I immediately followed it up by adding the second piece to my library.

The Official VCP5 Certification Guide (VMware press) by Bill Ferguson

VCP5 VMware Certified Professional on vSphere 5 Study Guide: Exam VCP-510 (Sybex) by Brian Atkinson

I think that in terms of the earlier books, I’d give the edge to the Sybex version. It covers the same fundamentals as the official guide, but goes much deeper.

Just last year I was wandering around the expo floor at VMworld 2016, bumming from failing my VCP6 (it was expected, but disappointing nonetheless), and I walked straight into a book signing for the new VCP6-DCV cert guide. It was destiny, or something like that.

VCP6-DCV Official Cert Guide (Exam #2V0-621) (VMware Press) by John Davis, Steve Baca, Owen Thomas

Here’s the thing with certification guides: the majority of the material doesn’t change from version to version. DRS is DRS is DRS (not really, but the fundamentals are all there). If you’re just getting started, or able to get a hand-me-down copy of an earlier edition, you’ll still be leaps and bounds ahead of folks who haven’t read the books. They can be a good way to get a grasp on the fundamentals if all you’re looking to do is learn. To that end, if your goal is to pass the test, you can’t go wrong with picking up an official certification guide. I know the VCP6-DCV guide provided an invaluable refresher for me.

For more on the VCP-DCV, please check out my study resources.

Hands On

Learning PowerCLI (Packt publishing) by Robert van den Nieuwendijk

I didn’t realize until just now that there was a second edition of this book released in February of this year! Regardless, this book is a great way to get started with PowerCLI; however, it’s more of a recipe cookbook than a tutorial. If you need a getting-started-with-PowerShell book, look no further than:

Learn Windows PowerShell in a Month of Lunches by Donald Jones and Jeffrey Hicks.

This is the guide to get started with PowerShell. Honestly I think the authors should give me commission for how many people I’ve told to go buy this book. It’s not a vSphere book, but if you want to be effective with PowerCLI, this book will help you on your way. It breaks the concept up into small manageable chunks that you can swallow on your daily lunch break.

DevOps for VMware Administrators (VMware Press) Trevor Roberts, Josh Atwell, Egle Sigler, Yvo van Doorn

“DevOps” to me is like “the cloud”. It means different things to different people. In this case the book focuses solely on the tools that can be used to help implement the framework that is DevOps. Nonetheless, it’s a great primer into a number of the most popular tools that are implemented to support a DevOps philosophy. If you’re struggling with automation and/or tool selection for your DevOps framework, there are far worse places to start.

Mastery

Mastering VMware vSphere series

The gold standard for learning the basics of vSphere. This title could just as easily appear under the certification section, as it appears on almost every study list. A word of warning, this is not the book to learn about the latest features in a release. That’s what the official documents are for. You may notice that I linked an old version above, and that’s because the latest version was conspicuously missing nearly all of the new features in vSphere 6. That being said, it’s another standard that should be on all bookshelves.

VMware vSphere Design (Sybex) Forbes Guthrie and Scott Lowe

Age really doesn’t matter with some things, and while that rarely pertains to IT technologies, good design practices never go out of style. This thought provoking book will help you learn how to design your datacenter to be the most effective it can be. I’d recommend this book to anyone who’s in an infrastructure design role, regardless of whether they were working on vSphere or not.

IT Architect: Foundation in the Art of Infrastructure Design: A Practical Guide for IT Architects John Yani Arrasjid, Mark Gabryjelski, Chris McCain

And then on the other hand you have a design book that’s built for a specific purpose and that’s to pass the VCDX. Much of the content is built and delivered specifically for those who are looking to attain this elite certification. This is a good book, but as someone who has no intention of ever going after a VCDX, I expected something a bit less focused on a cert, and a bit more focused on design principles. If unlike me you have VCDX aspirations, you definitely need to go grab a copy.

VMware vSphere 5.1 Clustering Deepdive (Volume 1) Duncan Epping & Frank Denneman

I really don’t care if this was written for a five-year-old OS or not. If you want to learn about how vSphere makes decisions and how to work with HA/DRS/Storage DRS/Stretched Clusters, this is an essential read. Put on your swimmies because you’re going off the diving board and into the deep end!

I’m just going to go ahead and leave a placeholder here for the aforementioned VMware vSphere 6.5 Host Resources Deep Dive. Having heard them speak before and read their other works, I expect this book to be nothing less than mind blowing.

If you liked this, please check out my other book reviews.

Thanks for visiting!

Champlain Valley VMUG – Summer recap

Better late than never, I wanted to provide some information to all of the CVVMUG’ers out there coming out of our successful June meeting.

First off another big thanks to our friend Matt Bradford (aka VMSpot) for his vRealize Operations presentation. Here are another couple of gems from Matt that may interest you further:

The technical talks seem to be a big hit, so much so that we already have a community presenter lined up for October to talk a bit about DRS. To celebrate, we’ll be giving away a couple copies of the brand new Host Resources Deep Dive book. Mind-blowing stuff. For a taste you should check out the recent Datanauts podcast featuring the authors, and if you make it to VMworld, their Deep Dive sessions are a must-attend.

Speaking of VMworld, as of this writing we are 62 days and counting until the kickoff. Please let us know if you’ll be attending. We’d like to do a meetup or a happy hour or something. Also stay tuned, you may be able to find us working and/or presenting at the event. 😉

Speaking of conferences, you can find my DellEMC World recaps and thoughts here. And here is a bunch of info about VMware announcements and happenings from the event. Lastly, we talked a little bit about the fracas with VMUG and the newly announced Dell Technologies User Community; you can get another voice on that matter here.

Now that the business end is behind us, we are trying to line up a BBQ social event for August, place and time TBD. While you’re penciling stuff in, circle October 12 on your calendar for our next Champlain Valley VMUG. We’re still working on a location, but we’ll have that for you soon!

Last and certainly not least, AJ, Mike, and I want to thank all of you, members and vendors alike, for being involved. This is a community for all of us, and we really value all that you bring.

And just because we have one speaker lined up for October doesn’t mean that you can’t also get up there. Mics are awaiting.