But, Wait! There’s More! Toying with PowerShell write speed.

Why hello old friend. It’s been a while. Sometimes life happens…

… and then you remember you actually still have a blog!

I have a random task in front of me at work where I have to do a lot of text manipulation. Obviously I’m going to PowerShell the crap out of this, but as I start framing out the script I realize I’m going to be writing a lot of strings. There’s no speed requirement for this operation, but because I’m weird like that, I wanted to figure out the fastest way to write a whole bunch of data to a file.

Here’s the wee little script I wrote to play around:


$StringLength = 400
$FileLength = 10000
#the character pool and output paths weren't shown in the original post; placeholders below
$charSet = [char[]]('abcdefghijklmnopqrstuvwxyz0123456789' * 12)
$arrTestFile = "C:\temp\arrTest.txt"
$lineTestFile = "C:\temp\lineTest.txt"

#test one. Build array, output array in one shot.
Measure-Command {
  Write-Host "Test: create array of characters and output entire array"
  $arrDataToWrite = @()
  for($i=0; $i -le $FileLength; $i++){
    $arrDataToWrite += $(-join $($charSet |Get-Random -Count $StringLength))
  }
  $arrDataToWrite |Out-File $arrTestFile
}

#test two. Iterate through, outputting as we go.
Measure-Command {
  Write-Host "Test2: create characters and output inline"
  for($line=0; $line -le $FileLength; $line++){
    $(-join $($charSet |Get-Random -Count $StringLength))|Out-File $lineTestFile -Append
  }
}

Methodology was pretty simple.

  • First test: build one big array full of strings of random characters. After building the array, output the whole array to file
  • Second test: build strings of random characters and use Out-File to append each string as we loop

The results, as you can see, are pretty clear. After several test runs of varying sizes, appending strings to the file in-line is pretty consistently ~3x slower than the array method. This makes sense when you think about the operations. In test two, every time I write to the file I have to access/open the file, write the data, close the file. If I’m counting right, that’s 3x the number of file operations… and that loop takes 3x longer…. huh…
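Incidentally, if you really want to chase speed, that per-write overhead can be avoided altogether by holding the file handle open yourself via .NET. This wasn’t part of my tests, just a sketch (the path and line count are placeholders):

```powershell
# Keep a single file handle open for the whole loop instead of paying the
# open/write/close cost on every iteration (path is a placeholder)
$writer = [System.IO.StreamWriter]::new("C:\temp\streamTest.txt")
try {
    for ($i = 0; $i -le 10000; $i++) {
        $writer.WriteLine("line number $i")
    }
}
finally {
    $writer.Close()
}
```

The try/finally guarantees the handle is released even if the loop throws partway through.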

Obviously it’s an over-simplification, but there are vastly more activities taking place in test two, hence the much longer execution time. Case closed, send this to print!

However, in the famous words of Ron Popeil: But wait! There’s more!

I typically do a little digging around before posting thoughts. Call it pragmatic, call it worrying, call it imposter syndrome, but the reality is I just don’t want to stick my foot in my mouth. So I’m reading a few articles when I come across one titled Slow Code: Top 5 Ways to Make Your PowerShell Scripts Run Faster from Ashley McGlone.

Sure enough, halfway down this informative article is a heading that reads “appending to arrays”. Well, that’s what we’re working on here! So I give this technique a shot, and voila, my fastest run just got 50% faster! Here’s the code from test one, further optimized:

#But wait, there's more!
#test three: Build the dataset within the loop and assign it to the array in one shot
$arrTestFile2 = "C:\temp\arrTest2.txt"  #placeholder path
Measure-Command {
  Write-Host "Test3: assign all data to array in one shot and output entire array"
  $arrDataToWrite_Fast = for($j=0; $j -le $FileLength; $j++){
    -join $($charSet |Get-Random -Count $StringLength)
  }
  $arrDataToWrite_Fast |Out-File $arrTestFile2
}

As Mr. McGlone states in the linked article, the For loop runs and stores data in memory. By assigning that For loop’s output to the array in one shot, you only have one expensive array operation rather than ‘N’ number of array operations.

As I stated at the outset, there’s no real reason behind this activity other than learning and having fun. To that end, I hope you keep learning, keep scripting and keep having fun.

Oh and Ashley, if you should by chance come across this, thanks for your article and for helping me learn something new!


A Tale of Two Architects – A Review

I just finished two really great books on architecture that I need to share with all of you. These two books were written for very different purposes and with very different voices, but I found them both to be enjoyable reads with educational content.

VDI Design Guide: A comprehensive guide to help you design VMware Horizon, based on modern standards

Johan van Amersfoort

The title doesn’t exactly roll off the tongue, but honestly that might be my biggest issue with this book…

My team has a small but challenging VDI environment, so I placed an order the day this book was released and immediately tore into that lovely Amazon box when it arrived. It’s been a running joke in the infrastructure and virtualization worlds for some time now that “this is the year of VDI!” Perhaps the fact that this prognostication hasn’t come to pass is due to the complexity of running a well-functioning VDI environment. Seriously, just think about all the various components that make up a VDI environment: your core infrastructure (hypervisor/SAN/network/security/compute), the client, the delivery model, never mind what’s actually in the guest! Johan does a really great job of walking through all of these components and so much more in his first book.

The style of the book emulates the VCDX design methodology. I am not, nor ever will be, a VCDX but I found his explanation of the methods to be much more engaging than in other tomes. What I mean by this is that there are some architecture books out there that are extremely dogmatic and really are just guides towards attaining a certification. Johan on the other hand does such a nice job walking the reader through the various design and architecture phases that I’d strongly consider giving this book to any burgeoning architect, whether they cared about VDI or not.

Now don’t let me fool you into thinking this is all method and no meat, because that would be a tragedy. Like I mentioned at the outset, my team has some VDI challenges, and with the author’s thorough and detailed dissection of all the various components of a VDI infrastructure, we had immediate technical takeaways. Johan walks through all the components that make up a VDI environment, providing his recommendations for why you may want to go in a specific direction, and just as importantly why you may not!

I’ve been told on a couple of occasions that I have a unique voice when I write. Given that, I have to say I thoroughly enjoyed Mr. van Amersfoort’s voice throughout this book. As was pointed out during his recent visit to the Datanauts podcast, reading this book is like sitting down with a colleague and chatting through a technical issue. It truly made it one of the most fun technical reads I can remember.

All in all, if you’re interested in VDI or general architecture principles, do yourself a favor and pick up Johan van Amersfoort’s first book.

You can find Johan at @vdidesignguide & @vhojan

IT Architect Series: The Journey

Melissa Palmer

I picked up this book solely because I’ve enjoyed Melissa’s blog (https://vmiss.net) for some time. In the review above, I alluded to having read some bad architecture books, so I intentionally went at this book with no expectations. I have to come right out of the gate and say that this book was one of the most interesting technology books I’ve read, in that it talked about technology very little. The subtitle for this book reads “A guidebook for anyone interested in IT architecture” and a guidebook is really what Melissa gave us.

The premise of this book is to help anyone interested in technology, or a burgeoning IT practitioner, understand just what an architect is and what it takes to become one. I can speak for no one but myself and my observations over the past 20 or so years in IT, but it seems that many systems architects just kind of eventually land in that role. They get good in one area, and maybe good in another. After some time they end up being the smartest gal/guy in the room. This is not the book to help with that sort of an endeavor, and I love it! In writing this book Melissa provides a path, one that worked for her en route to VCDX, on how to take a more active approach to becoming a solution provider. A sampling of the topics covered includes “Learning New Skills”, “Infrastructure Areas of Expertise” and “Architectural Building Blocks”. The format is more about the journey than a prescriptive roadmap. In fact, throughout the book the reader is encouraged to take a step back and see how the information shared fits within their role and worldview.

While I really enjoyed the approach and Melissa’s voice, my knock on this read is that it could use a copy edit. If you are someone who has ever joined in on the “On Premises” debate, please approach this book knowing that there is some small amount of errata present. As a wanton comma abuser, I’m certainly not throwing stones and I hope this doesn’t stop you from picking up the book; the content contained within absolutely makes up for any grammatical oopsies.

The primary content of the book clocks in at just under 200 pages. If you already are or aspire to be an architect, you are going to read technical guides that are way longer than this! Just like with her blog, Melissa’s personality carries through this book. It’s obvious that a passionate person wrote this piece in an effort to help others, all the while maintaining a sense of self. A perfect example: when discussing assumptions towards the end of the book, Melissa creates an analogy where she uses the word ‘chicken’ ten times in a paragraph. I literally laughed out loud, to which my wife responded “Is your geek book amusing, dear?” Yes, yes it is.

Many IT practitioners discount some of the “softer” skills required in a business environment. It’s in this vein where I think the book really shines. If you are someone who has a hard time communicating in either written or verbal form, you are probably going to have a hard time obtaining an architect-level role. Melissa spends a significant portion of the book emphasizing what these skills actually are, why you need them and tips on how to improve them. I’m thinking about getting a couple more copies of the book for some folks who could really use some self-reflection in this area…

Obviously anyone with aspirations of reaching an architect level would benefit from picking up this guide. If I were a college professor teaching folks what it was like to work in IT and to give them a broad perspective, I’d have them read this book. As someone who’s worked in an architectural role, I learned a number of things as well, meaning even seasoned IT pros can benefit from picking this up. Reading this book over the past few days, it became obvious that Melissa cares about people and the solutions they provide, so by that token perhaps we could all benefit from the reflective approach conveyed throughout this book.

You can find Melissa at @vmiss33 & @ITArchJourney

VMworld 2018 – FOMO? Never fear!


In just a few days, friends, colleagues, teachers, luminaries and thought leaders will be converging on Las Vegas for the biggest and best virtualization conference in the world. If you’re in the same shoes as me, VMworld 2018 just isn’t in the cards. Hearing that Tony Hawk, Run DMC, The Roots and Snoop would be a part of it had me a bit bummed. However, it was when I heard that Malala would be participating in the general sessions that I turned that attitude around.

It was then that I realized there is still a wealth of ways to experience VMworld, even when you’re 2,638 miles away from Las Vegas, not that I’m counting or anything.

General Sessions

Like I alluded to above, it was seeing that Malala would be participating in the general sessions that helped turn my attitude around. The reason for this is that VMware makes an effort to broadcast the General Sessions live.

If you haven’t been to a major conference, these sessions are the reason why a lot of people refer to conferences as a “show”. It’s time for the heavy hitters, for the big production and for news to drop. The general sessions that I’ve attended tend to follow a pattern:

  • Day 1: State of the Union. Let’s highlight our successes, broad industry trends and how we are positioned to respond to, or better yet, lead those trends.
  • Day 2-N: Thought leaders. Talk about growth and what the future holds. Not everything that you see at a tech conference will become reality. I feel like it’s these days where you see organizations testing the water to see how ideas and roadmaps feel among the various stakeholders.
  • Last day: Honestly these are my favorite sessions. The show’s almost over, some folks have already left town and honestly the people who are left are likely kind of burnt out. VMworld always saves something cool for those brave and/or hardy folks who are left standing on the last day.

Now unfortunately that final cool session is only for attendees. It’s probably a good reason to start working on your budget justification to attend next year… For the Monday and Tuesday sessions however, you’ll want to set a calendar reminder to tune in at 9:00AM PT for the general sessions live on VMworld.com

vBrownBag Tech Talks

The vBrownBag talks are one of my favorite parts of VMworld. If you’re reading this blog, you already know about the crew, but if by some chance you don’t know… vBrownBag is a community of passionate people who want to share and facilitate sharing within the IT Infrastructure community.

Getting my feet wet at my first #vBrownBag session

The other cool part about vBrownBag is that they produce Tech Talks. These are short community sessions ranging from just a few minutes up to a half hour in length. You can check out my 2017 session on life as a SMB in a big Enterprise world or PowerCLI for examples. (Go easy on me, I was nervous about my other sessions). The whole point of vBrownBag is sharing, and the very cool people who produce the Tech Talks do a damn fine job at it. If you want to follow along live, you can check out the action on vbrownbag.com, or if you’re unable to participate live, all sessions are posted to the vBrownBag YouTube channel, usually within an hour or so.

Community members coming together to share with each other. For everyone involved it’s a labor of love and how can you beat that?

VMware {code} Power Sessions

I am super excited about this new offering! And maybe a touch bummed that I’m not going to be participating… But just because I won’t be presenting doesn’t mean that I won’t be following along. Similar to what the vBrownBag folks are doing, the VMware community team will be hosting expert-led presentations from community members, but with a focus on DevOps and developers. All the action will be live streamed via the {code} Facebook page. You can check out the entire line-up by searching for CODE sessions in the content catalog.


Since we’re talking about community, let’s not forget about VMTN. The VMTN page is always a hotbed of activity during VMworld. I’m not sure why it’s a secret, but nevertheless it is kind of the secret sauce to staying in the know during the show. If you want a place to participate in contests, watch live streams, and chime in with all of your community friends, then you might want to head over to the VMTN page.


Holy crap! How can I forget the bloggers! While writing a blog post! Shame!

In my mind the blogosphere (is that still a term?) is the lifeblood of our vCommunity. It’s where passionate people go to talk about the things that matter most. Where we share our successes, our trials and all of the cool things we learn about! What better place to do that than at VMworld? VMware has a really strong blog presence that’s only gotten stronger over the past year or two. I’m obviously partial to the PowerCLI blog, but next week I’ll be keeping an eye on the official VMworld Blog. If you can’t make the general sessions, this is where the news will drop. I’ll just leave that there…


Beyond the official blog, there are dozens of people blogging about nearly everything that happens at the show. As someone who’s live blogged a general session, I can tell you that the only reason someone would blog from a show is to share with others. Here’s a good place to start if you’re looking for some of the fine folks who’ll be attempting to document everything that happens in Vegas next week. Well, not everything…

Beam me up … me

Sorry (not sorry), horrible dad joke there.

Did you know you can drive a robot at VMworld? Seriously. Ok, not a robot, but a BEAM. Don’t know what a BEAM is? It’s basically FaceTime mounted on a remote control car. You can register to drive one of these super awesome RC devices around the VMTN space. How awesome is that?!?


I’m sure there are more ways to experience VMworld if you’re not there, but honestly I’m tired just writing this, let alone trying to sample all of the above options. No matter which way you go, there’s definitely no need for FOMO: you can still make the most of VMworld from afar.

Start PowerShell’ing your Backups

VeeamOn was such a great and educational show. For my small part of it, I wanted to share how you can automate the deployment/management of your infrastructure. Why would you want to automate? Seriously, did I really just ask that… Speed and standardization/predictability are the primary drivers for scripting via PowerShell. Or awesomeness. Yup, we’ll stick with the fact that PowerShell is full of awesomeness.

I started my VeeamOn presentation with an example of just how awesome you can become with your PowerShell scripts. In the first part of the video you see how long it takes a person to manually configure a job. Keep in mind this is someone who pretends to know what they are doing, but still errors happen. The second example shows just how long it takes to implement the exact same job via script. So let’s take a moment to parse the whys of PowerShell:

  • Speed. When I made this video, I had practiced each click. It still took 2 minutes and 45 seconds to create a backup job. Conversely, the script took 30 seconds to complete. This is for one machine. Now think about if you’re setting this up for 100 machines… manually you’re looking at about four and a half hours. Via script, 50 minutes. Yes this is dirty math as there are many factors that go into the equation, but you can’t argue with the fact that scripting is more efficient, even factoring in the ~20 minutes I spent writing the script.
  • Standardization. I’ve worked with Veeam B&R for a number of years now and as I mentioned above, I practiced the workflow to try and take out any bias from the equation. Still I made an oops. We’re human after all. As you’ll see in the code below, by standardizing you can remove much of the human variable (bad pun intended) and produce a more consistent output.

Without further ado, here’s the code and descriptions of what’s going on.

The snap-in check: If the PSSnapIn for Veeam isn’t loaded, let’s go ahead and load it. You’re using ISE, right? So you probably want it in your editor session as I talked about here.
The SetVariables function: When you stop and think about it, backup jobs are complex and have lots of options. This is an uber-simple example and still you’ve got over a dozen lines of configuration. By moving this off into a function you make your code both more readable and repeatable. Thankfully the developers have used very intuitive naming conventions for the job options!
Connect-VBRServer: Connect to the Backup & Replication server. Full disclosure, I set the $cred variable in a previous script and got lazy. Sorry!
The remaining lines: Finally, we are getting to the action. It’s exciting! But it’s also not. We set most of the options already. At this point all we need to do is execute the various VBR cmdlets.

  • Add-VBRViBackupJob
    Short and simple, create the backup job… Or is it????  We create a job, but we also leverage:
    Find-VBRViEntity: Find the VMware entity to backup that we specified in our variables above.
    Get-VBRBackupRepository: Fairly self-explanatory, find the backup repository that the job will use.
  • Set-VBRJobOptions
    There is a flow to objects created via the VBR PowerShell cmdlets: create the object, then set options on the object. This is the latter. These are the options configured in the #set JobOptions section of the script.
  • Set-VBRJobAdvancedBackupOptions
    Do I need to explain what this does?
  • Set-VBRJobSchedule
    Again, pretty self-explanatory right?
  • Add-VBRViBackupCopyJob
    Remember when I screwed up in the video above? Why do you have to bring up those painful memories??? Anyway, moving on, this one creates a new backup copy job based on the previously set variables.

Each of the above cmdlets has a significant number of options to fit your environment. I’d encourage you to peruse the Veeam PowerShell Reference guide for additional options.
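A quick way to peruse those options without leaving the console is PowerShell’s built-in help plumbing, which works against the snap-in’s cmdlets like any other (Set-VBRJobSchedule is one of the cmdlets used above):

```powershell
# Browse a VBR cmdlet's documentation and enumerate its parameter names
Get-Help Set-VBRJobSchedule -Detailed
(Get-Command Set-VBRJobSchedule).Parameters.Keys
```

This is handy for spotting parameters like scheduling kinds or retention switches before you wire them into a script.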

Here’s the code. We’ll cover more advanced workflows in our next post in this series. Stay tuned!

#VeeamOn Simple Backup Job Demo
if( ! $(Get-PSSnapin -Name VeeamPSSnapIn -ea SilentlyContinue)) {
  Add-PSSnapin VeeamPSSnapIn
}

function SetVariables{
  $Global:PWord = ConvertTo-SecureString -String "VMware1!" -AsPlainText -Force
  $Global:cred = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList "lab\Administrator", $PWord

  #placeholder values -- the environment-specific settings from the original demo aren't shown
  $Global:VBRserver = "vbr.lab.local"
  $Global:VBRBackupName = "Demo-BackupJob"
  $Global:VBRBackupCopyName = "Demo-BackupCopyJob"
  $Global:VBRBackupEntity = "Demo-Tag"
  $Global:VBRRepositoryName = "Default Backup Repository"
  $Global:VBRReplRepositoryName = "Copy Repository"
}

#Call SetVariables function
SetVariables

if ( ! $(Get-VBRServer -Name $VBRserver) ) {Connect-VBRServer -Credential $cred -Server $VBRserver}
sleep 5

$null=Add-VBRViBackupJob -Name $VBRBackupName -Entity $(Find-VBRViEntity -Tags -Name $VBRBackupEntity) -BackupRepository $(Get-VBRBackupRepository -Name $VBRRepositoryName)

#set JobOptions (the job must exist before its options can be retrieved)
$Global:VBRJobOptions=Get-VBRJobOptions -Job $VBRBackupName
$VBRJobOptions.JobOptions.RunManually = $false
$VBRJobOptions.BackupStorageOptions.RetainCycles = 3
$VBRJobOptions.BackupStorageOptions.RetainDays = 7
$VBRJobOptions.BackupStorageOptions.EnableDeletedVmDataRetention = $true
$VBRJobOptions.BackupStorageOptions.CompressionLevel = 6
$VBRJobOptions.NotificationOptions.SendEmailNotification2AdditionalAddresses = $true
$VBRJobOptions.NotificationOptions.EmailNotificationAdditionalAddresses = "test@test.com"

$null=Set-VBRJobOptions -Job $VBRBackupName -Options $VBRJobOptions

$null=Set-VBRJobAdvancedBackupOptions -Job $VBRBackupName -EnableFullBackup $true -FullBackupDays Friday -FullBackupScheduleKind Daily

$null=Set-VBRJobSchedule -Job $VBRBackupName -DailyKind WeekDays -At 01:00

$null=Add-VBRViBackupCopyJob -DirectOperation -Name $VBRBackupCopyName -Repository $(Get-VBRBackupRepository -Name $VBRReplRepositoryName) -Entity $(Find-VBRViEntity -Tags -Name $VBRBackupEntity)


Getting started with Veeam for PowerShell

Shame on me! Right after VeeamOn 2018, life threw my family some major league curve-balls and I never had a chance to get my code shared out. Time to fix that…

For those of you who may be coming at this fresh, Veeam has provided a PowerShell snap-in for configuring, maintaining and monitoring Backup & Replication. Simply choose to install the Veeam Backup & Replication console from the B&R .iso file or follow the instructions in this KB article. When you launch the Backup & Replication console, you’ll find a PowerShell menu option under the main Console menu. The way I write, I really need an ISE, and as built you just get a plain PoSH window rather than ISE. So I did a little sleuthing… and I mean little. I typed the command Get-History and lo and behold, the VBR shortcut fires off a PowerShell script located at:

“C:\Program Files\Veeam\Backup and Replication\Console\Install-VeeamToolkit.ps1”

It’s always an interesting read to see how other people solve problems. Basically the script does a bunch of validation and then calls another script, C:\Program Files\Veeam\Backup and Replication\Console\Initialize-VeeamToolkit.ps1, which does more validation, sets aliases, options etc., and finally we see two things:

  • The script functionality is delivered by VeeamPSSnapIn
  • The functions Get-VBRCommand and Get-VBRToolkitDocumentation are defined in the Initialize-VeeamToolkit.ps1 script. You’ll need to take another path if you want to make use of them, but I’m gonna help you out there in a minute.
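One simple way to pick up those two helper functions in your own session is to load the snap-in and then dot-source the initialize script yourself. The paths come from above; this is a sketch of one approach, and the script may set aliases and options you don’t need:

```powershell
# Load the snap-in, then dot-source the toolkit script so its helper
# functions (Get-VBRCommand, Get-VBRToolkitDocumentation) land in this session
Add-PSSnapin VeeamPSSnapIn
. "C:\Program Files\Veeam\Backup and Replication\Console\Initialize-VeeamToolkit.ps1"
```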

TLDR: Add-PSSnapin VeeamPSSnapIn

Above I mentioned Get-VBRToolkitDocumentation. Interestingly, this function fires up the Veeam documentation at https://helpcenter.veeam.com/docs/backup/powershell/

It’s a pretty comprehensive document set with some great examples, so I’d highly recommend checking it out and exploring.


Get-VBRCommand is an interesting one. It basically uses Get-Command to list out all of the VBR commands. If you drill into an individual command you get some more info about what’s under the cover, but what’s really interesting to me is if you take a peek at some of the numbers coming out of this function. As of this writing, there are:

  • 510 individual cmdlets in the Veeam Backup and Replication snap-in
  • 27 Verbs
  • 259 Nouns

If you’re like me, that’s a pretty intimidating sample to tackle. But if you look at the data slightly differently, it gets much more manageable.

Starting with the verbs, we see that 1/5 of the cmdlets are Gets, and when you combine that with Set and Add, over half of all the cmdlets from this snap-in are accounted for.
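These counts are easy to reproduce yourself. A sketch along these lines should work, assuming the snap-in is loaded (in PowerShell 3.0+ the -Module parameter of Get-Command matches snap-in names as well):

```powershell
# Group the snap-in's cmdlets by verb and by noun to see the distribution
$cmds = Get-Command -Module VeeamPSSnapIn
"{0} cmdlets / {1} verbs / {2} nouns" -f $cmds.Count,
    ($cmds | Group-Object Verb).Count,
    ($cmds | Group-Object Noun).Count

# Top five verbs by cmdlet count
$cmds | Group-Object Verb | Sort-Object Count -Descending | Select-Object -First 5 Name,Count
```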


Now when you turn to look at Nouns, the data is very different.


Wow… verbs are consolidated, however nouns are numerous. This is odd at first glance, but when you think about it, it totally makes sense. The Veeam B&R snap-in is meant to support a variety of storage/backup/infrastructure products, but the actions you can perform across these products are more or less consistent. This consistency is great for you as you get started on your way towards automating your infrastructure with PowerShell. We’ll start going deeper into that infrastructure management in our next post, stay tuned!

If(Code -ne ISE){DontFret}

I’m behind the times, as usual…

I’ve been reluctant to give up my PowerShell ISE with ISESteroids for years now, but I think it’s finally time to get on board with VS Code. It’s definitely a bit of a shift, so I thought I’d add my thoughts to the chorus of users who’ve made the switch from ISE to Visual Studio Code.

I have an odd sense of humor, so I thought it would be fun to use ISE to download and install its replacement. Yes, I know it would’ve been faster to simply get it via browser, but did I mention my odd sense of humor…

$Uri = "https://go.microsoft.com/fwlink/?Linkid=852157"
$download = "$env:TEMP\VSCodeSetup.exe"  #download path wasn't shown in the original post

Invoke-WebRequest -Uri $Uri -OutFile $download

#Inno Setup switches: skip the prompt, install silently, write a log
Start-Process -FilePath $download -ArgumentList "/SP-","/silent","/Log"

So after allowing UAC to run the file, we have a base install of VS Code that we can launch by simply typing the command ‘code‘ inside any Windows command interpreter.

Since I write pretty much exclusively in PowerShell, there are a couple of things that I need to do right out of the gate to make this tool useful. First off, Code is meant to be portable and to fit many needs, so there isn’t a ton installed out of the gate. Code handles this conundrum via extensions. To add an extension, simply click Extensions in the Activity Bar. This will open up the extension marketplace. In the marketplace simply search for the desired extension, in this case PowerShell, and hit install. Code will make you reload your session in order to make use of the newly added PowerShell features.
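As an aside, extensions can also be managed from the command line, which is handy if you’re scripting a workstation build. The identifier below is the marketplace ID of Microsoft’s PowerShell extension:

```powershell
# Install the PowerShell extension and confirm what's installed
code --install-extension ms-vscode.PowerShell
code --list-extensions
```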

I’m almost exclusively going to be writing PowerShell, so I’d like this to be configured as best I can for that purpose. Step 1: make PowerShell the default terminal. We can do this a couple of ways, although it looks like the folks on the Code team may have changed the default behavior since the last time I looked. But I digress…

We can get to our user settings from the File menu -> Preferences -> Settings; however, I want to use one of the powerful features of Code, the command palette. The command palette is a very dynamic and powerful tool in Code, but much has already been written on it, so no need to retread the same ground. After entering the palette by typing Ctrl+Shift+P, I simply start typing what it is that I’m looking for, in this case default shell, and IntelliSense figures out the rest. After selecting the setting I’d like to change, VS Code kindly offers me some suggestions. Sure enough, after selecting the PowerShell option for “Terminal: Select Default Shell” I see a new setting in my user settings JSON file.


Finally, when I go to check out my terminal, voila, PowerShell is my default:

Next up, I want to make sure that my default language for VS Code is PowerShell. This time I manually edit my settings (File menu -> Preferences -> Settings) and add the line "files.defaultLanguage": "powershell".

The reason for this is that by default VS Code files are in a plain text format (*.txt, *.gitignore). I’m sure that’s great for a lot of folks, but I use PowerShell in a day-to-day operational role. I’m not always writing code for reuse; often I’m writing and executing ad hoc scripts to be run via my friend F8. By changing the default language, when I start up a new, untitled and yet-to-be-saved script, Code knows that I’d like it to be interpreted as PowerShell, and they even put the pretty PoSh icon in the tab for my script.
Oh, and you can also see that IntelliSense recognizes the language and provides context-specific assistance as well. BTW, for this configuration I found that I needed to restart Code completely, not just reload the window, for the defaultLanguage setting to take effect.

The last thing I need to customize to make this feel like home is to set the switch for “powershell.integratedConsole.focusConsoleOnExecute”: false. Code’s default behavior is to move context from the script selection to the console on execution. If you’ve been using ISE, you’re used to the context staying at the script selection. Setting this switch to false will replicate the ISE behavior of not changing focus.

At this point I have Code configured to feel familiar enough that I can start using it for some functions. If you just want to jump to the punchline, here’s my very simple settings.json file.

{
  "terminal.integrated.shell.windows": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
  "files.defaultLanguage": "powershell",
  "powershell.integratedConsole.focusConsoleOnExecute": false
}

As a long-time ISESteroids user, though, I’m having a really hard time making the shift from having a persistent Variables window to using the Code debugger. I’m not sure if I’m being a crotchety old man pounding away at the keyboard or if this will really be a roadblock for me making the shift permanently to Code. I guess time will tell… In the meantime, if you’re new to using this tool, I’d urge you to pay attention to the tutorial Microsoft provides on install of VS Code. It’s quite a solid walkthrough and will help with some of the nuances of getting started.

If you have any additional tips and tricks on making the shift from ISE to Code, please reach out.

Until next time, happy writing.

Exchange Out of Office via PowerShell

I think we’ve all been in the situation where we take off on a lovely vacation and at some point in those first 24 hours you go “OH $^%# I forgot to set my out of office message!” We recently had that happen in a critical department at work, and rather than deal with the ECP, I wanted to script this out since the situation is not abnormal…

So without further ado:

$user = "jdoe"                              #mailbox identity (placeholder)
$forwardaddress = "manager@contoso.com"     #supervisor's address (placeholder)
$enddate = get-date "April 22"
$autoreply = "Scheduled"                    #Enabled, Disabled or Scheduled
$forward = $false

### if you would also like to setup a forward, change the value of $forward to true
$inmessage = "I am out of the office until $($enddate.toshortdatestring()). For immediate access, please contact my supervisor: $forwardaddress. Thank you and have a great day!"
$outmessage = $inmessage
$fork = $true                               #deliver to the mailbox AND forward

Set-MailboxAutoReplyConfiguration -Identity $user -InternalMessage $inmessage -ExternalMessage $outmessage -EndTime $enddate -AutoReplyState $autoreply

if ($forward) {
    Set-Mailbox $user -ForwardingAddress $forwardaddress -DeliverToMailboxAndForward $fork
}

lines 1-5 & 8-10: Set our variables, because we don’t HARD-CODE. The first rule of PowerShell club is “YOU DON’T HARD-CODE VALUES!” Actually, this is ripe for converting to parameters in v2…
line 12: Here we come to the meat of the script, the Set-MailboxAutoReplyConfiguration cmdlet. As with most well-formed cmdlets, once you find the relevant one, the parameters are pretty self-explanatory… That being said, here’s a breakdown:

-Identity   This is the unique identifier for the mailbox. It accepts many value types that you can review here, but the major elements you’d pass in here are name, CN or email address.
-InternalMessage/-ExternalMessage   These are the messages that you’re sending out to your respective users. There are lots of examples about how to configure this parameter if you want to use HTML, but if you want to just add a simple text message, type away.
-EndTime   OK, I’m starting to feel dumb now, because this is pretty self-explanatory. This is when you want the Out Of Office message to expire. In my example script I use this as part of the message as well as a script parameter.
-AutoReplyState   You have three options here: “Enabled” (send my OOO messages), “Disabled” (don’t send my OOO messages), and “Scheduled” (send my OOO messages during the timeframe I already informed you about, dummy!).

line 14: On line 14 we check if the $forward variable is set to true. If not, have a good day! If it is true, we proceed to…

Line 15: We use a separate cmdlet here, Set-Mailbox. The Set-Mailbox cmdlet is used for many purposes related to general mailbox maintenance. There are dozens of parameters for this cmdlet, but in our case we use the ones specifically needed to forward email messages on to the vacationing employee’s manager:

-ForwardingAddress   Where you are sending the emails to.
-DeliverToMailboxAndForward   If you want to forward emails AND deliver the messages to the original destination mailbox, set this to $true. If you’d like the mails to forward only, set this to $false; i.e., messages are NOT delivered to the original mailbox when this is $false.
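Since the script is ripe for converting to parameters, here’s a rough sketch of what a v2 might look like. The parameter names and defaults here are my own placeholders, not a finished script:

```powershell
### Hypothetical v2 sketch: same script, with the variables promoted to parameters.
### Parameter names and defaults are illustrative, not from the original post.
param(
    [Parameter(Mandatory = $true)]
    [string]$User,

    [Parameter(Mandatory = $true)]
    [string]$ForwardAddress,

    [datetime]$EndDate = (Get-Date).AddDays(7),

    [switch]$Forward
)

$message = "I am out of the office until $($EndDate.ToShortDateString()). " +
           "For immediate access, please contact my supervisor: $ForwardAddress."

Set-MailboxAutoReplyConfiguration -Identity $User -InternalMessage $message `
    -ExternalMessage $message -EndTime $EndDate -AutoReplyState Scheduled

### A [switch] gives us the forward toggle for free: pass -Forward to enable it.
if ($Forward) {
    Set-Mailbox $User -ForwardingAddress $ForwardAddress -DeliverToMailboxAndForward $true
}
```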

There is a lot you can do with Set-MailboxAutoReplyConfiguration, but when you run Get-MailboxAutoReplyConfiguration there ain’t a lot of options…


2018-04-18 11_16_25-mRemoteNG - confCons.xml - Exchange

In this final example we see that our message is set as we configured and the end date of our scheduled message has been set for our forgetful vacationer. As in previous experiences (and blog posts), the Exchange PowerShell modules lay waste to what could have been a tedious problem.

Until next time…

This VM is getting long in the tooth…

Leverage Operations Manager to check the “Summary|Oversized” metric and, if the VM crosses thresholds… So long, farewell, auf Wiedersehen, good night.

### Connect to vROps server
Connect-OMServer -Server "Demo" -Credential $cred -AuthSource "ADdomain"

### set some things that will be used later when we do some things
$when = $(get-date).AddDays(-20)
$vkey = "Summary|Oversized"
$threshold = .75

foreach ($vm in $(get-vm|Select-Object -First 33)){
    ### look up the vROps resource for this VM, then get-stats on it
    $vrstat = Get-OMResource -Name $vm.Name
    $avg = $vrstat|Get-OMStat -Key $vkey -From $when|Select-Object -ExpandProperty Value|Measure-Object -Average

    write-host $vm.name, $avg.average
    ### Remove VMs that surpass the threshold
    if($avg.Average -gt $threshold){
        if($vm.PowerState -eq "PoweredOn"){
            ### Confirm and WhatIf have been set to true in order to protect the innocent
            stop-vm -vm $vm -Confirm:$true -WhatIf:$true
            Start-Sleep 3
        }
        ### Confirm and WhatIf have been set to true in order to protect the innocent
        Remove-VM -vm $vm -DeletePermanently -RunAsync -Confirm:$true -WhatIf:$true
    }
}

Get-EsxCli oh my!

This post is the first detailed post coming out of my presentation Ditch the UI – vSphere Management with PowerCLI at the CT VMUG UserCon. Stay tuned for the full baker’s dozen of code posts!

esxcli, oh my…

I was finishing a task the day before my presentation at the CT VMUG UserCon, Ditch the UI, when I happened to come across the cmdlet Get-ESXCli. My first thought was “No. Really? This command exists and I’m just finding out about it now?” sigh…

But yes, it’s true: the fine folks on the PowerCLI team provided us this cmdlet some time ago, and I’m just learning about it now. Well, better late than never, as they say.

Before we go any further, let’s take a minute to talk about esxcli. ESXCLI is a command line interface (duh) for ESXi systems. It’s a means to modify VMHost systems, mostly (but not entirely) as it relates to the hardware. It’s an early and essential, but limited, way of configuring some of the management elements for your ESXi hosts.

Now I mention its limitations because they are essential to our story. The first big limitation is the structure of the tool: a hierarchical tree of namespaces. This means that when you want to access, oh I don’t know, VMFS information/properties/methods, you had to know that vmfs exists as a sub-namespace under the esxcli storage namespace. Not a big deal, and some folks actually like the structure, although it’s never really worked for me.
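For example, listing VMFS extents at the ESXi shell means spelling out that walk down the tree (shown here purely for illustration):

```
# esxcli namespace walk: storage -> vmfs -> extent, then the "list" command
esxcli storage vmfs extent list
```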

Of bigger concern with esxcli is locality. What I mean is that it was difficult to manage multiple systems. Of course you had options like the vMA or vCLI, but again you’re limited in connectivity options, and it was ugly to programmatically try to get around these limitations.

Needless to say I was stoked to come across get-esxcli the other day.

Get-ESXCli! Oh My!!!

I’m a tinkerer. When I see a new toy, I just want to fiddle with it. That being said, I also know from experience (good and bad) what you can do with esxcli, so my first stop on Wednesday was get-help.

2018-03-02 21_27_50-Windows PowerShell

Not terribly helpful… I guess we have no choice but to dive in. To start off, I saved the output of get-esxcli to a variable.

$esxcli=get-esxcli -VMHost $vmhost -V2

Now that we’ve got esxcli (see what I did there?) stored as an object, let’s see what we can do. First things first, echo it back to the screen to see what we get right out of the gate.


Ok… well, this gives me something to go off of. It looks just like the structure of esxcli. I suppose this shouldn’t be shocking, but it certainly is reassuring that we’re treading in semi-familiar territory. So it’s probably safe to start moving down the tree.


Thank you Jeffrey Snover and team for allowing us to experience the wonders of things like tab completion and accessing sub-elements via the dot operator. Looking at the output of $esxcli.network.nic, we see that we can access things like TCP Segmentation Offload and VLANs. We get a few direct methods against our current level of the tree in addition to a Help() method.


OK, so this is starting to make a little more sense now. We’re basically taking the same namespaces and methods, but making them look like PowerShell. At this point I figured I was good to go and just started firing off commands… which resulted in a lot of red text. After one of my more frustrating attempts, I actually paid attention to what tab-complete was showing me:


Hey there “Invoke”? What are you doing there? How come I haven’t seen you in these parts before? After a few tests to get the syntax down, I learned that the .Invoke() method is how we actually get work done, and I can get down to the task I’d originally been so happy to automate.
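Here’s a minimal sketch of that pattern (the host name and unmap arguments below are just placeholders): with the -V2 switch, each leaf of the tree also exposes a CreateArgs() helper that hands back a template of the arguments Invoke() expects.

```powershell
### Sketch of the V2 Invoke() pattern; the host name is a placeholder.
$esxcli = Get-EsxCli -VMHost "esx01.lab.local" -V2

### CreateArgs() returns a hashtable of the parameters Invoke() accepts...
$arguments = $esxcli.storage.vmfs.unmap.CreateArgs()

### ...fill in what you need, then pass it back to Invoke() to do the actual work.
$arguments.volumelabel = "Datastore01"
$arguments.reclaimunit = 60
$esxcli.storage.vmfs.unmap.Invoke($arguments)
```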

Accessing the VAAI primitive

Ok, so I wasn’t quite ready to get to work yet.


After checking out help, just to make sure, I was ready to go…

Here’s my first script using the get-esxcli cmdlet. My whole intent was to programmatically go through my datastores, which resided on a thin-provisioned SAN, to free up any space that needed reclamation.

### Disclaimer. This one needs work, I was just so excited to learn about this cmdlet! To be continued....
Set-PowerCLIConfiguration -WebOperationTimeoutSeconds -1 -Scope Session -Confirm:$false

foreach ($cluster in $(get-cluster)){
  $dsarray=get-cluster $cluster|get-datastore
  $vmhost=get-cluster $cluster|Get-VMHost|select -First 1
  $esxcli=get-esxcli -VMHost $vmhost -V2
  foreach ($ds in $dsarray|where{$_.Type -eq 'VMFS'}){
      $esxcli.storage.vmfs.unmap.Invoke(@{reclaimunit = 60; volumelabel = $ds.name})
  }
}

I went at this script knowing that my storage volumes were mapped to all hosts in each of my clusters. If you weren’t in a similarly vanilla situation, you’d have to get a bit trickier with your selection logic. However, with knowledge of my environment at hand it was pretty simple to:

line 4: Iterate through each of my clusters.
line 5: Get each datastore for the specified cluster
line 6: Get the first VMHost in the cluster. This is ok because I know that all datastores are associated with all hosts in a given cluster.
line 7: Use our new friend get-esxcli against the previously retrieved VMHost.
lines 8-10: Iterate through each datastore in the cluster and leverage the Invoke method against the VMFS UNMAP primitive:
$esxcli.storage.vmfs.unmap.Invoke(@{reclaimunit = 60; volumelabel =$ds.name})

With that I was able to return a not insignificant amount of freed up space back to my array. After writing PowerShell code leveraging PowerCLI for the last few years, I’m feeling like I have a new array of opportunities open to me after newly discovering the Get-ESXCli cmdlet.

Happy scripting!



Performance Reports via vCenter statistics

Leverage Get-Stat to pull statistics from vCenter and build a performance report based on averages.

$myCol = @()
$start = (Get-Date).AddDays(-30)
$finish = Get-Date

$objServers = get-cluster cluster | Get-VM
foreach ($server in $objServers) {
    if ($server.guest.osfullname -ne $NULL){
        if ($server.guest.osfullname.contains("Windows")){
            $stats = get-stat -Entity $server -Stat "cpu.usage.average","mem.usage.average" -Start $start -Finish $finish

            $ServerInfo = ""|Select vName, OS, Mem, AvgMem, MaxMem, CPU, AvgCPU, MaxCPU, pDisk, Host
            $ServerInfo.vName  = $server.name
            $ServerInfo.OS     = $server.guest.osfullname
            $ServerInfo.Host   = $server.vmhost.name
            $ServerInfo.Mem    = $server.memoryGB
            $ServerInfo.AvgMem = $("{0:N2}" -f ($stats | where {$_.MetricId -eq "mem.usage.average"} |Measure-Object -Property Value -Average ).Average)
            $ServerInfo.MaxMem = $("{0:N2}" -f ($stats | where {$_.MetricId -eq "mem.usage.average"} |Measure-Object -Property Value -Maximum ).Maximum)
            $ServerInfo.CPU    = $server.numcpu
            $ServerInfo.AvgCPU = $("{0:N2}" -f ($stats | where {$_.MetricId -eq "cpu.usage.average"} |Measure-Object -Property Value -Average ).Average)
            $ServerInfo.MaxCPU = $("{0:N2}" -f ($stats | where {$_.MetricId -eq "cpu.usage.average"} |Measure-Object -Property Value -Maximum ).Maximum)
            $ServerInfo.pDisk  = [Math]::Round($server.ProvisionedSpaceGB,2)

            ### don't forget to actually collect each row into the output array
            $myCol += $ServerInfo
        }
    }
}

$myCol |Sort-Object vName| export-csv "VM_report.csv" -NoTypeInformation