Classic Snowboard Clips

Talking with someone recently, I was reminded of an awesome backcountry adventure and thought it might be fun to compile some old snowboard vids, just for kicks.

Top 5 run of my life on this first one. I think it snowed about 6 inches just during our hike. Edit by my good friend Conor; please check out his photos.

Right after getting my GoPro I called in a “mental health” day and went on a solo mission. On the first run of the day I ran into a guy in the trees who showed me all kinds of spots at Bolton. It’s a bit long, but what a fun day.

Unfortunately last winter was basically a wash. So this last one is a bit of a blast from the past, as Brady’s gotten much better over the last two years. I expect to be updating this shortly.

What a week!

What a crazy week it’s been. It all started off with a little swim to help some awesome folks…

And they did it!!!!! So proud of our plungers! #penguinplunge #specialolympicsVT


Followed by a climb to one of Vermont’s highest peaks with some friends


And, *I thought*, bookended by one of the greatest football games of all time.

BUT then today, I was honored to find out that I’ve been awarded #vExpert status from VMware.


With all the uncertainty in the world these days, I thought that I was going to sit down and hammer out some “Deep Thoughts by Jack Handey” type proverbs. But perhaps what we (and I really mean me) need these days is a little more appreciation and gratitude.

I’m so happy that I got to help the Special Olympics in some meager way. If you’d like to help as well, please visit https://specialolympicsvermont.org/. Despite getting my derriere kicked climbing up Camel’s Hump, I’m grateful for the friendship that brought me there and the beauty that we experienced. I am happy for the simple joys in life, like rooting for my favorite team and celebrating being an underdog. I’m thankful for my career and the opportunities it’s brought me. And last but almost certainly not least, I am grateful for my family, who’ve supported and encouraged me through all these endeavors.

I hope that you find the same joy and appreciation in those things that matter most to you.

Be well,
Scott

The Order of the Phoenix – The Prequel

No, unfortunately this is not about some recently found J.K. Rowling manuscript (it would probably be much more captivating if it were). Rather, I just finished reading The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps, which you could call the prequel to The Phoenix Project. Although I don’t think it was the authors’ intention, you could look at The Phoenix Project as a case study and the Handbook as the install guide: one seeks to create understanding via a narrative, while the other is a prescriptive method for implementation.

At first it may seem hard to believe that the same authors who in 2013 published The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win wrote a book about implementing ITIL. After all, a poorly implemented ITIL strategy can result in layers of bureaucracy and a slowdown of all IT operations. Combine this with the image that many people have of DevOps as a cowboy or shadow-IT movement, and at a glance the two books appear to make for odd bedfellows. In reality, ITIL and DevOps can and should be partners in creating a more effective IT organization that serves the needs of the business at increasing velocity. Both can lead to the same results — faster turnaround, greater visibility and security, lower failure rates, and less firefighting — so why shouldn’t these two frameworks coexist?

Before going much further, I think it makes sense to take a lesson from the authors and provide a few definitions for the sake of conversation.

  • ITIL, as stated in Wikipedia, is “a set of practices for IT service management (ITSM) that focuses on aligning IT services with the needs of business.” It’s a way to define a standard set of processes and controls around IT service. As the authors are fond of pointing out, it can also be used as a universal IT language to define processes, but the true power of ITIL is in cleaning up a messy ITO.
    [Image: my ITIL pin, awarded to those who’ve passed an ITIL qualification exam]

  • DevOps is often looked at as more of a culture or movement than a framework, but it seeks to more tightly integrate the Development and Infrastructure (or Operations) sides of IT organizations. Where ITIL is extremely metric-driven, DevOps is often very focused on tooling. In the context of DevOps, the Development side of the equation includes traditional developers as well as test, QA, integration, etc. Operations refers to what I’d prefer to call the Infrastructure team: sysadmins, DBAs, release, and so on.
  • I think it would also help to define an IT organization, or ITO. In terms of the book, the authors reference development, testing, release, QA, operations, and the traditional infrastructure groups all as parts of an ITO. In smaller companies these may be broken up into disparate groups; however, you just as often see them combined into a larger ITO body. In full disclosure, my opinion and experience suggest that having these functions all combined within an ITO makes for better harmony (aka less finger-pointing) and enhanced cooperation right out of the gate, even before taking into account the recommendations prescribed in the Handbook.

Now that we are speaking the same language: the Handbook presents a simple framework, broken down into four steps, to guide you on your journey toward becoming a high-performing IT organization.

For those of us in the trenches, it can be daunting to try to figure out where to start. When you are constantly fighting fires it can be hard to see a way forward. The first step prescribed in the Handbook is “Stabilize the Patient.” I prefer to call it “stop the bleeding,” because IT practitioners are often prone to death by 1,000 cuts. It’s hard to look at process improvement when you have to read 1,000 emails a day (not an exaggeration), fix the latest crisis du jour, deal with the executive’s pet project, manage vendors, and so on, and on.

The premise of Step 1 is pretty simple, and it’s stated pretty bluntly in the first sentence: “Our goal in this phase is to reduce the amount of unplanned work as a percentage of total work done down to 25% or less.” Now if you’re like me, the immediate reaction is “HOW THE HELL CAN I DO THAT!?” And the simple answer is: change management. I’m pretty sure there was a collective groan from anyone who may be reading this page. The reason many people react that way is that we’ve all seen change management done badly. I’ve seen it run the gamut from honor-system spreadsheets to long, droning, monotone CAB meetings. The reason they fail is that they focus on the letter of the process rather than its spirit.

If you’ve read any of my previous posts, you’ll know that I’m a proponent of a business-first, technology-second mentality. The Introduction to the Handbook focuses a lot of attention on the fact that, to succeed, you have to have a culture and belief system that supports and believes in three fundamental premises:

  • Change Management is paramount and unauthorized change is unacceptable. When you consider that 80% of failures are caused by change (human or otherwise), it’s quickly apparent that all change needs to be controlled.
  • A culture of causality. How many outage calls have you been on where someone suggested rebooting the widget “just to see if this works”? Not only does this approach burn out your first responders and extend outages, but you never get to the cause, and therefore the resolution. My favorite phrase in the book, “electrify the fence,” goes deeper into this; we’ll discuss it soon.
  • Lastly, you must instill the belief in continual improvement. It’s pretty self-explanatory: you fought hard to get to this point, and if you’ve already put in this level of effort, you obviously want to see the organization continue moving forward. If you’re not moving forward, everyone else is catching up.

Now you can’t just have an executive come in and state that “We have a new culture of causality” and expect everyone to get on board. It’s a process, and by successfully demonstrating the value that the process brings, people will come on board and begin embodying the aforementioned culture. What you do need your executives to get behind and state to the troops is that unauthorized or uncontrolled change is not acceptable. They say it over and over in the book: the only acceptable number of unauthorized changes is zero.

But how do you get people to stop with the unauthorized changes?

  • You need a change management process. You must must MUST have a change process, and it can’t be burdensome.
  • There has to be a process for emergency changes. Don’t just skip the whole change-approval process for emergencies, because then you set a precedent that says the process can be avoided when something is important enough. Once you’ve done that, you’ve created an issue where everyone thinks their issue is an emergency.
  • Consider having a process for routine, everyday activities with normal levels of risk that get auto-approved. The important part is that they are tracked and that …
  • The people implementing the changes are accountable. They are accountable for following the process as well as for executing on the change. Many people don’t want to follow change processes, so they won’t. You have to have a means to monitor, detect and report on changes. Once you have this in place, people can truly be held accountable.

And why do you want to go through this process of reducing unplanned and unauthorized change? The number cited in the book is that 80% of all outages are caused by change. Reducing unplanned changes reduces both the number and the duration of outages. Fewer outages mean less firefighting. A formal process that MUST be followed also drops the number of drive-bys your engineers have to deal with, since all changes go through the process.

Lastly, going through a change process forces the change initiators to think intentionally. Someone I respect immensely had me read Turn the Ship Around! last year (another book I’d highly recommend), and in this book there is a concept of acting intentionally. You state to your teammates, “I intend to turn the speeder repeater up to 11.” By stating the actions you are about to take, you are forced to think about them and what their results may be. It can also slow you down a touch when you’re about to make a “harmless little update.” By acting intentionally through the change process, you consider (and hopefully document) exactly what you’re going to do, what the outcome will be, how you’ll test, and what your rollback plan is. All of this makes for higher-quality changes and fewer outages, and ultimately gives your engineers more time to focus on the remaining steps in the Handbook.

Now by the time you’ve gotten through step one, I’d argue that the heaviest lifting is done, and you’re hopefully learning more about ITIL as you go. The remaining steps that the authors detail in the Handbook share a lot of commonalities and are where you can really find opportunities to start blending the best parts of ITIL and DevOps into your own special IT smoothie.

Step 2 is pretty straightforward: you have to know what you have in order to effectively support your business. You must create an inventory of your assets and your configurations. Honestly, this step can be summed up pretty succinctly: go build a CMDB and an asset DB; otherwise you’re subject to drift and you can’t hold people accountable for their changes. It’s the bridge between Step 1, where you have to know how the environment is set up and configured, and Step 3, where you begin to standardize.

Now bear in mind that when this book was originally written, DevOps hadn’t yet been coined as a term, but you can see a precursor in the title of Step 3, “Establish a Repeatable Build Library.” In 2017 the benefits of this are pretty obvious: if you have a standard build, you can hand the release process off to more junior members, or ideally have it automated. By having your builds standardized and your configurations documented, your environments are not pets, they are cattle. With a standardized environment your outages are likely to be more infrequent, and when they do occur, the time to resolution will be dramatically shorter because you have a known footprint.

I did struggle a little bit with sections 2 and 3. Section 2, “Catch and Release,” is six pages long and consists mostly of the benefits a known inventory will provide. It’s obvious that the authors find this point important enough to break it out on its own, but if it were an easy task, everyone would already have the information and documents the authors specify.

This isn’t necessarily a knock on the authors, as it’s a twelve-year-old book, but section 3, “Establish a Repeatable Build Library,” is a bit dated and heavily focused on the ITIL processes. No doubt having your process repeatable is very important, but as we’ve already pointed out, in this day and age velocity matters, and for that you have to have tooling and automation in place. Again, it’s certainly not a knock on the authors; it’s just that you may be able to find better, more modern guides on how to build out a build system in 2017.

The final section is really interesting to me, as it’s part summary, part recap, and part advisory on the pitfalls to watch out for. Any treatment of “Continual Improvement,” the heading of section 4, will obviously focus on data and metrics. Typically, metrics in an ITO revolve around system or availability measures (is the system up, is the database running too hot, etc.), whereas the authors advise looking at more qualitative and performance-oriented metrics. After all, the goal is to control the environment and reduce administrative effort so that your knowledge workers can spend more time on value-add efforts. As you read this section, it’s easy to see that many of the ideas in “Continual Improvement” are the seeds from which The Phoenix Project grew. The biggest takeaway for me is that to become a highly effective ITO, you need fewer six-shooters and cowboy hats and more process, roles, and controls. Only by controlling the environment can you actually expect predictable results.

The book effectively wraps up with a summary of the objective: “As opposed to management by belief, you have firmly moved to management by fact.” If you’re struggling to attain this goal, The Visible Ops Handbook may be a good place to start; just be prepared to augment it with up-to-date technologies and data.

Another day, another PowerCLI report

Another day, another reason to love PowerShell.

I have to come up with a list of all of my Windows machines, their OS versions, and their editions. My first thought, being nearly 100% virtualized, is “WooHoo, thank you PowerCLI”…

Except that the VM objects don’t include the OS edition for each VM… Sad face.


However, one of my favorite elements of the PowerCLI toolkit is the Invoke-VMScript cmdlet contained within the VMware.VimAutomation.Core module. (For more about modules, see my post Getting Started with PowerCLI.) This cmdlet does exactly what it sounds like: it allows you to run a script in the guest OS. Now there are obviously a number of prerequisites to leveraging this tool. The big ones are as follows:

  • VMtools must be running within the VM
  • You must have access to vCenter or the host where the machine resides
  • You must have Virtual Machine.Interaction.Console Interaction privilege
  • And of course you must have the necessary privileges within the VM.

There are also some security considerations in allowing your VMware administrators the ability to run scripts within the virtual Operating System Environment (OSE), but that opens a whole other can of worms that we’ll put aside for another conversation.

Once you’re comfortable with the prerequisites and any potential security elements, you can get started.

get-vm vm-vm | `
Invoke-VMScript -ScriptType Powershell -ScriptText "gwmi win32_Operatingsystem" 

So what are we doing here? We get the VM object and pipe it to the Invoke-VMScript cmdlet, which runs the PowerShell script “gwmi win32_Operatingsystem” within the context of the guest OSE! What you get back is another PowerShell object containing the ExitCode of the script and the output in the ScriptOutput property.
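For example (a minimal sketch, assuming a VM named vm-vm and that the prerequisites above are met), you can capture the result object and pull out just the pieces you care about:

$result = Get-VM vm-vm |
    Invoke-VMScript -ScriptType Powershell -ScriptText "gwmi win32_Operatingsystem"

# ExitCode tells you whether the in-guest script succeeded (0 = success)
$result.ExitCode

# ScriptOutput holds whatever the in-guest script wrote to the console
$result.ScriptOutput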

Now just a quick sidenote: if you write PowerShell scripts, then inevitably you know about Get-Member (aliased to gm), but that only shows you methods and properties, not their values. If you’re not sure what you’re looking for and you’d like to see all the property values of an object, you can just use $ObjectName | Select-Object -Property * to output them.
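To make that concrete (reusing the $result object from the sketch above):

# Get-Member shows the shape of the object: its properties and methods
$result | Get-Member

# Select-Object -Property * dumps every property along with its current value
$result | Select-Object -Property *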

Back to the task at hand: I know I need a count of each OS type, ideally broken down by cluster. It would also be nice to know which machines weren’t counted, so I can go investigate them manually. So here we go.

# Prompt once for credentials that will work inside the guest OSes
$daCred=$host.ui.PromptForCredential("Please enter DA credentials","Enter credentials in the format of domainname\username","","")
foreach($objCluster in Get-Cluster){
    write-host "~~~Getting Windows OS stats for $objCluster~~~"
    $arrOS=@()
    foreach ($objvm in $($objCluster|get-vm)){
        # Only Windows guests are of interest for this report
        if($objvm.guestid -and $objvm.guestid.contains("windows")){
            $status=$objvm.extensiondata.Guest.ToolsStatus
            # Invoke-VMScript needs working VMware Tools in the guest
            if ($status -eq "toolsOk" -or $status -eq "toolsOld"){
                # Capture only the ScriptOutput property (the OS caption)
                $arrOS+=$(Invoke-VMScript -VM $objvm -ScriptType Powershell -ScriptText '$(gwmi win32_operatingsystem).caption' -GuestCredential $daCred -WarningAction SilentlyContinue).ScriptOutput
            }else{
                # Flag VMs we couldn't query so we can investigate them manually
                Write-Host "Investigate VMtools status on $($objvm)   Status = $status" -BackgroundColor Red
            }
        }
    }
    # Count each distinct OS caption found in this cluster
    $arrOS|group |select count, name |ft -AutoSize -Wrap
    Write-Host
}

You may ask, what’s happening here? Let me tell you.

After we enter credentials that we know will work, we iterate through each cluster, and as we do we build an array of each OS that we find along the way. As we iterate through each VM in the cluster, we check the VMtools status as we go and, if necessary, flag VMs for checking later. Then we run Invoke-VMScript inside a subexpression so that we capture only the returned ScriptOutput property into our array. Finally we do a little grouping and counting on the array, output to the screen, and go investigate why we have so many darn red marks dirtying up our screen!
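And if you need to hand the results off as an actual document rather than console output, here’s a rough sketch of the same idea that collects per-VM objects and exports them to CSV. (WindowsOSReport.csv is just a name I made up, it reuses $daCred from above, and for brevity this version skips the VMtools check, so expect empty OS fields where tools aren’t running.)

# Sketch: build per-VM records instead of just counts, then export
$report = foreach ($objvm in Get-VM) {
    if ($objvm.guestid -and $objvm.guestid.Contains("windows")) {
        [pscustomobject]@{
            VM      = $objvm.Name
            # Assumes every VM lives in a cluster
            Cluster = (Get-Cluster -VM $objvm).Name
            OS      = $(Invoke-VMScript -VM $objvm -ScriptType Powershell `
                        -ScriptText '$(gwmi win32_operatingsystem).caption' `
                        -GuestCredential $daCred -WarningAction SilentlyContinue).ScriptOutput
        }
    }
}
$report | Export-Csv -Path .\WindowsOSReport.csv -NoTypeInformation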


Until next time, be well!

Get Out of I.T. While You Can.

With a little conscious deliberation, the next book I decided to read after The Phoenix Project was Get Out of I.T. While You Can. I guess the first clue about this book should have been that there is no description of the book on Amazon, only bite-size snippets of praise (aka name drops). It’s a very quick read, at about ~100 pages of actual content. The first half of the book is fairly decent, but it quickly devolves into strategies for advancing your career instead of advancing your organization. The message that I most deeply associated with from The Phoenix Project, that of taking an Outside-In approach to IT, is supposed to be the central theme of this book.

It’s a concept that IT has struggled with, IMHO. Often people with a background in IT rely on their technological skills, their intelligence, their ability to understand a facet of our digital world that many struggle with. When they’re at a social engagement and asked what they do, the response is typically “I’m in IT.”

Unfortunately that answer is wrong, and it’s holding both the individual and the organization back. The person who says “I’m in IT” doesn’t identify with their org; they identify with technology. Now don’t get me wrong, I can’t think of anyone I’ve interacted with in this field who doesn’t like to geek out on some widget, BUT if their primary priority isn’t the success and growth of their organization, then they are missing opportunities.

My friend Scot Barker (@sbarker) is someone I’ve gone to on multiple occasions for advice and guidance. As providence would have it, he recently relayed his experience exercising this concept in very eloquent fashion. He tells the story of how engineers at a company he worked for “.. spent 2-3 months, on-site at the customer, learning nothing about engineering or how the products were built. Nope, they learned how to do the job the customer does every day.” Through this experience, “They always had customer input on what was needed and how a certain feature needed to work,” and therefore hit what should be the #1 priority of the organization: solve the problems of our customers and make their lives better.

Now this is not an easy task for many classical IT folks. Disruption is the industry term du jour these days, and it applies not just to software or industries but also to IT. Those who can accept that IT needs to evolve past a traditional rack-and-stack, keep-the-lights-on mentality will further both themselves and their organizations. Taking an Outside-In approach is a critical foundational element to being successful on this journey. Only by knowing where your organization has been, where it is going, and what its aspirations are can you be most valuable.

As I mentioned before it’s not an easy path to walk, but once you’re on it I think you’ll find it to be rewarding. I know I have. If you have thoughts or stories to relay on this topic, I’d love to hear from you.

The Phoenix Project

From the moment that I arrived in Vegas for VMworld 2016, I started hearing about this book The Phoenix Project. At first I thought that my ears were playing tricks on me when I heard that it was a DevOps novel. This weird reality sunk in when, during the opening-day keynote address, John Spiegel, IT manager at Columbia Sportswear, spoke about the virtues of this book. (The segment begins right around the 51-minute mark.)

Given all the chatter around this book, I ordered it from my seat before Mr. Spiegel had even left the stage. The primary message from Mr. Spiegel and the session in general was “treat IT as a factory, focusing on efficiencies and optimizations.” This is obviously a very important message, but I’d argue that anyone who works in IT and hasn’t recognized, learned, or embodied this message, or at a minimum isn’t working towards it… well… there are probably other fundamental messages that should be more relevant to them.

There is an underlying theme to the presentation, Mr. Spiegel’s talk, and this book that resonated very strongly with me, and that is to take an Outside-In approach to IT. Instead of focusing on a technology or a framework, as many in IT are prone to do, we need to look at the problems (and successes) that people throughout the organization experience, and take that newfound knowledge to figure out how we can use technology to positively affect their experiences and therefore positively drive the goals of the business. Once articulated, it’s a pretty simple concept to internalize: if you don’t know the business, its positives and its problems, then how can you possibly be most effective in helping the organization move forward?

One particular individual in The Phoenix Project recognizes this reality in rather dramatic fashion and goes from the stereotypical vision of “IT as the department of ‘no’” to one who actively seeks engagement. He takes the empathetic approach of trying to understand both the pains and successes of his business and how he can use his technological skills to affect change for the positive. There is a realization that by attempting to apply strict, dogmatic InfoSec principles he may just slow things down. Once his mindset shifts to an Outside-In approach, he’s able to get a far greater level of cooperation, implement more of the principles he cherishes, and move the business and his personal/career objectives forward at a faster pace!

The Outside-In approach is just one piece of this fantastic book. The novel format is one that I haven’t seen in IT improvement books before, and it certainly makes for an engaging read. Don’t mistake this book for a deep dive into any frameworks or technologies; rather, it creatively addresses many of the common challenges that need addressing in order for you to develop a high-performing IT organization. If you’re looking for a guide on how to begin implementing a DevOps framework and culture in your organization, then disregard the sub-title, as this probably isn’t the best book for you.

If you’ve ever been bogged down in the quagmire of firefighting, been unable to break the cycle of finger-pointing, struggled to come up with fresh approaches to the challenges of working in a large IT org, or even if you’re just someone who works with IT, then this book should be a must-read for you.

PS: If you’ve found this interesting, perhaps you’d like to check out my thoughts on Implementing ITIL written by the same authors.

Sweep up that mess!

I have a perhaps daunting task in front of me: clean up a network block when I don’t know what’s on it. So what’s a person to do? Well, I took a couple of minutes to write my own simple ping sweep.

Thankfully, Microsoft made this nice and easy with the Test-Connection cmdlet. In the past you’d have to parse the output from good old ping, and honestly nobody wants to spend their time doing that.

Throw in a loop and a test condition, and you’re almost all the way there. I decided that I wasn’t happy just knowing whether an IP was reachable without also knowing who was at the receiving end, so I used the .NET DNS class to do a reverse lookup. When this class doesn’t find a result, it throws an ugly and unfriendly error message; for this reason, you may have noticed that I set the ErrorAction and Warning preference variables at the top of the script.
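If you’d rather not touch the global preference variables, wrapping the lookup in try/catch works just as well. A quick sketch:

try {
    # Reverse-lookup the IP; this throws if no PTR record exists
    $nsresult = [System.Net.Dns]::GetHostEntry("192.168.42.10")
    Write-Host $nsresult.HostName
} catch {
    # No DNS entry found; swallow the ugly exception and move on
    Write-Host "no DNS record found"
}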

As written it won’t scan more than a class C, but you could pretty easily alter it to fit your needs.

And there you go! Simple ping sweeps.

$ErrorActionPreference = "SilentlyContinue"
$WarningPreference = "SilentlyContinue"

# The /24 to sweep, plus the first and last host octets to test
$range = "192.168.42."
$firstip = 1
$lastip = 15
$numpings = 3

Write-Host "Please be patient.
Based on the IP range entered, the script will take at least $(($lastip - $firstip) * ($numpings + 1)) seconds to complete.
The addresses listed below are hosts that were able to be reached."

$count = 0
For ($ip = $firstip; $ip -le $lastip; $ip++){
    $testip = $range + $ip
    if (Test-Connection $testip -Quiet -Count $numpings){
        Write-Host $testip -NoNewline
        # Reverse DNS lookup; failures are silenced by the preferences above
        $nsresult = [System.Net.Dns]::GetHostEntry($testip)
        if ($nsresult){
            Write-Host ", "$nsresult.HostName
        }else{
            Write-Host ""
        }
        $count++
    }
    # Reset between iterations so a failed lookup can't reuse a stale result
    $nsresult = $null
    Clear-Variable testip
}

Write-Host "You found $count reachable addresses in this range"
Write-Host "Fin!" -BackgroundColor Cyan -ForegroundColor Black


Let’s Hash it out

In the past I always found it to be a giant PITA to compute hash values for files on Windows.

Why do you need to compute hash values? Once you’ve had the fun experience of trying to deploy a solution from an ISO or OVA image that got corrupted during download, you’ll never ask that question again.

What does the hash (or checksum) do? It’s simply a computation over the bits in a file and is commonly used as an integrity check.

Why is an article about hash values on a VMware blog? Because the kind folks at VMware provide you with the MD5/SHA1/SHA256 values for all of their downloads. And if you have at least PowerShell 4.0, Microsoft gave you a little cmdlet for calculating file hashes. Since I’m about to install an eval of vRO as a proof of concept/value, here’s the handy-dandy code you’d use to calculate a simple checksum:

 Get-FileHash .\vRO6_4.ova -Algorithm SHA256  


As with any other cmdlet there’s way more that you can do, but for 90% of my needs this simple cmdlet is all you need.
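For instance, to verify a download against the value published on the download page, you can compare the strings directly (the hash below is a placeholder, not a real published value; string comparison with -eq is case-insensitive, so casing differences don’t matter):

# Paste the SHA256 string from the vendor's download page (placeholder shown)
$published = "0000000000000000000000000000000000000000000000000000000000000000"

# A mismatch means the image is corrupt; re-download before deploying
if ((Get-FileHash .\vRO6_4.ova -Algorithm SHA256).Hash -eq $published) {
    Write-Host "Checksum matches - safe to deploy"
} else {
    Write-Host "Checksum mismatch - re-download the image!" -ForegroundColor Red
}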

Getting started with PowerCLI

I’m setting up a new computer and thought to myself, “hey self, this might be a good time to write your first PowerCLI post.”

The first post was going to be about how to customize your profile, but why reinvent the wheel? Check out this awesome post on how to customize your PowerShell profile.

With that being said, if you’re actually reading this, chances are you’re really bored or you’re newish to PowerCLI/PowerShell, so we are going to start with the basics. If it’s the latter, the first thing you’re going to want to do is update PowerShell to the latest version. PowerShell 2 was interesting, and PowerShell 3 was a huge improvement, but at this point you really shouldn’t be running anything less than PowerShell 4.0. To see what version you’re running, you could simply enter $PSVersionTable and hit Enter. However, what you’re really interested in is the major version, which can be accessed by entering the following and hitting Enter:

$PSVersionTable.PSVersion.Major

The output from this variable is the major version of PowerShell that you’re running.

Sidenote: if you’re paying close attention, you probably noticed that PowerShell has tab-completion functionality, which will be one of your best friends as you move along on your PowerShell adventures.

[Screenshot: the PowerShell ISE]

We just established that you’re paying attention, so you’ve also noticed that the above screenshot is not just the PowerShell command interpreter. What you’re looking at is the PowerShell ISE (Integrated Scripting Environment). It’s a scripting pane (the top part, where you write the script) along with the command interpreter and a command window, though if you know how to use the help cmdlet (we’ll get there), the command window is largely useless IMHO.

Now since you are an astute reader, you’d probably first ask yourself “how do I get this ISE thing?” Why it’s not installed by default I’ll never know, but if you want to use it (you do), then you’ll need to add it from the Windows Features menu.

Secondly you’re probably saying, isn’t this supposed to be about PowerCLI? We’re getting there, be patient! These are a couple of the items that have helped me be successful when I was first starting to write PowerShell/PowerCLI scripts.

OK, PowerCLI. You’ll need to log on to your my.vmware.com account to get the latest release, 6.5 as of this writing. You should check out the VMware PowerCLI blog as well; there’s lots of great information about the releases, and you can pick up some good tips along the way. If you already have scripts written for earlier versions of PowerCLI, be cautious about the version you install. Why? Let me tell you.

The kind folks at VMware have been working to convert from snap-ins to modules, and as of a few days ago that conversion appears to be complete. This is great for you, because the scripts that you’re going to write just became a whole lot more portable and therefore more useful. BUT if you’re using already-built scripts that leverage snap-ins AND you upgrade to a PowerCLI version that has converted that functionality to modules… uh oh…

Now that you’ve finally installed PowerCLI, you fire it up and… wait, where’s my ISE window? VMware does not give you a PowerCLI_ISE window, so you fire up good old PowerShell_ISE and… what? You don’t have the PowerCLI cmdlets available to you??? Whose idea of a cruel prank is this? So you enter the command

Get-Module -ListAvailable | Where-Object {$_.name.Contains("VMware")}

and when you hit enter, phew, you’ve finally got some stuff to work with.


To load any of the modules, you simply enter the command import-module followed by the module name. For example, to leverage many of the common vCenter tasks, you’d load the core module via the command:

import-module VMware.VimAutomation.Core

I don’t know about you, but I don’t like repeating myself. The way we avoid having to run the module-loading commands every time we want to write a script is via our PowerShell profile. (See what I did there, connected it all together I did.)** I’m going to presume that you at least skimmed the article referenced at the top of this post, so you already have a PowerShell profile, which can be accessed by running notepad $profile in a PowerShell window. Simply enter the commands that you want run when you launch PowerShell, hit ctrl-s, and BLAMMO, you get your PowerCLI modules loaded automagically every time you launch PowerShell!
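Here’s a minimal sketch of what that profile might contain (adjust the module list to whatever you actually use):

# Contents of the file opened by: notepad $profile
# Load the PowerCLI core module every time PowerShell starts
Import-Module VMware.VimAutomation.Core

# Optional sanity check: show which VMware modules actually loaded
Get-Module VMware* | Select-Object Name, Version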

If you find this useful, please let me know. I’m planning on writing a series of PowerShell/PowerCLI posts for beginners, so your feedback is really appreciated.

Have a great one!

**read in your best yoda voice

Twas the night before

 

This is one of my favorite nights of the year.

Yes I know it’s only December 3rd, but tomorrow is the first day of my snowboarding season. I’ve spent months thinking about this, planning it, DREAMING about it. The one thing that I enjoy above all others (obv. excluding family and friends) will be here TOMORROW!

Whether you ride or not, it’s a feeling I think we can all relate to. Instead of snowboarding, it could be a meeting, interview, concert, conference, introduction, or a new job. The list goes on and on, but the feelings are the same. You get out your gear/tickets/clothes and make sure that everything is in order. Polish up that resume/data/snowboard. Get everything lined up. Visualize how it’s going to go. Perhaps ruminate on past mistakes, like that broken wrist, and how you won’t repeat them. Reminisce about that presentation you absolutely slayed, the unbelievable trip, that powder day…

Above all else, run through the endless outcomes that will follow from this day. All of the powder/sales/connections that will come after this encounter. Imagine all of the possibilities…

Whatever “IT” is for you, I hope you absolutely shred it!

Man I can’t wait for tomorrow…
