The Full Stack paradox

This whole blog thing is still new to me, but I’ve tried to have a message or at least a coherent thought as I’ve gone about it. That ends now! I’ve had a question that I’ve been trying to sort out for a while and I thought that there’s a chance I may come to some sort of a resolution by writing this out…

Even though I changed jobs not that long ago, I still get emails from the job search sites. Most of them just get trashed, as I’m quite happy in my new role, but sometimes I skim through them for giggles. One listing caught my eye recently for a “full stack developer”. Around the same time I came across a really well-thought-out article on the myth of the full stack developer. The fundamental premise of the article was that what the world really needs now is more full stack integrators. Unfortunately I can’t find the specific article, but a quick Google search for “death of full stack engineers” will turn up a number of similar items.

I’ve also been catching up on podcasts during my commute, and as the Datanauts, whose key catchphrase seems to be “silo-busting”, were wrapping up a recent episode, they mentioned in passing the concept of a full stack engineer. Knowing that one of the hosts is a network engineer and the other a virtualization engineer, I can only assume that by engineer they meant someone from the infrastructure realm. And it’s this comment which struck me the most and is a big part of my questioning today.

To give a little context, I started my professional career in IT almost exactly 17 years ago. I’m also writing this post from my basement, listening to Phish, while wearing a Chewbacca snuggie. You may ask yourself what the hell that has to do with anything, and honestly nothing other than trying to establish my geek creds. TMI? Maybe. But we continue onward. The real point is that I think I’ve seen a fair representation of IT shops from small to large, local to global. One consistent thing I’ve seen across all of them is that the IT generalist, aka the full stack engineer, has never been a badge of honor.

In my experience, IT professionals typically want to become Subject Matter Experts (SMEs). To become an SME, you have to study, work and focus on particular technologies. Time is a limited commodity, so by taking this directed approach your ability to become well rounded and exposed to more technologies is restricted right out of the gate. To state it a different way: as you go further down the rabbit hole of specialization, breadth is naturally sacrificed for depth.

This model has worked for IT professionals for years. If I become an expert in a technology that is valuable to my organization/career/community, then I can achieve higher levels of pay/title/prestige. All of these results fire off my reward system and engage my dopamine receptors. But what if these short term rewards run counter to a greater purpose? What if by focusing on one minute area of IT infrastructure, I miss the forest for the trees?

And it’s this world that I think we find ourselves in here in 2017. Technology is changing and roles change with it. If a technology like mainframes falls out of favor, and you’ve focused only on this aspect for the last N years, well, what happens next? (I’m just kidding, mainframes, we all know that you’ll never die. Please don’t hurt me…) In the world of SDDC, Cloud, serverless, IoT and whatever the next emergent technology is, how do you translate these focused skills into a world that is becoming more diverse and generalized? Playing the scenario out, perhaps it’s the forest we should have been focusing on all along. The forest is the business. It’s the goal, not the methods.

And finally we come to the question. And perhaps it’s less of a paradox and more of an irony that just as developers trend away from Full Stack, we as IT professionals have to ask ourselves: Is the time of the SME past? Have we entered an age where the generalists, integrators and full stack engineers have finally come into their own? Where these full stack engineers are the primary engines that enable their organizations to succeed?

I certainly don’t know the answer, but maybe Cracker had it right when they sang, “I’m sure as hell that it starts with me.”

It’s a bird, it’s a plane, it’s… Invoke-VMScript

I was at work today and a need came across my desk for a solution that requires SNMP. For some reason which I can’t fathom, SNMP is not installed as a service on the majority of the servers. Who do we turn to in tumultuous times like these? PowerShell and his mighty sidekick PowerCLI!

First things first, I wanted to know the scope of what I was dealing with. When I dove into this problem I had every intention of trying to broaden my horizons and move away from PowerCLI, but it’s so easy to get sucked back into what you know. Besides, I knew I was only targeting a couple of clusters, so it only made sense to go back to PowerCLI, right? Right???

If you ignore the ugly formatting, what I did below was load all of the VMs I needed to target into a collection and then iterate through each of them to make sure they were Windows machines and that they were powered on. In hindsight, I knew that I was probably going to use Invoke-VMScript to get the job done, so I probably should have checked the VMtools status (ExtensionData.Guest.ToolsRunningStatus) while I was at it.
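
Since that snip was a screenshot, here’s a rough sketch of what it did, now with the VMtools check I wish I’d included (the cluster names match the full script below; treat this as illustrative, not the exact code):

$serverset = $(Get-Cluster cluster1 | Get-VM) + $(Get-Cluster cluster2 | Get-VM)
$targets = $serverset | Where-Object {
    $_.PowerState -eq "PoweredOn" -and
    $_.GuestId -match "windows" -and
    $_.ExtensionData.Guest.ToolsRunningStatus -eq "guestToolsRunning"  # the check I skipped
}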

So now we’ve got a nice neat little array full of servers that need a little TLC. You’d think that we could immediately get rocking, but without going into details, things unexpectedly got a little dodgy at this point. I mentioned earlier that I originally intended to try to break away from PowerCLI just to broaden my horizons. Unfortunately, as an Infrastructure person you don’t always have the opportunity to do things the way you’d like, and you have to sacrifice elegance for just getting things done. Luckily, when we as VMware admins need to get $hit done, we have a very handy and very powerful tool available to us, and that is Invoke-VMScript.

If you’ve heard me talk, reviewed my scripts or spent any time around me, you’d know that I think Invoke-VMScript is the cat’s meow. It is without a doubt my favorite cmdlet, as it lets you get away with some pretty awesome stuff. At its root, Invoke-VMScript allows you to run a script via VMtools within the context of the local VM. Now this is different from PsExec or PowerShell remoting; you are actually running a script within the local OS, where VMtools and PowerCLI are just the mechanisms that enable this super hero activity.

Quick sidebar: with great power comes great responsibility. I said above that Invoke-VMScript “lets you get away with some pretty awesome stuff.” Many people in this world just deploy VMtools and vCenter with default permissions and credentials. If you are a security person, you need to ensure that your roles and privileges are set up appropriately, or you could have exposure due to what you can accomplish with VMtools.

But I digress. We are here to get things done, and at its center this whole exercise boils down to a one-liner:

Invoke-VMScript -VM $client.Name -ScriptText "DISM /online /enable-feature /norestart /featurename:SNMP" -ScriptType Powershell

 

If you refer back to the original snip, we stored all of the servers into an array, which we iterate through. We invoke the script targeting $client.Name. The ScriptText parameter is where we pass in the script that we would like to run on the remote system. In this case we are using the Microsoft DISM tool to add the SNMP feature to our Windows installation. Last is the ScriptType parameter. You have three ScriptType options available to you as of today: Bat for the old school Windows cats, Bash for the *nix kittens and PowerShell for the up and coming cubs.

When you put it all together, here’s the code to get it done:

# Gather the VMs from the target clusters
$serverset = $(Get-Cluster cluster1 | Get-VM) + $(Get-Cluster cluster2 | Get-VM)
$ArrRemediate = @()

foreach ($client in $serverset) {
    # Only touch powered-on Windows guests
    if ($client.PowerState -eq "PoweredOn" -and $client.GuestId.Contains("windows")) {
        # Skip any server that already has the SNMP service installed
        if (!$(Get-Service -ComputerName $client.Name -Name SNMP -ea SilentlyContinue)) {
            $ArrRemediate += $client
            Invoke-VMScript -VM $client.Name -ScriptText "DISM /online /enable-feature /norestart /featurename:SNMP" -ScriptType Powershell
            # Confirm the service now shows up
            Get-Service -ComputerName $client.Name -Name SNMP | Select-Object -Property Name, Status, StartType | ft
        }
    }
}

$ArrRemediate.Count

I hope for today you’ll excuse the formatting and less-than-efficient code, as the mission was to get things done. We achieved our mission and escaped certain doom thanks to our friendly neighborhood hero, Invoke-VMScript. I hope to have a deeper exposé on our masked super hero soon, but until then, if you have any thoughts or would like to contribute to the conversation, please reach out.

Why you need more PowerShell

or: How I stopped worrying and learned to love the CLI

I recently gave a Tech Talk at our spring Champlain Valley VMUG on PowerShell and PowerCLI. The talk was definitely more of an introductory instructional, but one of our attendees expressed that they wanted to hear more about the value that scripting with PowerShell can deliver back to the organization. Hopefully I can give you a solid overview of the immense value of PowerShell here today.

Why?

The only constant is change, and that holds true for IT infrastructure folks as well. Terms like DevOps, disruptor, and Shadow IT have become firmly established in our lexicon. And with good reason! We are in a world that is moving faster and faster every day, and you often see that it’s not the best product that corners a market, but rather the first/fastest to market that gets a stranglehold. If you come from a classical IT role with silos and legacy processes/policies that slow your Organization down… well, is it any wonder that you have disruptors changing the model?!? But what if there was a way that you could help accelerate your business, work collaboratively with the Developers, combat Shadow IT and be the disruptor yourself? PowerShell can be the tool that enables this transformation by delivering Time and Consistency to your organization.

Time

This one is simple. Time is money, and by investing a little bit of effort up front scripting a solution, you will save time moving forward. Here is the no-brainer part of the value prop: do you want to take on the time-consuming task of building environments by hand? Of course you don’t! You want something that’s fast and easy. There’s a take on the old adage that I’ll paraphrase here: “Do it once, OK. Do it twice, shame on me. Do it three times, why haven’t you scripted it yet?”

Let’s suppose for a second that you have to install a widget dozens, hundreds or thousands of times. This activity takes hours. Once you script & automate that install, you turn it into a hands-off activity, freeing up your engineers to do more of the activities that drive value instead of just watching the progress bar. Simply by the act of writing that script, you’ve saved your business time/money, and honestly you’ve probably gained a bit of expertise and employee engagement as well. Extrapolate this out to all of the infrastructure elements you need to manage: people, policy, applications, servers, storage, network, security, and the list goes on and on. Even if you can only automate part of a process, you’re still going to see dividends.
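
To make that concrete, here’s a minimal sketch of a hands-off install (the server list, installer path and msiexec switches are placeholders for illustration; your widget will differ):

# Hypothetical: read a list of target servers and silently install the widget on each
$servers = Get-Content .\servers.txt
Invoke-Command -ComputerName $servers -ScriptBlock {
    Start-Process -FilePath msiexec.exe -ArgumentList "/i C:\installers\widget.msi /qn" -Wait
}

Kick it off, walk away, and your engineers get their hours back.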

A less intuitive reason for starting with PowerShell is that it has a pretty quick learning curve, especially if you come from a Windows environment. If you have any programming/scripting background, you can likely dive right in. This means that your team can be scripting sooner, and can start driving the non-value-added operations out of your day to day. Many infrastructure folks don’t have a background in development activities, and as such scripting can be a bit of a hard sell. PowerShell was meant to build upon and extend the foundation of tools like Batch and VBScript, but in a way that is intuitive to learn and become efficient with quickly. One of my go-to guides for learning PowerShell is the Learn Windows PowerShell in a Month of Lunches guide. This book is so successful in large part because it demonstrates just how accessible PowerShell really is.

I mentioned earlier that you can create collaborative opportunities and combat Shadow IT. PowerShell is built on top of the .NET Framework and has support for REST APIs baked in. This means that you can share code, speak the same language and have smoother hand-offs. By using PowerShell you have an opportunity to increase the amount of collaboration between your groups. If you can harness this opportunity, you’re likely to suffer less finger pointing and be able to cut out some unnecessary meetings.
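
As a quick illustration of that REST support, here’s a minimal sketch using the built-in Invoke-RestMethod cmdlet (the URL and response fields are made up for the example):

# Hypothetical endpoint: query a service's API and get back objects, not raw JSON
$build = Invoke-RestMethod -Uri "https://build.example.com/api/latest" -Method Get
Write-Host "Latest build is $($build.version)"

No JSON parsing, no plumbing; you get back objects you can pipe around just like everything else in PowerShell.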

Consistency

Time and consistency (and money) go hand in hand in IT. Inconsistent environments result in more frequent issues and longer times to resolution. When you start scripting out your activities, you will have a much more predictable environment, outages will decrease in frequency, and your time to resolution will also drop. This all yields greater uptime. More uptime means happier customers and happier engineers. Your business is winning!

One is a goat. The other is the GOAT.
Speaking of winning, do you know why Tom Brady is one of the Greatest Of All Time? It’s not because of his Uggs or his supermodel wife. It’s because he has put in the work up front to ensure that no matter who he is working with, he will have a predictable and consistent outcome. This is what you should be aiming for with your environment: consistent and predictable.

Having a consistent, repeatable infrastructure makes that environment easier to rebuild. If you can kick off a PowerShell script that results in a fresh server in a matter of minutes, why would you spend hours troubleshooting a problem? The saying “treat your servers like cattle, not like pets” became popularized for a reason. Wikipedia states that “The term commodity is specifically used for an economic good or service when the demand for it has no qualitative differentiation across a market.” Your servers SHOULD have no qualitative differences, and are therefore inherently commodities, and replaceable. Diving into PowerShell and PowerCLI can help get you there.
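
For a taste of what that rebuild might look like in PowerCLI, here’s a hedged sketch (the template, customization spec and host names are placeholders for whatever your environment uses):

# Hypothetical names: stamp out a fresh server from a golden template
New-VM -Name "web01" -Template (Get-Template "Win2016-Base") `
    -OSCustomizationSpec (Get-OSCustomizationSpec "DomainJoin") `
    -VMHost (Get-VMHost "esx01.example.com") | Start-VM

Minutes later you have a known-good server, which makes “rebuild” a legitimate troubleshooting option.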

PowerCLI

I’ve mentioned it a number of times, but some of you may be asking: what is this PowerCLI thing? PowerCLI is VMware’s collection of PowerShell modules which allow you to “automate all aspects of vSphere management, including network, storage, VM, guest OS and more.” To put it short, it’s a super efficient and reliable (not to mention fun) way to manage your vSphere environments. It’s also incredibly powerful. There are over 500 separate cmdlets in the modules which make up PowerCLI. By some accounts VMware has approximately 80% of the hypervisor market, which means the majority of the world’s infrastructure runs on vSphere and can be managed with PowerCLI.
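
If you’ve never touched it, getting started is about this simple (the vCenter name is a placeholder; the cmdlets are standard PowerCLI):

# Connect to vCenter and start asking questions
Connect-VIServer -Server vcenter.example.com
Get-VM | Where-Object { $_.PowerState -eq "PoweredOff" } | Select-Object Name, NumCpu, MemoryGB

Two lines in and you’re already reporting on your environment.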

Using PowerCLI just allows you to further expand on the amount of Time and Consistency that you can deliver back to your business. With PowerCLI you can automate/manage the network, hypervisors, storage and all of the elements that encompass your “infrastructure”. You can also take it one step further: thanks to the security models built into vSphere, you can let your users do it too! With a little bit of thought and design, you can give your developers the ability to spin up and spin down their own VMs. No more test/dev headaches for you, and your developers are happier! The winning doesn’t stop!
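
As a rough sketch of that delegation (the role name, privilege IDs, folder and group below are assumptions; design yours deliberately):

# Hypothetical: give a developers group power on/off rights over a Test/Dev folder
New-VIRole -Name "DevSelfService" -Privilege (Get-VIPrivilege -Id "VirtualMachine.Interact.PowerOn","VirtualMachine.Interact.PowerOff")
New-VIPermission -Entity (Get-Folder "TestDev") -Principal "CORP\Developers" -Role (Get-VIRole "DevSelfService")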

As I said to start my VMUG presentation, I’m not an expert in PowerShell or PowerCLI, but I have used them very effectively in my day to day. It’s also a topic that I’m passionate about; otherwise you’d never catch me voluntarily speaking in front of 100 people! I’ve also managed to write some fairly complex scripts that have helped my organizations reach goals. I hope this post helps you understand some of the value of PowerShell & PowerCLI scripting. If you’d like to keep the conversation going or if you have any questions, I’d love to hear from you.

Classic Snowboard Clips

Talking with someone recently I was reminded of an awesome backcountry adventure and thought it might be fun to compile some old snowboard vids for kicks.

Top 5 run of my life on this first one. I think it snowed about 6 inches just during our hike. Edit by my good friend Conor; please check out his photos.

Right after getting my GoPro I called in a “mental health” day and went on a solo mission. On the first run of the day I ran into a guy in the trees who showed me all kinds of spots at Bolton. It’s a bit long, but what a fun day.

Unfortunately, last winter was basically a wash. So this last one is a little bit of a blast back in time, as Brady’s gotten much better over the last two years. I expect to be updating this shortly.

What a week!

What a crazy week it’s been. It all started off with a little swim to help some awesome folks…

And they did it!!!!! So proud of our plungers! #penguinplunge #specialolympicsVT

A post shared by NorthCountry FCU (@northcountryfcu)

Followed by a climb to one of Vermont’s highest peaks with some friends


And *I thought* bookended by one of the greatest football games of all time.

BUT then today, I was honored to find out that I’ve been awarded #vExpert status from VMware.

vmw-logo-vexpert-2017-k

With all the uncertainty in the world these days, I thought that I was going to sit down and hammer out some “Deep Thoughts by Jack Handey” type proverbs. But perhaps what we (and I really mean me) need these days is a little more appreciation and gratitude.

I’m so happy that I got to help the Special Olympics in some meager way. If you’d like to help as well, please visit https://specialolympicsvermont.org/. Despite getting my derriere kicked climbing up Camel’s Hump, I’m appreciative for the friendship that brought me there and the beauty that we experienced. I am happy for the simple joys in life, like rooting on my favorite team and celebrating being an underdog. I’m thankful for my career and the opportunities it’s brought me. And last but almost certainly not least, I am appreciative for my family who’ve supported and encouraged me through all these endeavors.

I hope that you find the same joy and appreciation in those things that matter most to you.

Be well,
Scott

The Order of the Phoenix – The Prequel

No, unfortunately this is not about some recently found J.K. Rowling manuscript (it would probably be much more captivating if it was); rather, I just finished reading The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps, which you could call the prequel to The Phoenix Project. Although I don’t think it was the authors’ intention, you could look at The Phoenix Project as a case study and Implementing ITIL as the install guide. One seeks to create understanding via a narrative, while the other is a prescriptive method for implementation.

At first it may seem hard to believe that the same authors who in 2013 published The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win wrote a book about implementing ITIL. After all, a poorly implemented ITIL strategy can result in layers of bureaucracy and a slowdown of all IT operations. Combine this with the image that many people have of DevOps as a cowboy or shadow IT movement, and the two books at a glance appear to make for odd bedfellows. In reality, ITIL and DevOps can and should be partners in creating a more effective IT organization that serves to meet the needs of the business at increasing velocity. They both can lead to the same results of providing faster turnaround, greater visibility & security, lower failure rates and less firefighting, so why shouldn’t these two frameworks coexist?

Before going much further, I think it makes sense to take a lesson from the authors and provide a few definitions for the sake of conversation.

  • My ITIL pin, awarded to those who’ve passed an ITIL qualification exam

    ITIL, as stated in Wikipedia, is “a set of practices for IT service management (ITSM) that focuses on aligning IT services with the needs of business.” It’s a way to define a standard set of processes and controls around IT service. As the authors are fond of pointing out, it can also be used as a universal IT language to define processes, but the true power of ITIL is in cleaning up a messy ITO.

  • DevOps is often looked at as more of a culture or movement than a framework, but it seeks to more tightly integrate the Development and Infrastructure (or Operations) sides of IT organizations. Where ITIL is extremely metric-driven, DevOps is often very focused on tooling. In the context of DevOps, the Development side of the equation includes traditional Developers as well as Test, QA, Integration, etc. Operations refers to what I’d prefer to call the Infrastructure team: Sys Admins, DBAs, Release and so on.
  • I think it would also help to define an IT organization, or ITO. In terms of the book, they reference Development, Testing, Release, QA, Operations and the traditional Infrastructure groups all as parts of an ITO. In smaller companies these may be broken up into disparate groups; however, you just as often see them combined as a larger ITO body. In full disclosure, my opinion and experience say that having these functions all combined within an ITO creates better harmony (aka less finger pointing) and enhanced cooperation right out of the gate, without even taking into account the prescribed recommendations in the Handbook.

Now that we are speaking the same language: the Handbook presents a simple framework for how to turn your ITO into a highly functioning organization. The book is broken down into four steps to help guide you on your journey towards being a high performing IT organization.

For those of us in the trenches, it can be daunting to try to figure out how to start. When you are constantly fighting fires it can be hard to see a way forward. The first step prescribed in the Handbook is “Stabilize the Patient.” I prefer to call it “stop the bleeding,” because often IT practitioners are prone to death by 1,000 cuts. It’s hard to look at process improvement when you have to read 1,000 emails a day (not an exaggeration), fix the latest crisis du jour, deal with the Executive’s pet project, manage vendors, and the list goes on and on.

The premise of Step 1 is pretty simple, and it’s stated pretty bluntly in the first sentence: “Our goal in this phase is to reduce the amount of unplanned work as a percentage of total work done down to 25% or less.” Now if you’re like me, the immediate reaction is “HOW THE HELL CAN I DO THAT?!” And the simple answer is: Change Management. I’m pretty sure there was a collective groan from anyone who may be reading this page. The reason that many people react that way is because we’ve all seen change management done badly. I’ve seen change management run the gamut from honor-system spreadsheets to long, droning, monotone CAB meetings. The reason they fail is because they are not focused on the spirit.

If you’ve read any of my previous posts, you’ll know that I’m a proponent of a business first and technology second mentality. In the Introduction to the guidebook a lot of attention is focused on the fact that to succeed you have to have a culture and belief system that supports and believes in three fundamental premises:

  • Change Management is paramount and unauthorized change is unacceptable. When you consider that 80% of failures are caused by change (human or otherwise), it’s quickly apparent that all change needs to be controlled.
  • A culture of Causality. How many outage calls have you been on where someone suggested rebooting the widget “just to see if this works”? Not only does this approach burn out your first responders and extend outages, but you never get to the cause, and therefore never to resolution. My favorite phrase in the book, “Electrify the fence,” goes deeper into this, and we’ll discuss it soon.
  • Lastly, you must instill the belief in Continual Improvement. It’s pretty self-explanatory: you fought hard to get to this point, and if you’ve already put in this level of effort, you obviously want to see the organization continue moving forward. If you’re not moving forward, then everyone else is catching up.

Now, you can’t just have an executive come in and state that “We have a new culture of causality” and expect everyone to get on board. It’s a process, and by successfully demonstrating the value that the process brings, people will come on board and begin embodying the aforementioned culture. What you do need your Executives to get behind, and state to the troops, is that unauthorized or uncontrolled change is not acceptable. They say it over and over in the book: the only acceptable number of unauthorized changes is Zero.

But how do you get people to stop with the unauthorized changes?

  • You need a change management process. You must must MUST have a change process, and it can’t be burdensome.
  • There has to be a process for emergency changes. Don’t just skip the whole change approval process for emergencies, because then you set a precedent that says you can avoid the process when it’s important enough. Once you’ve done that, you’ve created an issue where everyone thinks their issue is an emergency.
  • Consider having a process for routine, everyday activities with normal levels of risk that get auto-approved. The important part is that they are tracked and that …
  • The people implementing the changes are accountable. They are accountable for following the process as well as for executing on the change. Many people don’t want to follow change processes, so they won’t. You have to have a means to monitor, detect and report on changes. Once you have this in place, people can truly be held accountable.

And why do you want to go through this process of reducing unplanned and unauthorized change? The number cited in the book is that 80% of all outages are caused by change. Reducing unplanned changes reduces outages and the duration of outages. Fewer outages mean less firefighting. A formal process that MUST be followed also drops the number of drive-bys your engineers have to deal with, as all changes are going through the process.

Lastly, going through a change process forces the change initiators to think intentionally. Someone I respect immensely had me read Turn the Ship Around! last year (another book I’d highly recommend), and in that book there is a concept of acting intentionally. You state to your teammates, “I intend to turn the speeder repeater up to 11.” By stating the actions you are about to take, you are forced to think about them and what their results may be. It can also slow you down a touch when you’re about to just make a “harmless little update.” By acting intentionally through the change process, you consider (and hopefully document) exactly what you’re going to do, what the outcome will be, how you’ll test, and what your rollback plan is. All of these acts will provide for higher quality changes and fewer outages, and ultimately provide more time for your engineers to focus on the remaining steps in the Handbook.

Now by the time you’ve gotten through step one, I’d argue that the heaviest lifting is done, and you’re hopefully learning more about ITIL as you go. The remaining steps that the authors detail in the Handbook share a lot of commonalities and are where you can really find opportunities to start blending the best parts of ITIL and DevOps into your own special IT smoothie.

Step 2 is pretty straightforward. You have to know what you have in order to effectively support your business. You must create an inventory of your assets and your configurations. Honestly, this step can be summed up pretty succinctly: go build a CMDB and an asset DB; otherwise you’re subject to drift and you can’t hold people accountable for their changes. It’s the bridge between Step 1, where you have to know how the environment is set up and configured, and Step 3, where you begin to standardize.

Now bear in mind that when this book was originally written, DevOps hadn’t been coined as a term yet, but you can see a precursor in the title of Step 3, “Establish a Repeatable Build Library.” In 2017 the benefits of this are pretty obvious. If you have a standard build, you can hand that release process off to more junior members, or ideally have the process automated. By having your builds standardized and your configurations documented, your environments are not pets, they are cattle. With a standardized environment your outages are likely to be more infrequent, but when they do occur, the time to resolution will be dramatically smaller because you have a known footprint.

I did struggle a little bit with sections 2 & 3. Section 2, “Catch and Release,” is six pages long and consists mostly of the benefits that having a known inventory will provide. It’s obvious that the authors find this point important enough to break it out, but if it were an easy task, everyone would already have the information and documents the authors specify.

This isn’t necessarily a knock on the authors, as it’s a twelve-year-old book, but section 3, “Establish a Repeatable Build Library,” is a bit dated and heavily focused on the ITIL processes. No doubt having your process repeatable is very important, but as we’ve already pointed out, in this day and age velocity matters, and for that you have to have tooling and automation in place. Again, it’s certainly not a knock on the authors; it’s just that you may be able to find better, more modern guides on how to achieve a build system in 2017.

The final section is really interesting to me, as it’s part summary, part recap, and part advisory on the pitfalls to watch out for. Any topic on “Continual Improvement,” the heading of section 4, will obviously have a focus on data and metrics. Typically in an ITO, metrics revolve around system or availability measures: is the system up, is the database running too hot, and so on, whereas the authors advise looking at more qualitative and performance metrics. After all, the goal is to control the environment and reduce administrative efforts so that your knowledge workers can spend more time on value-add efforts. As you read this section it’s easy to see that many of the ideas in the “Continual Improvement” section are the seeds from which The Phoenix Project grew. The biggest takeaway for me is that to become a highly effective ITO, you need fewer six-shooters and cowboy hats and more processes, roles and controls. Only by controlling the environment can you actually expect predictable results.

The book effectively wraps up with a summary of the objective: “As opposed to management by belief, you have firmly moved to management by fact.” If you’re struggling to attain this goal, The Visible Ops Handbook may be a good place to start; just be prepared to augment it with up-to-date technologies and data.

Another day, another PowerCLI report

Another day another reason to love PowerShell.

I have to come up with a list of all of my Windows machines, their OS versions and editions. My first thought, being nearly 100% virtualized, is “WooHoo, thank you PowerCLI”…

Except that the guest properties don’t include the OS edition for each VM… Sad face.


However, one of my favorite elements of the PowerCLI tool set is the Invoke-VMScript cmdlet contained within the VMware.VimAutomation.Core module. (For more about modules, see my post Getting Started with PowerCLI.) This cmdlet does exactly what it sounds like: it allows you to run a script in the guest OS. Now, there are obviously a number of prerequisites to leveraging this tool. The big ones are as follows:

  • VMtools must be running within the VM
  • You must have access to vCenter or the host where the machine resides
  • You must have Virtual Machine.Interaction.Console Interaction privilege
  • And of course you must have the necessary privileges within the VM.

There could also be some security concerns with allowing your VMware administrators the ability to run scripts within the virtual Operating System Environment, but that opens a whole other can of worms that we’ll put aside for another conversation.

Once you’re comfortable with the pre-reqs and any potential security elements, you can get started.

Get-VM vm-vm |
    Invoke-VMScript -ScriptType Powershell -ScriptText "gwmi win32_OperatingSystem"

So what are we doing here? We get the VM object and pipe it to the Invoke-VMScript cmdlet, where we run the PowerShell script “gwmi win32_OperatingSystem” within the context of the virtual OSE! What you get back is another PowerShell object containing the ExitCode of the script and the output within the ScriptOutput property.
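
For example, capturing the result makes those properties easy to poke at (vm-vm stands in for one of your VM names):

$result = Get-VM vm-vm |
    Invoke-VMScript -ScriptType Powershell -ScriptText "gwmi win32_OperatingSystem"
$result.ExitCode       # 0 if the in-guest script ran cleanly
$result.ScriptOutput   # the text the script wrote inside the guest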

Now just a quick sidenote: if you write PowerShell scripts, then inevitably you know about Get-Member (aliased to gm), but that only shows you methods and properties, not the values. If you’re not sure what you’re looking for and you’d like to see all the property elements of the object, you can just use $ObjectName | select -Property * to output them.

Back to the task at hand, I know I need a count of each OS type. I’d also ideally like that broken down by cluster. It would also be nice to know the machines that weren’t counted, so I can go and investigate them manually. So here we go.

$daCred = $host.ui.PromptForCredential("Please enter DA credentials","Enter credentials in the format of domainname\username","","")
foreach ($objCluster in Get-Cluster) {
    Write-Host "~~~Getting Windows OS stats for $objCluster~~~"
    $arrOS = @()
    foreach ($objvm in $($objCluster | Get-VM)) {
        # Only Windows guests are of interest
        if ($objvm.GuestId.Contains("windows")) {
            # Invoke-VMScript needs working VMtools; flag anything else for follow-up
            $status = $objvm.ExtensionData.Guest.ToolsStatus
            if ($status -eq "toolsOk" -or $status -eq "toolsOld") {
                # Keep just the ScriptOutput (the OS caption) from each run
                $arrOS += $(Invoke-VMScript -VM $objvm -ScriptType Powershell -ScriptText '$(gwmi win32_operatingsystem).caption' -GuestCredential $daCred -WarningAction SilentlyContinue).ScriptOutput
            } else {
                Write-Host "Investigate VMtools status on $($objvm)   Status = $status" -BackgroundColor Red
            }
        }
    }
    # Count each distinct OS caption found in this cluster
    $arrOS | group | select Count, Name | ft -AutoSize -Wrap
    Write-Host
}

You may say, “What’s happening here?” Let me tell you.

After we enter credentials that we know will work, we iterate through each cluster, and as we do so, we build an array of each OS that we find on our journey. As we iterate through each VM in the cluster, we check the VMtools status as we go, and if necessary flag the VMs for a check later. Then we run Invoke-VMScript inside a variable assignment so that we capture only the ScriptOutput property that’s returned into our array. Finally we can do a little sorting and counting on the array, output to the screen, and go investigate why we have so many darn red marks dirtying up our screen!


Until next time, be well!

Get Out of I.T. While You Can.

With a little conscious deliberation, the next book I decided to read after The Phoenix Project was Get Out of I.T. While You Can. I guess the first clue about this book should have been that there is no description of the book on Amazon, only bite-size snippets of praise (aka name drops). It’s a very quick read at about ~100 pages of actual content. The first half of the book is fairly decent, but it quickly devolves into strategies for advancing your career instead of advancing your organization. The message that I most deeply associated with from The Phoenix Project, that of taking an Outside-In approach to IT, is supposed to be the central theme of this book.

It’s a concept that IT has struggled with, IMHO. Often people with a background in IT rely on their technological skills, their intelligence, their ability to understand a facet of our digital world that many struggle with. When asked at a social engagement what they do, the response is typically “I’m in IT.”

Unfortunately, that answer is wrong. It’s holding both the individual and the organization back. The person who says “I’m in IT” doesn’t identify with their org; they identify with technology. Now don’t get me wrong, I can’t think of anyone I’ve interacted with in this field who doesn’t like to geek out on some widget, BUT if their primary priority isn’t the success and growth of their organization, then they are missing opportunities.

My friend Scot Barker (@sbarker) is someone whom I’ve gone to on multiple occasions for advice and guidance. As providence should have it, he recently relayed his experience exercising this concept in a very eloquent fashion. He tells the story of how engineers at a company he worked for “…spent 2-3 months, on-site at the customer, learning nothing about engineering or how the products were built. Nope, they learned how to do the job the customer does every day.” Through this experience, “They always had customer input on what was needed and how a certain feature needed to work,” and therefore hit what should be the #1 priority of the organization: solve the problems of our customers and make their lives better.

Now this is not an easy task for many classical IT folks. Disruption is the industry term du jour these days, and it applies not just to software or industries, but also to IT. Those who can accept that IT needs to evolve past a traditional rack-and-stack, keep-the-lights-on mentality will find themselves furthering both themselves and their organizations. Taking an Outside-In approach is a critical foundational element to being successful on this journey. Only by knowing where your Organization has been, where it is going and what its aspirations are, can you be most valuable.

As I mentioned before it’s not an easy path to walk, but once you’re on it I think you’ll find it to be rewarding. I know I have. If you have thoughts or stories to relay on this topic, I’d love to hear from you.

The Phoenix Project

From the moment that I arrived in Vegas for VMworld 2016, I started hearing about this book, The Phoenix Project. At first I thought that my ears were playing tricks on me when I heard that it was a DevOps novel. This weird reality sunk in when, during the opening day keynote address, John Spiegel, IT manager at Columbia Sportswear, spoke about the virtues of this book. (The segment begins right around 51 min.)

Given all the chatter around this book, I ordered it from my seat before Mr. Spiegel had even left the stage. The primary message from Mr. Spiegel and the session in general was “treat IT as a factory, focusing on efficiencies and optimizations.” This is obviously a very important message, but I’d argue that for anyone who works in IT and hasn’t recognized, learned, or embodied this message, or at a minimum isn’t working towards it… well… there are probably other fundamental messages that should be more relevant to them.

There is an underlying theme in the presentation, in Mr. Spiegel’s talk and in this book that resonated very strongly with me, and that is to take an Outside-In approach to IT. Instead of focusing on a technology or a framework, as many in IT are prone to do, we need to look at the problems (and successes) that people throughout the Organization experience. Take that newfound knowledge and figure out how we can use technology to positively affect their experiences and therefore positively drive the goals of the business. Once articulated, it’s a pretty simple concept to internalize: if you don’t know the business, its positives and its problems, then how can you possibly be most effective in helping the Organization move forward?

One particular individual in The Phoenix Project recognizes this reality in a rather dramatic fashion and goes from the stereotypical vision of IT as the department of “no” to one who actively seeks engagement. He takes the empathetic approach of trying to understand both the pains and successes of his business and how he can use his technological skills to affect change for the positive. There is a realization that by attempting to apply strict, dogmatic InfoSec principles he just may slow things down. Once his mindset shifts to an Outside-In approach, he’s able to get a far greater level of cooperation and implement more of the principles he cherishes, all the while moving the business and his personal/career objectives forward at a faster pace!

The Outside-In approach is just one piece of this fantastic book. The novel format is one that I haven’t seen in IT improvement books before, and it certainly makes for an engaging read. Don’t mistake this book for a deep dive into any frameworks or technologies. Rather, it creatively addresses many of the common challenges which need addressing in order for you to develop a high performing IT organization. If you’re looking for a guide on how to begin implementing a DevOps framework and culture in your organization, then disregard the sub-title, as this probably isn’t the best book for you.

If you’ve ever been bogged down in the quagmire of firefighting, been unable to break the cycle of finger pointing, struggled to come up with fresh approaches to the challenges of working in a large IT org, or even if you’re just someone who works with IT, then this book should be a must-read for you.

PS: If you’ve found this interesting, perhaps you’d like to check out my thoughts on Implementing ITIL written by the same authors.

Sweep up that mess!

I have a perhaps daunting task in front of me: to clean up a network block when I don’t know what’s on it. So what’s a person to do? Well, I took a couple of minutes to write my own simple ping sweep.

Thankfully Microsoft made this nice and easy with the Test-Connection cmdlet. In the past you’d have to parse the output from our good old ping, and honestly nobody wants to spend their time doing that.

Throw in a loop and a test condition, and you’re almost all the way there. I decided that I wasn’t happy knowing whether an IP was reachable but not knowing who was at the receiving end, so I used the .NET DNS class to do a lookup. Now, when this class doesn’t find a result, it throws an ugly and unfriendly error message. For this reason you may have noticed that I set the ErrorAction and Warning preferences at the top of the script.

As written it won’t scan more than a class C, but you could pretty easily alter it to fit your needs.

And there you go! Simple ping sweeps.

$ErrorActionPreference = "SilentlyContinue"
$WarningPreference = "SilentlyContinue"

$range = "192.168.42."
$firstip = 1
$lastip = 15
$numpings = 3

Write-Host "Please be patient.
Based on the IP range entered, the script will take at least $(($lastip - $firstip + 1) * ($numpings + 1)) seconds to complete.
Addresses that appear hereafter are hosts that were able to be reached"

$count = 0
# -le so that $lastip itself is included in the sweep
For ($ip = $firstip; $ip -le $lastip; $ip++) {
    # Build the address to test, e.g. 192.168.42.7
    $testip = $range + $ip
    if (Test-Connection $testip -Quiet -Count $numpings) {
        Write-Host $testip -NoNewline
        # Reverse DNS lookup so we know who is on the receiving end
        $nsresult = [System.Net.Dns]::GetHostEntry($testip)
        if ($nsresult) {
            Write-Host ", "$nsresult.HostName
        } else {
            Write-Host ""
        }
        $count++
    }
    # Reset for the next address
    $nsresult = $null
    Clear-Variable testip
}

Write-Host "You found $count reachable addresses in this range"
Write-Host "Fin!" -BackgroundColor Cyan -ForegroundColor Black
