VMware library

The entire vSphere community (myself included) seems to be in a flutter over the release of the long-awaited Host Resources Deep Dive by Frank Denneman and Niels Hagoort. For me this recently resulted in a tweet-thread to see who had bought the most copies (I lost). The upside to this whole thing is I came across Mr. Ariel Sanchez Mora's (you should follow him) VMware book list. I love book reviews, so with Ariel's permission I'm stealing his idea! In fairness you should really go check out his page first, but make sure to come back! Without further ado, here's the best of my VMware library.



Like many people, this is where I started. I'd heard horror stories about the VCP, so after taking Install, Configure, Manage I bought my first book, coincidentally (or not) written by the instructor of my class. I immediately followed it up by adding the second piece to my library.

The Official VCP5 Certification Guide (VMware press) by Bill Ferguson

VCP5 VMware Certified Professional on vSphere 5 Study Guide: Exam VCP-510

(Sybex) by Brian Atkinson

I think that in terms of the earlier books, I’d give the edge to the Sybex version. It covers the same fundamentals as the official guide, but goes much deeper.

Just last year I was wandering around the expo floor at VMworld 2016, bumming from failing my VCP6 (it was expected, but disappointing nonetheless), when I walked straight into a book signing for the new VCP6-DCV cert guide. It was destiny, or something like that.

VCP6-DCV Official Cert Guide (Exam #2V0-621) (VMware Press) by John Davis, Steve Baca, Owen Thomas

Here's the thing with certification guides: the majority of the material doesn't change from version to version. DRS is DRS is DRS (not really, but the fundamentals are all there). If you're just getting started, or able to get a hand-me-down copy of an earlier version, you'll still be leaps and bounds ahead of folks who haven't read the books. They can be a good way to get a grasp on the fundamentals if all you're looking to do is learn. To that end, if your goal is to pass the test, you can't go wrong with picking up an official certification guide. I know the VCP6-DCV guide provided an invaluable refresher for me.

For more on the VCP-DCV, please check out my study resources.

Hands On

Learning PowerCLI (Packt publishing) by Robert van den Nieuwendijk

I didn't realize until just now that there was a second edition of this book released in February of this year! Regardless, this book is a great way to get started with PowerCLI; however, it's more of a recipe cookbook than a tutorial. If you need a getting-started-with-PowerShell book, look no further than:

Learn Windows PowerShell in a Month of Lunches by Donald Jones and Jeffrey Hicks.

This is the guide to get started with PowerShell. Honestly I think the authors should give me commission for how many people I’ve told to go buy this book. It’s not a vSphere book, but if you want to be effective with PowerCLI, this book will help you on your way. It breaks the concept up into small manageable chunks that you can swallow on your daily lunch break.

DevOps for VMware Administrators (VMware Press) by Trevor Roberts, Josh Atwell, Egle Sigler, Yvo van Doorn

“DevOps” to me is like “the cloud”. It means different things to different people. In this case the book focuses solely on the tools that can be used to help implement the framework that is DevOps. Nonetheless, it’s a great primer into a number of the most popular tools that are implemented to support a DevOps philosophy. If you’re struggling with automation and/or tool selection for your DevOps framework, there are far worse places to start.


Mastering VMware vSphere series

The gold standard for learning the basics of vSphere. This title could just as easily appear under the certification section, as it appears on almost every study list. A word of warning, this is not the book to learn about the latest features in a release. That’s what the official documents are for. You may notice that I linked an old version above, and that’s because the latest version was conspicuously missing nearly all of the new features in vSphere 6. That being said, it’s another standard that should be on all bookshelves.

VMware vSphere Design (Sybex) by Forbes Guthrie and Scott Lowe

Age really doesn’t matter with some things, and while that rarely pertains to IT technologies, good design practices never go out of style. This thought provoking book will help you learn how to design your datacenter to be the most effective it can be. I’d recommend this book to anyone who’s in an infrastructure design role, regardless of whether they were working on vSphere or not.

IT Architect: Foundation in the Art of Infrastructure Design: A Practical Guide for IT Architects by John Yani Arrasjid, Mark Gabryjelski, and Chris McCain

And then on the other hand you have a design book that’s built for a specific purpose and that’s to pass the VCDX. Much of the content is built and delivered specifically for those who are looking to attain this elite certification. This is a good book, but as someone who has no intention of ever going after a VCDX, I expected something a bit less focused on a cert, and a bit more focused on design principles. If unlike me you have VCDX aspirations, you definitely need to go grab a copy.

VMware vSphere 5.1 Clustering Deepdive (Volume 1) by Duncan Epping & Frank Denneman

I really don't care if this was written for a five-year-old OS or not. If you want to learn how vSphere makes decisions and how to work with HA/DRS/Storage DRS/Stretched Clusters, this is an essential read. Put on your swimmies, because you're going off the diving board and into the deep end!

I’m just going to go ahead and leave a placeholder here for the aforementioned VMware vSphere 6.5 Host Resources Deep Dive. Having heard them speak before and read their other works, I expect this book to be nothing less than mind blowing.

If you liked this, please check out my other book reviews.

Thanks for visiting!


Why you need more PowerShell

or: How I stopped worrying and learned to love the CLI

I recently gave a Tech Talk at our spring Champlain Valley VMUG on PowerShell and PowerCLI. The talk definitely was more of an introductory instructional, but one of our attendees expressed that they wanted to hear more about the value that can be delivered back to the organization by scripting with PowerShell. Hopefully I can give you a solid overview of the immense value of PowerShell here today.


The only constant is change, and that holds true for IT infrastructure folks as well. Terms like DevOps, disruptor, and Shadow IT have become firmly established in our lexicon. And with good reason! We are in a world that is moving faster and faster every day, and you often see that it's not the best product that corners a market, but rather the first/fastest to market that gets a stranglehold. If you come from a classical IT role with silos and legacy processes/policies that slow your organization down… well, is it any wonder that you have disruptors changing the model?!? But what if there was a way that you could help accelerate your business, work collaboratively with the developers, combat Shadow IT and be the disruptor yourself? PowerShell can be the tool that enables this transformation by delivering Time and Consistency to your organization.


This one is simple. Time is money, and by investing a little bit of effort up front scripting a solution, you will save time moving forward. Here is the no-brainer part of the value prop: do you want to take on the time-consuming task of building environments by hand? Of course you don't! You want something that's fast and easy. There's a take on the old adage that I'll paraphrase here: "Do it once, ok. Do it twice, shame on me. Do it three times, why haven't you scripted it yet?"

Let's suppose for a second that you have to install a widget dozens, hundreds or thousands of times. This activity takes hours. Once you script and automate that install, you turn it into a hands-off activity, freeing up your engineers to do more of the activities that drive value instead of just watching the progress bar. Simply by writing that script, you've saved your business time and money, and honestly you've probably gained a bit of expertise and employee engagement as well. Extrapolate this out to all of the infrastructure elements you need to manage: people, policy, applications, servers, storage, network, security, and the list goes on. Even if you can only automate part of a process, you're still going to see dividends.
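To make that concrete, here's a minimal PowerShell sketch of the widget scenario. Everything specific here is a placeholder: the server list file, the installer share path, and the silent-install switches will all be different in your environment.

```powershell
# Hypothetical example: push the same installer to many servers at once.
# The file paths and install switches below are placeholders.
$servers = Get-Content -Path .\servers.txt   # one server name per line

Invoke-Command -ComputerName $servers -ScriptBlock {
    # Runs on each remote server via PowerShell remoting
    Start-Process -FilePath '\\fileshare\installers\widget-setup.exe' `
                  -ArgumentList '/quiet /norestart' `
                  -Wait
    Write-Output "$env:COMPUTERNAME : widget installed"
}
```

One invocation replaces hours of RDP sessions, and because `Invoke-Command` fans out to the remote machines for you, adding server one thousand costs the same effort as adding server two.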

A less intuitive reason for starting with PowerShell is that it has a pretty quick learning curve, especially if you come from a Windows environment. If you have any programming/scripting background you can likely dive right in. This means that your team can be scripting sooner, and can start driving the non-value-added operations out of your day to day. Many infrastructure folks don't have a background in development activities, and as such scripting can be a bit of a hard sell. PowerShell was meant to build upon and extend the foundation of tools like Batch and VBScript, but in a way that is intuitive to learn and become efficient with quickly. One of my go-to guides for learning PowerShell is the Learn Windows PowerShell in a Month of Lunches guide. This book is so successful in large part because it demonstrates just how accessible PowerShell really is.

I mentioned earlier that you can create collaborative opportunities and combat Shadow IT. PowerShell is built on top of the .NET framework and has support for REST APIs baked in. This means that you can share code, speak the same language and have smoother hand-offs. By using PowerShell you have an opportunity to increase the amount of collaboration between your groups. If you can harness this opportunity, you're likely to see less finger-pointing and be able to cut out some unnecessary meetings.


Time and Consistency (and money) go hand in hand in IT. Having inconsistent environments results in more frequent issues and longer times to resolution. When you start scripting out your activities you will have a much more predictable environment: outages will decrease in frequency, and your time to resolution will also drop. This all yields greater uptime. More uptime means happier customers and happier engineers. Your business is winning!

One is a goat. The other is the GOAT.
Speaking of winning, do you know why Tom Brady is one of the Greatest Of All Time? It's not because of his Uggs or his supermodel wife. It's because he has put in the work up front to ensure that no matter who he is working with, he will have a predictable and consistent outcome. This is what you should be aiming for with your environment: consistent and predictable.

Having a consistent repeatable infrastructure makes that environment easier to rebuild. If you can kick off a PowerShell script that results in a fresh server in a matter of minutes, why would you spend hours troubleshooting a problem? The saying “treat your servers like cattle, not like pets” became popularized for a reason. Wikipedia states that “The term commodity is specifically used for an economic good or service when the demand for it has no qualitative differentiation across a market.” Your servers SHOULD have no qualitative differences, and are therefore inherently commodities, and replaceable. Diving into PowerShell and PowerCLI can help get you there.


I've mentioned it a number of times, but some of you may be wondering: what is this PowerCLI thing? PowerCLI is VMware's collection of PowerShell modules which allow you to "automate all aspects of vSphere management, including network, storage, VM, guest OS and more." To put it short, it's a super efficient and reliable (not to mention fun) way to manage your vSphere environments. It's also incredibly powerful. There are over 500 separate cmdlets in the modules which make up PowerCLI. By some accounts VMware has approximately 80% of the hypervisor market, which means the majority of the world's infrastructure runs on vSphere and can be managed with PowerCLI.
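To give a taste of what that looks like in practice, here's a sketch of a tiny PowerCLI session. The vCenter hostname and output path are made up, and you'd normally be prompted for credentials on connect:

```powershell
# Hypothetical PowerCLI session; the vCenter name is a placeholder.
Import-Module VMware.PowerCLI

Connect-VIServer -Server vcenter.example.local

# A one-liner report of every powered-off VM in the environment --
# a task that would take a lot of clicking in the vSphere client.
Get-VM |
    Where-Object { $_.PowerState -eq 'PoweredOff' } |
    Select-Object Name, NumCpu, MemoryGB |
    Export-Csv -Path .\powered-off-vms.csv -NoTypeInformation

Disconnect-VIServer -Confirm:$false
```

The pipeline style is the point: each cmdlet does one thing, and chaining them turns an afternoon of inventory work into a repeatable report you can schedule.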

Using PowerCLI just allows you to further expand on the amount of Time and Consistency that you can deliver back into your business. With PowerCLI you can automate and manage the network, hypervisors, storage and all of the elements that encompass your "infrastructure". You can also take it one step further: thanks to the security models built into vSphere, you can let your users do it too! With a little bit of thought and design, you can give your developers the ability to spin up and spin down their own VMs. No more test/dev headaches for you, and your developers are happier! The winning doesn't stop!
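As a rough sketch of that self-service idea, PowerCLI can script the vSphere permission model itself. The role name, privilege list, folder, and AD group below are all hypothetical; the exact privileges you grant would depend on how much rope you want to hand out:

```powershell
# Hypothetical example: a limited role so developers can manage their own
# test VMs. The role name, privileges, folder and group are placeholders.
$privileges = Get-VIPrivilege -Id 'VirtualMachine.Interact.PowerOn',
                                 'VirtualMachine.Interact.PowerOff',
                                 'VirtualMachine.Inventory.CreateFromExisting'

New-VIRole -Name 'Dev-SelfService' -Privilege $privileges

# Scope the role to the dev/test folder only, so production stays off-limits
New-VIPermission -Entity (Get-Folder 'DevTest') `
                 -Principal 'DOMAIN\Developers' `
                 -Role 'Dev-SelfService' `
                 -Propagate:$true
```

Because the permission is scoped to a single folder, developers get their sandbox while the rest of the inventory remains untouched, and the whole arrangement is captured in a script you can review and re-run.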

As I said to start my VMUG presentation, I'm not an expert in PowerShell or PowerCLI, but I have used them very effectively in my day to day. It's also a topic that I'm passionate about; otherwise you'd never catch me voluntarily speaking in front of 100 people! I've also managed to write some fairly complex scripts that have helped my organization reach its goals. I hope this post helps you understand some of the value of PowerShell & PowerCLI scripting. If you'd like to keep the conversation going, or if you have any questions, I'd love to hear from you.

The Order of the Phoenix – The Prequel

No, unfortunately this is not about some recently found J.K. Rowling manuscript (it would probably be much more captivating if it was). Rather, I just finished reading The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps, which you could call the prequel to The Phoenix Project. Although I don't think it was the authors' intention, you could look at The Phoenix Project as a case study and the Visible Ops Handbook as the install guide. One seeks to create understanding via a narrative, while the other is a prescriptive method for implementation.

At first it may seem hard to believe that the same authors who published The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win wrote a book about implementing ITIL. After all, a poorly implemented ITIL strategy can result in layers of bureaucracy and a slowdown of all IT operations. Combine this with the image that many people have of DevOps as a cowboy or shadow IT movement, and the two books at a glance appear to make for odd bedfellows. In reality, ITIL and DevOps can and should be partners in creating a more effective IT organization that serves to meet the needs of the business at increasing velocity. Both can lead to the same results of faster turnaround, greater visibility & security, lower failure rates and less firefighting, so why shouldn't these two frameworks coexist?

Before going much further, I think it makes sense to take a lesson from the authors and provide a few definitions for the sake of conversation.

  • ITIL, as stated in Wikipedia, "is a set of practices for IT service management (ITSM) that focuses on aligning IT services with the needs of business." It's a way to define a standard set of processes and controls around IT service. As the authors are fond of pointing out, it can also be used as a universal IT language to define processes, but the true power of ITIL is in cleaning up a messy ITO. (Pictured in the original post: my ITIL pin, awarded to those who've passed an ITIL qualification exam.)

  • DevOps is often looked at as more of a culture or movement than a framework, but it seeks to more tightly integrate the Development and Infrastructure (or Operations) sides of IT organizations. Where ITIL is extremely metric-driven, DevOps is often very focused on tooling. In the context of DevOps, the Development side of the equation includes traditional developers as well as test, QA, integration, etc. Operations refers to what I'd prefer to call the Infrastructure team: sysadmins, DBAs, release and so on.
  • I think it would also help to define an IT organization, or ITO. In terms of the book, they reference development, testing, release, QA, operations and the traditional infrastructure groups all as parts of an ITO. In smaller companies these may be broken up into disparate groups; however, you just as often see them combined into a larger ITO body. In full disclosure, my opinion and experience show that having these functions all combined within an ITO makes for better harmony (aka less finger-pointing) and enhanced cooperation right out of the gate, even before taking into account the prescribed recommendations in the Handbook.

Now that we are speaking the same language, the Handbook presents a simple framework for how to turn your ITO into a highly functioning organization. The book is broken down into four steps to help guide you on your journey towards being a high-performing IT organization.

For those of us in the trenches it can be daunting to try and figure out how to start. When you are constantly fighting fires it can be hard to see a way forward. The first step prescribed in the Handbook is "Stabilize the Patient". I prefer to call it "stop the bleeding", because IT practitioners are often prone to death by a thousand cuts. It's hard to look at process improvement when you have to read 1000 emails a day (not an exaggeration), fix the latest crisis du jour, deal with the Executive's pet project, manage vendors, and the list goes on and on.

The premise of Step 1 is pretty simple, and it's stated pretty bluntly in the first sentence: "Our goal in this phase is to reduce the amount of unplanned work as a percentage of total work done down to 25% or less." Now if you're like me, the immediate reaction is "HOW THE HELL CAN I DO THAT?!" And the simple answer is: change management. I'm pretty sure there was a collective groan from anyone who may be reading this page. The reason many people react that way is because we've all seen change management done badly. I've seen change management run the gamut from honor-system spreadsheets to long, droning, monotone CAB meetings. The reason they fail is because they are not focused on the spirit of the process.

If you’ve read any of my previous posts, you’ll know that I’m a proponent of a business first and technology second mentality. In the Introduction to the guidebook a lot of attention is focused on the fact that to succeed you have to have a culture and belief system that supports and believes in three fundamental premises:

  • Change Management is paramount and unauthorized change is unacceptable. When you consider that 80% of failures are caused by change (human or otherwise), it’s quickly apparent that all change needs to be controlled.
  • A culture of causality. How many outage calls have you been on where someone has suggested rebooting the widget "just to see if this works"? Not only does this approach burn out your first responders and extend outages, but you never get to cause, and therefore resolution. My favorite phrase in the book, "Electrify the fence", goes more into that, which we'll discuss soon.
  • Lastly, you must instill the belief in Continual Improvement. It’s pretty self explanatory: you fought hard to get to this point and if you’ve already put in this level of effort you obviously want to see the organization continue moving forward. If you’re not moving forward, then everyone else is catching up.

Now, you can't just have an executive come in and state that "We have a new culture of causality" and expect everyone to get on board. It's a process, and by successfully demonstrating the value that the process brings, people will come on board and begin embodying the aforementioned culture. What you do need your Executives to get behind, and state to the troops, is that unauthorized or uncontrolled change is not acceptable. They say it over and over in the book: the only acceptable number of unauthorized changes is zero.

But how do you get people to stop with the unauthorized changes?

  • You need a change management process. You must must MUST have a change process, and it can’t be burdensome.
  • There has to be a process for emergency changes. Don't just skip the whole change-approval process for emergencies, because then you set a precedent that says you can avoid the process when it's important enough. Once you've done this, you've created an issue where everyone thinks their issue is an emergency.
  • Consider having a process for routine, everyday activities with normal levels of risk that get auto-approved. The important part is that they are tracked and that …
  • The people implementing the changes are accountable. They are accountable for following the process as well as for executing on the change. Many people don’t want to follow change processes, so they won’t. You have to have a means to monitor, detect and report on changes. Once you have this in place, people can truly be held accountable.

And why do you want to go through this process of reducing unplanned and unauthorized change? The number cited in the book is that 80% of all outages are caused by change. Reducing unplanned changes reduces both the frequency and the duration of outages. Fewer outages mean less firefighting. A formal process that MUST be followed drops the number of drive-bys that your engineers have to deal with, as all changes go through the process.

Lastly, going through a change process forces the change initiators to think intentionally. Someone I respect immensely had me read Turn the Ship Around! last year (another book I'd highly recommend), and in that book there is a concept of acting intentionally. You state to your teammates, "I intend to turn the speeder repeater up to 11." By stating the actions you are about to take, you are forced to think about them and what their results may be. It can also slow you down a touch when you're about to make a "harmless little update." By acting intentionally through the change process, you consider (and hopefully document) exactly what you're going to do, what the outcome will be, how you'll test, and what your rollback plan is. All of these acts will provide for higher-quality changes, fewer outages, and ultimately more time for your engineers to focus on the remaining steps in the Handbook.

Now by the time you’ve gotten through step one, I’d argue that the heaviest lifting is done, and you’re hopefully learning more about ITIL as you go. The remaining steps that the authors detail in the Handbook share a lot of commonalities and are where you can really find opportunities to start blending the best parts of ITIL and DevOps into your own special IT smoothie.

Step 2 is pretty straightforward. You have to know what you have in order to effectively support your business. You must create an inventory of your assets and your configurations. Honestly, this step can be summed up pretty succinctly: go build a CMDB and an asset DB; otherwise you're subject to drift and you can't hold people accountable for their changes. It's the bridge between Step 1, where you have to know how the environment is set up and configured, and Step 3, where you begin to standardize.

Now bear in mind that when this book was originally written, DevOps hadn’t been coined as a term yet, but you can see it as a precursor with the title of Step 3 “Establish a repeatable build library”. In 2017 the benefits of this are pretty obvious. If you have a standard build, you can hand that release process off to more junior members, or ideally have the process automated. By having your builds standardized and your configurations documented, your environments are not pets, they are cattle. With a standardized environment your outages are likely to be more infrequent, but when they do occur the time to resolution will be dramatically smaller because you have a known footprint.

I did struggle a little bit with sections 2 & 3. Section 2, "Catch and Release", is six pages long and consists mostly of the benefits that having a known inventory will provide. It's obvious that the authors find this point important enough to break it out into its own step, but if it were an easy task, everyone would already have the information and documents the authors specify.

This isn't necessarily a knock on the authors, as it's a twelve-year-old book, but section 3, "Establish a Repeatable Build Library", is a bit dated and heavily focused on the ITIL processes. No doubt having your process repeatable is very important, but as we've already pointed out, in this day and age velocity matters, and for that you have to have tooling and automation in place. Again, it's certainly not a knock on the authors; it's just that you may be able to find better, more modern guides on how to achieve a build system in 2017.

The final section is really interesting to me, as it's part summary, part recap, and part advisory on the pitfalls to watch out for. Any topic on "Continual Improvement", the heading of section 4, will obviously have a focus on data and metrics. Typically in an ITO, metrics revolve around system or availability questions (is the system up, is the database running too hot, etc.), whereas the authors advise looking at more qualitative and performance metrics. After all, the goal is to control the environment and reduce administrative efforts so that your knowledge workers can spend more time on value-add efforts. As you read this section it's easy to see that many of the ideas in "Continual Improvement" are the seeds from which The Phoenix Project grew. The biggest takeaway for me is that to become a highly effective ITO, you need fewer six-shooters and cowboy hats and more process roles and controls. Only by controlling the environment can you actually expect predictable results.

The book effectively wraps up with a summary of the objective: "As opposed to management by belief, you have firmly moved to management by fact." If you're struggling to attain this goal, The Visible Ops Handbook may be a good place to start; just be prepared to augment it with up-to-date technologies and data.

The Phoenix Project

From the moment I arrived in Vegas for VMworld 2016, I started hearing about this book, The Phoenix Project. At first I thought my ears were playing tricks on me when I heard that it was a DevOps novel. This weird reality sunk in when, during the opening-day keynote address, John Spiegel, IT manager at Columbia Sportswear, spoke about the virtues of this book. (The segment begins right around the 51-minute mark.)

Given all the chatter around this book, I ordered it from my seat before Mr. Spiegel had even left the stage. The primary message from Mr. Spiegel and the session in general was to "treat IT as a factory, focusing on efficiencies and optimizations". This is obviously a very important message, but I'd argue that for anyone who works in IT and hasn't recognized, learned, or embodied this message, or at a minimum isn't working towards it… well… there are probably other, more fundamental messages that should be more relevant to them.

There is an underlying theme in Mr. Spiegel's talk and in this book that resonated very strongly with me, and that is to take an Outside-In approach to IT. Instead of focusing on a technology or a framework, as many in IT are prone to do, we need to look at the problems (and successes) that people throughout the organization experience, then take that newfound knowledge and figure out how we can use technology to positively affect their experiences and therefore positively drive the goals of the business. Once articulated, it's a pretty simple concept to internalize: if you don't know the business, its positives and its problems, then how can you possibly be most effective in helping the organization move forward?

One particular individual in The Phoenix Project recognizes this reality in a rather dramatic fashion and goes from the stereotypical vision of IT as the department of "no" to one who actively seeks engagement. He takes the empathetic approach of trying to understand both the pains and successes of his business and how he can use his technological skills to affect change for the positive. There is a realization that by attempting to apply strict, dogmatic InfoSec principles he may just slow things down. Once his mindset shifts to an Outside-In approach, he's able to get a far greater level of cooperation, implement more of the principles he cherishes, and move the business and his personal/career objectives forward at a faster pace!

The Outside-In approach is just one piece of this fantastic book. The novel format is one that I haven't seen in IT improvement books before, and it certainly makes for an engaging read. Don't mistake this book for a deep dive into any frameworks or technologies. Rather, it creatively addresses many of the common challenges which need addressing in order for you to develop a high-performing IT organization. If you're looking for a guide on how to begin implementing a DevOps framework and culture in your organization, then disregard the sub-title, as this probably isn't the best book for you.

If you've ever been bogged down in the quagmire of firefighting, been unable to break the cycle of finger-pointing, struggled to come up with fresh approaches to the challenges of working in a large IT org, or even if you're just someone who works with IT, then this book should be a must-read for you.

PS: If you’ve found this interesting, perhaps you’d like to check out my thoughts on Implementing ITIL written by the same authors.