IT Maturity Part 1: What Does a Successful IT Organization Look Like?
For years, analysts have put forward models of IT maturity -- even major vendors like Microsoft got into the game. Basic. Proactive. Agile. Remember agile? Some big, big companies got behind that word.
I recently worked with a group of consultants who completed a 300-company survey of IT maturity. Rather than trying to apply fancy terms to different levels of maturity, we focused on two things: What do companies want from IT, and of those companies who are getting it, what are some of their other characteristics? In other words, what does a successful IT team look like?
I'll present our findings in several parts; what you'll read is essentially a summary of everything we learned. Keep in mind that we didn't actually make anything up for this -- we simply surveyed companies to define success, and then observed the companies who were, by those criteria, the most successful in IT. You're welcome to contact me if you or your organization would like to learn more about our findings.
What Does a Successful IT Organization Look Like?
This was obviously the first and most important part of the study: What does a successful IT organization look like? We started at a fairly high business level, with executive statements revolving around delivery and overhead, and then drilled into specific numbers. Mind you, most of the companies we spoke with were not successful according to their own criteria; we based our final findings on statistical averages across everything the companies told us. We then abstracted the numbers a bit so that they could potentially apply to every organization, not just ones of a certain size. Finally, we re-scaled the numbers, adjusting for company size, to make sure that the criteria we were seeing truly did match back to what companies told us.
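For readers who like to see the arithmetic, here's a minimal sketch of the kind of size adjustment described above. This is not the survey's actual methodology; the dataset, the field names, and the per-IT-staffer normalization are all assumptions for illustration.

```python
# Hypothetical sketch of size-adjusted aggregation -- not the survey's
# actual methodology. The dataset, field names, and per-IT-staffer
# normalization are all invented for illustration.
from statistics import mean, median

companies = [
    {"it_staff": 12,  "monthly_tickets": 480},
    {"it_staff": 150, "monthly_tickets": 5100},
    {"it_staff": 45,  "monthly_tickets": 1700},
]

# Normalize by company size so a 12-person shop and a 150-person shop
# land on the same scale before averaging.
tickets_per_staffer = [c["monthly_tickets"] / c["it_staff"] for c in companies]

print(f"mean tickets per IT staffer:   {mean(tickets_per_staffer):.1f}")
print(f"median tickets per IT staffer: {median(tickets_per_staffer):.1f}")
```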
So here's what we're told a successful IT organization looks like (a short script for checking a couple of these numbers against your own ticket data follows the list):
- The organization's highest-paid (what most people would call Tier 3) staff spends less than 10 percent of their time on break/fix problems, approximately 20 percent of their time training and mentoring lower-tier staff, and the remainder of their time designing and implementing new projects and upgrades.
- No single help request topic handled by the organization's Tier 1 support (what most people call the help desk) staff comprises more than 20 percent of that tier's total workload. In other words, there are no major recurring problems that can't be solved through self-service solutions. Password resets, for example, will never go away, but they shouldn't eat up staff time when self-service tooling can handle them.
- Most problems are solved from documentation or knowledge bases, even when the fix must be implemented by IT staff rather than an end user. Specifically, less than 5 percent of the fix-time for most problems is spent on research and information-gathering. This was a tough number to ferret out, as few organizations track resolution times in this kind of detail, but the few follow-up interviews we conducted all pointed to roughly this number. "Most problems" turns out to be something like 90 percent, meaning about 10 percent of problems took longer to research.
- Unplanned downtime -- and this could be an entire system or a single user -- comprises less than 1 percent of the organization's total time. That is, no one user is down for more than 1 percent of their workday, no one system is down for more than 1 percent of its service period, and so on.
- Fewer than 10 percent of trouble tickets relate to the performance of production systems. In other words, either users perceive performance to be fine, or they've been educated (often via SLAs) on the difference between "good" and "bad" performance, and don't call in when they know things are in the "good" range.
- Tickets are resolved in more or less the shortest time possible for the problem at hand. In other words, we observed very little "procedural overhead" around IT. If a user called and needed access to some folder, and had permission from the data owner (the means by which that permission was communicated and evidenced varied a lot), then the problem was fixed in minutes, not days.
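To make a couple of those thresholds concrete, here's a minimal sketch of how you might check them against your own ticket data. Every field name and all of the sample records are invented for illustration; a real ticketing system would need its own extraction step.

```python
# Hypothetical check of two of the thresholds above against a flat list
# of ticket records. Field names and sample data are invented; adapt the
# extraction to whatever your ticketing system exports.
from collections import Counter

tickets = [
    {"tier": 1, "topic": "password reset", "hours": 0.2, "perf_related": False},
    {"tier": 1, "topic": "vpn access",     "hours": 0.5, "perf_related": False},
    {"tier": 1, "topic": "password reset", "hours": 0.1, "perf_related": False},
    {"tier": 2, "topic": "slow reports",   "hours": 3.0, "perf_related": True},
    {"tier": 1, "topic": "printer jam",    "hours": 0.4, "perf_related": False},
]

# Threshold 1: no single Tier 1 topic should exceed 20% of Tier 1 workload.
tier1 = [t for t in tickets if t["tier"] == 1]
tier1_hours = sum(t["hours"] for t in tier1)
hours_by_topic = Counter()
for t in tier1:
    hours_by_topic[t["topic"]] += t["hours"]
topic, hours = hours_by_topic.most_common(1)[0]
print(f"largest Tier 1 topic: {topic} at {hours / tier1_hours:.0%} (target: under 20%)")

# Threshold 2: fewer than 10% of all tickets should concern production performance.
perf_share = sum(t["perf_related"] for t in tickets) / len(tickets)
print(f"performance-related tickets: {perf_share:.0%} (target: under 10%)")
```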
Those are the main points. Not surprisingly, they all revolve around downtime and fixes. True, most IT organizations do a heck of a lot more than just fix problems, but problems are obviously pretty high-profile within an organization, and so they tend to drive perception of success or failure. These "issue-based metrics" were interesting to us, because they reflect an organization's definition of success once the SLAs have failed. That is, teams already have SLAs in place for routine things like creating new mailboxes; the metrics we've outlined are for how well things are handled once the SLA isn't met.
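One hypothetical way to put a number on "how well things are handled once the SLA isn't met" is to track the breach rate alongside the average overrun among the breaches. The 4-hour SLA and the sample data below are assumptions, not figures from the survey.

```python
# Hypothetical "issue-based metric": of the tickets that blew their SLA,
# how far past the deadline did they run? The 4-hour SLA and the sample
# resolution times are invented, not survey figures.
SLA_HOURS = 4.0
resolution_hours = [0.5, 2.0, 6.5, 3.9, 9.0, 1.1]  # sample data

overruns = [h - SLA_HOURS for h in resolution_hours if h > SLA_HOURS]
breach_rate = len(overruns) / len(resolution_hours)
avg_overrun = sum(overruns) / len(overruns) if overruns else 0.0

print(f"SLA breach rate: {breach_rate:.0%}, average overrun: {avg_overrun:.1f} hours")
```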
Our next step was to start looking at other, more detailed behaviors within organizations that were meeting these criteria.
Posted by Don Jones on 04/09/2012 at 1:14 PM