Windows Server 2012: An Upgrade Only a Geek Could Love

Microsoft has a problem. A marketing problem. That problem's name is Windows Server 2012.

You see, as an operating system, Windows is pretty dang robust already. There's not a lot that we need it to do that it doesn't do. So Windows Server 2012 doesn't come with a flash-bang set of features. There are no massive changes to AD. Printing is still printing. Clustering works fine. Sure, it's probably "the most secure version of Windows ever," but I don't think anyone's dumb enough to try and sell that line anymore.

This means a lot of organizations -- a lot of decision makers -- are going to look at Windows Server 2012, say "meh" and ignore it.

Bad move. Windows Server 2012's improvements aren't skin-deep -- they're geek-deep. They're critical, yet evolutionary changes that make this a more robust, more stable and infinitely more usable operating system.

SMB 3.0
Yeah, maybe it's really Server Message Block 2.2, but it should be 3.0, and I'm glad MS is positioning it that way. Massively restructured, this is a SAN-quality protocol now, capable of moving close to 6 gigaBYTES per second. Yes, gigabytes, not the usual gigabit measurement of bandwidth. It's got built-in failover, too, meaning clustered file servers are now a no-brainer. And it lets file servers scale out -- something that has never before been possible. There's a geek-speak explanation of all the new hotness in this Microsoft blog, and you gotta believe this is going to be a game-changer for the OS.
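
Want to see the new plumbing for yourself? It's all driven by PowerShell. Here's a minimal sketch, assuming the Windows Server 2012 SmbShare module; the share name, path and group are placeholders:

    # Stand up a share (name, path and group are placeholders):
    New-SmbShare -Name 'Data' -Path 'C:\Shares\Data' -FullAccess 'COMPANY\FileAdmins'

    # From a Windows 8/2012 client, check which SMB dialect was actually
    # negotiated -- 3.00 against a Windows Server 2012 file server:
    Get-SmbConnection | Select-Object ServerName, ShareName, Dialect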

Dynamic Access Control
While this will initially be limited to the access controls on shared folders (rather than on files, AD or something else), it shows us what the foundation for ACLs will look like in the future. Imagine complex ACE definitions like "must be a member of Admins, but NOT of HR, or can be a member of Execs" -- evaluated on the fly. This truly enables claims-based access control, because it no longer has to be built on user groups: "User must be in the department 'Sales' in AD, and must not be in the Denver office." Keep your AD attributes up to date and suddenly access control gets easier -- and much more centralized. This will still layer atop existing NTFS access controls, as share permissions always have, but it's a big deal. Start wrapping your head around this now, because it's a model you'll see creeping further in future releases.
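
To make that concrete, here's a rough sketch using the Windows Server 2012 ActiveDirectory module. The cmdlet is real; the attribute names and the conditional expression are illustrative, not exact syntax:

    # Publish an AD attribute as a claim type that access rules can evaluate
    # (display and attribute names are placeholders):
    New-ADClaimType -DisplayName 'Department' -SourceAttribute 'department'

    # A central access rule can then evaluate an expression like this on the
    # fly (illustrative pseudo-syntax, not the exact conditional-ACE grammar):
    #   (@USER.Department == "Sales") && (@USER.Office != "Denver")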

PowerShell
This is the version of Windows we were told was coming six years ago. Almost completely manageable via PowerShell (if not completely completely; it hasn't shipped as I'm writing this, so it's tough to say), this is the version of Windows that starts to deliver on the PowerShell promise: Automate Everything. Combined with PowerShell v3 foundation features like more robust Remoting and Workflow creation, Windows Server 2012 is taking a page from the Unix book and rewriting it for the Windows world. That's a good thing, because it truly enables enterprise-class sensibility in our server OS.
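
A tiny taste of what "Automate Everything" looks like with Remoting in the mix -- one command, many servers. A minimal sketch; the computer names and service are placeholders, and it assumes Remoting is enabled on the targets:

    # Restart a service across a batch of servers in one shot:
    Invoke-Command -ComputerName web01, web02, web03 -ScriptBlock {
        Restart-Service -Name Spooler
    }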

Server Core
Explain it to me as many times as you want, and I'll never understand why folks RDP into a server to perform basic day-to-day management rather than just installing the GUI consoles on their admin workstation. But Win2012 raises the stakes, providing a "GUI-free" server console that doesn't have the limitations and caveats of the old Server Core installation mode. Take heed: Whether this excites you or not, it's Microsoft's direction. Start thinking about managing your servers from your clients, because that's going to be the only option in the not-too-distant future. Oh, and as for installing all of those admin consoles on your client? Maybe not: PowerShell Remoting means the admin tools can "live" in the server, but "present" on your client.
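
Here's what "live on the server, present on the client" looks like in practice -- implicit Remoting, sketched with a placeholder server name and assuming Remoting is enabled:

    # Pull a server-side module's commands into the local session as proxies:
    $s = New-PSSession -ComputerName SERVER01
    Import-PSSession -Session $s -Module SmbShare -Prefix Remote

    # The server's cmdlets now run locally, renamed to avoid collisions:
    Get-RemoteSmbShare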

Get You Some Win 2012
"The right tool for the right job" is the mantra all of IT should live by, and Win 2012 is shaping up to be a better tool for many jobs. It's worth looking at. Even if you think your organization won't have any 2012 goodness well into 2014, at least familiarizing yourself with the tool's capabilities will put you in the driver's seat. You'll be prepared to help make recommendations about this new OS, speak knowledgably about its capabilities (and about what it can't yet do) and be the one to lead its adoption and deployment. Better to be driving the bus than to be run down by it, eh?

Posted by Don Jones on 06/22/2012 at 1:14 PM


Are Microsoft's New Certifications Worth the Effort?

I'm sure you've seen endless analysis and opinion about Microsoft's re-re-revamped certification program, so I'll avoid adding any more to the pile. However, I do want to ask some questions -- because ultimately the value of these certifications comes from decision makers in organizations. If the boss cares, then the employees care, HR cares, and so forth.

First, one minor bit of opinion: "MCSE for Private Cloud" does, I have to admit, make me puke in my mouth. Just a tiny bit. I'm so sick of the "C" word, and this certification -- simply some Windows Server 2008 exams added to a couple of System Center 2012 exams -- seems to be no "cloudier" than a nice day in Phoenix. But whatever. The marketing people probably couldn't help themselves.

Microsoft's new certification program stacks into three tiers: The Associate level, the Expert level, and the Master level. These each break into two categories: "Certifications" and "Cloud-Built Certifications" (deep breath, hold, out the nose).

So... do you care?

In the beginning, these certification programs -- and I'm talking Windows NT 3-era here -- were largely a play by Microsoft to say, "Look, there are tons of people who can support our products, so why doesn't your business just send us a check for some software, hmmm?" Microsoft's certifications, like most IT certifications, have never been an attempt to protect businesses, to protect the public, and so on -- not in the way other professional certifications, like those in the medical or legal industries, are intended to do (whether they do it or not is, I'm sure, debatable).

So does the large body of Microsoft-certified human beings make you sleep more easily at night?

Do you find that a Microsoft certification acts as anything more than a bare-minimum filter for HR to home in on when sorting through incoming resumes?

Knowing all about the "paper MCSE" syndrome, the scores of brain-dump Web sites, the certification cheats and all of that, would you still rather hire a certified individual over a non-certified one?

Would you discard, out of hand, the resume of someone claiming eight years of IT experience who doesn't have a certification over someone with less experience who does have a Microsoft title?

If you were to offer some advice to an IT person who doesn't have a certification but who's worked in a lower-tier IT position for a year or so, would you advise them to take the exams needed to earn the new MCSE, MCSA or whatever? Or not? Why?

In short, how does Microsoft's certification program affect your business? I'm genuinely curious, and I'd love your comments. Drop 'em in the box below.

Posted by Don Jones on 06/05/2012 at 1:14 PM


Are Datacenters Transforming into Private Clouds? Sort Of ...

I've given up on being frustrated and annoyed with the IT marketing industry's profligate use of the word "cloud." I now am a happy user of cloud mail (formerly "Outlook Web App"), cloud storage (formerly "FTP server"), cloud computing (formerly "hosted virtual machine") and cloud services (formerly "Web site"). My last point of resistance -- the "private cloud" -- has been whittled away by the incessant efforts of Microsoft and other vendors' marketing machines.

Fine. Private cloud it is.

There's actually an emerging definition for what that means, and how it is distinct from what we used to call "datacenter."

  • Automated provisioning. When you spin up a new Amazon EC2 virtual machine, you don't have to call someone on the phone or file a support ticket. Heaven forfend -- the idea of engaging a human being in the process is anathema to Amazon's entire business model. No, you click a few buttons, you type in your credit card number and a virtual machine is born. In the private cloud, this simply means automation -- something Microsoft is making excellent strides with through its growing integration of PowerShell (see the sketch after this list). This benefits organizations because it reduces errors, improves business agility, lowers response times and alleviates plain old administrator fatigue.

  • Pay as you go. Another excellent Amazon EC2 analogy: You pay for what you use. Now, in the private cloud we perhaps don't need to be so nitpicky as to meter every compute cycle, disk block, byte of RAM and bit of bandwidth -- but the idea of internal charge-backs certainly isn't new. The main reason IT hasn't done charge-backs on everything to this point is that we haven't had the tools to do so. Anyone promising to help you implement "the private cloud" needs to make charge-backs easy -- and some vendors, like IBM and HP, are actually doing just that. Allocating the costs of IT to its actual consumers is beneficial, even if the company doesn't do the funny-money internal transfer of numbers. Simply knowing, rather than guessing, where your costs are going is a good thing.

  • Abstracted. This is a big deal. When I buy an Amazon EC2 virtual machine, I have no idea what physical host it runs on. I don't care. Amazon can shift it around all day to balance its workload, and that's fine with me. Ditto the private cloud: Ubiquitous hypervisor availability lets us treat server hardware as Giant Pots O' Resources, and we shift VMs around to suit our needs and whims. Again, a good thing. Server hardware lasts longer (old servers can just run less-needy VMs) and we can start to think of our datacenters in terms of capacity rather than functionality.
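
As promised above, here's what bare-bones automated provisioning can look like on-premises. A minimal sketch, assuming the Windows Server 2012 Hyper-V module; all names, sizes and paths are placeholders:

    # Spin up and start a new VM without a human in the loop:
    New-VM -Name 'dev-test-01' -MemoryStartupBytes 1GB `
           -NewVHDPath 'D:\VHDs\dev-test-01.vhdx' -NewVHDSizeBytes 40GB
    Start-VM -Name 'dev-test-01'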

So will your entire datacenter become one fluffy white private cloud? Doubtful. But pieces can: whatever pieces are frequently requested by, and granted to, individuals or departments -- VMs for software developers to test with, VMs for department or project file servers, you name it. Heck, with the right front-end, you could make some of those things entirely self-service. Why not let devs spin up their own VMs, based perhaps on predefined, easy-to-select templates, when they need to test something? It's basically the premise of commercial ventures like CloudShare.com, and there's no reason you couldn't offer it as part of your private cloud.

Plus, you'd get to use the phrase "private cloud" a lot. And it's a hip phrase, so everyone would have to pretend you were cool.

There's another bonus to cloud-ifying the right bits of your datacenter: mobility. Suddenly decide that you don't want to own some particular piece of functionality? Find a way to outsource it more cheaply? Fine -- you've already abstracted the important bits away from the users, so it doesn't matter if that bit of "your" cloud lives in your datacenter or not.

What's this do for your IT team? Well, if I were an IT person who'd taken the initiative to learn automation techniques and technologies -- say, PowerShell or something like it -- I'd be sittin' pretty. I'd be building the private cloud (the cloudy bits, anyway), and I'd be a key player in the organization. If, on the other hand, my main job in IT was clicking the buttons that are about to stop existing in favor of better automation, I'd be against the cloud. In any form. Which is certainly why some IT pros (not all, just some) are against the cloud -- they fear for their jobs.

So be prepared. Build the cloud and you'll have a job for the next decade or two. Get run over by the cloud (see, they're not always so fluffy) and you obviously weren't planning ahead. I'm a bet-hedger, so even if this whole private cloud thing is a bust (and it won't be -- the business reasons for an intelligently built datacenter to become at least partly cloudy are too compelling), I'm spending the time to make sure I'm up to speed on the enabling technologies: Automation. Virtualization. Systems management. Because when the cloud does come, I want to be sitting on Number 9, not huddled in a rainstorm.

Posted by Don Jones on 05/24/2012 at 1:14 PM


Windows Server 2012: IT Pros Will Need WS-MAN Remoting Skills (And Not Just for PowerShell)

I'm seeing a worrying trend in the world of Microsoft IT. Let's politely call it the "head in the sand" phenomenon. My theory is that it comes from such a long period -- around a decade, really -- of relatively few major OS-level changes, especially in the Server version of Windows. Not that Windows 2008 didn't feature improvements over 2003, or that R2 didn't improve upon that, but they were largely incremental changes. They were easy to understand, easy to incorporate, or if they didn't interest you, easy to ignore.

That's not the case with Windows Server 2012, and I'm worried because I'm not seeing IT decision makers and IT teams really engaged with what's coming. The "oh, we're not moving to 2012" argument doesn't hold a lot of water with me because you never know. It's easy to have one or two servers creep in, often to support some other need, and before long you've got a lot of 'em.

Specifically, I'm worried about the lack of attention being paid to WS-MAN.

WS-MAN: Not Just for PowerShell
WS-MAN is the protocol that underlies PowerShell Remoting, and it's been available for Windows XP, Windows Vista, Windows 7, Windows Server 2008, Windows Server 2003 and Windows Server 2008 R2 for a few years now. I think many IT shops have felt comfortable ignoring it because it didn't push itself on you. If you wanted it, you learned about it before using it; if you didn't want it, you just ignored it.

That goes away with Windows Server 2012. It enables PowerShell Remoting -- and thus WS-MAN -- by default, because it needs it. Server Manager, you see, has been rebuilt to run on top of PowerShell. And even if you open Server Manager right on the server console, it still needs Remoting to "talk to itself" and make configuration changes. That pattern will grow more and more common as Microsoft shifts its management tools to PowerShell. Just as important, Remoting makes it much easier for developers to create rich GUIs, built on PowerShell, that manage remote servers. By not distinguishing between "local" and "remote," developers ensure a consistent experience either way -- and help enable headless servers, a direction in which Microsoft is most assuredly heading.
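
You can watch that pattern in action today: the feature-installation wizard in Server Manager now rides on cmdlets you can call yourself. A sketch, assuming the Windows Server 2012 ServerManager module; the feature and server names are placeholders:

    # What the Add Roles and Features wizard drives under the hood --
    # and note it works against a remote server, too:
    Install-WindowsFeature -Name Web-Server -ComputerName SERVER01 -IncludeManagementTools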

So the idea of, "well, we don't use Remoting, so we'll shut it off" doesn't work anymore --it'd be as effective to just shut of Ethernet. You can't manage new servers without it -- so it's time to start focusing on understanding WS-MAN, and creating a place for it in your environment. Now, while you've got time to plan, rather than later when it's a forgone conclusion and it's just snuck its way -- uncontrolled and unmanaged -- into your environment.

Learning WS-MAN
Start by reading "Secrets of PowerShell Remoting," a free guide I put together with the help of fellow MVP Tobias Weltner. There's even an entire chapter on WS-MAN's security architecture, and answers to common security-related questions.

Practice setting up Remoting on your existing machines, even in a lab, so that you can become familiar with it. After all, if Win2012 is going to make you use Remoting, you might as well take advantage of it for other servers too -- and reduce your management overhead.
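
Getting a lab box ready takes about three commands. A minimal sketch; SERVER01 is a placeholder, and this assumes a domain environment (workgroups need extra TrustedHosts configuration):

    # On each machine, from an elevated PowerShell session:
    Enable-PSRemoting -Force

    # Verify the WS-MAN listener responds, locally and across the wire:
    Test-WSMan
    Test-WSMan -ComputerName SERVER01

    # Then try an interactive remote session:
    Enter-PSSession -ComputerName SERVER01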

Don't think of WS-MAN as another protocol to deal with -- think of it as enabling fewer protocols, as it starts to phase out Remote Procedure Calls (RPCs) and the other scattershot protocols that Windows has relied upon for years.
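
The same consolidation shows up in PowerShell v3's CIM cmdlets, which speak WS-MAN by default instead of DCOM/RPC. A quick sketch with a placeholder server name:

    # One WS-MAN session, reused for management queries that once rode RPC:
    $cim = New-CimSession -ComputerName SERVER01
    Get-CimInstance -CimSession $cim -ClassName Win32_OperatingSystem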

Will there be security concerns about WS-MAN? Assuredly. Interestingly, many of the questions and concerns I've heard raised have substantially poorer answers when asked of our existing management protocols. When it comes to WS-MAN, people ask about the security of credentials, the privacy of the communications and so on -- but I've never heard those questions raised about RPCs, which are what's mainly running your network right now. Keep that in mind: it's completely reasonable to ask the hard questions, but don't set a bar for security that you've never, ever met before without at least acknowledging that you're doing so.

And keep in mind that WS-MAN isn't optional. I've had folks tell me that their "IT security will never allow it." Doesn't matter what IT security thinks: This thing is coming, and it's mandatory for server management. Wrap your head around it now or later -- although "now" will let you learn the protocol and make it a welcomed part of your environment.

Is Microsoft Crazy?
Maybe. Have you seen Ballmer jumping around at conferences? That's crazy. But more to the point, is Microsoft crazy in introducing a new management protocol that supports encryption, compression, delegated authentication, secure delegation of credentials, mutual authentication and that only requires a single HTTP(S) port rather than entire ranges?

Um... doesn't sound crazy.

Is Microsoft crazy for replacing a set of 20-year-old protocols with something newer, more manageable and more extensible? Yes -- in much the same way that replacing MS-DOS with Windows was crazy.

I'm not here to justify what MS is doing with the product; that's up to MS. I'm here to help people understand where they're going, so that we can be prepared. You don't have to like it, or agree with it, but you will have to deal with it. Better, I think, to start understanding it now than to wait until it's snuck in and is an uncontrolled part of the environment.

 

Posted by Don Jones on 05/14/2012 at 1:14 PM


IT Maturity Part 4: Conclusions for a Successful IT Organization

You have to be pretty careful in trying to draw conclusions from our little survey, because we deliberately didn't look at some of the things which absolutely impact an organization's ability to succeed with IT. We didn't look at managerial experience. We ignored their operating system choices and other vendor decisions. We ignored important things like time-in-profession for top-tier IT staffers. We thought those were all pretty obvious in terms of tactics to achieve success; we were looking for less obvious markers.

If we had to put our finger on one major success precursor, it would be hiring. It should be obvious, but we think most organizations do a terrible job of hiring, and that the labor pool simply doesn't supply enough good candidates in all the right places.

We'd also say that a culture of support was in evidence within our successful companies. Administrators got the tools they needed, instead of being asked to do everything manually -- but had no hesitation to hack out something home-grown if that was the only way to get the job done. "I am doing this manually, once, and never again," was such a clear, consistent and resounding sentiment that we can't help but feel it must have some positive impact on those organizations. Employees in successful organizations seemed to feel -- and we hate using this word, but here it is, straight from Oprah -- empowered. They knew what needed to be done, and didn't have to fuss around much to get it done.

We definitely saw more successful organizations having more flexible policies and processes, but we think that's pretty obvious, too. There's a very fine line between using processes to control and manage change, and using them as a blunt weapon to slow things down and make people miserable. Some managers are better at walking that line than others, and it does seem to correlate to a more successful IT team -- but not as strongly as some of the other correlations we observed.

So, your thoughts? Are you in a successful IT organization? If not, what's the problem? If so, what's your secret?


Posted by Don Jones on 05/07/2012 at 1:14 PM


IT Maturity Part 3: What We Didn't See in a Successful Organization

What was really interesting is a total lack of correlation in the one place we expected it most: salary. The staffers in our successful organizations were not necessarily the most highly paid folks we looked at -- they were almost always hovering right around the mean. Money is clearly important to retaining the right people, but you can't necessarily spend more to get better people. You have to get great people and then pay them what they need.

We also saw no correlation around the topic of certifications. Some organizations cared, some didn't; some employees cared, others didn't. It didn't seem to matter, as these attitudes were evenly spread throughout all of the organizations we looked at, including both successful and unsuccessful ones.

We also didn't see correlation in Human Resources practices. Not being big fans of HR ourselves (we were all independent contractors, so it's to be expected), we thought the successful organizations would be the ones that kept HR out of the way. Not necessarily: some of our more successful organizations (again, by the metrics we calculated) set aside up to 10 percent of their employees' time for HR duties like reviews, goal and commitment development (and other tasks).

Budgets didn't seem to be a correlating factor, either. We looked at the overall IT budget, adjusted it for the size of the organization and found that it didn't matter how much you spent per person on IT -- you didn't necessarily get better results. While it's true that we saw more tools in use within successful organizations, so things like software spend would be higher, we also saw more use of self-service and other cost-reducing measures which probably helped offset that additional spend.

We did not see -- and this really surprised us -- a lack of religious fervor within IT; platform passion was alive and well. Some organizations were passionate about Windows, others about Unix, and they could be perfectly successful while maintaining that fealty. Certainly, some successful organizations had a little bit of everything, but striving for homogeneous technology didn't seem to hurt.

We also didn't see an emphasis on cross-training or matrix management. Some of the most successful organizations we looked at were built with incredibly strict disciplinary silos (I'm the Unix guy, you're the database guy, you're the C# guy, etc.); so long as they maintained an appropriate level of cooperation between teams, it didn't seem to hold them back from being incredibly successful.


Posted by Don Jones on 05/01/2012 at 1:14 PM


IT Maturity Part 2: Behaviors of a Successful IT Organization

Having figured out how our survey organizations define success, we needed to see what else we could tell about those organizations. Our goal was not necessarily to figure out how they became successful, and that's important to keep in mind as you read. Correlation does not always equal causation, and we were mainly looking at correlation here: observed characteristics of organizations that just so happened to be successful under our metrics. We specifically tried to avoid cracking into how these organizations achieved their success; we wanted instead to look at the less-visible things that helped to organically support that success. Here are some of the major things we noticed:

  • Automation is endemic to the culture. People in successful organizations do not run graphical wizards for anything they need to do more than once; they find a way to automate it. Sometimes those ways are hack-y and ugly, but they're there. (A sketch of what that reflex looks like appears after this list.)

  • Any problem that has occurred more than twice has a documented response plan and, when possible, an automated first-response plan. Frankly, we'd expected the threshold to be "more than once," but there appears to be a bit of tolerance overall.

  • There are concrete criteria around performance, and these are often communicated to, and visible to, end users. In many cases, these are intranet-based "dashboards" that display rolled-up performance data (think simplistic meters). Users understand not to complain about anything that's "in the green," and know that anything else is already under alert and being looked at.

  • There's remarkably little "throw it over the wall" with tickets. Tickets get escalated only when it's absolutely necessary; higher tiers handle the problem and tend not to drop things back to a lower tier. Getting back to the automation bit, we saw higher-tier people solve problems in ways that could then be handed off to lower tiers, or turned into self-service solutions. The general feeling in these organizations is, "I'm only going to fix this once, and it'll be fixable by other people from then on."

  • Nobody goes into the datacenter unless they're repairing broken hardware or installing new hardware. Period. We were actually surprised at the degree to which this held true. Data centers were literally "lights out" almost all the time.

  • Above the first tier, where this varied quite a lot, we saw a lot of consistency around training. Most IT staffers got two to three classes per year on top of a conference, and they self-selected almost all of that based on their interests and where they felt their skills needed supplementing.

  • Tools were in use. Our successful organizations utilized something like three to four times as many third-party support tools as less-successful ones, and utilized something like 10 times as many home-grown tools (which plays back to the automation thing, in large part).
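
For flavor, here's the "fix it once, automate it forever" reflex sketched in PowerShell. Everything here is hypothetical -- the function name, the fix, the server names -- but it captures the shape of the home-grown tooling described above:

    # Wrap a one-off fix in a function anyone on the team can re-run:
    function Clear-StaleSpoolJobs {
        param([string[]]$ComputerName)
        Invoke-Command -ComputerName $ComputerName -ScriptBlock {
            Stop-Service -Name Spooler
            Remove-Item -Path 'C:\Windows\System32\spool\PRINTERS\*' -Force
            Start-Service -Name Spooler
        }
    }

    # Next time it's one line, not a help desk ticket:
    Clear-StaleSpoolJobs -ComputerName print01, print02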

Interestingly, we saw little formal structure to make any of these behaviors happen. People's job descriptions never mentioned automation. There were no long, complex processes detailing how to handle every possible situation. In many cases, we didn't even observe a managerial mandate to document problems. We simply had to conclude that these organizations had kick-butt IT employees who just wanted to do a good job. Management certainly supported that, and likely drove it to some degree, but having enthusiastic employees who want to do a great job is clearly important.

We also noticed that about two-thirds of our "successful" organizations had a remarkably lower management-to-staffer ratio than other "successful" organizations, and that ratio was also lower than most "unsuccessful" organizations. We saw ratios like 1:20, which flies in the face of a lot of management theory (this was direct reports to their immediate manager; we didn't look at middle tiers). Again, this is correlation, not causation, and we certainly saw a third of our successful organizations with more traditional ratios of 1:5, 1:8, and so forth.


Posted by Don Jones on 04/22/2012 at 1:14 PM


IT Maturity Part 1: What Does a Successful IT Organization Look Like?

For years, analysts have put forward models of IT maturity -- even major vendors like Microsoft got into the game. Basic. Proactive. Agile. Remember agile? Some big, big companies got behind that word.

I recently worked with a group of consultants who completed a 300-company survey of IT maturity. Rather than trying to apply fancy terms to different levels of maturity, we focused on two things: What do companies want from IT, and of those companies who are getting it, what are some of their other characteristics? In other words, what does a successful IT team look like?

I'll present our findings in several parts; what you'll read is essentially a summary of everything we learned. Keep in mind that we didn't actually make anything up for this -- we simply surveyed companies to define success, and then observed the companies that were, by those criteria, the most successful in IT. You're welcome to contact me if you or your organization would like to learn more about our findings.

What Does a Successful IT Organization Look Like?

This was obviously the first and most important part of the study: What does a successful IT organization look like? We started at a fairly high business level with executive statements revolving around delivery and overhead, but then took those and started to drill into specific numbers. Mind you, most of the companies we spoke with were not successful according to their own criteria; we based our final findings on statistical averages across everything the companies told us. We then tried to abstract the numbers a bit so that they could potentially apply to every organization, not just ones of a certain size. Finally, we re-scaled the numbers to make sure that the criteria we were seeing truly did match back to what companies told us, once we adjusted for company size.

So here's what we're told a successful IT organization looks like:

  • The organization's highest-paid (what most people would call Tier 3) staff spends less than 10 percent of their time on break/fix problems, approximately 20 percent of their time training and mentoring lower-tier staff, and the remainder of their time on design and implementation of new projects and of upgrades.

  • No single help request topic handled by the organization's Tier 1 support (what most people call the help desk) staff comprises more than 20 percent of that tier's total workload. In other words, there are no major recurring problems that can't be solved through self-service solutions. Password resets, for example, will never go away, but they shouldn't occupy people's time.

  • Most problems are solved from documentation or knowledge bases, even when the fix must be implemented by IT staff and not an end user. Specifically, less than 5 percent of the fix-time for most problems is spent on research and information-gathering. This was a tough number to ferret out, as few orgs track resolution times in this kind of detail, but we conducted a few interviews that all seem to point to roughly this number. "Most problems" turns out to be something like 90 percent, meaning about 10 percent of problems took longer to research.

  • Unplanned downtime -- and this could be an entire system or a single user -- comprises less than 1 percent of the organization's total time. That is, no one user is down for more than 1 percent of their workday, no one system is down for more than 1 percent of its service period, and so on.

  • Fewer than 10 percent of trouble tickets relate to the performance of production systems. In other words, either users perceive performance to be fine, or they've been educated (often via SLAs) on the difference between "good" and "bad" performance, and don't call in when they know things are in the "good" range.

  • Tickets tended to be fixed in more or less the shortest time possible for the problem at hand. In other words, we observed very little "procedural overhead" around IT. If a user called and needed access to some folder, and had permission from the data owner (the means by which that permission was communicated and evidenced varied a lot), then the problem was fixed in minutes, not days.

Those are the main points. Not surprisingly, they all revolve around downtime and fixes. True, most IT organizations do a heck of a lot more than just fix problems, but problems are obviously pretty high-profile within an organization, and so they tend to drive perception of success or failure. These "issue-based metrics" were interesting to us, because they reflect an organization's definition of success once the SLAs have failed. That is, teams already have SLAs in place for routine things like creating new mailboxes; the metrics we've outlined are for how well things are handled once the SLA isn't met.

Our next step was to start looking at other, more detailed behaviors within the organizations that were meeting these criteria.


Posted by Don Jones on 04/09/2012 at 1:14 PM


What Color Are Your IT Team Members' Collars?

We're accustomed to thinking of IT as "white collar" positions -- office jobs with little or no manual labor. But I've started revisiting that presumption in recent customer engagements. Many of my customers' IT decision makers are struggling to motivate their team, to update their skills and to get critical projects underway -- and sometimes, a portion of that struggle comes from not appropriately understanding/managing "blue collar" and "white collar" employees.

So what's the difference?

First, I'm not making a value judgment between "white collar" and "blue collar." I'm not presenting one as more valuable or more desirable than the other. But there are certain characteristics associated with both -- and both offer pros and cons to the organization. Understanding what your team consists of is crucial to motivating them, and to getting the most from those resources. In some cases, you may find that you have "white collar" expectations, but that you're treating your team like "blue collar" workers -- meaning the disconnect between what you want and what you get is at least partially your fault as a leader.

The following pairs summarize some of the key, observable traits of each type of employee. Note that these are pretty atomic -- you can't pick and choose a few white-collar traits and a few blue-collar ones. They all tend to go together, so getting the results you want means making sure a given employee lines up completely on one side or the other. In other words, if you're providing the resources a "blue collar" person would need, don't act surprised when they don't exhibit the behavior of a "white collar" colleague!

  • White collar: Goes to trade conferences. Blue collar: Doesn't go to trade conferences.

  • White collar: Regularly receives training to update and expand skills; training is as likely to be aligned with long-term career development as with immediate project needs. Blue collar: Doesn't receive training as regularly; training may be more aligned with immediate projects than with long-term development.

  • White collar: Is passionate about what they do -- often has hobbies that relate to their job. Blue collar: Is less passionate about the job per se; may still love tech, but doesn't have hobbies that mirror their work.

  • White collar: Constantly looks for, and suggests, ways to improve processes, tools and products. Blue collar: Less likely to suggest improvements; may primarily suggest improvements that reduce their own work effort.

  • White collar: Wants to learn new technology, often just for the sake of knowing it. Blue collar: Less interested in learning new work-related technologies unless there's a specific and immediate production requirement to do so.

  • White collar: Maintains an updated resume, even when not looking for a job or promotion. Blue collar: Tends to update the resume only when it's time to go job-hunting, or when a promotion is available.

  • White collar: Steps up quickly when new, strategic skills are needed -- and may in fact already be up on what the organization needs. Blue collar: Usually willing to learn new skills, but only because you've asked; tends to show a bit less initiative when it comes to breadth of knowledge outside the scope of their job.

I'll acknowledge that the terms "white collar" and "blue collar" are pretty loaded, so I'll tell you how I've been thinking of them.

White collar, to me, is someone who has a career. Maybe they won't always be with the same company, but they'll always be doing the same type of job -- and they want to invest in it. They're concerned about the state of their resume, and always looking to improve it -- even if some skill they see as important isn't something their current employer needs. They maintain a resume, even for internal promotion purposes.

Blue collar, by contrast, is someone who has a job. Maybe they haven't been doing IT all their lives, and maybe they don't plan on doing it for the rest of their working lives. It's a job, hopefully one they enjoy, but it's just a job. They're not in love with it. Learning new skills to support some production requirement is fine, but they're not going to run around randomly learning stuff just for the sake of doing so.

Neither category in any way implies a better employee. Some job positions are, by their nature, one or the other; other job positions could be filled by someone with either attitude. This isn't about how productive someone is, how dedicated they are or how hard they work. This is about underlying attitudes on the part of both employee and employer.

You see how this goes? If you've got folks who seem to just not care, who just want to do their job and then go home and play Xbox -- well, look and see if they're being treated like a blue-collar worker. Are you investing in their skills? Are they passionate about what they do? Are you treating them like a career resource or a replaceable cog in the machine? You can't dish out blue-collar treatment and expect white-collar behavior.

As with all stories, there are two sides. For some folks in IT, it's just a job. They don't understand why you'd expect it to be anything else. They don't do it because they can't help doing it; they do IT because it allows them to support their families and lifestyle. That's "blue collar," and there's nothing in the world wrong with it -- so long as you and your employees both agree on what you expect from them.

It could also be that you've got passionate, career-minded people on your staff who just act like they don't care -- because they're not feeling the love from you. A career is a long-term thing; to get in the career mindset, you've got to be willing to invest in that career yourself. That means sending folks to conferences, to training classes and so on -- and not just so they can learn a new skill that supports an immediate production requirement. That's not investing; it's responding to a need.

There are pros and cons, as I've said, to either category of worker. White collar workers are often at the lead of new projects because they've been building the necessary skills and knowledge on their own time. Having them means you can move more quickly, and leverage skills that, technically, you haven't paid for yet. But white collar workers require more maintenance, and can be higher-overhead. Blue collar workers are the dedicated, get-it-done folks who work hard and don't always demand as much from their employer -- but don't be upset when they haven't been studying up on new technologies on their own time, just so they can be ready when and if you suddenly need them.

In reality, most employers will want a good mix of attitudes in their environment -- and will provide the right mix of benefits, investment and support to create and nurture both those attitudes.

Posted by Don Jones on 03/23/2012 at 1:14 PM


IT's Take on Help Desk Software

I closed out last year with several articles that prompted you to complete a short online survey. Several of you were kind enough to speak with me on the phone for some follow-up questions, and I'm ready to share some results.

This time I'll focus on my questions about help desk management software. My interest was prompted by the fact that help desk software seems to be so prevalent today, compared to a decade or so ago, when there were only a few major commercial solutions and a lot of home-grown ones floating around (I wrote one myself when I was at Bell Atlantic Network Integration). I'm also seeing more and more solutions being released that incorporate help desk software -- which struck me as odd, because I kind of thought everyone already had something in place by now.

Most of you do have something in place -- BMC, Remedy and HEAT were some of the brands I expected to see, and I wasn't disappointed. Other brands, like ManageEngine's offering and ScriptLogic's product, also cropped up. Interestingly, about a third of you said your solution wasn't well-implemented in your organization, and a fifth said the product was too complex. I see and hear that a lot. It seems like help desk solutions can easily become the IT equivalent of SAP -- extensive implementation times without a lot of results. Another fifth of you said your solution doesn't offer Web/mobile or self-service interfaces, which in this day and age should be unforgivable. A fifth also said your solution takes "too long to use," which tends to keep people from using it. What a waste!

Every single person who answered the survey, however, indicated that a help desk solution that was tightly integrated with monitoring and configuration management tools would be absolutely appreciated and essential -- even though I know most of you don't have such a solution today. I'm actually a bit surprised that Microsoft hasn't bought someone and released "System Center Service Desk" or something, which would, of course, tightly integrate with Configuration Manager and Operations Manager. There are companies playing in that "integrated" space, though, including Nimsoft and ManageEngine, so there's at least some promise for growth in that space.

All in all, this survey suggests that you're all working harder than ever -- with a lot of manual effort -- to keep your users connected and productive. It's the shoemaker's kids, right? We provide great services and tools for the business, but we seem to get so few ourselves!

 

Posted by Don Jones on 03/08/2012 at 1:14 PM


Survey Results: IT Teams' Essential Skills Lacking

Last year I asked you to complete a short survey on your team's essential IT skills. A huge number of you took a few minutes to answer those questions -- thank you! Thanks also to those of you who agreed to speak with me on the phone for some follow-up questions.

The news, unfortunately, is not good. I'd asked about your team's grasp of basics like network troubleshooting, AD basics and so forth, and almost 50 percent of you said that less than half of your team (but more than a quarter) really understood those basics. Another 25 percent of you said that only about a quarter or less of your team grasped the foundations. That means we're seriously lacking some basic skills -- and I think I know why.

Take a look at nearly any IT-level computer course these days and you won't see much of the basics. They're missing from certification exams as well, even though 80 percent of you said that these skills were "very valuable," with another 15 percent checking in at "valuable." The fundamentals have been pushed out by the ever-increasing number of features that have to be taught, and by some economic realities.

Look, let's just admit that Windows Server 2008, as one example, is a lot more complicated than Windows NT 3.51. Earning an MCSE in Windows NT 3.51 required you to pass six exams, which were supported by about four or five weeks' worth of Microsoft Official Curriculum training. Earning an equivalent certification in Windows Server 2008 requires basically the same number of exams, supported by somewhat less classroom training. That means you're mostly being taught, and tested on, new features -- while the basics have been squeezed out. Nobody's going to accept a 12-exam certification or send their folks to 10 weeks of training, but in reality today's products are probably complex enough to warrant it. So the exams and classes we get have to stuff more into the same amount of time, and so the foundation stuff just doesn't make the cut.

It's a shame. A huge 90 percent of you said your IT teams would be more effective if essential skills were better understood by a majority of the team, but it's damnably hard to even find training on networking basics, for example. It's like sending your kid to school and having them learn trig without learning basic addition and subtraction, because they just use calculators for that low-end stuff.

Posted by Don Jones on 03/01/2012 at 4:59 PM


Clearing the Cloud Part 4: Tweak Your Résumé

If you're working in the IT industry and not prepared to position yourself as a useful resource in the era of the cloud... well, I hope you have a copy of "What Color is Your Parachute?" sitting around.

Whether the cloud is the right thing or not for your company isn't really important. Some companies will make the right decision regarding the cloud, and many won't. The short-term attractiveness of the cloud's pricing model, if nothing else, will make many organizations take the plunge whether it's the right thing to do or not. Fight that decision when it's a bad call for the company; be prepared to benefit from the cloud whether it's the right thing to do or not. In other words, don't be caught flat-footed.

Start by making yourself a semi-expert on the cloud. What's the cost model? How does it compare to your internal costs? The fact is that most organizations haven't the foggiest notion why their IT department costs what it does -- they just see a giant number on the P&L every quarter. Show them a smaller number from "the cloud" and things start to look interesting, at least to a certain kind of manager. Money will always be the first driver in a cloud adoption, so if you think the cloud is a bad call for your organization, know the money answers. Know what IT costs and why, so that you can help promote a real apples-to-apples comparison.

Then, assume your organization will make the decision to push something out to the cloud anyway. Maybe they won't -- in which case you're fine. But maybe they will. And if you can't fight the decision with solid logic, then be prepared to benefit from the situation anyway. Get yourself skilled-up. No cloud-based solution offers zero management overhead; make sure you're the one who can be the hero.

Here's a simplistic example: Your company decides to go with Office 365 for e-mail and much of its SharePoint-based collaboration. Does that put you, the Exchange or SharePoint admin, out of a job? Possibly. But O365 still has management requirements. Someone has to create mailboxes and manage the service, which actually requires an unexpected amount of PowerShell command expertise. Make sure you're prepared to be indispensable when the cloud comes, and you'll be the one to keep your job.
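
To put a point on that "unexpected amount of PowerShell" remark: in the current O365 setup, you manage Exchange Online through a remote PowerShell session. A sketch of the general shape, assuming a provisioned tenant; details vary by plan:

    # Connect to Exchange Online via PowerShell Remoting:
    $cred = Get-Credential
    $session = New-PSSession -ConfigurationName Microsoft.Exchange `
        -ConnectionUri 'https://ps.outlook.com/powershell/' `
        -Credential $cred -Authentication Basic -AllowRedirection
    Import-PSSession $session

    # The tenant's Exchange cmdlets now run locally:
    Get-Mailbox -ResultSize 10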


 

Posted by Don Jones on 02/13/2012 at 1:14 PM


Clearing the Cloud Part 3: 'Private Cloud' Is Just a Different Way of Thinking

All of today's talk of "clouds" is often accompanied by "private cloud," a phrase that's nearly as overused and useless as "cloud" itself. Isn't a "private cloud" just what we used to call "our datacenter?"

From a technical perspective, yes. The private cloud is just the stuff that's always been in your datacenter. What's different is in how you manage that stuff, and in how you offer it to your organization.

The public cloud has some very specific characteristics that differentiate it from the type of outsourcing we've used in the past:

  1. Self-service. You spin up new services as you need them, and they come online almost instantly.
  2. Pay as you go. You're typically billed for what you use: Bandwidth, disk space, processing power, number of users, and so forth.
  3. Abstractedness. You don't typically have a technology-centric view, meaning you're not necessarily dealing with servers and disks. You get your resources from a giant pool, which from your perspective is infinite.

A private cloud is simply the old datacenter, re-tweaked to offer those characteristics. Being able to create charge-backs based on individual departments' or users' actual utilization is one characteristic that starts to make your datacenter look cloudy. Spinning up virtual machines more or less on demand is privately cloudy. Being able to shuffle VMs around to whatever physical host has the necessary resources to run it, all invisible to your "customers," is private cloudishness.
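
On the charge-back front, Windows Server 2012's Hyper-V actually ships the raw ingredients as resource-metering cmdlets. A minimal sketch; the VM name is a placeholder:

    # Start the meter running on a department's VM:
    Enable-VMResourceMetering -VMName 'finance-web-01'

    # Later, pull the accumulated CPU, memory, disk and network figures
    # to feed whatever charge-back math you like:
    Measure-VM -VMName 'finance-web-01'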

As with the concept of "cloud," the private cloud isn't some amazing new set of technologies: It's just a different way of managing your technologies.


Posted by Don Jones on 02/09/2012 at 1:14 PM


Clearing the Cloud Part 2: The Cloud Is an Evolution, not a Revolution

Part of what frustrates me about "the cloud" is that it isn't anything entirely new. There are really only two major things that are driving this new wave of cloudiness in IT:

Multi-tenancy. Software vendors are now offering products that have a built-in understanding that multiple customers will be sharing the same infrastructure. The software thus builds walls between those customers, so they each feel as if they have a service dedicated to them. Service providers have been doing this for years by hacking together custom management consoles -- Web hosts being the leader in doing so. The cloud has taken off in large part because the first-party vendors are now building that intelligence into their products.

Bandwidth. The wide availability of cheap bandwidth -- both wired and wireless, including cell-based bandwidth -- means it's easier and easier to get to your data and services regardless of where they live. In the past, we used dial-up to get everywhere. It made sense to dial into your company's datacenter, so it made sense to keep all of your services there. Nowadays, the public Internet is your "dial-up": You're always connected, and you use that connection to reach your data. So your data might as well live anywhere, not just in your company datacenter. Security issues aside, there's no connectivity reason to keep data in the datacenter.

It's the confluence of these two directions that's making the cloud (a) possible and (b) attractive. Ten years ago it was ridiculous to think of outsourcing your Exchange or SharePoint services to someone else; today, it's a question of cost, security and availability -- but it's certainly not a ridiculous question.

For an extreme example, take Microsoft's Windows Azure platform. I'm obviously oversimplifying in the extreme, but Azure is (on paper) a hacked-up version of Windows and SQL Server designed for multi-tenancy. Toss in the ubiquitous availability of Internet bandwidth, and Azure becomes positioned for success. Amazon, Google and other cloud computing options are, at their heart, much the same: Always-on, easily accessible, multi-tenant services that you could build in your own datacenter, if you wanted to. But why bother?

There's a third factor that really clinches the cloud deal:

Bargaining. Computing resources have become so cheap that bulk purchases are practically pocket change. Amazon doesn't buy servers one at a time; it buys them by the truckload, and it gets a per-unit price that's astonishingly low. It buys electricity in bulk, too, often positioning datacenters close to the electrical grid and negotiating transmission rates down to comparatively nothing. Thus, the price that a cloud host like Microsoft or Amazon can offer you is markedly lower than anything you could negotiate yourself.

This is really just good old Moore's Law taken to an extreme. These three points simply prove that the cloud isn't some revolutionary new tactic; it's just a natural evolution of different lines of thinking and technology, finally coming together in what is quite frankly an almost inevitable conclusion.


Posted by Don Jones on 02/08/2012 at 1:14 PM


Clearing the Cloud Part 1: Embrace or Die

I think it's a good time for IT Decision Makers to face some stark realities. This "cloud" thing is creeping up on us, and many analysts claim that 2012 will be the year that cloud computing and cloud services really take off. That means, like it or not, you're going to be dealing with something "in the cloud," if you're not already. What's that mean?

Within a couple of years, every single business with more than a couple of employees will, in some fashion, be using "the cloud." Whatever "the cloud" means. Smaller businesses will likely be using cloud-based e-mail services like Gmail or Office 365; many are already beginning to do so. Some businesses will get their cloud-based services -- like e-mail and collaboration -- from a Managed Service Provider (MSP), whose datacenter can now officially be called a "cloud." Even massive enterprises with huge infrastructure investments will, in some way or another, be using something from "the cloud," even if that's nothing more than the cloud-based Web site analytics called "Google Analytics."

I'm seeing an awful lot of IT professionals beginning to live in Cloud Denial. They've spent years honing their skills as Exchange admins, SharePoint admins, SQL Server admins and so forth, and they're full of reasons why this "cloud thing" shouldn't be used in their environments. In some cases, they're correct: For some businesses, certain functions should be in-sourced and not out-sourced. That reason, however, should never revolve around an IT person who fears their job will go away. The decision to in-source or out-source a given service or function should be a 100 percent business-related decision, based upon costs, benefit, control, security, and more.

I recently got an e-mail from a fellow who had recently lost his job as an Exchange administrator. His company had been using Exchange Server 2003 (!!!), and when faced with the costs of upgrading to 2010 -- new servers, new software, new training, new architecture and more -- decided it was easier and more financially efficient to outsource its 5,000 mailboxes to someone else. It still had a degree of administration -- mailbox adds/changes/deletes and the like -- that needed to be done, and so it retained the portion of the Exchange admin staff needed to perform those tasks. My correspondent, however, had been in denial about the coming of the cloud, and didn't have up-to-date skills (or an interest in obtaining them, from what I could read). So he was let go.

It's unfortunate, but it's going to happen. There may be a zillion legitimate reasons why your company can't outsource some particular function to the cloud, and you should be prepared to make that argument in business terminology. You should absolutely help your company do the right thing. However, you should also be prepared for the "right thing" to include outsourcing, and make sure you're positioned to still have a job if that happens. Frankly, I think it's only practical to assume that your organization might outsource something even if it's the wrong decision. Companies do make bad decisions, after all, often when looking only at short-term goals. That being the case, make sure you're well-positioned to be retained even if your company does make a bad decision about outsourcing. Don't just fight the tide -- be prepared to swim with it.


Posted by Don Jones on 02/06/2012 at 1:14 PM

