I can't remember a time when IT decisions were being driven more by users' love of gadgets than is the case with today's smartphone landscape.
Yeah, I guess in the past you'd always have a user or two who wanted a specific Dell laptop because it had a new-fangled DVD burner, a bigger screen, or whatever. But for the most part, users' preferences could be easily accommodated by a corporate standard. Not so with smartphones.
I've never liked the term "PC" when it comes to business computers. It isn't your personal computer, it's the company's computer. Call it a CC or a BC (Business Computer), but it certainly isn't personal. I'm going to configure it, lock it down and do whatever else the business wants me to do with it. You'll take the model you're given, and you'll like it, because you didn't have to pay for it.
Try pulling that off with a smartphone.
Yes, we're getting away with that in the BlackBerry space. Research in Motion was the first to produce business-friendly smartphones, and in a lot of ways it's still doing the best job. But a phone is a bit more personal. Users are carrying it around outside of work, and a lot of them don't want to carry two or more devices. So they're asking to use their iPhones, or asking the company to buy them an Android handset or a Windows Phone 7 device. More and more businesses are finding it difficult -- or even impossible -- to keep those "unofficial" devices out, and plenty of companies are just letting users pick whatever device they want. Increasingly, the "business computer" argument is falling on deaf ears. It's like we live in a free society or something!
Some organizations will find it necessary to fight this trend, and to stick with a single, business-sanctioned device. There are obviously industries and organizations where that makes sense. Others, feeling the pressure form their users to offer cooler gadgets, will decide to open the field a bit and see what happens. It is, if nothing else, an interesting time, as the concept of the "Business Phone" gives way a bit.
What are the policies in your organization? What's coming in your future? Will you stick with a standard, or let users choose? Or do you plan to just stay out of the smartphone market entirely and rely on your BCs to handle your users' official computing needs?
Share your thoughts in the comment section below or reach me at ConcentratedTech.com.
- Read more of Don Jones' Mobile Device blog series:
Posted by Don Jones on 10/11/2011 at 1:14 PM1 comments
More than a decade ago, Microsoft Windows became the best and safest bet for a client operating system. The old "nobody ever got fired for buying IBM" sort of transformed into "nobody ever got fired for putting Windows on the desktop." Today, despite the availability of alternatives, Windows is still the best bet for most business desktops. Yes, we'll probably always have a little Mac or Linux or something running around on the sidelines, but for most organizations there's little downside in not having a homogenous desktop environment.
Not so with mobile devices. Sure, Blackberry remains a popular choice for businesses -- but the company's future is far from assured. The growing consumerization of the mobile device marketplace means that users aren't always satisfied with Blackberry's offerings, and they're increasingly wanting to bring their own smartphones onto the network. Some organizations are evaluating Android handsets, iPhone models, and Windows Phone 7 units to see which one they want to adopt.
That's a mistake.
You can't afford to adopt and support just one. Unfortunately, no matter what you think of any of these devices, the smartphone industry is the furthest thing from stable. Patent wars, hardware fragmentation, a volatile developer market -- everything's playing against a decision maker's instinct to make a decision. Today's "perfect" business smartphone could be tomorrow's lawsuit victim, forcing you to abandon your "corporate standard" phone.
Right now, smartphones are a lot like your investment portfolio: You need diversity. Yes, that will cost more to manage, but it's what's best for your business. Look at a variety of devices, and look for cross-platform management tools that will help make managing them easier and more efficient.
I'm curious, what devices does your organization permit? Are any of them "officially" supported, as opposed to being "unofficially tolerated?" Is your organization trying to standardize on a single platform -- and does all the volatility in this marketplace worry you?
Share your thoughts in the comment section below or reach me at ConcentratedTech.com.
- Read more of Don Jones' Mobile Device blog series:
Posted by Don Jones on 10/07/2011 at 1:14 PM0 comments
At TechEd 2011, Microsoft announced that System Center would begin supporting mobile device management, including management of Apple iOS and Google Android devices. I couldn't be happier with that news, and it's an area where IT decision makers should be paying close attention.
Mobile devices represent one of the biggest changes to hit the corporate IT landscape since the personal computer. Even laptops weren't as big of a deal, because they were really transportable more than truly mobile, and because laptops could be managed using pretty much exactly the same techniques as desktops. Mobile devices, on the other hand, are always-on, always in users' hands, and are being used for a wider and wider variety of business tasks.
Mobile devices are harder to manage because, in many cases, they aren't well-built for management yet. iOS and Android, in particular, have pretty minimal enterprise management capabilities; Windows Phone 7 benefits from a decade-long effort in mobile operating systems and Microsoft's experience in the endpoint management space.
s you might, you will not keep mobile devices out, nor will you keep them disconnected from your network and its services. Not for long. The business will demand they be made a part of the landscape. Unfortunately, few major vendors -- until Microsoft's System Center announcement -- have really tackled this space. I'm proud of Microsoft for recognizing that they're not going to own the mobile device space the way they do the desktop space, and for making a strong effort to cement control of the back-end and provide us with the management we need for mobile devices.
But keep an eye on this space. Other smaller vendors (Sophos, Tangoe, Zenprise, Averna, and tons more) are making an effort to lead this space. You don't need to rush out and buy anything right away, unless your business is really experiencing the pain of managing these devices already, but you do need to keep an eye on this space. Future IT decisions -- even those which seem initially unrelated to mobile devices -- need to be made with mobile devices in mind. For example, perhaps you're considering a single sign-on identity management solution -- make sure it supports mobile devices as a logon endpoint.
After years and years of false starts, smartphones and other mobile devices are here to stay. Don't think for a moment that your device inventory will be as homogeneous as your desktop OS; we'll be dealing with a variety of devices to take advantage of their various strengths. Start thinking about what kind of device management your business needs, and let that drive your technology decisions.
Posted by Don Jones on 06/28/2011 at 1:14 PM0 comments
You don't meet a lot of people who think their state's Department of Motor Vehicles (DMV -- or whatever it's called where you live) should be a model for how to run business. Don't get me wrong -- the Nevada DMV, where I live, is pretty awesome as far as DMVs go. But still. Long lines, arbitrary rules, surly employees who delight more in saying "no" than "here's your license, sir/ma'am."
Yet thousands of companies across the world are using a government agency as their model for how to run IT.
Campaign rhetoric aside, governments have a bit of a vested interest in slowing down change in the way government works. Governments are meant to be stable, reliable and predictable -- and change opposes those goals. When governments change, they do so very slowly, after much public and political debate, and after many periods of review and comment. Governments rarely have to worry about being first to market, since they kind of have a monopoly on governing. Governments don't seem to have any motive to maximize their profits or minimize their losses. Governments, in short, can afford to not pursue change too avidly.
Business, on the other hand, needs the ability to change rapidly. A new technology comes along that can double your margins? Use it. A new product offers the ability to reduce IT overhead? Get it. New techniques reduce downtime by half? Adopt them. Businesses -- good ones, at least -- thrive on change.
So why are so many businesses running themselves like a government agency? Four letters: ITIL.Yes, the Information Technology Infrastructure Library, the IT management framework you've all heard of and may even be using. Created by the United Kingdom's Office of Government Commerce, a department of the UK government.
No, I'm not trying to beat up on ITIL. It's actually a pretty solid, comprehensive framework for managing IT. Given that most of us weren't doing much better of a job, ITIL offers some universal structure. My problem is that ITIL pretty much abhors change. No, not on paper -- on paper, ITIL manages and controls change. In practice, IT organizations use ITIL as a blunt instrument to halt change.
Let's face it, IT loves saying "no." As far back as the earliest days of computers in academia, the robe-wearing dungeon-dwellers known as "sysadmins" reveled in telling people "no." No, you can't have more computer time. No, you can't have more punch cards. No, you can't touch that. We had (and still have) good reasons: Users break things. If it weren't for users, nothing would ever break. Our job is to keep things running, and users are the enemy of that goal. We also hate change, for exactly the same reasons: Change means broken things, and more work for us. With all those users running around, we're not exactly short of things to do, so change is just another unwelcome burden. A new application? No. A new server? No. New domain? No.
ITIL and other IT management frameworks can take our genetic tendency to say "no" and codify it. "You want a new application installed? Well, you're going to have to go through the Change Management Process." Dilbert's pointy-haired boss couldn't have come up with anything better. Users who ask for the simplest things can be told "no," simply because the Rules support that position. Worse, in many companies, admins who step out of the change management framework to help a user with something small are chastised, written up, and put at the bottom of the list for promotions and interesting projects.
Yes, we absolutely need to manage change -- which is what ITIL is all about. We don't need to bury change, which is what too many organizations use ITIL -- and frameworks like it -- to do. Take a few minutes and evaluate your IT team to see if you're using your change management process as a codified way of saying "no." Simple, obviously non-destructive changes should have a way of being expedited in your organization. Remember, IT is there for the business, not just to follow the rules in a framework. Managing change is something we do because it is allegedly good for the business; when the framework isn't helping the business, consider changing the framework a bit.
(For the record, I really do like ITIL -- when it's implemented with common sense and an eye toward what the business really needs).
Posted by Don Jones on 06/21/2011 at 1:14 PM16 comments
In early June, Citigroup acknowledged yet another major breach of confidential customer data. It was the 251st such public notification this year, and could put us on track to exceed the 597 improper disclosures from schools, government agencies, and businesses in 2010.
According to an article in USA Today, cybercriminals are now "actively probing corporate networks for weaknesses," and businesses face particular pressure to let the public know when they've been hacked. Citigroup, in fact, was criticized by US Representative Jim Langevin for taking a month to notify customers after noticing the most recent breach, which was discovered during routine monitoring. Customers' names, account numbers, and e-mail addresses were all compromised.
Citigroup joins major global companies like Sony, Epsilon, Nasdaq, PBS, Google, RSA, Lockheed Martin, L-3 Communications, and Northrop Grumman in being the victim of a cyberattack. Companies are more forthcoming about breaches due in part to data-loss-disclosure laws that are now in force in 46 US states. Public companies must be especially up-front with such disclosures: Data breaches can obviously create a negative impact on business, and failure to disclose such impacts can be a violation of SEC rules and invite shareholder lawsuits.
A recent survey by Ponemon Institute and Symantec estimates that data breaches cost, on average, $7.2 million to put right – and those costs continue to climb. They're in addition to fines and fees imposed by industry groups and government legislation, making data breaches tremendously expensive.
Let's face it: We tend to give a lot of lip service to security, but you and I both know that most organizations' security, under the hood, can be pretty haphazard. Are all the permissions on your files and folders truly accurate? Group memberships all up to date? Are you sure? Is your firewall configured properly – no unnecessary holes? Is the software up-to-date?
Look, having security flaws is almost unavoidable, simply because most products' native tools do a very poor job of letting us manage security. Go through every object in Active Directory and tell me if it has the correct permissions. Go ahead, I'll wait. You'll be a while if you're using Active Directory Users and Computers to check. Even Windows PowerShell offers fairly primitive tools for monitoring and modifying permissions, in part due to the highly-distributed and extremely-complex permissions structures that Windows products tend to use.
But the newspaper headlines make it clear that we'd better get on the ball. In general, you're going to need to implement three broad capabilities:
- Protect. You need to be able to apply the proper permissions to resources, proper configuration to security elements of their infrastructures, and maintain those settings over time.
- Inspect. You need the ability to continuously monitor and audit your environment to ensure that the proper permissions and configurations are in place.
- Detect. You need proactive monitoring and alerting to let you know when a problem does occur, so that you can take remediation steps and make the proper disclosures.
In many cases this is going to require the use of third-party tools from independent software vendors (ISVs). I know, nobody likes to spend money on those things. But you're not going to be able to write a PowerShell script that does it all – much as I wish that were the case. In many cases, you'll need software that gathers distributed permissions and configuration information into a single place, analyzes that to produce reports, and uses that to generate automated alerts when necessary.
Yes, I realize that "you've never been hit." I'm sure Citigroup, Sony, and PBS felt the same way – and they got hit. Hard. Sony along lost millions by having to take their network offline for weeks, not to mention the public relations disaster. And that was one attack. Oh, "you're not a big company, so you're not a target?" Sure, not yet. But you will be, once attackers figure out that you too have a few thousand bits of interesting information on your network and that you're a much easier target than Citigroup or Lockheed Martin.
It's probably time to give your security an quick review. Take your honest opinion to your executive team, along with a proposed plan to put things right. Have your numbers in place: This is what it's going to cost us, and this is what we stand to lose if we don't. Be able to explain why you can't fix it on your own – including, if necessary, a brief demo of why permissions and configurations are difficult to monitor and manage using the in-the-box tools. Most executives simply don't realize how difficult it is, so you'll need to educate them.
Be a security leader.
Posted by Don Jones on 06/13/2011 at 1:14 PM1 comments
There are two ways to judge the value of an IT professional -- specifically, administrators, network engineers and so forth.
The first way is to watch how they handle crises. Anytime something goes wrong is an opportunity to see how well an admin knows the technologies they're working with. In order to troubleshoot and fix something, you need to know how it works, and you need to know how (and from where) to collect diagnostic information. Admins who jump right into the job, start running diagnostic tools and quickly start eliminating possible causes are the ones you want to retain. Pay them a lot, because they're hard to find. Notice that I didn't emphasize how quickly they solve the problem. That's important, but it's largely a function of how quickly they can eliminate potential causes of the problem and narrow in on the one that's causing the issue.
The second way is to see how they handle day-to-day tasks, especially boring and repetitive tasks. Do you have admins who are still clicking next-next-finish in a wizard, for a task they've completed thousands of times before? If so, carefully consider something: Are you actually paying someone to run through a wizard in order to complete day-to-day tasks? Really? Button-clicking is the value they bring to the team? Only in the Microsoft IT world would someone even considering answering "yes" to those questions. I'm not talking about unusual, once-in-a-while tasks. Even in the Cisco world, the Unix world, the Linux world or the AS/400 world, administrators have to look up syntax or use a GUI for tasks that they perform only rarely. That's the benefit of a GUI: It can walk you through unfamiliar tasks. But for the everyday tasks, you'll find Unix admins in a Bash shell, Linux admins in a Bourne shell, Cisco admins at the IOS command-line and AS/400 operators running CL commands. The value of an admin for day-to-day tasks isn't that they can complete them by clicking a few buttons. The value is in an admin who can automate those tasks from the command-line. Wizards are for end-users, not for experienced IT professionals.
Things are going to start getting harder for "wizard jockeys." While Microsoft isn't going to eliminate GUIs by any stretch of the imagination, those GUIs are going to be less-emphasized, especially for day-to-day tasks. As organizations move select IT assets into hosted platforms (okay, call it "the cloud"), being able to manage via command-line, and able to automate repetitive tasks, is going to become a more crucial skill. I'm betting that Microsoft will eventually drop the GUI on the server OS entirely, making us rely on client-side GUIs and on the command line. Much of Microsoft's future directions are clearly indicated by technologies like Windows Remote Management (WinRM) and Windows PowerShell, which further emphasize the command-line.
It's time to start evaluating your team, educating them on new management techniques (well, new to the Windows world -- everyone else has been using them for decades), and letting them know that "next-next-finish" isn't going to be considered a value-add for very much longer.
Posted by Don Jones on 06/07/2011 at 1:14 PM2 comments
I'm finally back from TechEd North America 2011, following a brief stop in Denver and Seattle to promote my new book. My final session at TechEd was a Birds of a Feather discussion on Active Directory change auditing. There were around 50 IT pros and managers in the room, and there were some revelations that, to me, were truly astounding.
One gent said his company pretty much had auditing figured out. They consolidated their event logs into a single database, knew how to report from that database, generated near-real-time alerts from it, and so forth. This was all done using a home-grown solution, too – zero cost! Well, not zero. That solution has been under development and maintenance for 10 years. A decade. In terms of manpower, that has to have cost that company something like a million dollars (literally) in total.
Other folks aren't so fortunate: They don't have the resources for that kind of home-grown solution, so they're cobbling something together themselves.We talked about using Microsoft Audit Collection Service (ACS; hardly free since it requires you to buy System Center Operations Manager, but if you already have SCOM then ACS is at least bundled). We talked about Windows Server 2008 R2's event log forwarding capability (which nobody was using in production). We talked about third-party solutions, too, and the one common thread is that almost nobody in the room could buy a third-party solution. Images ran through my head of IT pros bounding away at stone tablets using stone hammers, huddled around a campfire in front of their cave. I mean, the sheer primitiveness of what these folks were being asked to do – all so the company could save a few bucks.
The highlight of the hour was when one fellow mentioned that his company wanted him and his team to provide auditing details about some specific event. "We couldn't do it," he said, "because we hadn't been capturing that information." I asked if they subsequently started capturing that information. "No," he told me, "we didn't. Cranking up that level of auditing on our domain controllers was a performance nightmare. We would have needed more DCs to spread the load, and nobody wanted to pay for them. So they just can't have what they want."
Finally, some reality: Everything in IT costs something. It either costs time, or it costs software, or it costs hardware. Sometimes, you can only purchase something in hardware or software – simply throwing time at the problem won't help. The fellow's situation was a perfect example: They knew how to capture what the company wanted, but the cost would have been more domain controllers. Weirdly, companies are often hesitant to buy hardware or software, but they're willing to spend time as if it springs from a never-ending supply.
Here's a little IT truth for you: Time, hardware, and software all cost about the same thing. That is, having your own on-staff developer produce a solution will cost about the same, in the long run, as buying something ready-made (provided what you bought will fill your need in the same way a custom solution would). If your developer has nothing better to be doing, then you spend time and have the developer write the solution. If your developer could be working on something that isn't available prepackaged, then that's a better use of that time – since buying software isn't an option in that case.
Here's another little IT truth: Admins aren't developers. You cannot have an IT pro produce something that would otherwise be available as third-party software without spending a lot more in the long run. You'll spend it in time, but you'll spend more.
I don't know of a single major company that would rather than their administrators custom-build servers using white-box parts from NewEgg or TigerDirect. Servers come from HP, or Dell, or IBM, or someone like that – even though that hardware costs more than the home-built version would, and even though that high-end hardware might have the same specs on paper as the DIY version. Why is this? Because the pro-made hardware is usually a better value in the long run. It's better-made, better-configured, and better-supported. So why do those same companies ask their IT Pros to build hacked-together, DIY, scripted "solutions" to things like change auditing, rather than buying pro-made software that's well-made, supported, and so forth? It boggles my mind.
Posted by Don Jones on 05/31/2011 at 1:14 PM1 comments
For more on the cloud by Don Jones, see "Please Stop Saying 'Cloud'"
I'm not a big fan of the word "cloud," because it's largely overused and overloaded. When I started hearing "private cloud" being floated around, I thought enough is enough! How is a "private cloud" any different from what I've traditionally called my "local area network?" Why in the world did we need another term for it?
Then I thought about it. Forget, for a moment, the overloaded use of the word "cloud," and think about the original concept of "cloud computing." I'm talking about services like Amazon's EC2 service, Microsoft's Azure service, or even SaaS offerings like SalesForce.com. These share a few very distinct characteristics:
- You provision the services you need yourself, more or less on-demand. You don't have to wait for someone to set up new services for you.
- You only pay for the services you actually need and are using, and you can get an accounting of that usage.
- You don't have any real view of the physical infrastructure -- servers and so forth. You only see the services you're using.
If those three characteristics are what set "cloud" apart from other "hosted services," then we're on to something when it comes to a "private cloud." Most corporate datacenters don't have all of these characteristics, and very few datacenters even have one of those characteristics. But think of the advantages you could get by having a datacenter that shared all three of those characteristics:
This is becoming more and more popular inside companies, and with good reason: Self-service provisioning helps take IT out of the loop for common resource requests. No, you're not going to let just anyone in your company be a "self-service customer," but you can certainly authorize selected individuals. This might be as straightforward as a self-service Web portal that lets managers set up new employees and their Exchange mailboxes, or as complex as a portal that lets authorized managers spin up new virtual machines on an as-needed basis.
IT still has to monitor this activity, because we still need to make sure there is sufficient physical infrastructure to support the self-provisioned resources, but IT doesn't need to be involved in the actual provisioning. Of course, this is only a viable plan if you also have...
IT is viewed as overhead by most companies, and it's because there's often little tie-back to the business itself. That should change. Many companies already do chargebacks for things like e-mail, and that's a great first step. In an ideal world, every IT service would allocate its cost back to the business activity that is using it. That's difficult today, but the emergence of "private cloud" tools and technologies is aimed squarely at making chargebacks more efficient and practical.
Think about it: Today, you probably wouldn't let line-of-business managers spin up new virtual machines on-demand through a self-service portal. Why? Because the cost of hosting, monitoring, and maintaining those things would quickly escalate out of control. But that's because today's line-of-business managers see IT as a basket of freebies. If they could self-provision VMs, and those VMs immediately showed up on the line of business' P&L statement, then those managers would make smarter decisions. Instead of IT saying, "no, that will cost too much," the business manager will say, "no, I can't afford that." It's no different than ordering desks, paperclips, or other physical goods.
Of course, for the idea to be practical, you have to...
Abstract the Infrastructure
Essentially, you become a "hosting provider" within your own company. That is, you become a cloud computing provider to your own lines of business -- thus the term "private cloud." Nobody but IT needs to worry about the infrastructure -- they just "buy" services like user accounts, mailboxes, databases, virtual machines, and the like. They "pay" for those services, thus giving IT the money needed to implement them. Every IT decision becomes tied to a direct business need, and allocated budget from the people who feel they need those services.
Yes, it's a Shift...and No, We're Not There Yet
This is definitely a shift in the way IT does business, and in how the business is managed. It isn't going to happen because one systems engineer suggests it; this kind of change starts right with the CEO, CTO and CFO. In small ways, many IT shops have been doing this for years -- and it's a good idea. Over the next decade or so, you'll start to see a gradual shift to the "private cloud," helping better connect IT decisions with business need and benefit. It doesn't need to happen quickly, but as the tools and technologies arrive that enable this kind of business structure, businesses will slowly start to adopt it.
Posted by Don Jones on 05/06/2011 at 1:14 PM0 comments
Here's a true story: I was once teaching a VBScript class (this was, obviously, years ago) when a student asked if there was a way to write a script that would enforce the membership of computers' local Administrators group. I smiled, knowing that I was about to make this person very happy. "You don't have to write a script," I said. "You can just use the Restricted Groups settings in a Group Policy object." The person shook their head. "We can't. Our Active Directory administrator doesn't like Group Policy, so we can't use it."
I was floored. I literally did not know what to say. I'm pretty sure I stood there with my mouth hanging open for a full minute, shook my head vigorously, and went on teaching as if nothing had happened. What else could I have done?
In the years since, I've run across a metric butt-tonne of similar situations, where folks couldn't do the right thing due to some political reason -- often a misinformed political reason. The most recent: "We can't use PowerShell remoting to remotely administer computers because our security policy won't let us open the necessary port." At the same time, these folks are allowed to use Remote Desktop, which imposes a massively greater performance burden on their servers. They are allowed to use technologies like Windows Management Instrumentation, which uses a much wider range of TCP ports and is somewhat less controllable than PowerShell Remoting. In other words, Remoting is verboten simply because it's new, and the organization's security officer or policymakers won't take the time to understand it.
Folks, this is ridiculous. If you're an IT decision maker in your environment, your main job should be to fight this kind of -- well, let's just call it BS, because that's what it is. This attitude is like saying, "we bought this new car, but we can't use it because we don't really like the idea of gasoline."
Products are built the way they are for a reason. Over time, those reasons will change and evolve, and the products will change and evolve to suit. You can't "just decide" to not use a product the way it was intended because you don't find that way aesthetically pleasing, or because you "don't like it," or because you haven't taken the time to understand it. I can accept, "we're not using it yet, because it's under review." In fact, that statement shows a level of maturity I applaud. You know a feature exists, you're not familiar with it yet, but you're taking the time to become familiar.
From now on, when people ask me how to do something, I'm going to tell them the right way (or ways, if there are choices). But I find myself increasingly unwilling to engage in elaborate hacks and manual workarounds just to accommodate ill-advised, uninformed policies. Use the products the way they're meant to be used, or stop using them and buy something that works the way you want.
Now, that's distinct from instances where there's a compelling, business-related reason. For example, if you told me, "we can't use Group Policy because we're in a highly distributed environment, and our tests show that replicating GPOs puts too great a strain on our WAN bandwidth," then fine. That's a legitimate reason and we can start looking for a workaround. That's a bad example, of course, because GPOs don't do any such thing...but you get my point. A well-informed, business reason to not use a product in a specific way is just fine.
What about you? What goofy policies do you have to deal with that just don't make any technical sense -- or even any common sense? Let me know in the comment section below.
Posted by Don Jones on 05/02/2011 at 1:14 PM | 4 comments
As I'm writing this, the Internet is finally recovering from the massive outage in Amazon.com's cloud computing infrastructure. Wow, you just don't realize who's hosting with them until something goes wrong: HootSuite, Netflix, Reddit, Foursquare and many more major names all experienced outages.
It's okay -- computers break. I'm sure the folks at Amazon are looking hard at why and coming up with ways to prevent that kind of failure again. Great -- let's move forward.
Actually, let's look back for just a second. Not at the failure or the fallout, but at what this means to a business that has chosen to outsource to a cloud computing provider. I know that Amazon provides a solid Service Level Agreement (SLA) to its customers, and that the customers affected by this outage will doubtless receive a service credit. Of course, since you "pay as you go" with this kind of cloud computing model, you could argue that these customers aren't owed very much, because during the outage they weren't consuming any resources.
Therein lies a risk for cloud computing, and it's one that pundits have identified in the past: That SLA will never cover your lost business. I don't know how much business Netflix lost, or HootSuite (which now has paying customers to contend with), or any of the others -- but I know their business impact won't even begin to be covered by whatever Amazon gives them under the SLA.
This has been a major argument for avoiding cloud computing services. I think, however, that it's a bad argument. Consider the alternative: You host all of your services in your own data center. It goes down, just as Amazon's did, and your redundancies fail too, which is also what happened to Amazon. Who's going to compensate you for your business loss?
Nobody. Certainly not Dell, IBM, HP or whoever sold you your servers. Not the power company, if that was the source of the problem. Not your HVAC contractor, if that's what went wrong. Nobody. The only difference is the possibility of controlling the situation yourself and repairing it faster; with cloud hosting, you just sit, wring your hands and cuss a lot. But notice that I wrote "the possibility." Just because your services are in your own datacenter doesn't mean you'll definitely be able to affect the problem in any way. What if your datacenter flooded? Got hit by a meteor? All of your HVAC units burned out? Plenty of things that happen in your own data center are beyond your control, and when they go wrong you're still tearing your hair, cussing and so on.
Yes, you can build redundancies for almost anything -- up to and including a complete, redundant datacenter. Cloud hosting providers can do that, too, and it's often more economical for them. So I'd argue that you simply need to select a hosting provider that has built a more redundant system than you could afford to build yourself. The resources-on-demand model of cloud computing is really, really compelling for a number of businesses. There are still valid reasons not to outsource to a cloud computing provider, such as data security concerns, but I don't think "well, the SLA won't pay us for our business loss" is one of them. Do your due diligence, make sure your provider is set up the way you like, and then hope for the best. That's all you can do in your own data center, too.
Posted by Don Jones on 04/29/2011 at 1:14 PM | 0 comments
I'm an MVP Award recipient from Microsoft, primarily for my work with its PowerShell technology, and you don't often see me use the words "wasting time" and "scripting" in the same sentence. But now it's out there: I truly believe that a lot of IT teams -- if not most -- are wasting time messing around with scripts.
Let me be clear and state that this is not a blanket condemnation of scripting or of PowerShell. Quite the contrary, in fact. I believe in the right tool for the right job. Scripting is the right tool in one of two situations: First, when you have to automate some business process that is unique to your organization, and which can't be efficiently accomplished in any other way. Second, when you need to automate some process that is rarely performed. In that instance, you're not really automating something that's truly repetitive; you may be automating it because it's done so infrequently that nobody accurately remembers how to do it. Either scenario points directly to scripting.
So what doesn't point to scripting? The automation of tasks -- particularly complex ones -- that are common across our industry, and which could be better, and often less expensively, accomplished by a third-party tool.
Some elaboration is probably in order. First, you have to understand that I do not expect Microsoft to provide tools to accomplish every little task that every organization in the world might want to perform with the company's products. Microsoft's job, in my view, is to provide a baseline set of tools and a solid underlying platform, and to do so in a way that enables third parties (whether your team or ISVs) to build additional domain-specific tools on top of what Microsoft provides. If your organization, for example, frequently needs to scan through file and folder ACLs to see who has permission to what, then I don't necessarily expect Microsoft to put a tool for that in the box. It just isn't a universally needed task (although I admit that this particular example is becoming pretty widespread), and even amongst the organizations that do need to perform this task there's a lot of variation in how they need the results presented.
So, third-party tools are, and should be, a way of life. When should you not build them yourself, and instead go to a third-party ISV for them? When those tasks are a critical, almost daily part of your business. Let's switch examples for a moment and consider the need to collect Active Directory auditing events into a single, consolidated, searchable log. You could certainly write scripts to do that, and I have a number of consulting clients who've done just that. They're limited to the information in the event log, of course, and that information isn't always the most human-readable. Most of the examples I've seen required an administrator to spend about a week building, and then perhaps 10 hours a month maintaining and tweaking. So in the first year, that's about $7,000 in man-hours. Folks usually spend a lot more time consuming that data -- the two customers I've worked with who took the time to quantify that work spent an average of 40 man-hours per month sorting, filtering, searching and presenting data from those raw event logs. They spent another 20 hours per month doing what I call "dealing with" the data -- for example, translating event IDs into a meaningful message, looking up SIDs and GUIDs, and so forth. All work made necessary by the low-level nature of the data in the logs. They assigned a value of about $30,000 per year to that effort. Round number, $37,000 per year total.
That's a lot of money. These same organizations steadfastly refused to bring in a third-party change auditing solution for close to seven years, meaning they spent about $260,000 in man-hours. One of them is still doing so. The other got sick and tired of the hidden costs of scripting -- like when its Script Guru left the company and someone else had to spend three weeks coming up to speed on the scripts ($5,000 more on the bill). That company runs a 6,000-user Active Directory domain, and its third-party auditing solution cost it $108,000 in the first year and about $20,000 in annual maintenance fees thereafter. Five-year cost: $188,000. It's in its first year with the tool now, and its time commitment on those same tasks is nearly zero, because it selected a tool that provides exactly what it needs more or less automatically. Even reports are produced and mailed out automatically.
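The arithmetic above is easy to sanity-check. A quick back-of-the-envelope sketch using the figures quoted in this example (the dollar amounts are this one customer's estimates, not a general pricing model):

```powershell
# Home-grown scripting: roughly $37,000/year in man-hours (build, maintain, consume)
$scriptingPerYear = 37000
$scriptingSevenYears = $scriptingPerYear * 7    # ~$259,000 -- the "$260,000" above

# Third-party tool: $108,000 in year one, then $20,000/year in maintenance
$toolFiveYears = 108000 + (4 * 20000)           # $188,000

"Scripting, 5 years: $($scriptingPerYear * 5)"
"Tool, 5 years: $toolFiveYears"
```

By this accounting the two approaches are roughly a wash over five years; the tool pulls ahead after that, and the hidden costs -- guru turnover, the $5,000 handover above -- fall only on the scripting side.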
That's just one example, and the point isn't so much one company's specific numbers as the general theme: Scripting isn't free. It requires man-hours, even just to maintain, and it comes with a big risk that your "Script Guru" will leave you in the lurch at some point. You can mitigate that by educating more of your troops in scripting, of course, but do so strategically. Focus on the tasks that don't need to be run often (making script maintenance less expensive), or focus on tasks for which an affordable tool simply isn't available. The argument "Well, if they weren't writing scripts, we wouldn't need those people" doesn't hold water -- there's always work for an admin. Let them be admins, not software developers. Look closely and you may find that tools to automate daily, mission-critical tasks are less expensive in the long run, provide better functionality than you could script yourself, and do a better job of meeting your organization's business goals.
I'm not saying don't script. I'm saying script when it makes sense. Be a strategic decision maker.
Posted by Don Jones on 04/25/2011 at 1:14 PM | 6 comments
Okay, we need a serious dose of reality here. I just finished a customer engagement where the company's IT director, in no uncertain terms, told me that his company was having nothing to do with "the cloud." I nodded, and asked why. Turns out he'd recently attended a tech conference, where the keynote address (according to him) was basically summed up as, "no company is going to outsource their IT to the cloud." He agreed that outsourcing his IT was a bad idea, and so no cloud for him.
I blame the IT marketing sub-industry for this. Let's start by agreeing never to use the term "cloud" again. Marketers co-opted that term from the telecommunications industry anyway, and it makes a lot more sense there, because nobody runs their own telecom infrastructure unless they are a telecom company.
What folks routinely refer to as the "cloud" in the IT industry is actually something very different. It's a huge variety of services and approaches, all of which offer to let you outsource some portion of your IT capabilities -- things you might normally handle yourself, in your own datacenter. This is hardly a new concept: I've had a "cloud e-mail" address (it ends in yahoo.com) for close to a decade, now. I've been using "cloud computing" (a Web hosting service) for just as long. The idea of outsourcing bits of your IT environment to an offsite service provider is well-established; it's only recently that everyone suddenly wants it to be called "the cloud."
The only thing new in the more modern outsourcing model is the idea of on-demand provisioning and pay-as-you-go. My old-school Web hosting provider charges me a fixed fee every month; with a true "cloud" hosting arrangement, my fee might shrink and grow as more people visited my Web site. The site would dynamically expand and shrink to accommodate demand (or, in some instances, I could manually provision more resources to accommodate demand), and I'd pay for what I was using.
Saying that your company will never outsource your IT capabilities is fine. Most companies won't outsource everything, because it doesn't make any sense whatsoever. But that's not what this "cloud" model is all about. The idea is that you outsource the bits that make sense for your organization, creating a "hybrid IT" environment where some services come from your datacenter, and others come from offsite. In fact, I really prefer the term "hybrid IT" over "cloud," because I think it more accurately describes what the model is all about.
Take e-mail: Some companies could never, ever, ever outsource their e-mail. I get it. You need to have direct control to maintain your security, your availability, whatever. Fine. Other companies, however, view e-mail as a serious pain in the you-know-what, and would give anything to have it "just work." Those companies should outsource their e-mail, creating a hybrid IT environment where some services come from outside the company. Those same companies might well continue to handle their own in-house applications, databases, and so forth, because they need to in order to achieve their business goals. In other words, you outsource what makes sense. Often, that includes IT services that aren't crucial to the day-to-day operation of your company, that don't directly tie into what your business does for a living, and that cause you more stress and headaches in terms of keeping them up and running every day.
That's the other thing: People keep pitching "the cloud" as a way to save money. I call "foul" on that, because I've rarely seen it to be true. What hybrid IT does offer, however, is a way to remove some distractions. Don't want to spend months deploying a CRM solution, and then spend hundreds of man-hours a year supporting it? Fine, use SalesForce.com. Outsource that one distracting bit that your organization needs but doesn't directly want to own. That's hybrid IT. You might not save money either way, but you'll have less headache.
Please don't think for an instant that your company, for whatever reason, won't ever outsource anything to "the cloud." You will, eventually. In fact, a smart decision maker will keep his or her eyes open for the opportunities that make sense.
What do you think? Let Don know by adding your comment to this article, e-mail him here, or follow him on Twitter @concentrateddon.
Posted by Don Jones on 04/13/2011 at 9:03 AM | 8 comments
Welcome to my new blog! As you may know, I write the monthly "Decision Maker" column for Redmond magazine. Much of that column's content is based upon my consulting and analysis experience. My main job at Concentrated Technology is to provide strategic consulting for our various business clients around the world; basically, that means I sit down with businesses, figure out what their challenges are, and help them decide which ones they can address -- and how to do so. We also work with a variety of Independent Software Vendors (ISVs) to help them understand what the marketplace needs in terms of solutions, to properly focus their products' features, and so on. All of that work generates a lot more information than I can include in a monthly column, so the folks at Redmond magazine and I decided to start this blog, where I can share information as I come across it.
This is not going to be news analysis -- I've heard over and over from my readers that they get more than enough of that in various places already. No, my goal here is to help you understand the industry's directions on a variety of subjects -- Microsoft's directions in particular. I'll show you where various ISVs are going with their solutions, and help you see the gaps that exist in Microsoft's native product functionality. I'll share intelligence on what other organizations like yours are doing to solve their day-to-day IT challenges, so that you can get a feel for what's emerging as "best practices" within our industry.
I hope you'll find it useful, and I hope -- from time to time -- that you'll take a moment to share your own experiences. Drop a comment at the end of an article, or contact me directly (use the form at http://concentratedtech.com/contact to do so). I may even contact you for more details on what you and your organization are up to from an IT perspective, and if you're interested in being part of one of our focus groups or surveys, definitely let me know.
Let's get started.
Posted by Don Jones on 04/10/2011 at 1:14 PM | 0 comments