After a smooth migration from Google Apps to Office 365, we're pretty pleased.
The Exchange side couldn't work more smoothly. SharePoint -- after a couple of hiccups and restarts -- is also a blessing. Our lesson? Don't mess with the built-in groups that SharePoint creates when you set up a new site. Sure, you can change their membership, but don't delete them, or SharePoint just doesn't work right any more.
Our biggest beef is in how external users get invited into a SharePoint site. As near as we can figure, they have to have a Hotmail address, since that's the only method external users have to authenticate to the O365 system. Seems weird. I realize authentication has to come from somewhere, but forcing folks to use Hotmail just feels awkward. Even requiring a generic "Live" account would be better.
We were pleasantly surprised to discover that O365 is extremely Mac-friendly. I use a Mac myself for my "knowledge worker" tasks, like checking e-mail and writing (Office for the Mac is a wonderful product), and have had zero problems. The Outlook Web App experience is fantastic. My Office Mac applications talk to SharePoint seamlessly. Even the latest build of the Lync Mac client works very well for online presence and chats, although that was the last piece of the puzzle to come together. I've even switched from using the Mac's native PIM apps over to Outlook Mac, and I'm pretty pleased both with the experience and with the feature parity between it and the Windows version. O365 works flawlessly with my iPhone and with other users' Android phones -- my Outlook Tasks even show up in iOS 5's "Reminders" app.
I'm disappointed by the amount of PowerShell we had to use to get our O365 up and running, but those were one-time events for the most part. I do think Microsoft needs to provide a bit more GUI and a bit less PowerShell for those tasks, especially setting up a custom domain name. It's not that I dislike PowerShell (I'm a PowerShell MVP, after all), but O365 just isn't being marketed to businesses that would have any reason to even know what PowerShell is.
That does highlight a good point, though: A lot of Managed Service Providers (MSPs) are miffed at Microsoft for launching O365, because they view it as an end-run straight to the customers who would normally go to an MSP for outsourced e-mail. Those MSPs can relax. Most SMBs should at least consider buying O365 through an MSP, rather than directly from Microsoft. There are technical bits to set up and maintain, including custom domain names and a lot more. There are caveats in O365 that an MSP can help you understand, like the max-recipients-per-day cap. An SMB with a savvy "tech guy" can probably go O365 on their own; a business that doesn't want to have that on-staff geek is going to have to rent that geek, and he or she might as well come from an MSP that can also provide ongoing support.
Posted by Don Jones on 01/23/2012 at 1:14 PM | 0 comments
To migrate from Google Apps to Office 365, we decided to try a third-party migration tool. We'd been asked to write a competitive analysis of the available migration options, including the built-in one (which is mainly suitable for migrating from an Exchange Server), so this seemed like a good time to try the options. The process went off perfectly, using a self-service option that cost just $10/mailbox and let our users handle their own migrations simply by entering their old and new passwords into a Web page. Mailbox migrations can take a long, long time. Had we been smarter, we might have opted for an admin-controlled migration rather than the user self-service, because the tool would have enabled us to filter out all the old trash content that Google was hanging on to. As is, we discovered that Office 365 starts applying bandwidth throttles after a few thousand messages are added to your inbox, meaning each mailbox took almost a full day to migrate. We later learned that Office 365 support can suspend those throttles during a migration if you give them a call.
Once migrated, we started the process of adding our custom domain name (ConcentratedTech.com) to Office 365. This is where the process becomes slightly less awesome. Right now, you can add the domain name easily enough, but it won't become your users' default send-as domain, meaning we were still sending e-mail from the "onmicrosoft.com" domain name associated with our account. Changing that default involves downloading the Office Live PowerShell cmdlets, opening a PowerShell Remoting session to our Office 365 server pod and running a couple of PowerShell cmdlets. This is something you should be aware of if you're considering an O365 move; frankly, given that O365 is largely targeted to SMBs who don't have a large IT staff, I think Microsoft should make this a bit easier and Web-based.
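For the curious, here's roughly what that process looks like. Treat this as an illustrative sketch rather than our literal transcript: the connection URI and cmdlets below are the ones documented for Office 365 at the time, but your tenant's endpoint may differ, the domain-default step assumes the Microsoft Online Services (MSOnline) cmdlets are installed, and the mailbox name is hypothetical.

```powershell
# Open a remote session to Exchange Online and import its cmdlets locally
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://ps.outlook.com/powershell/" `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# With the Microsoft Online Services cmdlets installed, connect and
# flip the custom domain to be the tenant's default
Connect-MsolService -Credential $cred
Set-MsolDomain -Name "concentratedtech.com" -IsDefault

# Then update a mailbox's primary (send-as) address; "don" is a
# made-up identity for illustration
Set-Mailbox -Identity "don" -WindowsEmailAddress "don@concentratedtech.com"
```

None of it is hard once you know the cmdlets exist -- which is exactly the problem for the audience O365 is aimed at.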
PowerShell cropped up again when we needed to mass-import contacts into the Global Address List (GAL) as external contacts. We used Excel to create a consolidated contact list, and then a couple of PowerShell commands brought that information into the GAL. Again, as big a PowerShell fan as I am, I think this is something that needs to have a Web front-end on it. O365 simply isn't being sold with the "by the way, hope you know how to use PowerShell" message as part of its marketing.
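Those "couple of PowerShell commands" amount to a one-liner pipeline. A sketch, assuming the Excel list was saved as a CSV (the file name and Name/Email column headers here are made up for illustration) and that an Exchange Online remoting session has already been imported:

```powershell
# Read the consolidated contact list and create one GAL mail contact per row
Import-Csv .\ExternalContacts.csv | ForEach-Object {
    New-MailContact -Name $_.Name -ExternalEmailAddress $_.Email
}
```

Trivial for a PowerShell person; invisible to everyone else -- which is the point.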
With the migration over, we settled in to using O365.
Up next: The Verdict
Posted by Don Jones on 01/20/2012 at 1:14 PM | 1 comment
My company, Concentrated Technology, recently made the leap from Google Apps to Office 365. We mainly work with Microsoft products and we figured "what the heck." In addition, we were really dissatisfied with Google Docs. The inability to create a true folder hierarchy, the difficulty of sharing sets of documents with external contractors...it was just a bit much. We all use phones that are Exchange Server-compatible, so we wanted to get some of the advantages of Exchange. While Google emulates Exchange pretty well, it doesn't have quite the same calendar sharing and other features that make Exchange great.
Deciding on an Office 365 plan was the toughest decision. The various "P" and "E" plans all offer different features. We initially leaned toward the P1 plan, but after reviewing its limitations, we worried that it might not be enough for us. Keep in mind that you can't ever migrate from a "P" plan to the larger "E" plans, or vice versa, so if the biggest "P" plan might not suit you forever, then you need to up the ante and go "E."
The fine print was also a bit troubling. The "P" plans, for example, have a hardcoded e-mail limit of 500 recipients per day. Per day. If you've got a dozen users, that's actually not very many outgoing recipients -- just about 40 recipients per person, per day, which is something we felt we could easily exceed. The "E" plans have a much larger limit. This is actually the most troubling thing about Office 365; I understand why Microsoft does it (to help prevent O365 from becoming a spam source), but there has to be a better way to achieve the goal than an across-the-board cap.
1/19 UPDATE: I was contacted by a Microsoft spokesperson today who said this limit has been lifted; the number of recipients is no longer capped at 500.
A final caveat: Because the company's two owners are journalists, we'd both been involved in the O365 beta, as well as in a "P" plan trial. That meant the accounts we'd created for those purposes couldn't be used for our new, permanent account. So instead of getting "concentratedtech.onmicrosoft.com" as our account base name, we had to pick something else. Of course, once we migrated, nobody would see that base name because we'd use our own domain name, but it's something to be aware of: If you do a trial, either use a totally fake name, or use the name you plan to proceed with and stick with it.
Up next: The Migration
Posted by Don Jones on 01/18/2012 at 1:14 PM | 3 comments
I've written about it before, and I'm here to do it again -- but it's still worth a read. However, before I dive in, I want to remind y'all that I'm just the bearer of the news. While I personally think this is a good direction for Microsoft, much of that stems from my own IT background. Based on the comments in previous articles, some of you disagree -- and you should make that opinion heard at Microsoft, where it'll matter. In case there's any confusion, I don't work for the company.
You'll want to go here and read a short article from the Windows Server team. Go on, read that. I'll wait.
The points to take away:
- The full GUI experience on the Windows Server OS is now optional. Software meant to run on a server should not assume a GUI will be there, nor should it take for granted any of the many other dependencies that the full server OS has traditionally included. My analysis on this: It's Microsoft's shot across the bow. You'll see a stronger position on this sometime in the future -- maybe a few years off, but sometime. They want server apps to assume they're running on what we used to call "Server Core."
- The recommended way to run the Server OS is without the GUI. Didja see that? No, you don't have to think it's a good idea -- I'm not pushing acceptance. I'm pointing out what's happening. These are the facts on the ground.
- Microsoft has taken a (what I think is a very good) middle-ground step by introducing a "minimal GUI" mode in the server OS. That means you can have your GUI tools on the Server OS, as well as on your client computer, provided those GUI tools play by a few basic rules and don't assume too many dependencies (like the presence of IE). They'll have the full .NET Framework at their disposal, for example, which should help -- especially if they're tools based on the MMC architecture. So this gets you a "lighter" version of the Windows Server OS, but still lets you manage right on the console.
My opinion, for what it's worth: Anyone who thinks "minimal GUI" mode is anything more than a holding measure is crazy. To me, this clearly says Microsoft is trying to get us off the console for good. They know we're not ready to give it up completely, so this is them trying to wean us off. Maybe I'm wrong on this -- it's happened before -- but it sure seems that way when I look at the big picture.
- Notwithstanding the "minimal GUI" mode, Microsoft is recommending to software developers to not assume a GUI will be present. The full, rich GUI experience happens on the client. Not allowed to connect to your servers from your client computer? The suggestion appears to be "rethink your architecture."
Again, folks, I'm not advocating the news -- I'm just reporting it and offering some interpretation. I'm well aware that many, many, many admins out there aren't looking forward to a GUI-less server operating system from Microsoft. I've heard and read many cogent and thought-out arguments against it. I'm sure Microsoft has too. They're proceeding anyway. We have two choices: adapt or die. I'm sure the NetWare folks felt exactly this way when they saw their first Windows NT 3.51 server. My point is that Microsoft is going in this direction, despite the fact that they surely know there will be disagreement. They're obviously taking baby steps -- take this "Minimal GUI" thing as a clue. We can either be prepared to manage our servers in the brave new world, get different jobs, or get promoted out of harm's way.
Don't think Windows Server without a GUI is a good idea? I'm not judging you, and I'm not saying you're wrong. Think Microsoft is stupid for heading in this direction? Time will tell -- we'll wait and see, since that's all we can do. For myself, I'm going to make sure my job is secure by making sure I know how to operate in the world that's coming. It's like taxes: I don't have to like them, I just have to figure out what numbers to write into the little boxes.
(I'll admit that I'm personally biased to like the direction Microsoft is headed on this. I come from an IT background where going in the datacenter was practically verboten; I spent more time trying to make sure I could remotely do everything, without an RDC connection, than actually doing anything. But the environment I came from was a minority back then, and it's probably a minority now. Although I'm glad it prepared me for what appears to be coming.)
My opinion is that Microsoft is pointed toward a world of "headless servers": minimal functionality from the console, rich management from a client computer. This is a step in that direction, and it's intended to put us, and software vendors, on notice. Me, I'm going to take the hint. I hope y'all do as well. Windows Server "8" is a chance to start getting on board with what Windows will become -- it's not throwing us directly into the fire, but I think we have to take the opportunity to start adapting to this new direction.
Posted by Don Jones on 01/12/2012 at 1:14 PM | 43 comments
There's a bit of confusion on the topic of Active Directory forest recovery, and I'll admit that I was caught up in the confusion as well. The confusion stems, I think, primarily from third-party software vendors who either have, or do not have, forest recovery tools to sell.
Vendors who don't sell forest recovery tools will tell you that they don't do so because you can only do a forest recovery in conjunction with Microsoft Product Support Services. If you attempt a forest recovery on your own, Microsoft may not support any further issues you have with Active Directory in the future.
Vendors who do sell forest recovery tools will tell you that their tools make a forest recovery faster and easier, and that their tools follow Microsoft's recommendations for completing the process safely.
They're both right.
This is a point I myself was confused about. It turns out that Microsoft is perfectly fine with you using third-party forest recovery tools to speed up the recovery process as you perform it with Microsoft's guidance.
You do have to be on the phone with Microsoft, but you can use tools to speed up the tasks they ask you to perform.
So do you need a forest recovery product?
It depends. I'm told that all of the Fortune 500 own one, and that makes sense to me. Giant companies tend to buy a lot of insurance for a variety of things, and owning a forest recovery product is really a form of insurance. A Fortune 500 company that's missing an AD forest could be losing millions of dollars per hour, and a tool that speeds up the process probably makes perfect financial sense.
On the other hand, a very small company with a single domain and just a handful of domain controllers might not find financial sense in a forest recovery tool. Keep in mind that such a tool will neither prevent a forest failure, nor make recovery instantaneous; there's still a Microsoft-guided process to go through. So for very small companies, the time saved might not outweigh whatever financial loss they could expect to incur during the outage.
Other companies are going to just have to look. How much time will an unassisted forest recovery take? How much will a recovery tool speed things up? How much money will you lose either way? If the additional time -- and thus financial impact -- of not having a tool is greater than the cost of such a tool, then you should probably consider buying the tool. Think of it as a specialized form of insurance policy, much like your flood insurance or worker's comp insurance.
All that said, you should absolutely separate forest recovery -- which is the very definition of "disaster recovery" -- from the day-to-day data restoration that your company also needs. Every company should have a product capable of doing single-item and attribute-level recovery for Active Directory, capabilities not natively offered in any version of Windows (the Win2008R2 "Recycle Bin" feature can't do attribute-level restoration). Such a restoration tool may or may not be part of a change auditing solution, depending on whether or not you need such a thing. Restoration tools exist that include the forest recovery capability, but as you're evaluating such tools try to consider those capabilities independently. You're not likely to need a forest recovery tool very often; that day-to-day object restoration capability, however, will come up frequently.
Posted by Don Jones on 12/29/2011 at 1:14 PM | 0 comments
Back in the day, everyone took Microsoft's "Network Essentials" certification exam, and many took associated training either in a class, through a book, or in a video of some kind. It covered really, really basic stuff: IP networking, subnetting, the idea of routers and bridges and so forth. Heck, even IPX/SPX was on there.
These days, certification training and exams, and even most non-certification training, skip many of those basics. The theory has been twofold: One, we've got so much more to learn now that products like Windows are so complex that we can't squeeze it all into a class. Two, the higher-level tasks in Windows incorporate these lower-level basics, so there's no need to teach them.
The latter point isn't true. Windows' convenient point-and-click interface doesn't encourage any understanding of ICMP, ARP, or other important low-level protocols. And, by not teaching them, we run the risk of losing a thorough understanding of how the network actually operates -- making troubleshooting a lot harder. I recently taught a class in which not one student had ever run Ping or Tracert from the command-line. None of them had the foggiest idea what ARP was, and therefore had serious problems troubleshooting certain kinds of network problems. They thought they knew what a VLAN was, but couldn't really explain it to each other or to me.
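To be concrete, the kind of first-five-minutes triage I'm talking about is nothing more than a handful of commands at any Windows prompt (the addresses and host names below are placeholders):

```powershell
ping 192.168.1.1           # basic reachability -- can I even reach the gateway?
tracert www.example.com    # hop-by-hop path; shows roughly where traffic dies
arp -a                     # the local IP-to-MAC resolution cache
nslookup www.example.com   # is name resolution working at all?
ipconfig /all              # addresses, subnet masks, DNS servers, MACs
```

None of that requires deep expertise -- just exposure that the point-and-click interface never provides.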
Are you seeing a similar effect in your IT team? Not amongst your senior staff, but perhaps amongst newcomers or career-changers who are new to IT? Do they really have a firm grasp of these "essentials," or do they get a bit wary when people start talking of subnet masks, DNS caches and MAC address resolution? How important are these essentials to your network, and what portion of your staff do you think should understand them?
I'd really appreciate your feedback on this topic. If you've got a moment, please take a very short survey. I'll share the results in an upcoming post.
PS: If you're wondering why I've been asking so many questions and offering so many surveys this month, it's because we're getting close to the end of the year. That's always a good time to reflect on what we might want to focus on in the year ahead, and so next month I'll be sharing all of the insight you've offered. Seeing what your peers and colleagues think might give you some creative ideas for your own IT team's direction in 2012.
Posted by Don Jones on 12/27/2011 at 1:14 PM | 2 comments
I know "cloud" and "outsource" are dirty words with some IT folks for a variety of reasons. But let's set aside, just for a moment, all of the baggage those two words carry and conduct a quick thought experiment:
We all have things we'd rather not have to do when it comes to IT. For me, it's always been backups. This comes from an unfortunate environment early in my IT career: We lacked a LAN, and PC backup literally meant lugging a Colorado DAT drive around to people's computers at night to run backups one PC at a time. I still have nightmares about it. If you'd told me, back then, that I could somehow outsource that onerous task to some mystical "cloud," I would have been all over it. I had lots of other projects that were more interesting and more value-added that could have more than occupied my time.
Just because something is "mission critical" doesn't mean we necessarily want it in our own datacenter. Take e-mail: Few organizations are professional e-mail hosting companies, yet we all (mostly) host our own e-mail services. We may not enjoy it, but we do, because in most cases we still have to. In most cases, we've all got a long backlog of projects that we'd like to work on, except that we can't get additional headcount these days and the staff we do have are overwhelmed.
It's the "build it vs. buy it" question, to a degree. Some of us won't outsource anything willingly, because we want the control that's offered by doing it ourselves. Nothing wrong with that! Others would prefer to just bring in as much as possible as a service, so that we can focus on the things we have to do ourselves, which aren't available from someone else -- usually business-specific stuff. Nothing wrong with that, either. Most of us wind up somewhere in between: There's a few things we could stand to have someone else manage, but some things we'd never, ever give up control of without a fight.
So indulge me for a second: Assuming that outsourcing one IT service would not endanger any jobs within your organization, that price was no object, and that concerns like security and performance were well-addressed, what one IT service would you gladly get rid of first? What's stopping you from outsourcing that service today?
Drop your comments below -- or, if you've got a moment, answer a one-question survey. I'll consolidate the results and share them in a future post.
Posted by Don Jones on 12/22/2011 at 1:14 PM | 2 comments
In the past few months, I've had the opportunity to speak with three or four dozen customers and to visit a half-dozen of those at their main locations. In every instance, out of growing curiosity, I asked about their help desk ticket-tracking software.
Now, I know most of us don't like the work that a help desk ticket represents. Too often, they're firefighting exercises, and we all have things we're more passionate about than fixing problems. Besides, nobody likes fixing as much as building, because fixing implies a failure that we usually wish hadn't happened in the first place. But a ticketing system's job is to help coordinate activity and keep balls from being dropped, so on paper such systems have value.
I've yet to run across a single organization that likes their ticketing system.
It seems like it's either (a) badly designed software, (b) badly implemented software, or (c) both. I've run across all the major brands, and I've yet to hear a kind word about any of them. This isn't just folks who dislike tickets per se; most of the IT teams I've spoken with completely understand the value a ticketing system should offer -- they just don't feel they're getting it from their system. Complex software that hasn't been properly deployed has been the most common complaint, which suggests there's really room for a competitor to come in and offer something compelling and useful -- which doesn't seem to have happened, yet.
At the same time, I'm seeing a spike in the number of ticketing systems offered for sale. Some of these are integrated with hybrid monitoring solutions, although I'm not seeing a lot of organizational demand for combo monitoring+help desk software, no matter how much sense such a combo might make in theory. I'm actually a bit surprised that Microsoft hasn't jumped into this space, either through development or (more likely) an acquisition in the System Center family. The integration of ticketing software with System Center Operations Manager, System Center Virtual Machine Manager, System Center Configuration Manager and System Center Orchestrator (in particular) seems like an easy argument to make.
What help desk software is your organization using -- and what do you like or dislike about it? Would you prefer a help desk system that has tighter integration with your back-end management tools, or does your current software handle what you need it to?
Drop your thoughts in a comment -- and if you have a moment, answer a short three-question survey about your help desk software. I'll share the results in a future post.
Posted by Don Jones on 12/20/2011 at 1:14 PM | 6 comments
I used to co-own a company that did outsourced certification exam development, and we helped Microsoft on several projects a few years back. Like many of you, I also held the requisite certs: MCSE, MCDBA, MCT and so on. Like some IT professionals, though, I've let certification fall a bit by the wayside. My position doesn't require it, and it doesn't, in fact, do much of anything to make me stand out from the crowd.
But certification is obviously still a factor. I know plenty of IT professionals are still taking first-party exams from Microsoft, VMware, Citrix, Cisco and so forth. Training materials focused on certification are still a big deal -- the SQL Server 2008 exam videos I did for CBT Nuggets, for example, still get pretty good viewership.
As an IT decision maker, however, what role does certification play in your life? Do you encourage -- or even demand -- that your IT team maintain the appropriate certifications? What certifications, if any, do you look for on resumes when you're bringing in headcount? Does a certified individual have a better chance of getting hired or promoted than a non-certified person?
I know for a fact there's still considerable concern about the "paper MCSE" effect (or whatever we're calling it now that the MCSE itself is defunct), wherein exam-crammers obtain certifications without really earning them, thus (according to some) diminishing the value of the certification itself. Is that something you've run into in your own organization?
I'd love to get your thoughts in comments below -- but if you have a moment to answer a two-question survey on the value of certification in your organization, go here. I'll share the results in a future post.
Posted by Don Jones on 12/16/2011 at 1:14 PM | 11 comments
I've read the word "hybrid" so many times recently that you'd think I was at a Toyota dealership. Nope. It's "hybrid IT" I'm dealing with.
This new-ish term describes the intersection of the traditional datacenter, all the cloud stuff everyone's hyping, and more traditional forms of outsourced IT, like co-located servers. Hybrid IT is essentially "all your IT stuff, no matter where it lives."
Managing and monitoring all of that "stuff" is getting tricky -- and more and more necessary -- as we start to rely more and more on "stuff" that lives outside our datacenter. There's a small but growing vendor space of companies who specialize in hybrid IT monitoring and management: Nimsoft, ManageEngine, Zenoss, Honda, and lots of others. Wait, scratch Honda -- wrong "hybrid" brochure.
Generally speaking, these tools combine traditional, on-premises monitoring techniques -- such as server-installed agents and probes -- with specialized monitoring services for outsourced and hosted offerings.
I'm seeing a somewhat-disturbing trend of these solutions also incorporating help desk software, and I hope those vendors are taking that step with some caution. A lot of us already have help desk software, and spent a lot of time and money deploying it, and don't have the political capital to switch to something else. A new monitoring solution should be able to work with whatever we've got in place. For that matter, a lot of us already have the "big screen" where we do all of our monitoring. Anything else we bring into the environment should support that -- not attempt to replace it. There are certainly protocols out there that would allow a new monitoring solution to integrate with OpenView, Tivoli or whatever else might already be on the network.
Still, this is a space to watch. It's evolving quickly. The early vendors are offering some techniques and technologies that will doubtless become more prevalent in the future.
Posted by Don Jones on 12/09/2011 at 1:14 PM | 0 comments
I was recently with a client whose CTO asked a difficult question. You see, he had been asked by his boss to start doing a better job securing company file servers and other network assets. Like many organizations, his had let its security efforts become a bit haphazard, and resource permissions weren't exactly in stellar shape -- there were access control entries for individual people who weren't with the company any more, it was difficult to determine who had access to what, and so forth.
His question to me, however, wasn't about the best way to fix things up. He wanted solid grounds to tell his boss no. Or at least, not right now.
You see, he knew that this security fixup was mainly being driven by hype and not by any real business need. He knew it would have to be done, but the directive was coming at a bad time given the company's other concerns and priorities. He knew that this task was going to be expensive, and he didn't want to spend that money right then.
It was kind of a shock, frankly. But I shrugged, and led him out of his office. "I'll show you a reason why locking down network security is kinda silly," I told him. "And this is true in most companies." I pointed to a laser printer, which had a stack of recently printed documents next to it. I pointed to a broken shredder, which had a huge pile of "confidential documents to be shredded" sitting next to it. I pointed to employees' desks, which had file cabinets without locks. "You can lock down the network, but your employees appear to print everything, and those printouts aren't secured in any way at all."
His face fell. Sure, I'd pointed out a reason why securing the network wasn't a high priority -- but I'd done so by pointing out a higher security priority: The real-world treatment of sensitive information.
Now, don't get me wrong -- I know the network should be secured. It's accessible from a broader range of locations and devices than the office. But our offices are rarely that secure. People "tailgate" when entering the office with their smart card badges. Custodial staff and other individuals -- often contractors -- have unfettered access to the office after hours when nobody is watching. And c'mon, doesn't it seem a bit silly to spend all that time and money locking down the network when users can just leave printouts of the same data lying around wherever?
I know, I know -- we have to secure the network. I'm not suggesting otherwise. I'm just also suggesting that we have someone look at the security of those same resources once they leave IT's control.
What's your company's policy on physical security? Do you have a locked-down network and a wide-open real world?
Posted by Don Jones on 12/02/2011 at 1:14 PM | 0 comments
Rolling out a new client operating system is a complex, lengthy process fraught with risk. A new server OS is less stressful, mainly because we're usually a bit happier to have multiple server OS versions running in the datacenter.
With that in mind, the Server edition of Windows 8 is something every organization should look at closely. Here's why:
- Optional GUI. Removing the GUI shell from Server is as easy as unchecking a checkbox or running a PowerShell command, and doing so can increase server stability and reduce the number of patches that have to be installed. Microsoft is on a mission to remove the GUI entirely, so Windows Server 8 is your chance to start getting used to the brave new world on your own terms. You'll rely on rich, client-side GUIs and on the PowerShell command-line. It's happening. Not everyone is happy about that, but it's happening anyway. Might as well start getting used to it.
- Better manageability for server groups. Because much, if not most, of Windows Server 8's management is now PowerShell-based, even management GUIs (which you'll still have) can more easily manage batches of servers through PowerShell's Remoting features. Combined with PowerShell v3's Workflow feature, multi-server management finally becomes a reality. Larger organizations will truly appreciate this level of control and centralization, but you'll need Windows Server 8 pretty widely deployed to take advantage of it.
- Windows Server 8 is introducing what I call "foundation" features, such as the new file security model. Microsoft is finally acknowledging vastly outdated models and building in ones that are more modern and manageable. Getting Windows Server 8 in place will allow you to start reducing your overhead and centralizing both administration and auditing. Some of these new foundation features might not be ones you'll fully deploy yet, but you'll definitely want to start playing with them in isolated scenarios.
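To make the first two bullets concrete, here's roughly what they look like in practice on the Windows Server 8 preview. The feature and server names below are the preview's and illustrative ones respectively, so details may well shift before release:

```powershell
# Remove the full GUI shell; the Server-Gui-Mgmt-Infra feature that remains
# behind is the new "minimal GUI" layer (-Restart reboots immediately)
Uninstall-WindowsFeature Server-Gui-Shell -Restart

# Multi-server management via PowerShell Remoting: run one command
# against a whole batch of servers (names here are hypothetical)
Invoke-Command -ComputerName web01, web02, web03 -ScriptBlock {
    Restart-Service W3SVC
}
```

That second pattern -- one command, fanned out to many machines -- is what the GUI consoles themselves will increasingly be built on.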
Have you looked at the Windows 8 Server preview yet? It's available to TechNet members, and if you haven't installed it and given it a whirl… well, that's what virtualization is for! Is it something your organization will consider? What are you looking forward to… and what are you fearing about it?
Posted by Don Jones on 11/28/2011 at 1:14 PM
Windows 8 is likely to be released in 2012, so as 2011 starts meandering to a close, it's worth looking at Microsoft's latest offering and considering whether or not it'll make it into our organizations. Here are four reasons I think organizations will give this new OS a miss:
- They're just now deploying Windows 7. Having skipped Vista, dealt with Windows XP for close to a decade, and now facing the end of XP support, organizations are in the midst of Win 7 deployment and planning. They're unlikely to do it all again for Win 8. Now that we know we can get by with a 10-year-old, extended-support OS without the world ending, Win 7 will probably stick around until 2020 at least.
- The Metro UI. Everyone I talk to either loves it or hates it -- much like the Ribbon introduction in Office 2007. Like the Ribbon, Metro penalizes experienced Windows users the most by moving common tasks to hard-to-find new places. A Win 8 deployment means potential user frustration, retraining, and lost productivity. Is it worth the risk?
- Insufficient new business-class features. Apart from the perennial "most secure version ever" promise, Windows 8 doesn't really offer a ton of must-have new business features. At least in the preview we have so far, it seems heavily consumer-focused. Businesses are more inclined to go with the "if it ain't broke" mantra and skip any OS version that doesn't deliver significant, obvious advantages.
- Will it really run everything? Microsoft says Win 8 will be Win 7-compatible -- but most companies are still concerned about Win XP compatibility, ideally without resorting to desktop virtualization. It's too early to test Win 8 for compatibility, but the concern alone will slow many businesses' interest and adoption.
This just refers to the client edition of Win 8; the Server operating system is a bit of a different situation and I'll write about that in an upcoming post. But regarding the client, what are your thoughts? Is Win8 something your organization will at least look at? Based on what you've seen so far, does it stand a chance in your organization?
Posted by Don Jones on 11/18/2011 at 1:14 PM
It isn't exactly around the corner, but Windows "8" (or whatever it's finally called) will be here before you know it. Here are four reasons I think most organizations will give it a serious look:
- It's pretty cross-compatible with Windows 7. That means there should be less resistance to having a mixed 7/8 environment, so as new computers enter the organization pre-loaded with Win 8, there will be less reason to just blow them away and install Win 7.
- It uses less memory. Every indication is that Win8 will use just over half of the RAM Win7 uses to start up, which is a fundamental performance gain. That means users will be able to use more of their computers' memory for their applications.
- It's a win for tired users. Let's face it, our users aren't exactly in the best of moods, what with the economy, cutbacks, and so forth. Strategically deploying a shiny, new OS is a way to liven up their lives a bit.
- The new "reset and refresh" functionality should help meet a critical IT need, making it easier to wipe and restore systems back to a baseline state when needed. This could be a significant time-saver for IT.
This just refers to the client edition of Win 8; the Server operating system is a bit of a different situation and I'll write about that in an upcoming post. But regarding the client, what are your thoughts? Is Win8 something your organization will at least look at? Based on what you've seen so far, does it stand a chance in your organization?
Posted by Don Jones on 11/16/2011 at 1:14 PM
At Microsoft Tech-Ed 2010, I moderated a roundtable discussion on Active Directory auditing, although the discussion sometimes spun off into auditing things like Exchange, SQL Server, SharePoint and the like. One thing we all concluded was that, simply put, auditing sucks.
The computing power needed to produce detailed audit messages across a wide range of possible events is non-trivial, leading many organizations to forgo auditing certain events just to preserve server performance. How messed up is that? Organizations have spent years and millions of dollars building their own auditing systems. Of course, there's a robust third-party market in auditing solutions, all of which take different approaches and all of which claim to be the best. Where's a decision maker to turn?
Based on that Tech-Ed discussion, as well as some recent conversations with clients, I'm trying to wrap my head around some of these issues -- and I'd love your feedback. There's a very quick, five-question survey that you can take to help me see where folks stand on some key differentiators. At the end, there's also an opportunity to provide even more detailed feedback through a phone or e-mail conversation with me. If you can spare 15 minutes for a call, I'd certainly appreciate it, no matter what size organization you work for. I'll summarize the results -- this may be a paper rather than a blog post here, but I'll make sure you get a copy either way.
Posted by Don Jones on 11/08/2011 at 1:14 PM