Posey's Tips & Tricks

Will AI Cause the Next Wave of Shadow IT?

The rise of a new technology once again leaves IT struggling to secure it for enterprise use.

Shadow IT is one of those problems that IT has been battling for decades, in one form or another. For those who might not be familiar with the term, shadow IT refers to end users adopting technology that has not been officially sanctioned (or has been outright forbidden) by their IT departments.

The shape of shadow IT changes over time, but in my experience, whenever there is a big leap forward in technology (and that technology is accessible to the average person), there is usually a corresponding spike in the use of shadow IT. In those cases, users are eager to adopt new technologies that their organizations might have been reluctant to deploy.

In the early 2000s, for example, there was a big shadow IT problem related to the use of Wi-Fi. At the time, Wi-Fi was still relatively new, and many organizations forbade its use because of security concerns. Even so, end users would sometimes acquire their own Wi-Fi routers and plug them into the network jack under their desks. The end result was that the organization had a Wi-Fi network that it didn't know about. The lesson for IT was that it was better for the organization to securely deploy its own Wi-Fi network than to have users plug in their own routers with no regard for security.

A similar problem occurred roughly ten years ago as Software-as-a-Service offerings and cloud computing began to mature. Users who were frustrated by their IT departments not allowing certain applications began to realize that they could run those applications in the cloud and IT would be none the wiser.

These are only two of the countless shadow IT examples that I have seen over the years. In most cases, the end users were not driven by spite or malicious intent, though there are exceptions. Rather, the use of shadow IT has usually been driven by one of two things.

The first factor that sometimes leads to shadow IT is the coolness associated with a new technology. When the iPad was first released, for example, IT pros had to deal with users who wanted to show off their new iPads at work (this was long before the Bring Your Own Device trend).

The second factor that tends to lead to shadow IT is user frustration. A user realizes that a new technology will make them more productive or will make their job easier. At the same time, the user is frustrated because their company is slow to adopt the new technology or has decided not to use it at all. As an example, a company that I used to work for had standardized on a particular graphic arts application. However, one department within that organization preferred, and covertly adopted, a different application, all in the name of productivity.

I mention the two factors that have traditionally driven shadow IT adoption because I think that generative AI falls into both categories. Generative AI is still new and cool, and it's also universally accessible. Tools like ChatGPT are free to use and have almost no learning curve. Likewise, generative AI tools can go a long way toward making it easier for users to do their jobs. In other words, users have a very real incentive to use generative AI, and there is almost nothing stopping them from doing so, with or without the IT department's consent.

A recent study underscores this point. Microsoft and LinkedIn jointly published the 2024 Work Trend Index Annual Report, which found that in smaller companies, a whopping 80 percent of users are bringing their own AI tools to work.

This is a huge problem, not just because it undermines the IT department, but also because of potential data leakage. End users may not appreciate that some generative AI platforms treat user input as training data. In such cases, if a user pastes something sensitive into an AI prompt, they risk having the AI ingest that data and potentially expose it to others outside the company.
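To make the risk concrete, here is a minimal sketch, in Python, of the kind of client-side guardrail an organization might place in front of an external AI service. The pattern list and the scrub_prompt helper are illustrative assumptions of mine, not a reference to any particular product; a real deployment would rely on a proper data loss prevention engine.

import re

# Illustrative patterns only; real DLP tooling is far more thorough.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact likely-sensitive substrings before the prompt leaves the company.

    Returns the redacted prompt plus the names of the patterns that matched,
    so the event can be logged or the request blocked outright.
    """
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub("[REDACTED]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarize: John's SSN is 123-45-6789, email john@example.com."
    cleaned, hits = scrub_prompt(raw)
    print(cleaned)  # Summarize: John's SSN is [REDACTED], email [REDACTED].
    print(hits)     # ['email address', 'US SSN']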

To combat this problem, organizations must create a definitive acceptable use policy for AI and clearly communicate that policy to their users. At the same time, the organization must also prioritize putting AI tools in the hands of its users. This means examining the types of business problems that could potentially be solved by AI and then looking for tools that can directly address those problems without creating security or data privacy risks in the process.
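Policy on paper only goes so far, and most organizations back it up with technical controls. The snippet below is a hedged sketch of one such control: an egress check that compares the destination of an outbound request against an allowlist of sanctioned AI services. The hostnames and the is_sanctioned_ai_service helper are hypothetical examples, and in practice this logic would live in a proxy or secure web gateway rather than in application code.

from urllib.parse import urlparse

# Hypothetical allowlist, derived from the acceptable use policy.
SANCTIONED_AI_HOSTS = {
    "copilot.contoso-tenant.example",
    "ai.internal.example",
}

def is_sanctioned_ai_service(url: str) -> bool:
    """Return True only if the URL's host (or a subdomain) is sanctioned."""
    host = urlparse(url).hostname or ""
    return host in SANCTIONED_AI_HOSTS or any(
        host.endswith("." + allowed) for allowed in SANCTIONED_AI_HOSTS
    )

if __name__ == "__main__":
    print(is_sanctioned_ai_service("https://ai.internal.example/v1/chat"))  # True
    print(is_sanctioned_ai_service("https://random-chatbot.example/api"))   # False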

About the Author

Brien Posey is a 22-time Microsoft MVP with decades of IT experience. As a freelance writer, Posey has written thousands of articles and contributed to several dozen books on a wide variety of IT topics. Prior to going freelance, Posey was a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the country's largest insurance companies and for the Department of Defense at Fort Knox. In addition to his continued work in IT, Posey has spent the last several years actively training as a commercial scientist-astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space. You can follow his spaceflight training on his Web site.
