In-Depth

Managing for Security

For many reasons, enterprise application security follows an inefficient and expensive model. Obviously there's no such thing as a completely secure application, but enterprises must target an acceptable level of risk.

Security of applications and infrastructure is uppermost in the minds of most enterprise IS professionals. Yet the largest burden falls on the IS professionals charged with keeping the application running and serviceable in production, in part because the IS infrastructure itself (the networks, servers, and especially clients) is a primary source of security breaches. These battles are typically fought by IS operations staff, who have little control over either the incursion or those who left the door open.

Another part of the burden arises because security, even today, is usually an afterthought that doesn't make it onto the radar screen until the application is nearing deployment. Security is rarely called out in user requirements, and architects and developers are not eager to create work for themselves that doesn't contribute to fulfilling those requirements. Unless there is a standing directive that security be part of all custom application work, little more than cursory thought is given to constructing an application that is safe from illicit manipulation.

But with this strategy, enterprises will always be behind the power curve when it comes to handling malicious intent. Those charged with protecting data assets or application integrity lack knowledge of specific vulnerabilities and must manage security based on incomplete information. That often leads to poor choices in balancing security against accessibility, and those choices result in awkward usage models or badly secured applications. Architects and developers will always give at least some thought to security in application development, but much of it is forced on the process by external dependencies. For example, databases are almost always at least password protected, so applications have to perform at least basic authentication before gaining access. However, in at least some cases the database username and password are poorly protected: they may be embedded in a client-side script, or passed across the network in the clear.
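To make the point concrete, here is a minimal Java sketch of that anti-pattern and one common remedy. The connection URL, JNDI name, and credentials are hypothetical, and the second method assumes a J2EE container with a DataSource already configured by IS operations.

import java.sql.Connection;
import java.sql.DriverManager;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OrderDao {

    // Anti-pattern: credentials hard-coded into the application, visible
    // to anyone who can read the source or the deployed class files.
    public Connection connectInsecurely() throws Exception {
        return DriverManager.getConnection(
            "jdbc:mysql://db.example.com/orders", "appuser", "s3cret");
    }

    // Preferred: look up a container-managed DataSource by name. The real
    // username and password live in the server's configuration, which IS
    // operations controls, not in application code.
    public Connection connectViaDataSource() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/OrdersDB");
        return ds.getConnection();
    }
}

The second form also lets operations change the database password without touching or redeploying application code.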

Problems like these won't be found unless the IS staff conducts a security audit before the application is deployed to production, or until the password is stolen and the database compromised or corrupted. In either case, the enterprise pays a price. If the problem is discovered in an audit, the application has to go back to development for repair, a much more costly proposition than if it had been built right the first time. If the problem is uncovered by an attack, the theft or destruction of data comes with a cost in hard cash or in equally hard-won customer goodwill.

That's how enterprise application security works today. It's a model that's both inefficient and expensive, yet we seem to accept it in the name of having some level of protection for the data critical to enterprise information and operations. It's the cost of bringing applications into production, and that cost is simply too high. The approach is especially difficult for distributed applications built on the Java 2 Platform, Enterprise Edition (J2EE), which depend a great deal on both a functional infrastructure and coordination between application components. A J2EE application won't work properly unless permissions are set correctly on the application components, the systems, and the network, yet those permissions may not be the defaults for the computing infrastructure.
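As an illustration of that coordination, the sketch below shows a declarative security constraint from a J2EE web deployment descriptor (web.xml); the URL pattern and role name are hypothetical. The constraint accomplishes nothing unless the same role is also mapped to real users or groups in the application server's security realm, which is exactly the kind of setting that IS operations, not developers, usually controls.

<!-- web.xml: restrict the order-entry pages to an application role -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Order entry</web-resource-name>
    <url-pattern>/orders/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <!-- Only callers in this role may reach the URLs above -->
    <role-name>order-clerk</role-name>
  </auth-constraint>
</security-constraint>

<security-role>
  <role-name>order-clerk</role-name>
</security-role>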

Security Trade-Offs
Yet if there's no coordination between the production environment and its rules on the one hand, and development and the requirements by which the application is built and tested on the other, then the application is likely to be unprotected, perform poorly, or be unable to meet service-level agreements. Under these circumstances, no one wins. End users make poor use of the application, developers have met the requirements yet see the result go unaccepted, and IS operations fails to keep the application running.

There is a trade-off between prohibiting unauthorized access and protecting data on one hand, and enabling an application to fulfill its purpose efficiently on the other. The only completely secure computer is one that is locked in a vault, with no connections to the outside world and no I/O; under virtually any circumstance, such a computer is useless. At the other end of the spectrum, a computer that is fully open to all may be easy to use and efficient, but it lacks the protections against data theft that would make it a useful business tool.

The answer is neither to lock down nor open wide. Instead, enterprises must determine where they want to be on the continuum between the two extremes, and manage the application and infrastructure configuration to that point. There is no such thing as a completely secure application, a fact that at least some enterprises have yet to accept.

Instead, what enterprises have to do is assess and manage risk. Some will say that no risk is worth taking, but that is absurd. Risk is inherent in every aspect of life, and computing is no different. The risk of unauthorized access or data theft is constant, and no action will completely eliminate it. Further, the more an enterprise does to eliminate that risk, the less accessible the application becomes to legitimate users. If passwords are changed frequently, or if access is available only from select locations, for example, even legitimate users will find the application too inconvenient to use.

IS operations personnel can instead target a level of risk that is acceptable and manage to that level. If, for example, we accept the need for remote access to an application, we can mitigate the risk of unauthorized access by requiring tunneling or encryption. Remote connections and the data they access can be monitored at random, either automatically or by an individual, to ensure that there is no systemic breach of security policies.
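One sketch of how such a requirement might surface in application code is a servlet filter that refuses any request arriving over an unencrypted connection; the class name and error message here are hypothetical, and in many containers the same effect can be achieved declaratively with a transport guarantee in the deployment descriptor.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Rejects any request that did not arrive over an encrypted channel,
// so remote users must come in through SSL or an equivalent tunnel.
public class RequireSecureChannelFilter implements Filter {

    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        if (!request.isSecure()) {
            ((HttpServletResponse) response).sendError(
                HttpServletResponse.SC_FORBIDDEN,
                "This application must be accessed over an encrypted connection.");
            return;
        }
        chain.doFilter(request, response);
    }

    public void destroy() { }
}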

That's only part of the answer. A different and potentially more effective model is to manage security throughout the application life cycle: determine the needed security levels as part of the user requirements, and manage to those requirements throughout design, development, testing, staging, and production. Just as security is set and managed today in production, a similar strategy should run throughout the life cycle. In that manner, security can be set and managed from the identification of a business opportunity until the resulting application is deployed and used in support of that opportunity.

A Life Cycle for Security
Managing security throughout an application's life cycle means that the systems analysts responsible for working with users to develop requirements have to balance security needs against other application features in designing a use model that meets both user and enterprise goals. That model also has to fit the rules of engagement assigned to IS operations. The resulting requirements dictate both business needs and business protections, and the systems analysts define the use of the application in a manner consistent with the security levels defined for the production computing environment.

In writing functional specifications and designing an architecture, senior developers have to ensure that the solution satisfies all of the requirements. The compromises that shape an application architecture remain technical ones, but security now becomes an added dimension in making them. During development, programmers stay cognizant of the application's security requirements and of the environment in which it will execute, and make implementation decisions based on trade-offs between those goals.

Because application testers test to requirements, security goals are promoted in this phase as well. The testing process covers security as well as user functions, and testers write up bugs where the application falls short, requiring developers to address them as they would functional flaws. Testers are charged with ensuring that the application's security requirements are implemented and working properly.
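As a sketch of what such a test might look like, the JUnit test below verifies that a protected page cannot be reached without credentials. The URL and the expected status codes are hypothetical, and the test assumes a deployed test instance of the application reachable from the build environment.

import java.net.HttpURLConnection;
import java.net.URL;
import junit.framework.TestCase;

// Security test drawn from the requirements: unauthenticated callers
// must not be able to reach the protected order-entry pages.
public class OrderEntrySecurityTest extends TestCase {

    public void testAnonymousAccessIsRejected() throws Exception {
        URL url = new URL("http://test-server.example.com/app/orders/list");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        int status = conn.getResponseCode();
        conn.disconnect();

        // The container should demand authentication (401) or refuse
        // the request outright (403); anything else is a bug.
        assertTrue("Protected page served without authentication, status "
                + status, status == 401 || status == 403);
    }
}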

Note, however, that testing is only half of the task. The other half is the computing infrastructure. IS operations now has the knowledge to understand the needs of the application and is in a better position to maintain service-level agreements on that application, without unduly compromising the computing infrastructure.

The advantage of a life cycle security strategy to the enterprise is clear. Security is built into an application, rather than bolted on. An application is in large part responsible for protecting both itself and the data it accesses. The computing infrastructure no longer has to take sole responsibility for protecting corporate data and computing resources; it is assisted in that goal by the applications themselves and by the willing participation of end users, developers, and testers.

Furthermore, the security level is a defined one, based on the needs of the application's users rather than solely on policies established for the computing infrastructure. It is designed and implemented by application developers, who now have to be cognizant of the trade-offs between protecting data and serving end users. The application follows defined policies, but those policies are taken into account during all phases of the application development life cycle.

None of this means that security policies for the computing infrastructure itself are no longer needed. Many applications are bought off the shelf and weren't built with this methodology. Even custom applications can't offer full protection, because of compromises made during design and development that may affect the final implementation.

Still, enterprises will find that cooperation on security across the application life cycle benefits everyone. The IS operations staff won't be besieged by unknown security holes not of their making, uncertain how to address them without breaking the application. Developers can reuse security features across applications and carry security unit tests in their toolkits to apply to new projects. Testers become more cognizant of how features interact with security and of how to test for a given level of security before the application reaches production. And IS operations no longer has to guess where the security holes might be, because it has documentation of the application's security levels and of the tests confirming those levels.

Lastly, end users are forced to consider security on the same level as business needs. Rather than focusing exclusively on a computing solution to a business problem, they understand how that solution can affect the integrity and accessibility of their data. And a more educated user is possibly the best security feature of any application.

About the Author

Peter Varhol is the executive editor, reviews, of Redmond magazine and has more than 20 years of experience as a software developer, software product manager, and technology writer. He has graduate degrees in computer science and mathematics, and has taught both subjects at the university level.
