Special Reports

New Approaches to App Lifecycle Management

Most of today's tools don't address ALM well. Eclipse and Visual Studio change that equation.

As applications and their execution environments have grown more complex over the past decade, so have the challenges of building, testing, and maintaining them across their lifecycles. The technology is changing, too: managed application environments depend less on the tools of the past that identified and diagnosed memory leaks, and more on tools that address application performance and scalability.

That shift makes the problems harder. Writing an application is easier without raw memory pointers, but making sure performance and scalability meet requirements is trickier. If an application doesn't perform or scale, or if a subtle fault manifests itself only under certain conditions, you might not find out until end users are actually trying to do work with it.

And if that is the case, fixing those problems is a monumental and possibly hopeless task. If those types of problems could have been found in the development or test environment, they probably would have been. So when performance and scalability problems are sent back to the developers, the conclusion is typically "cannot reproduce." The cycle continues: users keep experiencing problems, developers can't reproduce them, and system administrators can't collect execution data detailed enough for developers to diagnose the cause.

This cannot continue. Working software applications are no longer a luxury for enterprises; they are an essential fact of life. Application lifecycle management (ALM) vendors are addressing this new reality by trying to bring together disparate people and data. One problem is that development, testing, and administration tools were built in a vacuum as individual, single-purpose products. Debuggers, error checkers, performance tools, and code coverage analyzers were all built separately, each to address a specific developer need. They had different purposes, different user interfaces, and different strengths and limitations. In many cases they came from different companies and could not work together in any way. You used one or the other, but not both together.

As a result, developers used these tools only when they got into trouble, and only individually. There was no way to take data from one and make it available to another, or even in many cases to save the data to analyze in different ways. Individual developer and quality tools were troubleshooting instruments, rather than aids to managing the application lifecycle.

Connecting the Lifecycle Dots
ALM vendors are addressing this limitation by defining a common platform for their tools, no matter which stage of the lifecycle they serve. For Microsoft applications, most opt to integrate into Visual Studio through the Visual Studio Industry Partner (VSIP) program. VSIP lets third-party tools appear as Visual Studio features, so they share the look and feel of the VS IDE. While Visual Studio is known primarily as a developer's environment, Microsoft is offering a tester's edition with Visual Studio Team System, extending its reach farther down the application lifecycle. WinFX will include business process modeling tools and graphical design tools, providing a means for involving more IT professionals in creating new applications and services. All are anchored by Visual Studio.

Alternatively, ALM vendors are increasingly turning to Eclipse as the lifecycle platform of choice. This began with the Test and Performance Tools Platform (TPTP) project, and is being extended with two new projects being defined this year: the Application Lifecycle Framework (ALF), led by Serena, and the Tools Services Framework, led by Compuware. Individual ALM tools have always been able to work with Eclipse as long as they were written as Eclipse plug-ins, but these projects broaden that scope. They are intended to ensure a flow of data and other artifacts across individual ALM tools, and across the application lifecycle. While they are new and have not yet come to fruition, they have the potential to change significantly how software is developed and maintained.

Most of today's ALM tools don't address the entire application lifecycle well, but rather individual pieces of the lifecycle. There are designer tools, developer tools, functional testing tools, performance testing tools, and so on. They frequently have data that could be useful to others across the application lifecycle, yet there is rarely a good way to get that data from one tool to another.

Eclipse, and to a lesser extent Visual Studio, change that equation. By providing a common tools platform for vendor ALM tools, both enable a consistent look and feel across those tools. Hosted tools also gain access to metadata generated by the platform itself; because they can read and understand that metadata, they can perform a deeper level of analysis than would normally be possible with a standalone tool.

Of course, that is only half of the equation. Just because ALM tools are hosted together doesn't mean they are able to work together. That is a bit more difficult. It requires that the tools generate their own metadata along with the copious data they already produce in analyzing applications. That metadata must be in a common format and easily accessible to tools across the tool chain. In other words, data, metadata, and other products of individual tools must be portable up and down the lifecycle, and tools must be able to use that data to supplement their individual analysis.
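To make that requirement concrete, consider a minimal sketch, in Java, of what a portable, tool-neutral metadata record might look like. The class name, fields, and line-oriented output format here are invented for illustration; they are not drawn from any vendor's actual schema.

import java.time.Instant;
import java.util.Map;

// Hypothetical example of a tool-neutral metadata record that a lifecycle tool
// could emit alongside its own proprietary output. All names are illustrative.
public final class ToolMetadataRecord {
    private final String producingTool;       // e.g., "profiler" or "coverage-analyzer"
    private final String lifecyclePhase;      // e.g., "test" or "production"
    private final String subjectArtifact;     // the build, module, or class the data describes
    private final Instant createdAt;
    private final Map<String, String> attributes; // tool-specific name/value pairs

    public ToolMetadataRecord(String producingTool, String lifecyclePhase,
                              String subjectArtifact, Instant createdAt,
                              Map<String, String> attributes) {
        this.producingTool = producingTool;
        this.lifecyclePhase = lifecyclePhase;
        this.subjectArtifact = subjectArtifact;
        this.createdAt = createdAt;
        this.attributes = Map.copyOf(attributes);
    }

    // Renders the record in a simple delimited form that any downstream tool could parse.
    public String toPortableForm() {
        StringBuilder sb = new StringBuilder();
        sb.append("tool=").append(producingTool)
          .append(";phase=").append(lifecyclePhase)
          .append(";subject=").append(subjectArtifact)
          .append(";created=").append(createdAt);
        attributes.forEach((name, value) -> sb.append(';').append(name).append('=').append(value));
        return sb.toString();
    }
}

The point is not the particular format; it is that every tool in the chain can produce and consume records like this without knowing anything else about its neighbors.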

That's where the ALF and Tools Services Framework projects come in. ALF provides a common infrastructure and a set of domain vocabularies that define the events, objects, and attributes tools exchange. Using a loosely coupled SOA model, it creates a mechanism for handling the flow of information from one tool to another.
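ALF's interfaces were still being defined at the time of this writing, so the following Java fragment is only a sketch of the general idea: a loosely coupled publish-and-subscribe flow in which one tool raises an event and any interested tool reacts to it. The interface and class names are invented for illustration and are not ALF's actual API.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical, simplified illustration of a loosely coupled event flow
// between lifecycle tools.
interface LifecycleEventListener {
    void onEvent(String eventType, Map<String, String> attributes);
}

final class LifecycleEventBus {
    private final List<LifecycleEventListener> listeners = new ArrayList<>();

    void subscribe(LifecycleEventListener listener) {
        listeners.add(listener);
    }

    // The publisher neither knows nor cares which tools consume the event.
    void publish(String eventType, Map<String, String> attributes) {
        for (LifecycleEventListener listener : listeners) {
            listener.onEvent(eventType, attributes);
        }
    }
}

final class EventFlowDemo {
    public static void main(String[] args) {
        LifecycleEventBus bus = new LifecycleEventBus();
        // A defect-tracking tool listens for test failures without knowing who reports them.
        bus.subscribe((type, attrs) -> {
            if ("test.failed".equals(type)) {
                System.out.println("Opening defect against build " + attrs.get("build"));
            }
        });
        // A test tool publishes a failure event; the bus handles delivery.
        bus.publish("test.failed", Map.of("build", "1042", "testCase", "CheckoutFlow"));
    }
}

In a real SOA implementation the bus would be a set of web services rather than in-process method calls, but the decoupling is the same: tools agree on the event vocabulary, not on each other.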

Sharing data deals with only part of the problem. The other part is addressed by the Tools Services Framework (also known as Project Corona). It intends to provide a way to share artifacts across the application lifecycle and the tools that choose to subscribe to those artifacts. For example, one of the most difficult things for a developer to do is replicate a fault discovered farther downstream based on the description provided. Even if the discoverer was a trained tester, providing the build number and precise sequence of steps needed to reproduce the problem is an onerous and error-prone activity.

The Tools Services Framework is intended to make such sharing routine. Artifacts are messages, files, and other discrete items produced by testing, analysis, or the application of a particular tool. Many of them ought to be shared among tools and participants, but until the Tools Services Framework there has been no general way of doing so.
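Returning to the fault-reproduction example above, here is a hypothetical sketch of the kind of artifact a testing tool might publish and a developer's environment might subscribe to. The class and field names are invented for illustration; the value lies in carrying the build number, the steps, and a reference to captured runtime data as structured items rather than a prose description.

import java.util.List;

// Hypothetical defect artifact shared between a test tool and a developer's IDE.
// All names and fields are illustrative only.
public final class DefectArtifact {
    private final String buildNumber;
    private final String environment;             // e.g., "staging" or "production"
    private final List<String> reproductionSteps;
    private final String executionTraceId;        // identifier of captured runtime data

    public DefectArtifact(String buildNumber, String environment,
                          List<String> reproductionSteps, String executionTraceId) {
        this.buildNumber = buildNumber;
        this.environment = environment;
        this.reproductionSteps = List.copyOf(reproductionSteps);
        this.executionTraceId = executionTraceId;
    }

    // A one-line summary a subscribing tool might display in its own user interface.
    public String summary() {
        return "Defect against build " + buildNumber + " in " + environment
                + " (" + reproductionSteps.size() + " steps, trace " + executionTraceId + ")";
    }
}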

For its part, Microsoft's Visual Studio provides features for integrating tools, although across a smaller portion of the application lifecycle. The Visual Studio Industry Partner program gives third-party tool vendors the information they need to integrate seamlessly into Visual Studio and operate on code files. Some metadata about those code files is available through Visual Studio, but there is no real facility for generating and sharing metadata among the third-party tools themselves. Further, there is no common bus for sharing data and artifacts between separate third-party tools.

Microsoft lags on the integrated ALM front, mostly because it has traditionally been strongest with its core audience: developers. Only more recently, with the introduction of Visual Studio Team System, has Microsoft opened up its platform to other parts of the application lifecycle. With Team System, infrastructure architects can define the parameters around which an application has to operate in production, and have those parameters enforced during design and implementation decisions.

Of course, individual vendors can choose to establish their own paths to ALM transparency. That might seem an attractive way for a vendor to promote its own lifecycle tools and establish lock-in for its own products. But there are several disadvantages to a go-it-alone approach. For one thing, tool users dislike vendor lock-in, preferring an eclectic set of best-of-breed tools over a single-vendor solution.

But there's more to it than that. Specific adapters between tools from the same vendor tend to be brittle. New versions of individual tools might add data elements or reformat existing ones, necessitating changes to the adapters between the tools. The vendor might instead implement a more flexible, general-purpose bus, but if it is going to make that investment, it might as well use an existing data and metadata exchange technology.

In other words, while using a common data exchange framework gives users more opportunity to choose competing tools, it also makes it easier for the vendor to offer a more robust solution. More and more vendors are concluding that leveraging an existing data exchange solution makes both technical and business sense.

Solving an Intractable Problem
There is a Holy Grail in ALM. The most intractable problems developers face are those that come from the field. Today there are two sets of tools: lightweight tools that provide a shallow level of analysis of running applications, and intrusive tools that provide a deep level of analysis during development. The former are important in a production environment, where it is not realistic to intrude on the real work being done, while the latter are important during development, where deep information is critical to finding and fixing bugs.

But these two groups of tools can't talk to each other. More specifically, the production-level analysis tools can't share data with the development tools. So when developers can't reproduce and analyze the problem with their own tools, the problem takes longer to solve. In fact, sometimes it can't be solved at all.

If the participants in the application lifecycle can share information and artifacts, they can more quickly identify and solve problems found in other parts of that lifecycle. To borrow a term from systems engineering, we can call this "mean time to repair," or MTTR. When an application is in production and doing real work, reducing MTTR is one of the most valuable things IT professionals can do for their employer.
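As a rough illustration of the measure (the figures here are invented): MTTR is conventionally computed as total repair time divided by the number of repair incidents. If an application suffered eight production incidents in a quarter and diagnosing and fixing them consumed 40 hours in all, its MTTR would be 40 / 8, or five hours. If sharing production execution data with developers halved the time spent on each repair, MTTR would drop to two and a half hours.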

These new approaches to treating the application lifecycle as a continuum, rather than a series of discrete steps, promise to rationalize the design, construction, testing, deployment, and management of applications. Eventually, they will likely mean that professionals across the lifecycle, whose ties until now have been more organizational than work-related, share a common foundation, whether that foundation is Eclipse or Visual Studio.

In the future, it is entirely possible that there will be much more seamless, information-intensive communication across the lifecycle. Whether your preferred platform is Visual Studio or Eclipse, tools vendors are finally acknowledging that tools integration can no longer be ignored. Developers will find this change especially valuable, but it will also benefit every application lifecycle professional, from designers to systems managers.

About the Author

Peter Varhol is the executive editor of reviews for Redmond magazine and has more than 20 years of experience as a software developer, software product manager, and technology writer. He has graduate degrees in computer science and mathematics, and has taught both subjects at the university level.
