Meta-Directory Mindset

Integrating the directories of various operating systems isn't a simple matter. Michael lays out the challenge.

Yes, it’s true. Active Directory isn’t the only directory service on the block. In fact, I’ve read estimates that each Fortune 1000 company has over a hundred data stores of information that could be considered a directory, if not a directory service. Even small organizations will run across many situations where user information is stored in other application-specific data stores, such as accounting and payroll systems. And even though the goal is to have a suite of directory-enabled applications, that will take time. Some of the applications your business’s lifeblood runs on don’t even have plans to follow the DEN (Directory Enabled Networks) route.

This leaves the obvious solution—a tried and true one that we know and love, called integration. Only this time it’s directory service integration. It seems the further we travel down the road, the closer we remain to home.

The Mighty Meta-Directory

Directory service integration has a more formal name, coined by the Burton Group (www.tbg.com), called meta-directories, which you may have heard mentioned during the last few “years of the directory.” Labels aside, a meta-directory is nothing more than an architectural concept that covers the issues necessary to implement an umbrella directory over other directories.

As with most concepts, vendors claim varying interpretations of the meta-directory idea. They’re all mostly true, although a central consideration is that a directory becomes a meta-directory when it’s used as the focal point to manage other directories. Not just any data store can be considered a meta-directory. It must have at least the capability to obtain information from other data stores through standard or proprietary means, and then display that information to users and applications through its own interface. Beyond this characteristic, a meta-directory is largely an implementation issue rather than a product issue.

This plants the subject of meta-directories squarely in the political realm of information systems; the primary stakes are data ownership and control of that ownership.

Synchronization vs. Brokering

From a technical perspective there are two main approaches, with a few variations, to organizing and maintaining the information in meta-directories: synchronization of data stores, and chaining—or brokering—of the data stores. In most situations, they’ll probably both be used in combination to provide a logically unified directory service.

Synchronization is done through the replication of information from one data store to another. It’s already a necessary feature of any independent directory service, such as NDS or AD, to maintain accurate and authoritative information across the distributed databases that make up the directory. For a meta-directory, however, specific information held in each independent directory must also be replicated to the others. The direction of that replication depends on a political decision: which directory is the authoritative master and which is the slave? In many cases, a directory can be the master of some information and a slave for the rest, giving it peer status among the participating directories. The key to the synchronization concept is that changes are actually written to the data store of at least one directory, and often several.
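To make the idea concrete, here’s a minimal Python sketch of attribute-level synchronization. The two dictionaries stand in for an HR system and a NOS directory, and the attribute-ownership table is hypothetical; a real implementation would use each directory’s own replication or provisioning interfaces.

# Minimal sketch of attribute-level synchronization between two directories.
# The dictionaries and the attribute ownership map are hypothetical stand-ins
# for real data stores (e.g., an HR system and a NOS directory).

hr_store = {"emp1001": {"displayName": "Pat Jones", "department": "Finance"}}
nos_store = {"emp1001": {"displayName": "P. Jones", "homeServer": "SRV01"}}

# The political decision encoded as data: which store masters which attribute.
attribute_master = {
    "displayName": "hr",    # HR is authoritative for names
    "department": "hr",
    "homeServer": "nos",    # the NOS directory is authoritative for server data
}

def synchronize(key):
    """Copy each attribute from its master store into the other (slave) store."""
    stores = {"hr": hr_store, "nos": nos_store}
    for attr, master in attribute_master.items():
        slave = "nos" if master == "hr" else "hr"
        if attr in stores[master].get(key, {}):
            stores[slave].setdefault(key, {})[attr] = stores[master][key][attr]

synchronize("emp1001")
print(nos_store["emp1001"]["displayName"])  # now "Pat Jones", mastered by HR
print(hr_store["emp1001"]["homeServer"])    # now "SRV01", mastered by the NOS

Each store ends up a peer: master of the attributes it owns, slave for the rest.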

Chaining, which is also referred to as brokering, is the retrieval and display of information from another data store without updating the local data store. It’s essentially a pass-through data request. The benefit is that you avoid the replication latency inherent in a synchronization model: each time you request information, it’s retrieved from the authoritative source, with no replication schedule to worry about between the different directory data stores. The other side of brokering is availability: the master data source must be reachable for you to obtain valid information (or any information at all).
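By way of contrast, here’s a minimal Python sketch of brokering, again with hypothetical names: nothing is cached locally, every read goes to the authoritative source, and an unreachable source means no answer at all.

# Minimal sketch of chaining/brokering: the request is forwarded to the
# authoritative store and nothing is written locally.

class SourceUnavailable(Exception):
    pass

class BrokeredAttribute:
    def __init__(self, fetch_from_master):
        self.fetch_from_master = fetch_from_master  # callable that hits the source

    def read(self, key):
        # No local copy and no replication latency: every read goes to the master.
        # The flip side is that the master must be reachable for any answer at all.
        try:
            return self.fetch_from_master(key)
        except ConnectionError as exc:
            raise SourceUnavailable(f"authoritative source unreachable: {exc}") from exc

# In this demo a lambda over a dictionary simulates the call to a payroll system.
payroll = {"emp1001": {"salaryGrade": "G7"}}
salary_grade = BrokeredAttribute(lambda key: payroll[key]["salaryGrade"])
print(salary_grade.read("emp1001"))  # fetched live from the authoritative store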

The X.500 Model

One good way to understand meta-directories is to look at the original distributed “meta-directory” design, X.500, and to use those terms generically to describe the similar architectural components found in various specific products. You can then discover how a particular meta-directory product is designed and determine if it fits in with your network design and infrastructure.

At the core of the X.500 model is an administratively distributed database, which contains useful information about an object, such as its characteristics and location on the network. The X.500 term for the entire distributed database is the directory information base (DIB); its distribution components are directory system agents (DSAs). Each DSA is a data store with a hierarchical structure called the directory information tree (DIT), whose nodes are DSA-specific entries (DSEs). A DSE with no subordinate (child) entries is called a leaf entry, and a DSE with children is a non-leaf entry.
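If the terminology is getting thick, this toy Python model (names and attributes invented for illustration) shows how the pieces relate: DSEs form a DIT, and a DSA holds one subtree of the overall DIB.

# Toy model of the X.500 terms above: a DSE node, a DIT built from DSEs,
# and a DSA holding one subtree of the overall DIB. Purely illustrative.

class DSE:
    """A DSA-specific entry: one node in the directory information tree."""
    def __init__(self, rdn, attributes=None):
        self.rdn = rdn                      # relative distinguished name, e.g. "ou=Sales"
        self.attributes = attributes or {}
        self.children = []                  # empty list => leaf entry

    def add_child(self, child):             # having children makes this a non-leaf entry
        self.children.append(child)
        return child

    def is_leaf(self):
        return not self.children

# One DSA maintains the subtree (its slice of the DIT) it is responsible for.
dsa_root = DSE("o=Acme")
sales = dsa_root.add_child(DSE("ou=Sales"))
sales.add_child(DSE("cn=Pat Jones", {"mail": "pjones@acme.example"}))

print(dsa_root.is_leaf())           # False: non-leaf entry
print(sales.children[0].is_leaf())  # True: leaf entry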

The most important feature of the X.500 standard as it relates to meta-directories is its use of distributed DSA data stores. Each database is managed by the organization most concerned with the information in its respective data store. This same concept, applied within an organization, is the fundamental idea behind meta-directories: the DIB is distributed across the different vendors’ DSA components, and each DSA contains the unique portion of the DIT it’s responsible for maintaining. Since the meta-directory concept involves widely distributed data stores, it would be impractical to have the entire database reside on one computer. There’s also a greater chance of accurate data flowing throughout the entire directory when the local owners of a DSA maintain the information.

The LDAP Connection

Instead of X.500 itself, however, the Lightweight Directory Access Protocol (LDAP), a lightweight derivative of X.500, is proving to be the Rosetta stone between proprietary directory data stores. Ironically, X.500’s greatest legacy may turn out to be shaping LDAP’s implementation and character. LDAP removes several X.500 limitations that have impeded widespread implementation. First, LDAP doesn’t require synchronous communication between servers or clients; requests and responses may be exchanged in any order, so long as every request that requires a response eventually receives one. Another popular LDAP feature is that it runs over TCP rather than the OSI stack for communications between clients and servers.

LDAP is primarily considered an access protocol, because it was first developed as an alternative to DAP as an entry point into X.500 directories. It has since grown to encompass a complete directory service and is now both an access protocol and a distributed directory service standard. Major directory services with X.500 characteristics, such as AD, NDS, and Domino, use LDAP as the core method by which their DIBs are queried.
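As a sketch of what that query path looks like from a client, here’s a simple subtree search using the open-source ldap3 Python library; the host name, bind DN, password, and base DN are placeholders rather than a reference to any particular product.

# Sketch of an LDAP subtree search using the third-party ldap3 library.
# Host, credentials, and base DN below are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", port=389, get_info=ALL)
conn = Connection(server,
                  user="cn=reader,dc=example,dc=com",
                  password="secret",
                  auto_bind=True)

# Ask the directory for all person entries, returning two attributes.
conn.search(search_base="dc=example,dc=com",
            search_filter="(objectClass=person)",
            attributes=["cn", "mail"])

for entry in conn.entries:
    print(entry.entry_dn, entry.cn, entry.mail)

conn.unbind()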

LDAP isn’t a complete rework of X.500. Both support a hierarchical namespace made up of entries with object class attributes. Nor do you need to choose between LDAP and X.500: LDAP and X.500 servers interoperate, with LDAP servers passing queries to X.500 DSAs and returning the results to LDAP clients.

While LDAP APIs are promising and rapidly growing in acceptance and support, it’s doubtful that every application vendor relevant to your information system has added LDAP support. Therefore, an additional architectural tier may be necessary for an immediate meta-directory implementation: a connector component that talks to a proprietary data store underneath and exposes an LDAP interface on top, passing information back and forth between the meta-directory and the proprietary store. Of course, a given product must have enough critical mass in the market to motivate a vendor to develop such a connector.
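Here’s a rough Python sketch of such a connector. The payroll records, field names, and attribute mapping are all hypothetical; a real connector would speak the proprietary store’s native API underneath and run a genuine LDAP listener on top.

# Sketch of a connector: a proprietary data store underneath, with an
# LDAP-style search interface on top. Records and attribute names are invented.

payroll_records = [
    {"employeeNumber": "1001", "fullName": "Pat Jones", "costCenter": "FIN-02"},
    {"employeeNumber": "1002", "fullName": "Lee Smith", "costCenter": "ENG-07"},
]

class PayrollConnector:
    """Exposes the payroll store through a simple LDAP-like (attribute=value) filter."""

    # Map LDAP-ish attribute names onto the proprietary field names.
    ATTR_MAP = {"cn": "fullName",
                "employeeNumber": "employeeNumber",
                "departmentNumber": "costCenter"}

    def search(self, ldap_filter):
        # Accepts only simple equality filters such as "(cn=Pat Jones)".
        attr, value = ldap_filter.strip("()").split("=", 1)
        field = self.ATTR_MAP[attr]
        return [r for r in payroll_records if r[field] == value]

connector = PayrollConnector()
print(connector.search("(employeeNumber=1002)"))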

This type of layered architecture is heavily dependent on a solid physical network infrastructure focused on availability. This shouldn’t be minimized. It’s critical that your directory team work in tandem with your infrastructure and application development teams to make sure each aspect of the network is designed to interoperate efficiently.

The Power of Good Design

Another interesting common communication method will be Directory Services Markup Language (DSML), an extension and use of the XML standard. The DSML standard (www.dsml.org) was initiated by Bowstreet (www.bowstreet.com) and is now jointly promoted by Microsoft, Novell, Sun, IBM, and others to further the standardized integration of directory services. The DSML standard will be particularly useful in distributing and sharing directory information across company boundaries. It will also play an important part in the development of e-commerce.
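As a rough illustration of the idea, the following Python snippet emits a directory entry as DSML-style XML using only the standard library; the element names approximate the DSML v1 format and should be verified against the published schema at www.dsml.org.

# Rough sketch of rendering a directory entry as DSML-style XML.
# Element names approximate DSML v1 and are illustrative only.
import xml.etree.ElementTree as ET

NS = "http://www.dsml.org/DSML"
root = ET.Element("{%s}dsml" % NS)
entries = ET.SubElement(root, "{%s}directory-entries" % NS)

entry = ET.SubElement(entries, "{%s}entry" % NS, dn="cn=Pat Jones,ou=Sales,o=Acme")
oc = ET.SubElement(entry, "{%s}objectclass" % NS)
ET.SubElement(oc, "{%s}oc-value" % NS).text = "person"
attr = ET.SubElement(entry, "{%s}attr" % NS, name="mail")
ET.SubElement(attr, "{%s}value" % NS).text = "pjones@acme.example"

ET.register_namespace("dsml", NS)
print(ET.tostring(root, encoding="unicode"))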

A well-designed meta-directory will have a degree of flexibility, so that control and management of the information can sit in different areas of the organization. For example, the IS department should manage groups and access control lists for resources, while the HR department should control personal addresses and contact information. When HR flags someone as terminated in the directory, that change should flow down to the person’s access throughout the system.
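A minimal sketch of that flow, with hypothetical store names: HR is authoritative for employment status, and its change is pushed down to disable access in the dependent directories.

# Sketch of the termination flow: HR's authoritative change is propagated
# to the stores that control access. All store names are invented.

hr = {"emp1001": {"status": "active"}}
nos_accounts = {"emp1001": {"enabled": True}}
email_accounts = {"emp1001": {"enabled": True}}

def terminate(employee_id):
    hr[employee_id]["status"] = "terminated"       # HR's authoritative change...
    for store in (nos_accounts, email_accounts):   # ...flows to dependent stores
        if employee_id in store:
            store[employee_id]["enabled"] = False

terminate("emp1001")
print(nos_accounts["emp1001"], email_accounts["emp1001"])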

In addition, there will always be the need for local control of information that shouldn’t be replicated out to other data stores, but that may be retrieved with the proper credentials through chaining.

It’s also common for data stored locally to be considered more important, and therefore better cared for, than data in a centralized source. This means a well-designed meta-directory will have a centralized architecture, but with pockets of local control that allow information to be cared for on a daily basis.

To build a unifying data store from proprietary data stores, it’s important for your meta-directory to have a flexible namespace. That namespace should allow you to instantiate common objects such as username with different values from the different data stores. A messaging system will most likely have a different username than an ERP system or NOS name object. The meta-directory must map, or join, these values using a common unique attribute associated with the value; that attribute must be the same across the various data stores. Common examples would be unique values such as social security numbers or, for the privacy-conscious, employee numbers.
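Here’s a small Python sketch of such a join, keyed on an employee number; the three store samples and attribute names are invented for illustration.

# Sketch of joining entries from different data stores on a common unique
# attribute (here an employee number), so one meta-directory entry can carry
# the differing usernames from each system.

messaging = [{"employeeNumber": "1001", "mailNickname": "pjones"}]
erp       = [{"employeeNumber": "1001", "erpLogin": "PJONES01"}]
nos       = [{"employeeNumber": "1001", "samAccountName": "patj"}]

def join_on(attr, *stores):
    joined = {}
    for store in stores:
        for record in store:
            joined.setdefault(record[attr], {}).update(record)
    return joined

meta_entries = join_on("employeeNumber", messaging, erp, nos)
print(meta_entries["1001"])
# one unified entry holding the messaging, ERP, and NOS names for employee 1001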

These joins can be a combination of synchronized data brought into the meta-directory store and brokered information whose details are left in the original data store. Either way, this unification gives you flexibility in storing and gathering security information such as private/public keys, password lists for different applications, and other authentication information. That security information can then be forwarded transparently on the user’s behalf to the appropriate application as it’s accessed.

Is A Meta-Directory Even Possible?

Some people question whether adding yet another directory solves the problem of too many directories. One answer comes from Microsoft, with its acquisition of Zoomit and that company’s VIA meta-directory product. Microsoft wants to add more functionality to AD services so it can be considered for meta-directory applications. However, I’d question if any directory exists that could have the structure or schema to support every type of information necessary for every type of application.

Many issues remain regarding the implementation of a widespread directory within a company or among companies. For example, the underlying schemas of each directory must map to each other, and the security context for each directory or organizational unit must be managed to properly flow downstream. While I think the concept of meta-directories is here to stay, I also think we’re embarking on a much more complicated journey than we ever anticipated.
