Oracle and Microsoft Working Together a la COM+
Here's how one company's consultants used COM+ and MTS to scale its Oracle database to be available via the Web.
Working at a high tech telecommunications
organization, our team of consultants was asked
to design an application that allowed customers
to schedule resources through the Internet. To
deliver a robust, scalable system, we designed
a simple three-tier application using:
- Active Server Pages to handle
the front end, rendering browser-neutral HTML
for the client.
- Visual Basic to write an object-oriented
middle-tier built of COM components that we
placed in a COM+ package.
- Oracle 8i for the database server
on the back end (see figure 1).
We knew there were plans to add data sources, including Microsoft SQL Server and possibly Microsoft Exchange. This is where our challenge began: we needed to integrate transactions across these disparate data sources.
Figure 1. Basic three-tier architecture for the scheduling application.
We had to decide how to control the database
transactions. The easiest approach is to allow
Oracle to manage them. This support is built into
every client/server database. While this would work for the first phases of the project, once a disparate data source like SQL Server or Exchange was added, we would need to modify the code. We
didn't want to be tied to maintaining all the
data within Oracle.
Microsoft developed the Distributed
Transaction Coordinator (DTC) for managing transactions
across databases. It uses a two-phase commit system:
Each database does a trial execution of the transaction,
but does not commit the transaction. If all databases
report that the transaction is successful, the
DTC then informs all the databases to commit.
If any databases report back errors or fail to
report back at all, the DTC issues a rollback
to all participating databases. If the DTC fails
or if network communications are lost, each database
automatically rolls back the uncommitted transactions
(see figure 2). This assures that all the databases either commit or roll back the transaction together (see "What's a Transaction?").
Figure 2. How the Distributed Transaction Coordinator works.
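The two-phase protocol the DTC implements can be sketched in a few lines. This is an illustrative simulation only; the class and function names are hypothetical, not part of any real DTC API.

```python
# Illustrative simulation of two-phase commit; names are hypothetical,
# not part of any real DTC API.

class Database:
    def __init__(self, name, will_succeed=True):
        self.name = name
        self.will_succeed = will_succeed
        self.state = "idle"

    def prepare(self):
        # Phase 1: trial execution -- vote, but do not commit.
        self.state = "prepared" if self.will_succeed else "failed"
        return self.will_succeed

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled back"

def coordinate(databases):
    """Coordinator role: commit only if every participant votes yes."""
    votes = [db.prepare() for db in databases]   # phase 1
    if all(votes):                               # phase 2
        for db in databases:
            db.commit()
        return "committed"
    for db in databases:
        db.rollback()
    return "rolled back"
```

With one failing participant, coordinate() rolls every database back, mirroring the all-or-nothing behavior in Figure 2.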
To simplify use of the DTC, Microsoft built Microsoft Transaction Server (MTS), which allows COM-based components to use the DTC automatically for coordinating database transactions.
This service grew into COM+. With minor changes
to code, software components can execute database
changes and participate in distributed transactions.
When two changes must occur together within a database, they form a transaction. A simple example: when you purchase something, you must pay for it. The exchange of goods for money is a transaction.
There are many needs for transactions within databases. For instance, moving inventory from one location to another requires a transaction: if only one of the two records is changed, the items are either lost or counted twice.
To know when you need a transaction,
or whether you can support transactions,
you need to apply the ACID test
to the transaction.
- Atomic - The changes in the database are dependent on one another and must be executed together. For example, a transfer of funds must lower the source account balance AND increase the destination account balance.
- Consistent - The
transaction must always leave
the database in a consistent
state. Either all the rows are updated or none are. For example,
if items appear on a bill
of lading, they cannot still
be listed as in-stock inventory
(that would be inconsistent).
- Isolated - When one
user is modifying records
within a transaction, no one
else can. For example, if
three users all attempt to
sell the last 4 items in inventory,
only the first person can
modify the records. The other users will not be able to see the inventory until the modification is complete and the inventory is adjusted to reflect the sale.
- Durable - The system must assure that failures in networks, hard drives and other system outages do not allow the system to process a partially completed transaction. For instance, if the server unexpectedly loses power, it will roll back the partially committed rows and return the database to a consistent state.
For a database to participate in
these types of transactions, it must support two-phase
commits. The X/Open consortium developed the XA
protocol as a standard for two-phase commit transactional
support. Most client/server databases support the XA protocol, including Oracle.
As an option, however, Oracle also
provides a service that allows the database to
act as a resource manager for the MTS service.
The service consists of two components. One is
a proxy that operates as a service for the Windows
operating system. The other is a client library
that instantiates as a client-side dynamic link
library. To use this service, you must use the Oracle OLE DB provider. Oracle reports that Oracle for MTS provides "superior performance and reliability."
This solution would allow us to develop against any of Oracle's APIs, including Oracle Objects for OLE (OO4O) and the Oracle Call Interface (OCI). Oracle provides these interfaces so developers can communicate efficiently with the database. ODBC for
Oracle (provided with the Oracle client software)
is a translation layer that converts standard
ODBC calls to OCI calls under the covers.
We examined this option more closely, but based on several proofs of concept using the ODBC for Oracle driver with Visual Basic, we decided
not to pursue it. Instead, we chose Oracle's XA
support controlled by the DTC directly. The choice
eliminated Oracle as a single point of failure
for the application, and also allowed us to use
the Microsoft OLE DB provider for Oracle services
within the Visual Basic components.
Configuring Oracle 8i for XA Support
The first step is to configure the XA support
on the server with your database administrator.
XAVIEW.SQL creates the V$XATRANS$ view. The script can be found in the [Oracle Home]/rdbms/admin directory and must be run as the SYS user. Afterward, the DBA must grant SELECT on the DBA_PENDING_TRANSACTIONS view to PUBLIC. We tested the XA support configuration
using the Oracle test program (TestOracleXAConfig.exe).
We confirmed the test using a program from Microsoft
Windows 2000 SDK, Msdtcora.exe.
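Those DBA steps could also be scripted. The sketch below uses the cx_Oracle driver; the credentials and service name are hypothetical placeholders, and it requires a live Oracle server, so treat it as a configuration sketch rather than a definitive procedure.

```python
import cx_Oracle  # assumes the cx_Oracle driver is installed

# Hypothetical credentials; the grant must be issued with DBA privileges.
conn = cx_Oracle.connect("sys", "password", "orcl", mode=cx_Oracle.SYSDBA)
cur = conn.cursor()

# XAVIEW.SQL itself (from [Oracle Home]/rdbms/admin) is run separately as SYS;
# here we issue only the follow-up grant the article describes.
cur.execute("GRANT SELECT ON DBA_PENDING_TRANSACTIONS TO PUBLIC")
```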
We also needed to configure the server
to support additional database connections. Before
we did this, we received failures on SQLConnect
calls, failures to enlist in transactions, timeouts
and record lock collisions due to in-doubt transactions.
Because the errors varied so widely, these symptoms took some time to trace back to a lack of available database connections. We leveraged Mike's considerable
experience to tune the database to support the
transactions. Each system is different and requires
tuning to match the specific usage of that database.
Several techniques quickly became obvious:
- Balance I/O across disks. Transactions require increased disk reads and writes.
- Properly index tables to improve query responsiveness.
- Reduce checkpoints (which also increase disk I/O).
Next, we had to configure our client
software. We chose the Oracle 8.1.6 client software
with the latest patches downloaded from Oracle.
This must be installed on the middle-tier application
server, because the application server instantiates
the components that communicate with the database.
Once the software was installed, we needed to
make some manual corrections to the registry on
the application server. We set the following values on the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSDTC\MTxOCI key:
"OracleSqlLib" = "orasql8.dll"
"OracleOciLib" = "oci.dll"
It's important to assure the proper client software is installed and that it is resolved in the proper order. Several of the development machines
had multiple Oracle homes (multiple copies of
the client software installed). We discovered
we could control which client software was used by placing the proper directories in the correct order in the Path environment variable. On the production application server, we performed a clean installation to assure no older copies of the client software or DLLs remained on the machine.
Once we were properly configured, we were prepared
to utilize the COM+ transactions. Transactions
must be used carefully. A transaction's isolation
level determines how and when other resources
can access the data it locks. By default, the isolation level used by COM+ is serializable. This ensures the highest protection against concurrency conflicts, but at the price of lower throughput. Once a record
becomes part of a transaction, other users cannot
access it until the transaction is committed or
rolled back. These records remain isolated until
the entire two-phase commit process is completed.
It's also important to commit these transactions
quickly, so we performed all the necessary steps
before starting the transaction.
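The isolation behavior can be demonstrated with two connections to the same database. SQLite again stands in for Oracle (its default level is weaker than COM+'s serializable, and the table is invented), but the "no dirty reads" effect is the same idea:

```python
import os
import sqlite3
import tempfile

# Two connections to one database file (invented table and values).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)
writer.execute("CREATE TABLE inventory (item TEXT, qty INTEGER)")
writer.execute("INSERT INTO inventory VALUES ('widget', 4)")
writer.commit()

writer.execute("UPDATE inventory SET qty = 3")  # open, uncommitted transaction
seen_during = reader.execute("SELECT qty FROM inventory").fetchall()[0][0]
writer.commit()
seen_after = reader.execute("SELECT qty FROM inventory").fetchall()[0][0]
# seen_during is still 4: the reader never sees the uncommitted change.
```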
Figure 3. How COM+ transactions work.
Figure 3 helps illustrate the process:
- The VB components are called
into a transaction together.
- COM+ informs the distributed
transaction coordinator of the pending transaction.
- The DTC enlists the databases
in the transaction.
- The VB components execute the changes within the databases. If they receive no errors, they send a "SetComplete" to COM+, which informs the DTC of each vote.
- If all components report a "SetComplete",
the DTC sends a commit to the databases.
- If any of the VB components encounters an error, it sends a "SetAbort" to COM+.
- COM+ passes the "SetAbort" to the DTC.
- The DTC sends a rollback command
to all databases enlisted in the transaction.
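The voting flow in those steps condenses to a short sketch. SET_COMPLETE, SET_ABORT, and the function names below are illustrative stand-ins for COM+'s SetComplete/SetAbort calls, not a real API:

```python
# Illustrative stand-ins for the COM+ voting calls; not a real API.
SET_COMPLETE, SET_ABORT = "SetComplete", "SetAbort"

def run_component(work):
    """A middle-tier component does its database work, then votes."""
    try:
        work()
        return SET_COMPLETE
    except Exception:
        return SET_ABORT

def complete_transaction(components):
    """COM+/DTC role: gather the votes, then commit or roll back."""
    votes = [run_component(c) for c in components]
    if all(v == SET_COMPLETE for v in votes):
        return "commit sent to all databases"
    return "rollback sent to all databases"
```

A single SetAbort from any component is enough to roll back the whole transaction, just as in Figure 3.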
Carefully plan what does and does
not require a transaction. Changes that are not
related or dependent on one another do not need
to be joined together in a transaction. Rows that
are read but not modified should not be included
in transactions, either. Multiple small transactions
are typically more efficient than a single, long
running transaction. And never pause in the middle
of an open transaction to wait for user input
or process information. Gather and process all
data before beginning the transaction and modifying
the tables whenever possible.
COM+ allows components to participate in transactions or abstain from them. We planned the components so that the ones requiring transactions could participate in them, while the components that performed single-table updates did not need to. By minimizing distributed transactions, we optimized database performance.
One particularly nice feature we
leveraged from Oracle is autonomous transactions.
When an error is encountered during a transaction, all data modifications are automatically rolled back out of the database. That would normally mean we could not log errors in the database itself.
By using autonomous transactions, we could separate
error-logging inserts from the rest of the transaction
and record any errors that occurred in an exception
table. Our error returns provide the row identifier indicating which row in the exception table contains the error details.
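The autonomous-transaction pattern can be approximated with a second, independent connection. This sketch again uses SQLite with invented table names; because SQLite allows only one writer at a time, the main transaction is rolled back before logging, whereas Oracle's PRAGMA AUTONOMOUS_TRANSACTION has no such ordering constraint:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
main = sqlite3.connect(path)     # carries the business transaction
logger = sqlite3.connect(path)   # independent "autonomous" connection
main.execute("CREATE TABLE orders (id INTEGER)")
main.execute("CREATE TABLE exceptions (id INTEGER PRIMARY KEY, message TEXT)")
main.commit()

try:
    main.execute("INSERT INTO orders VALUES (1)")
    raise RuntimeError("simulated mid-transaction failure")
except Exception as exc:
    main.rollback()  # the business work is undone...
    cur = logger.execute("INSERT INTO exceptions (message) VALUES (?)",
                         (str(exc),))
    logger.commit()  # ...but the error record survives independently
    error_row_id = cur.lastrowid  # row identifier returned to the caller
```

After the rollback, the orders table is empty, yet the exception row remains and its identifier can be handed back to the caller.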
By carefully planning our transaction strategy,
we were able to architect a solution that optimized
database performance. By carefully installing
and configuring the correct components, we were
also able to deliver a robust, scalable solution that would meet future needs as our client grew.