BCB ships with the Local InterBase Server, which is sometimes simply called LIBS. This tool provides all the capabilities of the full InterBase server, but it runs on a local machine. You do not need to be connected to a network to be able to run the Local InterBase Server.
The client software you get with BCB will talk to either LIBS or the standard version of the InterBase Server. From your point of view as a programmer, you will find no difference at all between talking to LIBS and talking to an InterBase server across a network. The only way to tell which server you're connected to is by examining the path in your current alias. In short, LIBS is the perfect tool for learning or practicing real client/server database programming even if you're not connected to a LAN.
The goal of this chapter is to provide you with a useful introduction to LIBS and also a brief overview of transactions. In particular, you will see how to do the following:
Everything that is said about the local InterBase in this chapter applies equally to the full server version of InterBase. As a result, this chapter will also be of interest to people who use InterBase on a network. In particular, this chapter shows how you can use a local system to create a database that is fully compatible with the network version of InterBase. To convert a LIBS database to a real client/server application on a network, you just have to copy your database onto another machine:
copy MyDatabase.gdb p:\remote\nt\drive
You just copy the one file onto the network. No other steps are necessary, other than changing the path in your alias. Of course, you will also need a real copy of the InterBase server.
Note that some versions of BCB ship with at least two licenses for the InterBase server. The real InterBase server runs on most platforms, including Windows 95, Windows NT, and a wide range of UNIX platforms.
Many readers of this book will come from the world of "big iron," where the only kinds of databases that exist are servers such as Oracle, Sybase, InterBase, AS/400, or DB2. Other readers come from the world of PCs, where tools such as dBASE, Paradox, Access, or FoxPro are considered to be the standard database tools. It is almost impossible to overemphasize the huge gap that exists between these two worlds.
Readers who are familiar with "big iron" and large network-based servers are likely to find the Local InterBase Server very familiar. Readers who come from the world of PCs are likely to find InterBase very strange indeed, especially at first.
InterBase is meant to handle huge numbers of records, which are stored on servers. It does not come equipped with many of the amenities of a tool such as dBASE or Paradox. In fact, InterBase supplies users with only the most minimal interface and instead expects you to create programs with a client-side tool such as BCB. However, you will find that InterBase is not a particularly difficult challenge, after you get some of the basics under your belt.
Having been in this business for a while, I know that most of the readers of this book probably work inside a corporation or at a small company. However, if you are a student or someone who wants to enter the computer programming world, you should pay special attention to the material in this, and other, chapters on InterBase.
Perhaps 80 percent of the applications built in America today use databases in one form or another. Indeed, most of these applications revolve around, and are focused on, manipulating databases. Furthermore, client/server databases such as InterBase, Oracle, or MS SQL Server form the core of this application development.
If you want to enter the programming world, getting a good knowledge of databases is one of the best ways to get started. Right now, there is virtually an endless need for good database programmers.
One note of caution should perhaps be added here. I happen to enjoy database programming. However, it is not the most romantic end of the computer business. If you're primarily interested in systems programming or games programming, then you should hold out for jobs in those fields rather than focus your career in an area of only minor interest to you.
Databases, however, offer the greatest opportunity for employment. In particular, client/server database programmers are almost always in demand. Because LIBS ships with your copy of BCB, you have a great chance to learn the ins and outs of this lucrative field.
LIBS is installed for you automatically when you install BCB. In most cases, InterBase will run smoothly without any need for you to worry about setup. However, you should take several key steps to ensure that all is as it should be.
First, find out if LIBS is running. By default, it will load into memory every time you boot up the system. If you're running Windows 95 or NT 4.0, you should see LIBS as a little splash of green in the system tray at the right side of the taskbar. On Windows NT 3.51, an icon appears at the bottom of your screen. Whatever shape it takes on your system, just click this green object, and you will see a report on the Local InterBase Server configuration.
You need to know where your copy of LIBS is installed. Most likely, it is in the ..\PROGRAM FILES\BORLAND\INTRBASE subdirectory on the boot drive of your computer. For example, my copy of the local InterBase is in C:\PROGRAM FILES\BORLAND\INTRBASE. To find out for sure, open the InterBase Configuration applet that ships with BCB. It will report on the InterBase root subdirectory and enable you to change that directory if need be.
To find this same information in the Registry, run REGEDIT.EXE and open HKEY_LOCAL_MACHINE/SOFTWARE/BORLAND/INTERBASE. Several nodes report on the location of your server and other related information.
In the INTRBASE subdirectory, you will find a copy of a file called INTERBAS.MSG. You should also be able to locate a copy of GDS32.DLL somewhere on your system, most likely in the ..\WINDOWS\SYSTEM subdirectory, but possibly in either your BDE or INTRBASE subdirectory.
A common problem occurs when InterBase users end up with more than one copy of GDS32.DLL. If you work with the networked version of InterBase, you probably already have a copy of the InterBase Client on your system. If this is the case, you should make sure that you don't have two sets of the file GDS32.DLL on your path. On my system, I use the copy of GDS32.DLL that comes with the local InterBase. These tools communicate with both LIBS and the full networked version of InterBase. This setup works fine for me. However, the point is not which version you use, but only that you know which version is on your path and that you have only one version on your system at a time.
To find out which version you are currently using, run the InterBase Communications Diagnostics Tool that ships with BCB. Use the Browse button to find the EMPLOYEE.GDB file, which is probably located in the ..\PROGRAM FILES\BORLAND\INTRBASE\EXAMPLES subdirectory. Enter SYSDBA as the user name and masterkey as the password, all in lowercase. (This example assumes that you have not changed the password from its default value.) You should get the following readout, or something like it:
Path Name = C:\WINDOWS\SYSTEM\gds32.dll
Size = 348672 Bytes
File Time = 04:10:00
File Date = 12/18/1995
Version = 4.1.0.6
This module has passed the version check.
Attempting to attach to C:\Borland\Intrbase\EXAMPLES\Employee.gdb
Attaching ...Passed!
Detaching ...Passed!
InterBase versions for this connection:
InterBase/Windows NT (access method), version "WI-B4.1.0"
on disk structure version 8.0
InterBase Communication Test Passed!
The key piece of information you're getting here is the location of GDS32.DLL.
NOTE: Readers who want to connect to the full server version of InterBase will find that the procedure I have just outlined works fine, except that you must have a network protocol such as TCP/IP loaded first. This book includes a description of the wizardry needed to set up a TCP/IP network protocol successfully in Chapter 8, "Database Basics and Database Tools." As I explained in that chapter, this task is usually handled automatically by either Windows 95 or Windows NT, though it helps to have a few extra tips to guide you through the process. I should add that setting up an InterBase connection is usually a fairly straightforward process when compared to setting up other servers.
The most obvious thing that can go wrong with an InterBase connection is simply that the server is not being started automatically when you start Windows. If you are having trouble, try pointing the Explorer to the IntrBase\Bin subdirectory and double-clicking the IBServer.exe icon. The trouble could be that all is set up correctly, but for some reason the server is not currently running on your machine!
In the preceding section, you learned how to run a diagnostic tool to be sure InterBase is running correctly on your machine. This section deals with the next step: making sure that the BDE, and therefore BCB, is connected to InterBase. You should also check the readme file on the CD that accompanies this book for general information about setting up aliases for the programs supplied in this book.
After you have the local InterBase set up, you should take a few minutes to make sure the connection to the BDE is working correctly. In particular, you should make sure an alias points to one of the sample tables that ship with LIBS. For example, after a normal full installation of BCB, you should have an alias called IBLOCAL that points to the EMPLOYEE.GDB file.
In the next few paragraphs, I describe how to set up an alias identical to the IBLOCAL alias, except you can give it a different name. To begin, open the Database Explorer and turn to the Databases page. Select the first node in the tree, the one that's called Databases. Choose Database | New, and then select IntrBase as the Database Driver Name in the New Database Alias dialog. Click OK.
Name the new alias TESTGDB, or give it whatever name you prefer. The ServerName property for this alias should be set to
c:\program files\borland\intrbase\examples\employee.gdb
You can adjust the drive letter and path to reflect the way you have set up the files on your machine.
The user name should be set to SYSDBA, and the default password is masterkey. (If someone has changed the password on your system, use the new password instead.) All the other settings in the Database Explorer can keep their default values, as shown in Figure 15.1. After you have everything set up correctly, choose Database | Apply.
FIGURE 15.1.
A sample InterBase alias as it appears in the Database Explorer.
After you have set up and saved your alias, you can connect to the TESTGDB alias exactly as you would with any other set of data. From inside the Explorer, just click the plus symbol before the TESTGDB node. A dialog will pop up prompting you for a password. Make sure the user name is set to SYSDBA, and then enter masterkey as the password. Everything else will then be the same as when working with a Paradox table, except that you will find many new features such as stored procedures and triggers. Most of these new features will be described in this chapter and the next.
To connect to the database from inside BCB proper, first drop a table onto a form, and set its DatabaseName property to TESTGDB. When you try to drop down the list of TableNames, you will be prompted for a password. You should enter masterkey at this point, all in lowercase. Now drop down the list again and select a table. After taking these steps, you can set the Active property for Table1 to True. If this call succeeds, everything is set up correctly, and you can begin using the local InterBase to create BCB database programs. If you can't set Active to True, you should go over the steps outlined previously and see whether you can correct the problem.
NOTE: I usually use SYSDBA and masterkey as the user name and password combination for the InterBase databases in this book. However, I sometimes work with USER1 and USER1 instead, simply because typing USER1 is easier than typing masterkey. One way to change the sign-on criteria for InterBase is via the InterBase Server Manager. This tool and others are discussed in Chapter 16, "Advanced InterBase Concepts."
In the preceding two sections, you have learned the basic facts about using LIBS. The next step is to learn how to create your own databases and tables.
Unlike local Paradox or dBASE files, InterBase tables are not stored in separate files located within a directory. Instead, InterBase tables are stored in one large file called a database. Therefore, you need to first go out and create a database, and then you can create a series of tables inside this larger database.
NOTE: The single file system is, in my opinion, vastly superior to having a series of separate files. I'm sure you've noticed what happens after you have placed a few indexes on a typical Paradox table. The end result is that your table is associated with six or seven other files, some of which have to be present or you can't get at your data! A big Paradox database might consist of a hundred or more files, all of which have to be backed up, moved from place to place, and maintained. Life is much simpler when your whole database is stored in a single file!
The simplest way to create a database is with a CASE tool such as SDesigner or Cadet. However, these tools do not ship with BCB, so you must instead choose between the Database Desktop, BCB itself, and the WISQL program that ships with the Local InterBase Server. In the absence of a CASE tool, I find that my weapon of choice is WISQL, though this is certainly a debatable decision. (Cadet will be discussed briefly, along with SDesigner, in Chapter 18, "Working with CASE Tools: Cadet, ER1, and SDesigner.")
WISQL stands for Windows Interactive SQL; people often refer to it simply as the Interactive SQL tool. WISQL is fundamentally a tool for entering SQL statements, with a few other simple features thrown in for good measure. One advantage of relying on WISQL is that it allows you to work directly in the mother tongue of databases, which is SQL. I find that defining databases directly in SQL helps me understand their structure, though of course, there is little reason for resorting to these measures if you have a copy of SDesigner or ERWin available.
You should also remember that WISQL bypasses the BDE altogether. You can therefore use it to test your connections to InterBase even if you are not sure that you have the BDE set up correctly. For example, if you're having trouble connecting to InterBase and you're not sure where the problem lies, start by trying to connect with WISQL. If that works but you can't connect from inside BCB, the problem might lie not with your InterBase setup, but with the way you have deployed the BDE.
NOTE: In addition to WISQL, the other important tool that ships with InterBase is the InterBase Server Manager, IBMGR.EXE. It enables you to test connections to servers and perform simple maintenance tasks such as backing up and restoring databases and setting passwords. You can use IBMGR to back up your data so that you can recover if disaster strikes. What little information you need for this easy-to-use tool is available in the InterBase documentation and in the short section on this tool that appears later in this chapter.
After starting WISQL, choose File | Create Database. A dialog like the one shown in Figure 15.2 appears. Set the Location Info to Local Engine, because you are in fact working with the local InterBase. (Actually, there is no reason that you have to use the Local InterBase rather than the full server version when working through these examples. However, I will reference LIBS throughout this chapter because it will be the tool of choice for most readers.)
FIGURE 15.2.
The dialog used for creating databases inside WISQL.
In the Database field, enter the name of the database you want to create. If the database is to be located inside a particular directory, include that directory in the database name. For practice, you should create a database called INFO.GDB that is located in a subdirectory called DATA. If it does not already exist on your system, you should first go to DOS or the Windows Explorer and create the DATA subdirectory. After you set up the subdirectory, enter the following in the Database field:
E:\DATA\INFO.GDB
You can replace E: with the letter for the appropriate drive on your system. The extension .GDB is traditional, though not mandatory. However, I suggest always using this extension so that you can recognize your databases instantly when you see them; accidentally deleting a database, even one that was recently backed up, can be a tragedy.
You can set the user name to anything you want, although the traditional entry is SYSDBA, and the traditional password is masterkey. When you first start out with InterBase, sticking with this user name and password combination is probably best. Even if you assign new passwords to your database, the SYSDBA/masterkey combination will still work unless you explicitly remove it using the IBMGR.
After you have entered a user name and password, you can create the database by clicking the OK button. If all goes well, you are then placed back inside WISQL proper. At this stage, you can either quit WISQL or add a table to your database. If something goes wrong, an error message will appear. Click the Details button to try to track down the problem.
Assuming all goes well, the following SQL statement can be run inside WISQL if you want to create a very simple table with two fields:
CREATE TABLE TEST1 (FIRST VARCHAR(20), LAST INTEGER);
Enter this line in the SQL Statement field at the top of WISQL, and then click the Run button. If all goes smoothly, your statement will be echoed in the ISQL output window without being accompanied by an error dialog. The lack of an error dialog signals that the table has been created successfully.
The preceding CREATE TABLE command creates a table with two fields. The first is a character field containing 20 characters, and the second is an integer field.
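You can, of course, go beyond the bare minimum shown in TEST1. The following sketch adds a few common column types and a primary key constraint; the table name TEST2 and its columns are my own invention, used here only to illustrate syntax that InterBase accepts:

```sql
/* A hypothetical second table with constraints.
   Run this the same way as the TEST1 statement: enter it
   in the SQL Statement field and click Run. */
CREATE TABLE TEST2
(
  ID         INTEGER NOT NULL,   /* required field */
  FIRST_NAME VARCHAR(20),
  SALARY     NUMERIC(10, 2),     /* fixed-point value */
  HIRE_DATE  DATE,
  PRIMARY KEY (ID)
);
```

As with TEST1, the lack of an error dialog after clicking Run signals that the table was created successfully.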
NOTE: The table-creation code shown here is used to describe or create a table in terms that WISQL understands. In fact, you can use this same code inside a TQuery object in a BCB program.
Throughout most of this chapter and the next, I work with WISQL rather than with the DBD. In describing how to perform these actions in WISQL, I do not mean to imply that you can't use the Database Desktop to create or alter InterBase tables. In fact, the 32-bit version of DBD provides pretty good support for InterBase tables. Still, I have found WISQL to be considerably more powerful than I suspected when I first started using it. Once again, I should add that neither of these tools is as easy to use as a good CASE tool.
After creating a database and table, you should choose File | Commit Work. This command causes WISQL to actually carry out the commands you have issued. At this stage, you should choose File | Disconnect from Database.
In this section, you have learned the basic steps required to use InterBase to create a database and table. The steps involved are not particularly complicated, although they can take a bit of getting used to if you're new to the world of SQL.
WISQL provides a number of tools that can help you explore a database and its contents. In the preceding section, you created a database with a single table. In this section, you will learn how to connect to the database and table from inside WISQL. You will also see how to examine the main features of the entities you have created.
To connect to INFO.GDB, choose File | Connect to Database, which brings up the dialog shown in Figure 15.3. Enter the drive and the database as e:\data\info.gdb, where e: represents the appropriate drive on your machine. Enter the user name as SYSDBA, and the password as masterkey. If all goes well, you should be able to connect to the database by clicking the OK button. Once again, success is signaled by the lack of an error message.
FIGURE 15.3.
Connecting to the INFO.GDB database using WISQL.
Choose View | Metadata Information and set View Information On to Database, as shown in Figure 15.4. After you click OK, the information displayed in the ISQL output window should look something like this:
SHOW DB
Database: c:\data\info.gdb
Owner: SYSDBA
PAGE_SIZE 1024
Number of DB pages allocated = 210
Sweep interval = 20000
FIGURE 15.4.
Preparing to view information on the INFO.GDB database.
To see the tables available in a database, choose View | Metadata Information, and set View Information On to Table. You can leave the edit control labeled Object Name blank. If you fill it in with a table name, you will get detailed information on a specific table--but in this case we want general information on all tables. Click the OK button and view the information, which should look like the following, in the ISQL output window:
SHOW TABLES
TEST1
Browsing through the Metadata Information menu choice, you can see that InterBase supports triggers, stored procedures, views, and a host of other advanced server features.
The Extract menu choice enables you to find out more detailed information about the database and its tables. For example, if you choose Extract | SQL Metadata for a Database, you get output similar to the following:
/* Extract Database e:\data\info.gdb */
CREATE DATABASE "e:\data\info.gdb" PAGE_SIZE 1024;

/* Table: TEST1, Owner: SYSDBA */
CREATE TABLE TEST1 (FIRST VARCHAR(20), LAST INTEGER);

/* Grant permissions for this database */
If you choose Extract | SQL Metadata for Table, you get the following output:
/* Extract Table TEST1 */
/* Table: TEST1, Owner: SYSDBA */
CREATE TABLE TEST1 (FIRST VARCHAR(20), LAST INTEGER);
You should note that WISQL often asks whether you want to save the output from a command to a text file, and the File menu gives you some further options for saving information to files. You can take advantage of these options when necessary, but 90 percent of the time, I pass them by with barely a nod. (Some CASE tools use the output from Extract | SQL Metadata to reverse-engineer a database. If your CASE tool asks you for a script file, you can produce one this way.)
NOTE: The WISQL program accepts most SQL statements. For example, you can perform Insert, Select, Update, and Delete statements from inside WISQL. Just enter the statement you want to perform in the SQL Statement area, and then click the Run button.
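For example, you could exercise the TEST1 table created earlier with statements like the following. The values shown here are purely illustrative; substitute whatever data you like:

```sql
/* Enter each statement in the SQL Statement area and click Run. */
INSERT INTO TEST1 (FIRST, LAST) VALUES ('Sam', 1);

SELECT * FROM TEST1;

UPDATE TEST1 SET LAST = 2 WHERE FIRST = 'Sam';

DELETE FROM TEST1 WHERE FIRST = 'Sam';
```

Remember that, as with table creation, you should choose File | Commit Work if you want your changes to become permanent.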
WISQL also comes equipped with a handy online reference to SQL. If you have questions about how to format an Alter, Drop, Insert, Create Index, or other SQL statement, you can look it up in the help for WISQL. (For better or worse, this is my number-one reference for SQL statements. Another book I have found useful is called The Practical SQL Handbook, by Bowman, Emerson, and Darnovsky, Addison Wesley, ISBN 0-201-62623-3.)
After reading the preceding three sections, you should have a fair understanding of how WISQL works and how you can use it to manage a database. The information provided in this chapter is nothing more than an introduction to a complex and very sophisticated topic. However, you now know enough to begin using the local InterBase. This accomplishment is not insignificant. Tools such as InterBase, Oracle, and Sybase lie at the heart of the client/server activity that is currently so volatile and lucrative. If you become proficient at talking to servers such as InterBase, you might find yourself at an important turning point in your career.
Now you can break out of the abstract theory rut and start writing some code that actually does something. In this section, you will look at transactions, followed by a discussion of cached updates and many-to-many relationships. In the next chapter, you will see another "real-world" database, when you take a look at a sample program that tracks the albums, tapes, and CDs in a music collection.
The TRANSACT program, found on the CD that accompanies this book, gives a brief introduction to transactions. To use transactions, you must have a TDataBase component on your form. Transactions work not only with real servers such as Sybase, Informix, InterBase, or the local InterBase, but also with the 32-bit BDE drivers for Paradox or dBASE files. In other words, transactions can be part of most of the database work you will do with BCB. Using transactions is, however, a technique most frequently associated with client/server databases.
To begin, drop a TDatabase component onto a TDataModule. Set the AliasName property of the TDataBase object to a valid alias such as IBLOCAL. Make up your own string, such as TransactionDemo, to fill in the DatabaseName property of the TDatabase object. In other words, when you're using a TDatabase component, you invent the DatabaseName rather than pick it from a list of available aliases.
Next, drop a TQuery object onto the module, and hook it up to the EMPLOYEE.GDB file that ships with BCB. In particular, set the DatabaseName property of the TQuery object to TransactionDemo, not to IBLOCAL. In other words, set the DatabaseName property to the string you made up when filling in the DatabaseName property of the TDatabase component. You will find that TransactionDemo, or whatever string you chose, has been added to the list of aliases you can view from the Query1.DatabaseName Property Editor. Now rename Query1 to EmployeeQuery and attach a TDataSource object called EmployeeSource to it.
Finally, set the EmployeeQuery->SQL property to the following string:
select * from employee
Then set the Active property to True and set RequestLive to True.
Add a TTable object to the project, hook it up to the SALARY_HISTORY table, and call it SalaryHistoryTable. Relate the SalaryHistoryTable to EmployeeQuery via the EMP_NO fields of both tables. In particular, you should set the MasterSource property for the SalaryHistoryTable to EmployeeSource. Then click the MasterFields property of the TTable object, and relate the EMP_NO fields of both tables. This way, you establish a one-to-many relationship between EmployeeQuery and the SalaryHistoryTable.
After you're connected to the database, you can add two grids to your main form so that you can view the data. Remember that you should use the File | Include Unit Header option to link the TDataModule to the main form.
On the surface of the main form, add four buttons, and give them the following captions:
Start Transaction
Rollback
Commit
Refresh
The code associated with these buttons should look like this:
void __fastcall TForm1::StartTransactionBtnClick(TObject *Sender)
{
  DMod->TransDemo->StartTransaction();
}

void __fastcall TForm1::RollbackBtnClick(TObject *Sender)
{
  DMod->TransDemo->Rollback();
}

void __fastcall TForm1::CommitBtnClick(TObject *Sender)
{
  DMod->TransDemo->Commit();
}

void __fastcall TForm1::RefreshBtnClick(TObject *Sender)
{
  DMod->SalaryHistoryTable->Refresh();
}
Run the program, click Start Transaction and edit a record of the SalaryHistoryTable. When you do so, be sure to fill in all the fields of the table except for the first and last, which are called EMP_NO and NEW_SALARY. Be sure not to touch either of those fields, as they will be filled in for you automatically. In particular, you might enter the following values:
CHANGE_DATE: 12/12/12
UPDATER_ID: admin2
OLD_SALARY: 105900
PERCENT_CHANGE: 3
These values are not randomly chosen. For example, you have to enter admin2, or some other valid updater ID, in the UPDATER_ID field. You can, of course, enter whatever values you want for the date, old salary, and percent change fields. Still, you need to be careful when working with the Employee tables. This database has referential integrity with a vengeance!
After entering the preceding values, you can post the record by moving off it. When you do, the NEW_SALARY field will be filled in automatically by something called a trigger. Go ahead and experiment with these tables some if you want. For example, you might leave some of the fields blank, or enter invalid data in the UPDATER_ID field, just to see how complex the rules that govern this database are. This data is locked up tighter than Fort Knox, and you can't change it unless you are very careful about what you're doing. (It's worth noting, however, that the developers of this database probably never planned to have anyone use these two tables exactly as I do here. Defining rules that limit how you work with a database is easy, but finding ways to break them is easier still. For all of its rigor, database programming is still not an exact science.)
If you started your session by clicking the Start Transaction button, you can now click RollBack and then Refresh. You will find that all your work is undone, as if none of the editing occurred. If you edit three or four records and then click Commit, you will find that your work is preserved.
NOTE: Though you are safe in this particular case, in some situations you can't call Refresh directly because the table you're using is not uniquely indexed. In that case, you can close the table and then reopen it instead. You could use bookmarks to preserve your location in the table during this operation, or, if you're working with a relatively small dataset, as in this example, you can just let users fend for themselves.
Note that when you run the TRANSACT program included on the CD, you don't have to specify a password because the LoginPrompt property of the TDatabase object is set to False, and the Params property contains the following string:
password=masterkey
Now that you have seen transactions in action, you probably want a brief explanation of what they are all about. Transactions let you treat a series of edits as a single unit of work that either succeeds or fails as a whole; they let you undo a group of changes with a single call to Rollback, as you saw a moment ago; and they help protect the integrity of your data when several users are working with the same tables at the same time.
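For comparison, the start-rollback-commit cycle that the TRANSACT program drives through TDatabase method calls can also be expressed directly in SQL, for example from inside WISQL. This is only a sketch; it uses the TEST1 table from earlier in the chapter as a stand-in for real data:

```sql
/* Comparable to calling StartTransaction() */
SET TRANSACTION;
UPDATE TEST1 SET LAST = 3 WHERE FIRST = 'Sam';
/* Comparable to calling Rollback(): the update is undone */
ROLLBACK;

SET TRANSACTION;
UPDATE TEST1 SET LAST = 3 WHERE FIRST = 'Sam';
/* Comparable to calling Commit(): the update becomes permanent */
COMMIT;
```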
Now that you have read something about the theory behind transactions, you might want to think for a moment about the TransIsolation property of the TDatabase object, which affects the way transactions are handled. Here are some quotes from the very important online help entry called "Transaction Isolation Levels."
tiDirtyRead: Permits reading of uncommitted changes made to the database by other simultaneous transactions. Uncommitted changes are not permanent, and might be rolled back (undone) at any time. At this level, a transaction is least isolated from the effects of other transactions.

tiReadCommitted: Permits reading of committed (permanent) changes made to the database by other simultaneous transactions. This is the default TransIsolation property value.

tiRepeatableRead: Permits a single, one-time reading of the database. The transaction cannot see any subsequent changes made by other simultaneous transactions. This isolation level guarantees that after a transaction reads a record, its view of that record does not change unless it makes a modification to the record itself. At this level, a transaction is most isolated from other transactions.
You must consider other issues when you're working with transactions, but I have tried to cover some of the most important here. In general, I find that transactions are extremely easy to use. However, they become more complex when you consider the delicate subject of concurrency problems, which are frequently addressed through setting the TransIsolation levels of your transactions.
Cached updates are like the transactions just described, except that they enable you to edit a series of records without causing any network traffic. When you are ready to commit your work, cached updates enable you to do so on a record-by-record basis, where any records that violate system integrity can be repaired or rolled back on a case-by-case basis.
NOTE: Some users have reported remarkable increases in performance on some operations when they use cached updates.
The key feature of cached updates is that they let you work with data without allowing any network traffic to occur until you are ready for it to begin. This relatively complex mechanism also enables you to keep track of the status of each record on a field-by-field basis. In particular, when cached updates are turned on, you can query your records one at a time and ask them whether they have been updated. Furthermore, if they have been updated, you can ask the current value of each field in the updated record, and you can also retrieve the old, or original, value of the field.
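The bookkeeping just described can be sketched in a few lines of standard C++. This is not the VCL's implementation, and the class and method names here are my own; it simply illustrates the idea that each cached field remembers both the value fetched from the server and the edit made locally, with nothing touching the network:

```cpp
#include <map>
#include <string>

// One cached field: the value as fetched (Old), the local edit (New),
// and whether the user has changed it.
struct CachedField {
    std::string Old;
    std::string New;
    bool Modified;
};

// A record being edited under cached updates. No network traffic occurs
// here; a real dataset would send changes only on ApplyUpdates.
class CachedRecord {
public:
    void Fetch(const std::string& Field, const std::string& Value) {
        FFields[Field] = {Value, Value, false};
    }
    void Edit(const std::string& Field, const std::string& Value) {
        FFields[Field].New = Value;
        FFields[Field].Modified = true;   // tracked per field, locally
    }
    std::string OldValue(const std::string& Field) { return FFields[Field].Old; }
    std::string NewValue(const std::string& Field) { return FFields[Field].New; }
    bool IsModified(const std::string& Field) { return FFields[Field].Modified; }
private:
    std::map<std::string, CachedField> FFields;
};
```

For example, fetching an OrderNo of "1003" and editing it to "1" leaves OldValue reporting "1003" and NewValue reporting "1", which is exactly the pair of values the CachedUpdates program displays in its two edit controls.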
You can do three things with the records in a dataset after the CachedUpdates property for the dataset has been set to True:

- Apply the updates, writing every cached change to the database (ApplyUpdates).
- Cancel the updates, discarding every cached change at once (CancelUpdates).
- Revert the current record to its original state, undoing just that one change (RevertRecord).
An excellent sample program in the BCB DEMOS subdirectory shows how to use cached updates. This program is a bit complex in its particulars, however, and therefore can be hard to understand. So, instead of trying to go it one better, I will create a sample program that takes the basic elements of cached updates and presents them in the simplest possible terms.
The CachedUpdates program, shown in Figure 15.5, has one form. On the form is a copy of the OrdersTable. The OrdersTable, as you recall, is related to both the Customer table and the Items table. As a result, changing either the OrderNo or CustNo fields without violating system integrity in one way or another is difficult. When working with this program, you should change these fields to values like 1 or 2, which will almost surely be invalid. You can then watch what happens when you try to commit the records you have changed.
The code for the CachedUpdates program is shown in Listing 15.1. Go ahead and get this program up and running, and then come back for a discussion of how it works.
When you're implementing the code shown here, the key point to remember is that none of it will work unless the CachedUpdates property of the OrdersTable is set to True.
Listing 15.1. The form for the CachedUpdates program.
///////////////////////////////////////
// File: Main.cpp
// Project: CachedUpdates
// Copyright (c) 1997 Charlie Calvert

#include <vcl\vcl.h>
#include <typinfo.hpp>
#include <sysutils.hpp>
#pragma hdrstop

#include "Main.h"
#pragma resource "*.dfm"
TForm1 *Form1;

__fastcall TForm1::TForm1(TComponent* Owner)
    : TForm(Owner)
{
}

void __fastcall TForm1::ApplyBtnClick(TObject *Sender)
{
    OrdersTable->ApplyUpdates();
}

void __fastcall TForm1::RevertBtnClick(TObject *Sender)
{
    OrdersTable->RevertRecord();
}

void __fastcall TForm1::CancelClick(TObject *Sender)
{
    OrdersTable->CancelUpdates();
}

void __fastcall TForm1::OkBtnClick(TObject *Sender)
{
    Close();
}

void __fastcall TForm1::OrdersTableUpdateError(TDataSet *DataSet,
    EDatabaseError *E, TUpdateKind UpdateKind, TUpdateAction &UpdateAction)
{
    AnsiString UpdateKindStr[] = {"Modified", "Inserted", "Deleted"};
    AnsiString S(UpdateKindStr[UpdateKind]);
    S += ": " + E->Message;
    AnsiString Temp = DataSet->Fields[0]->OldValue;
    Temp = Temp + ": " + S;
    ListBox1->Items->Add(Temp);
    UpdateAction = uaSkip;
}

void __fastcall TForm1::DataSource1DataChange(TObject *Sender, TField *Field)
{
    AnsiString UpdateStat[] = {"Unmodified", "Modified", "Inserted", "Deleted"};
    Panel1->Caption = UpdateStat[OrdersTable->UpdateStatus()];
    if (OrdersTable->UpdateStatus() == usModified)
    {
        Edit1->Text = OrdersTable->Fields[0]->OldValue;
        Edit2->Text = OrdersTable->Fields[0]->NewValue;
    }
    else
    {
        Edit1->Text = "Unmodified";
        Edit2->Text = "Unmodified";
    }
}

void __fastcall TForm1::ListBox1DblClick(TObject *Sender)
{
    AnsiString S(ListBox1->Items->Strings[ListBox1->ItemIndex]);
    if (S.Length() > 0)
        ShowMessage(S);
}
The first thing to notice about the CachedUpdates program is that it tracks which records have been modified. For example, change the OrderNo field of the first two records to the values 1 and 2. If you now select one of these records, you will see that the small panel in the lower left corner of the screen gets set to Modified. This means that the update status for this field has been set to modified.
Here is the TUpdateStatus type:
TUpdateStatus = (usUnmodified, usModified, usInserted, usDeleted);
Any particular record in a database is going to be set to one of these values.
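Because the program must display this status to the user, it maps each enumerated value to a string. A minimal standalone sketch of that mapping, with the enum reproduced here so the snippet compiles on its own (`UpdateStatusName` is my own helper name, not a VCL routine):

```cpp
#include <string>

// The VCL's TUpdateStatus values, reproduced so the sketch stands alone.
enum TUpdateStatus { usUnmodified, usModified, usInserted, usDeleted };

// Map a status to a display string, as the DataSource1DataChange
// handler does with its AnsiString array.
std::string UpdateStatusName(TUpdateStatus Status)
{
    static const char* Names[] =
        {"Unmodified", "Modified", "Inserted", "Deleted"};
    return Names[Status];
}
```

The array lookup works only because the strings are listed in the same order as the enumerated values, which is exactly the fragile coupling the RTTI note later in this section tries to avoid.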
Here is the code that sets the value in the TPanel object:
void __fastcall TForm1::DataSource1DataChange(TObject *Sender, TField *Field)
{
    AnsiString UpdateStat[] = {"Unmodified", "Modified", "Inserted", "Deleted"};
    Panel1->Caption = UpdateStat[OrdersTable->UpdateStatus()];
    if (OrdersTable->UpdateStatus() == usModified)
    {
        Edit1->Text = OrdersTable->Fields[0]->OldValue;
        Edit2->Text = OrdersTable->Fields[0]->NewValue;
    }
    else
    {
        Edit1->Text = "Unmodified";
        Edit2->Text = "Unmodified";
    }
}
The relevant line in this case is the second in the body of the function. In particular, notice that it reports on the value of OrdersTable->UpdateStatus. This value will change to reflect the update status of the currently selected record.
NOTE: Notice that in this case, I explicitly type out the names associated with the elements of the TUpdateStatus type. In some cases, you can use an alternative means to accomplish the same end without explicitly typing out the strings. This second technique involves using the advanced RTTI supported by BCB. To show this value to the user, the code could call the GetEnumName routine from the TypInfo unit. This routine retrieves the name of an enumerated value. To use this routine, pass in the type that you want to examine, as well as the ordinal value of the element in that type whose name you want to see:

PPropInfo PropInfo =
    GetPropInfo(PTypeInfo(ClassInfo(__classid(TForm1))), "BorderStyle");
ShowMessage(GetEnumName(PropInfo->PropType, int(bsDisabled)));

Unfortunately, this type of code will work only for VCL-style classes and for properties of VCL-style classes. Because a TDataSet does not have an UpdateStatus property, the code in the DataSource1DataChange method must use the more pedantic method outlined previously.
At the same time that the CachedUpdates program reports that a record has been modified, it also reports on the old and new value of the OrderNo field for that record. In particular, if you change the first record's OrderNo field to 1, it will report that the old value for the field was 1003, and the new value is 1. (This assumes that you have the original data as it shipped with BCB. Remember that if you end up ruining one of these tables performing these kinds of experiments, you can always copy the table over again from the CD.)
In the code that reports on the old and new value of the OrderNo field, you should examine these lines in particular:
Edit1->Text = OrdersTable->Fields[0]->OldValue; Edit2->Text = OrdersTable->Fields[0]->NewValue;
As you can see, this information is easy enough to come by--you just have to know where to look.
If you enter the values 1 and 2 into the OrderNo fields for the first two records, you will encounter errors when you try to commit the data. In particular, if you try to apply the data, the built-in referential integrity will complain that there is no way to link the Orders and Items table on the new OrderNo you have created. As a result, committing the records is not possible. The code then rolls back the erroneous records to their original state.
When viewing these kinds of errors, choose Options | Environment | Preferences and then turn off the Break on Exception option. The issue here is that you want the exception to occur, but you don't want to be taken to the line in your program where the exception surfaced. You don't need to view the actual source code because these exceptions are not the result of errors in your code. In fact, these exceptions are exactly the kind you want to produce, and they are reported to the user in an orderly fashion via the program's list box.
NOTE: Referential integrity is a means of enforcing the rules in a database. This subject is discussed in some detail in Chapter 16 and also in Chapter 12, "Understanding Relational Databases." For now, you should not be concerned with the details of how referential integrity works. The key point is simply that some tables have to obey rules, and the BDE will not let users enter invalid data that violates these rules.
Here is the code that reports on the errors in the OrderNo field and rolls back the data to its original state:
void __fastcall TForm1::OrdersTableUpdateError(TDataSet *DataSet,
    EDatabaseError *E, TUpdateKind UpdateKind, TUpdateAction &UpdateAction)
{
    AnsiString UpdateKindStr[] = {"Modified", "Inserted", "Deleted"};
    AnsiString S(UpdateKindStr[UpdateKind]);
    S += ": " + E->Message;
    AnsiString Temp = DataSet->Fields[0]->OldValue;
    Temp = Temp + ": " + S;
    ListBox1->Items->Add(Temp);
    UpdateAction = uaSkip;
}
This particular routine is an event handler for the OnUpdateError event of the OrdersTable object. To create the routine, click once on the OrdersTable object, select its Events page in the Object Inspector, and then double-click the OnUpdateError entry.
The OrdersTableUpdateError method will get called only if an error occurs in attempting to update records. It will get called at the time the error is detected and before BCB tries to commit the next record.
OrdersTableUpdateError gets passed four parameters. The most important is the last, which is passed by reference. You can set this parameter to one of the following values:
TUpdateAction = (uaAbort, uaSkip, uaRetry, uaApplied);
If you set the UpdateAction variable to uaAbort, the entire attempt to commit the updated data will be aborted. None of your changes will take place, and you will return to edit mode as if you had never attempted to commit the data. The changes you have made so far will not be undone, but neither will they be committed. You are aborting the attempt to commit the data, but you are not rolling it back to its previous state.
If you choose uaSkip, the data for the whole table will still be committed, but the record that is currently in error will be left alone. That is, it will be left at the invalid value assigned to it by the user.
If you set UpdateAction to uaRetry, that means you have attempted to update the information in the current record and that you want to retry committing it. The record you should update is the current record in the dataset passed as the first parameter to OrdersTableUpdateError.
In the OrdersTableUpdateError method, I always choose uaSkip as the value to assign to UpdateAction. Of course, you could pop up a dialog and show the user the old value and the new value of the current record. The user would then have a chance to retry committing the data. Once again, you retrieve the data containing the current "problem child" record from the dataset passed in the first parameter of OrdersTableUpdateError. I show an example of accessing this data when I retrieve the old value of the OrderNo field for the record:
AnsiString Temp = DataSet->Fields[0]->OldValue;
Temp = Temp + ": " + S;
ListBox1->Items->Add(Temp);
The OldValue property is declared as a Variant in DB.HPP, which is where the TDataSet declaration is located:
System::Variant __fastcall GetOldValue(void);
...
__property System::Variant OldValue = {read=GetOldValue};
Two other values are passed to the OrdersTableUpdateError method. The first is an exception reporting on the current error, and the second is a variable of type TUpdateKind:
enum TUpdateKind { ukModify, ukInsert, ukDelete };
The variable of type TUpdateKind just tells you how the current record was changed. Was it updated, inserted, or deleted? The exception information is passed to you primarily so that you can get at the message associated with the current error:
E->Message;
If you handle the function by setting UpdateAction to a particular value, say uaSkip, then BCB will not pop up a dialog reporting the error to the user. Instead, it assumes that you are handling the error explicitly, and it leaves it up to you to report the error or not, as you see fit. In this case, I just dump the error into the program's list box, along with some other information.
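The whole apply-and-handle cycle can be summarized in a short standard C++ sketch. This is a simulation, not the VCL's code: `PendingRecord`, `ApplyUpdates`, and `SkipHandler` are names I made up to mirror the shape of the commit loop and the OnUpdateError callback described above:

```cpp
#include <vector>

// The VCL's TUpdateAction values (illustration only).
enum TUpdateAction { uaAbort, uaSkip, uaRetry, uaApplied };

struct PendingRecord {
    int OrderNo;
    bool Valid;   // stand-in for "passes referential integrity"
};

// Sketch of the commit loop driving OnUpdateError: each failing record
// is offered to a handler, and the handler's choice controls the loop.
int ApplyUpdates(std::vector<PendingRecord>& Records,
                 TUpdateAction (*OnUpdateError)(PendingRecord&))
{
    int Applied = 0;
    for (auto& Rec : Records) {
        if (Rec.Valid) { ++Applied; continue; }
        TUpdateAction Action = OnUpdateError(Rec);    // ask the handler
        if (Action == uaAbort) break;                 // abandon the attempt
        if (Action == uaSkip)  continue;              // leave it in the cache
        if (Action == uaRetry && Rec.Valid) ++Applied; // handler repaired it
    }
    return Applied;
}

// A handler that always skips, mirroring OrdersTableUpdateError above.
TUpdateAction SkipHandler(PendingRecord&) { return uaSkip; }
```

With three pending records of which one is invalid, the always-skip handler lets the two good records through and leaves the bad one cached, which is exactly the behavior the CachedUpdates program exhibits.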
That's all I'm going to say about cached updates. At this point, you should go back and run the Cache program that ships with BCB. It covers all the same ground covered in the preceding few pages, but it does so in a slightly different form. In particular, it shows how to pop up a dialog so that you can handle each OnUpdateError event in an intelligent and sensible manner.
In general, cached updates give you a great deal of power you can tap into when updating the data in a dataset. If necessary, go back and play with the CachedUpdates program until it starts to make sense to you. This subject isn't prohibitively difficult, but it does take a few moments' thought to absorb the basic principles involved.
Many-to-many relationships are necessities in most relational database projects. Suppose, for example, that you have a set of software routines that you want to store in a database. Some of the routines you can use in DOS, some in UNIX, some in Windows NT, and some in Windows 95. Some routines, however, apply to two or more of the operating systems.
To track this information, you might try adding an OS field to your Routines table, where OS stands for Operating System. This solution sounds simple enough. However, there is one problem. The issue, of course, is that some routines will work with more than one OS. For example, you may have a routine that works in Windows NT and Windows 95, but not in UNIX or DOS. As a result, the fairly simple one-to-many relationship you try to establish with the OS fields really needs to be converted into a many-to-many relationship.
Here, for example, is a list of operating systems:
CODE  OS
1     DOS
2     UNIX
3     Windows
And here is a list of routines, where an OSCODE field attempts to tie each routine to a single operating system:

CODE  FUNCTION_NAME            OSCODE
1     FormatDriveC             1
2     AssignAllIRQsToTheMouse  2
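Converting this design into a true many-to-many relationship means dropping the OSCODE column from the Routines table and introducing a middle table. The following SQL is only a sketch, patterned on Listing 15.2 later in this chapter; the table and column names here are my own invention:

```sql
/* Sketch only: assumed names, patterned on Listing 15.2. */
CREATE TABLE OS
  (OSCODE INTEGER NOT NULL,
   OS VARCHAR(20),
   PRIMARY KEY (OSCODE));

CREATE TABLE ROUTINES
  (CODE INTEGER NOT NULL,
   FUNCTION_NAME VARCHAR(40),
   PRIMARY KEY (CODE));

/* The middle table: one record per (routine, operating system) pair. */
CREATE TABLE ROUTINE_OS
  (CODE INTEGER,
   OSCODE INTEGER);
```

A routine that works under both Windows NT and Windows 95 would then simply have two records in ROUTINE_OS, one per operating system.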
The rest of this section describes how to actually go about creating many-to-many relationships. This subject is annoyingly complex, but one that you can master if you take a little time to think things through. My basic goal is to break down this process into a series of steps that you can follow whenever you have to create one of these many-to-many relationships. I might be going too far to say that these steps make the process simple. They do make it manageable, however.
In Listing 15.2, you will find the database definition for a simple set of InterBase tables. You can run this definition through WISQL by choosing File | Run ISQL Script. Alternatively, you can create a new database and pass through the key statements shown here one at a time.
Beneath the data definition for the database, you will find the code to a program called ManyToMany. This code, in Listings 15.3 through 15.5, shows how to handle a many-to-many relationship in a BCB program.
Listing 15.2. The schema for a simple database that can capture a many-to-many relationship.
/* Extract Database c:\src\unleash2\data\man2man.gdb */
CREATE DATABASE "c:\src\unleash2\data\man2man.gdb" PAGE_SIZE 1024;

/* Table: ATTRIBS, Owner: SYSDBA */
CREATE TABLE ATTRIBS
  (ATTRIBNO INTEGER NOT NULL,
   ATTRIB VARCHAR(34),
   PRIMARY KEY (ATTRIBNO));

/* Table: CUSTOMERS, Owner: SYSDBA */
CREATE TABLE CUSTOMERS
  (CUSTNO INTEGER NOT NULL,
   NAME VARCHAR(35),
   PRIMARY KEY (CUSTNO));

/* Table: MIDDLE, Owner: SYSDBA */
CREATE TABLE MIDDLE
  (CUSTNO INTEGER,
   ATTRIBNO INTEGER);

/* Grant permissions for this database */
Listing 15.3. The main form for the ManyToMany program.
#include <vcl\vcl.h>
#pragma hdrstop

#include "Main.h"
#include "DMod1.h"
#include "Relater.h"
#pragma resource "*.dfm"
TForm1 *Form1;

__fastcall TForm1::TForm1(TComponent* Owner)
    : TForm(Owner)
{
}

void __fastcall TForm1::ChangeAttrBtnClick(TObject *Sender)
{
    RelateForm->RunDialogModal();
}
Listing 15.4. The Relater form from the ManyToMany program.
#include <vcl\vcl.h>
#pragma hdrstop

#include "Relater.h"
#include "DMod1.h"
#pragma resource "*.dfm"
TRelateForm *RelateForm;

__fastcall TRelateForm::TRelateForm(TComponent* Owner)
    : TForm(Owner)
{
}

void __fastcall TRelateForm::RunDialogModal()
{
    FCustNo = DMod->CustomerTable->FieldByName("CustNo")->AsInteger;
    Caption = "Attributes for " +
        DMod->CustomerTable->FieldByName("Name")->AsString;
    ShowModal();
}

void __fastcall TRelateForm::bbInsertClick(TObject *Sender)
{
    InsertQuery->Params->Items[0]->AsInteger = FCustNo;
    InsertQuery->Params->Items[1]->AsInteger =
        DMod->AttributeTable->FieldByName("AttribNo")->AsInteger;
    InsertQuery->ExecSQL();
    ViewAttribs();
}

void __fastcall TRelateForm::ViewAttribs()
{
    DMod->ViewAttributes(FCustNo);
}

void __fastcall TRelateForm::bbDeleteClick(TObject *Sender)
{
    DeleteQuery->Params->Items[0]->AsInteger = FCustNo;
    DeleteQuery->Params->Items[1]->AsInteger =
        DMod->ViewAttributesQuery->FieldByName("AttribNo")->AsInteger;
    DeleteQuery->ExecSQL();
    ViewAttribs();
}

void __fastcall TRelateForm::FormShow(TObject *Sender)
{
    ViewAttribs();
}
Listing 15.5. The data module for the ManyToMany program.
#include <vcl\vcl.h>
#pragma hdrstop

#include "DMod1.h"
#pragma resource "*.dfm"
TDMod *DMod;

__fastcall TDMod::TDMod(TComponent* Owner)
    : TDataModule(Owner)
{
    ManyToMany->Connected = True;
    CustomerTable->Open();
    AttributeTable->Open();
}

void TDMod::ViewAttributes(int CustNo)
{
    ViewAttributesQuery->Close();
    ViewAttributesQuery->Params->Items[0]->AsInteger = CustNo;
    ViewAttributesQuery->Open();
}

void __fastcall TDMod::CustomerSourceDataChange(TObject *Sender, TField *Field)
{
    ViewAttributes(CustomerTable->FieldByName("CustNo")->AsInteger);
}
The ManyToMany program enables you to pop up a dialog that contains two lists of attributes. The left-hand list shows all the possible attributes that can be associated with a record in the main table for this program. The right-hand list shows the currently selected attributes for the current record in the main table. Buttons are supplied so that you can add items from the left-hand column to the column on the right. The dialog in question is shown in Figure 15.5.
FIGURE 15.5. The Relater dialog relates the Customers table to the Attributes table.
The basic idea behind a many-to-many relationship is that you need to have an intermediate table between the main table and the list of attributes that you assign to it. For example, if you have the Routines and the OS tables described, you need a middle table that relates the Routine ID from the Routines table to the OS ID from the OS table.
In the database just shown, the Middle table serves as the intermediary between the Customers table and the Attribs table. Here's how it works.
The Customers table has a series of records in it like this:
select * from Customers

     CUSTNO NAME
=========== ===================================
          1 SAM
          2 MIKE
          3 FREDDY FREELOADER
          4 SUNNY SUZY
          5 LOU
          6 TYPHOID MARY
          7 SANDRA
          8 MICHELLE
          9 NICK
         10 NANCY
The Attribs table also has a set of attributes that can be assigned to these customers:
select * from Attribs

   ATTRIBNO ATTRIB
=========== ==================================
          1 Nice
          2 Naughty
          3 Generous
          4 Guilty
          5 Onerous
          6 Criminal
          7 Hostile
          8 Beautiful
          9 Bodacious
         10 Endearing
Suppose that Sunny Suzy is both Nice and Bodacious. To connect her to these two attributes, you could add two records to the Middle table:
CustNo  AttribNo
4       1
4       9
Now when you open the Middle table, you will find two entries in it. The first entry has a CustNo of 4, which stands for Sunny Suzy, and an AttribNo of 1, which stands for Nice. Likewise, the second line translates into Sunny Suzy, Bodacious. Here is the key to decoding the table:
4 in the CustNo field   = Sunny Suzy
1 in the AttribNo field = Nice
9 in the AttribNo field = Bodacious
Of course, you need to make sure that you're doing the right lookups on these numbers. For example, 9 in the AttribNo field equals Bodacious, but 9 in the CustNo field equals Nick!
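The decoding can be sketched in a few lines of standard C++. The tables are reduced here to in-memory maps so the lookups can be followed without a database; `DecodeMiddleRow` is my own illustrative name:

```cpp
#include <map>
#include <string>

// Decode one Middle-table row into readable text. The two maps mirror
// a few rows of the Customers and Attribs tables shown above.
std::string DecodeMiddleRow(int CustNo, int AttribNo)
{
    static const std::map<int, std::string> Customers = {
        {4, "SUNNY SUZY"}, {9, "NICK"}};
    static const std::map<int, std::string> Attribs = {
        {1, "Nice"}, {9, "Bodacious"}};
    return Customers.at(CustNo) + ", " + Attribs.at(AttribNo);
}
```

Decoding the pair (4, 9) yields "SUNNY SUZY, Bodacious"; note that the 9 must be looked up in Attribs, not Customers, or you would get Nick instead.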
Now that you understand the principle behind creating a many-to-many relationship, the next step is to create a dialog that can capture this relationship in terms that the user can understand. The ManyToMany program has a main form that contains a grid showing the fields of the Customers table. You can find a button called Attribute on the main form. If you click this button, a dialog like the one shown in Figure 15.5 appears.
On the left-hand side of the Relater dialog is a list of possible attributes that can be assigned to a customer. On the right-hand side of the dialog is a list of the attributes that have in fact been assigned to the current customer. Between the two lists are two buttons. If you click the button with the arrows pointing to the right, the attribute selected on the left will be added to the list on the right; that is, the attribute will be assigned to the currently selected customer. (The customer list, remember, is back on the main form.) The button with the arrows pointing left will delete the currently selected attribute in the right-hand list, which in effect removes that attribute from the current customer's list of traits.
At this stage, you are ready to prepare a list of things that you must do to complete the many-to-many relationship:

- Write the SQL and code to insert a (CustNo, AttribNo) pair into the Middle table.
- Write the SQL and code to view the attributes currently assigned to a customer.
- Write the SQL and code to delete a (CustNo, AttribNo) pair from the Middle table.
Some other tasks are associated with creating the Relater dialog. For example, you must put up the table showing the list of possible attributes, and you must add buttons and grids to the dialog. However, I am assuming that all these tasks are too trivial to be worth describing. The key tasks are the three just listed. Keep your mind focused on them, and the rest will be easy.
To begin, drop down a table, data source, and grid, and then set up the list of possible attributes as shown in the left-hand side grid in Figure 15.5. Name the TTable object containing this dataset AttributeTable.
Now drop down a button that will move things from the left-hand grid to the right-hand grid. Put two arrows on it, as shown in Figure 15.5. Drop down a TQuery object, call it InsertQuery, and place the following line of code in its SQL property:
insert into middle (CustNo, AttribNo) values (:CustNo, :AttribNo);
Here is the code you can create to fill in the two bind variables called :CustNo and :AttribNo. This code should be associated with the button that points to the right:
void __fastcall TRelateForm::bbInsertClick(TObject *Sender)
{
    InsertQuery->Params->Items[0]->AsInteger = FCustNo;
    InsertQuery->Params->Items[1]->AsInteger =
        DMod->AttributeTable->FieldByName("AttribNo")->AsInteger;
    InsertQuery->ExecSQL();
    ViewAttribs();
}
The FCustNo variable is assigned a value when the dialog is launched. It's the CustNo of the currently selected customer, and it is retrieved when the dialog is first called by the main form. The AttribNo value is retrieved from the currently selected record in the grid on the left. To actually insert the data into the database, you call ExecSQL.
The ViewAttribs routine shows the attributes associated with the current customer. That is, this routine fills in the grid on the right-hand side of the Relater dialog. The ViewAttribs routine is very simple:
void TDMod::ViewAttributes(int CustNo)
{
    ViewAttributesQuery->Close();
    ViewAttributesQuery->Params->Items[0]->AsInteger = CustNo;
    ViewAttributesQuery->Open();
}
This code does nothing more than resolve a single bind variable and then open the ViewAttributesQuery object. The SQL property of the ViewAttributesQuery object should look like this:
SELECT DISTINCT A.ATTRIB, A.ATTRIBNO
FROM MIDDLE M, ATTRIBS A
WHERE (M.CUSTNO = :CustNo) AND
      (A.ATTRIBNO = M.ATTRIBNO)
ORDER BY A.ATTRIB
This code selects the Attribute and AttribNo from the Attribs table in all the cases in which the AttribNo in the Attribs table is also found in a record from the Middle table that has the CustNo of the currently selected customer. (Phew!) The resulting set of data is shown in the grid on the right-hand side of the dialog shown in Figure 15.5.
To help make this process intelligible, consider the case I outlined, where Sunny Suzy was both Nice and Bodacious:
CustNo  AttribNo
4       1
4       9
This SQL code searches through the Middle table and finds all the cases where Sunny Suzy is mentioned in it. In other words, it finds the two records shown. The code then performs a lookup in the Attribs table, finding the words that are associated with the two AttribNos shown in the preceding code. Whenever it finds a match, it displays the match in the right-hand grid. When you have only two records in the Middle table, this does not seem like much of a trick, but the SQL shown here seems a bit smarter if thousands of records appear in the Middle table, only two of which relate to Sunny Suzy.
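What the query does can be restated as a plain loop. The following standard C++ is a simulation of the join, not code from the ManyToMany program; `ViewAttributes` here is a free function I invented, taking the Middle table as a vector of (CustNo, AttribNo) pairs:

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Simulate the ViewAttributesQuery SQL: scan Middle for the given CustNo,
// look each AttribNo up in Attribs, and return the attribute words.
std::vector<std::string> ViewAttributes(
    int CustNo,
    const std::vector<std::pair<int, int>>& Middle,   // (CustNo, AttribNo)
    const std::map<int, std::string>& Attribs)
{
    std::vector<std::string> Result;
    for (const auto& Row : Middle)
        if (Row.first == CustNo)
            Result.push_back(Attribs.at(Row.second));
    std::sort(Result.begin(), Result.end());          // ORDER BY A.ATTRIB
    return Result;
}
```

Feeding it a Middle table that also contains rows for other customers shows the point of the WHERE clause: only the rows for the requested CustNo contribute to the result.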
NOTE: Notice that I call ViewAttributes from both the RelateForm and from the main form. The call from the main form is made each time the user selects a new record to view in the CustomerTable:

void __fastcall TDMod::CustomerSourceDataChange(TObject *Sender, TField *Field)
{
    ViewAttributes(CustomerTable->FieldByName("CustNo")->AsInteger);
}
At this stage, you are two-thirds of the way through completing the many-to-many relationship. You have found out how to insert records and how to show the list of currently selected items associated with a particular customer. The only step left is to come up with a technique for deleting records.
The SQL to perform a delete from the Middle table looks like this:
delete from Middle where CustNo = :CustNo and AttribNo = :AttribNo;
Here is the code, associated with the leftward-pointing button, that fills in the bind variables in the SQL delete code shown in the preceding paragraph:
void __fastcall TRelateForm::bbDeleteClick(TObject *Sender)
{
    DeleteQuery->Params->Items[0]->AsInteger = FCustNo;
    DeleteQuery->Params->Items[1]->AsInteger =
        DMod->ViewAttributesQuery->FieldByName("AttribNo")->AsInteger;
    DeleteQuery->ExecSQL();
    ViewAttribs();
}
The CustNo bind variable is resolved easily enough, because you had the appropriate CustNo passed in from the main form when the dialog was first created. The ViewAttributesQuery holds a list of the currently selected attributes and their AttribNo. Therefore, you can simply ask the ViewAttributesQuery object for the AttribNo of the currently selected record to find out what item the user wants to delete. Now both bind variables are satisfied, and you can perform the deletion by calling ExecSQL!
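The effect of the delete statement can also be simulated in memory. This sketch uses the same vector-of-pairs representation of the Middle table as before; `DeleteAttribute` is my own name, and the remove-erase idiom plays the role of the SQL DELETE:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Simulate the delete statement: remove the one (CustNo, AttribNo) pair
// from the Middle table, leaving every other row alone.
void DeleteAttribute(std::vector<std::pair<int, int>>& Middle,
                     int CustNo, int AttribNo)
{
    Middle.erase(std::remove(Middle.begin(), Middle.end(),
                             std::make_pair(CustNo, AttribNo)),
                 Middle.end());
}
```

Deleting the pair (4, 9) takes Bodacious away from Sunny Suzy while her Nice record, and everyone else's records, survive untouched.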
All in all, creating a many-to-many relationship is not so bad as long as you are methodical about the process. Remember that four key steps are involved:

- Create a middle table that relates the keys of the two outer tables.
- Write the code to insert a new relationship into the middle table.
- Write the code to view the relationships for the current record.
- Write the code to delete a relationship from the middle table.
Tackle these tasks one at a time, and the process will not prove to be terribly difficult.
This chapter gave you a basic introduction to the local InterBase and to several related subjects. In particular, you saw how to create and open InterBase databases, how to set up aliases, and how to perform fundamental database tasks such as transactions.
I should stress that InterBase is a very complex and powerful product, and what you have seen in this chapter should serve as little more than a brief introduction that will whet your appetite. In the next chapter, you will look at stored procedures, triggers, InterBase calls, and a few other tricks that should help you grasp the extent of the power in both the local and server-based versions of InterBase.
BCB protects you from the details of how a server handles basic database chores. However, BCB also enables you to tap into the power associated with a particular server. This was one of the most delicate balances that the developers had to consider when they created BCB: How can you make a database tool as generic as possible, without cutting off a programmer's access to the special capabilities of a particular server? The same type of question drove the developers' successful quest to make BCB's language as simple and elegant as possible without cutting off access to the full power of the Windows API.
Now you can forge on to the next chapter. By this time, we are deep into the subject of databases. In fact, the stage is now set to open a view onto the most powerful tools in a database programmer's arsenal. After you master the stored procedures, generators, and triggers shown in the next chapter, you will be entering into the world of real client/server programming as it is done on the professional level. These tools drive the big databases used by corporations, governments, and educational institutions around the world.
©Copyright, Macmillan Computer Publishing. All rights reserved.