Special Edition Using Microsoft BackOffice, Volume I



Chapter 25

Implementing Internet Security

by David L. Williams

Gain a basic understanding of the technologies forming the foundations of the secure Internet communications protocols.
Learn about the components that make up the Internet Security Framework defined by Microsoft and understand what roles these pieces play in the overall system.
Learn how to apply the technologies described to create a secure environment with Internet Information Server and Internet Explorer.

The overwhelming global acceptance of the Internet is providing unprecedented levels of connectivity to computer users around the world. While the potential benefits of this newfound communications infrastructure are immeasurable, many of the possible pitfalls are still unknown. One thing on which most experts would agree, however, is that there has never been a greater need for computer security technologies than there is today. In fact, as more and more people begin to use the Internet for electronic commerce, the need for reliable network security will continue to grow.

This chapter explores the many security issues related to the Internet and explains some basic security concepts and the technologies behind them. It then explains how these technologies fit into the new Microsoft Internet Security Framework and how this framework will help to secure the data of most Internet users.

Internet Security Concerns

The recent explosion in the number of individuals with access to the Internet and its protocols, whether through high-speed T1 connections at the office or modem dial-up connections at home, has caught the attention of corporate America. They see this new global, high-speed communications medium as a way to maintain closer contact with partners, suppliers, and customers. Some of the benefits include the following:

Because of these benefits, many companies have eagerly connected their private networks to the Internet, used the Internet to connect once isolated local area networks (LANs), or begun running Internet protocols on their internal LANs to exploit the information-sharing tools of the World Wide Web. Yet there are a number of security concerns associated with transmitting information, especially sensitive information, across the Internet. These concerns include the privacy, integrity, and authentication of data transmitted across the network.

Privacy

Perhaps the most widely held concern, when it comes to Internet security, is the privacy of transmitted data. The problem is one of transmitting information across a public network without that information being seen by an unknown, possibly hostile third party. The privacy requirement varies greatly depending on the parties involved and the purpose of the information exchange. For instance, most individuals rarely worry about whether their e-mail is seen by someone for whom it is not intended; it is probably not terribly interesting.

However, internal corporate memos transmitted from headquarters to a branch office might make for more interesting reading. Corporations tend to jealously guard the information they deem sensitive to their business operations, as this knowledge, if made available to a competitor, could easily equate to lost revenues or market share. In fact, there are a number of highly skilled, unscrupulous individuals around who intentionally intercept such information in order to make it available to those competitors willing to pay for it. Any security mechanism for Internet communication must adequately address this issue.

Integrity

In addition to privacy, it is essential to have confidence in the integrity of transmitted information. Because of the way the Internet is constructed, any information transmitted from one point to another is often stored temporarily on any number of machines along the way (see Figure 25.1). At any of these intermediate destinations, it is possible to compromise the integrity of the transmitted data through either accidental or intentional actions.

Fig. 25.1

Data transmitted across the Internet typically resides on various machines while in transit.

In other words, an individual with malicious intent could alter the content of the transmitted data at any point along the path after it leaves the source machine and before it arrives at the final destination. One possible reason for doing this is to mislead the recipient in order to gain some advantage. A recipient who did not know the data had been altered would assign it whatever confidence the source warrants.

Another situation might involve a file, possibly a program, that has been placed in an area for download. This file could be altered from its original form either before or after being placed on the server. In the case of a data file, it is possible to change or delete data values in order to mislead the recipient. For a program file, it is possible to attach a virus so that when the program is run, the virus infects the destination machine. It is, therefore, extremely important in the open world of the Internet to have a reliable means of detecting data and files that have been altered.

Authentication

As if privacy and integrity were not enough to deal with, the issue of authentication is one that must be answered before any real commerce can be conducted on the open Internet. Authentication is the act of verifying that a connected client or server is indeed the client or server it claims to be. It is also used to confirm that a file or set of data did, in fact, originate from a known, trusted source.

It is currently possible on the Internet for a machine to intercept a message or a request sent to a specific IP address, and pretend that it is the intended machine (see Figure 25.2). For instance, a machine may masquerade as a server with which a client is attempting to communicate. In this way, a dialog may begin between the client and the false server. The client may be misled into giving out information to the server, assuming that it is the machine for which the data was intended. This is a technique known as masquerading or spoofing, and can work in the opposite direction as well, with the machine masquerading as a valid client and communicating with a valid server.

Fig. 25.2

A machine masquerading as one party in a communication session can appear authentic to an unsuspecting first party.

When it comes to electronic commerce and online credit card transactions, it is vital that the user is confident the server to which he/she transmits private information (such as an account number) is indeed the machine the user believes it to be. Without this confidence, the grand vision of full-blown electronic commerce on the Internet cannot become reality. Authenticating the commercial site to the satisfaction of the user is the key to raising this confidence.

An Introduction to Cryptography

Given the concerns regarding the safe and private transmission of information across the Internet, it may seem that the best course of action is not to use the Internet at all, except possibly to access the latest sports scores on ESPN. However, all of these concerns can be addressed using techniques developed in the field of cryptography.

Cryptography is the science of translating messages into a form that is safe for transmission such that, if intercepted, they are extremely difficult to restore to the original data. Cryptography dates back to the ancient Egyptians and has been used throughout history for securely transporting military and diplomatic secrets. The recent application of high-speed computers to the field of cryptography has yielded more versatile and secure cryptographic systems.

Today's systems employ a technique known as encryption in which complex algorithms are used to transform the digital representation of the information from one mathematical space to another. To perform the encryption, the user utilizes a special algorithm and a unique numeric value called a key. To retrieve the encrypted information, the user must know the corresponding algorithm to decrypt the data and another key, which is in some way related to the key used during the encryption process.

Private-Key Encryption

Until recently, most computer encryption was performed using some type of symmetric algorithm. This is an algorithm in which a single key is used for both encryption and decryption along with some specified mathematical function (see Figure 25.3). Using this approach, data is encrypted by passing it and the key through the mathematical function to produce data that is completely unintelligible.

Fig. 25.3

In private-key encryption, a single key is used to perform both encryption and decryption of message data.

To decrypt the data, the encrypted data and the same key are passed through the inverse of the function. The complexity of the function and the length of the key (how large the number is) determine how difficult it will be to break the encryption without having the key. The major advantage of a symmetric cryptographic system is speed.
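
To make the idea concrete, here is a minimal sketch in Python using the third-party cryptography package (an illustrative modern stand-in; its Fernet recipe uses AES rather than DES, but the single-shared-key principle described above is the same):

  from cryptography.fernet import Fernet

  # Both parties must share this single secret key in advance.
  key = Fernet.generate_key()
  cipher = Fernet(key)

  ciphertext = cipher.encrypt(b"Internal memo: Q3 pricing changes")
  print(ciphertext)                # unintelligible without the key

  # Decryption uses the very same key.
  plaintext = Fernet(key).decrypt(ciphertext)
  print(plaintext)                 # b"Internal memo: Q3 pricing changes"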

Probably the most widely known symmetric cryptographic algorithm is the Data Encryption Standard (DES) algorithm. DES is a very good encryption algorithm that typically uses a 56-bit key. (The key, represented in binary form, could use up to 56 bits.) There are a few limitations to remember, however, when using private-key encryption.

First, because DES and other forms of private-key encryption use a single, private key to perform both the encryption and decryption, any two parties sharing encrypted information must know the private key used for encrypting that information. Therefore, it makes sense to generate the keys as needed and exchange them just prior to transmitting the encrypted data. Since anyone with knowledge of the private key can decode and read the encrypted data, this exchange of keys requires a secure channel. However, a secure channel cannot be established until the exchange of keys is completed. Because of this paradox, the keys must be stored somewhere on each local machine prior to transmitting the data. This means that, when the size of the network grows, the number of keys to be maintained grows exponentially, soon becoming unmanageable (see Figure 25.4).

Fig. 25.4

As more and more computers on a network require secure communications using private-key encryption, the number of keys to be maintained grows exponentially. This not only poses a problem in terms of maintaining keys, it also compromises the confidentiality of these keys.

Second, because the keys must be known prior to the exchange of information, two parties have knowledge of any private key. This implies a trusted relationship between the two parties. If either party violates this trust, intentionally or otherwise, the privacy mechanism breaks down.

Public-Key Encryption

Many of the limitations of private-key encryption were overcome by the advent of public-key encryption. This method for encryption was first put into practice in 1977 by Ronald Rivest, Adi Shamir, and Len Adleman when they created the RSA encryption algorithm. This system consists of two keys: one private and one public. The private key, as the name suggests, is always kept secret. The public key can be made known to anyone without jeopardizing the security of the system.

Public-key encryption works on the basis of a one-way, trap-door function. In a one-way function, the computation in the forward direction is relatively simple, but the calculation in the reverse direction is extremely difficult. A trap-door function is a one-way function in which the reverse direction is easily calculated if a specific piece of information is known. That piece of information for a public-key encryption algorithm is the private key. The public key can be calculated from knowledge of the private key, but deriving the private key from the public key is computationally infeasible. That is why it is safe to make the public key available to anyone.

To perform public-key encryption, the data to be encrypted is passed as one input to the encryption function, with one key from the public/private key pair as the other input. To decrypt, the encrypted data is passed through the decryption function along with the other key from the pair (see Figure 25.5). Public-key encryption works the same way in both directions. In other words, data encrypted using the public key can be decrypted using the private key, and data encrypted using the private key can be decrypted using the public key.

Fig. 25.5

In contrast with private-key encryption, two keys are required to complete an information transfer using public-key encryption.

One drawback to public-key encryption is speed. Because one of the keys is made public and is mathematically related to the private key, public-key algorithms require much longer keys than private-key algorithms in order to obtain the same level of security. This generally translates into more computational time required to process the algorithm.

The RSA system makes it practical to implement encryption on a large network. Public keys can be kept in a public area or distributed by a server and obtained by users as needed. Someone who wants to send an encoded message simply looks in a directory (much like a telephone book) to find the public key of the recipient, and uses that key to perform the encryption. Once the message is encoded, only the private key of the intended recipient will successfully decode the message. If the message is intercepted along the way, the individual intercepting it sees only a garbled mess and has very little hope of breaking the code without utilizing a tremendous amount of computing power. In addition, this approach presupposes no prior relationship between the parties in question, and no trust is implied or required to keep the key private.
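
A brief sketch of this public-encrypt, private-decrypt flow follows, again using the Python cryptography package as an illustrative stand-in (the 2,048-bit key size and OAEP padding are modern choices, not something the chapter prescribes):

  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives import hashes

  # The recipient generates the key pair; only the public half is published.
  private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  public_key = private_key.public_key()

  # Anyone can encrypt with the published public key...
  ciphertext = public_key.encrypt(
      b"account number 4111-0000-0000-0000",
      padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                   algorithm=hashes.SHA256(), label=None),
  )

  # ...but only the holder of the private key can decrypt.
  plaintext = private_key.decrypt(
      ciphertext,
      padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                   algorithm=hashes.SHA256(), label=None),
  )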

Network Credentials

It is now apparent that data encryption is a possible solution to the problem of maintaining privacy while transmitting information across the Internet. Two more concerns still remain regarding the safe and confident utilization of an unsecured network: integrity and authentication. As it turns out, one way to address both of these problems is to employ a mechanism known as a digital signature.

Digital Signatures

A digital signature is just what its name implies. It is a digital code that can be attached to an electronic document and is analogous to a handwritten signature in that it uniquely identifies the individual creating it. The idea behind it is actually quite simple, and makes use of the public-key encryption technology previously described. The basic premise revolves around the one-way, trap-door functions. Specifically, any data encoded with a particular private key can only be decoded using the corresponding public key. Conversely, any data that can be decoded with a certain public key could only have been encoded with the corresponding private key. Because only a single entity knows the private key, successful decoding of data with a particular public key therefore uniquely identifies the owner of the related private key.

In practice, the raw data is generally not directly encoded. It is first passed through something called a hashing algorithm. This hashing algorithm generates a cryptographic digest, which is a unique, short-hand representation of the original data. This digest is much easier to work with than the raw data and generally requires much less processing power to encode and decode. Since encryption, especially public-key encryption, is an expensive process in terms of central processing unit (CPU) cycles, and the hashing algorithms are generally much less complex, there can be a significant reduction in processing requirements. Also, this difference in processing requirements becomes even greater as stronger encryption is used.
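
A digest is easy to demonstrate with Python's standard hashlib module; SHA-256 is used here purely as an example, since the chapter does not name a particular hashing algorithm:

  import hashlib

  message = b"Wire $10,000 to account 12345"
  digest = hashlib.sha256(message).hexdigest()
  print(digest)          # 64 hex characters, regardless of the message length

  # Changing even one character yields a completely different digest.
  tampered = hashlib.sha256(b"Wire $90,000 to account 12345").hexdigest()
  print(digest == tampered)    # False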

To create a digital signature, the information to be transmitted is passed through a hashing algorithm to create a cryptographic digest. The digest is then encrypted with the author's private key, and transmitted along with the original text to the recipient. The recipient then decrypts the transmitted digest with the author's public key. In addition, he also creates a new digest by employing the same hashing function on the message body. If the newly created digest matches the decrypted digest, then the decrypted digest could only have been created using the author's private key. Since the key is private, only the author would have access to it, and the source of the document is confirmed. Also, since the two digests match, the document could not have been altered after the author signed it, and the integrity of the document is established (see Figure 25.6).

Fig. 25.6

Digital signatures can be used to verify both the authenticity and integrity of a document.
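
The complete sign-then-verify flow of Figure 25.6 can be sketched as follows, with the Python cryptography package standing in for whatever signing tools are actually deployed (RSA with SHA-256 is an assumed choice, not one mandated by the chapter):

  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives import hashes
  from cryptography.exceptions import InvalidSignature

  author_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  document = b"The quarterly report, final version"

  # Author: hash the document and encrypt the digest with the private key.
  signature = author_key.sign(
      document,
      padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH),
      hashes.SHA256(),
  )

  # Recipient: recompute the digest and check it against the signature
  # using the author's public key.
  try:
      author_key.public_key().verify(
          signature, document,
          padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH),
          hashes.SHA256(),
      )
      print("Signature valid: author and integrity confirmed")
  except InvalidSignature:
      print("Document altered, or not signed by this key")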

In practice, the sender generally transmits his public key along with the message so that the recipient is not required to look it up. The problem with this approach is that if someone forged the message and then sent his public key to verify the message, the only thing verified is the integrity of the message, not the author. In order to authenticate the author, another mechanism known as a digital ID, or certificate, is employed.

Certificates

A certificate is a set of digital data containing an individual's public key and identity information that has been signed by a well-known, trusted third party. This trusted party, known as a certification authority (CA), can be a part of the internal corporate information systems (IS) department, or a commercial supplier of certificate services. This approach enables the transmission of the sender's public key in the form of a certificate along with the signed document. The recipient verifies the certificate with a well-known public key, which is either stored locally on each machine or in a central location. After the recipient validates the certificate, the signature is validated by using the public key contained in the certificate. This process, once completed, authenticates the author of the information and verifies the integrity of the data.

One advantage to this approach is the difficulty required to masquerade as someone else. Because the key used to begin the authentication process is well-known and will only decode information encrypted with the private key of the trusted third party, it is virtually impossible for any unauthorized individual to masquerade as the trusted party. This promotes the element of trust in the system because anyone wanting to sign a message must be known by the trusted party. In addition, the trusted party provides an extra check by specifying the public key for the signing party. Another advantage is that the machine receiving the information need not be physically connected to the trusted authority. The only thing required at the receiving machine is knowledge of the public key of the trusted authority.

The most widely accepted certificate format is the X.509 standard defined by the Consultative Committee for International Telegraphy and Telephony (CCITT). Figure 25.7 shows an illustration of the X.509 format.

Fig. 25.7

The X.509 certificate provides a standard mechanism for validating a user's public key.
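
Verifying a received certificate against the CA's well-known public key might look roughly like the following Python sketch, which assumes an RSA-signed X.509 certificate and uses hypothetical file names:

  from cryptography import x509
  from cryptography.hazmat.primitives.asymmetric import padding

  cert = x509.load_pem_x509_certificate(open("user_cert.pem", "rb").read())
  ca_cert = x509.load_pem_x509_certificate(open("ca_cert.pem", "rb").read())

  # The CA's public key must successfully verify the signature the CA placed
  # over the certificate body (which contains the user's identity and public key).
  ca_cert.public_key().verify(
      cert.signature,
      cert.tbs_certificate_bytes,
      padding.PKCS1v15(),
      cert.signature_hash_algorithm,
  )

  # If no exception was raised, the certificate is genuine; the public key
  # inside it can now be used to check the sender's digital signature.
  sender_public_key = cert.public_key()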

Certificate Hierarchies

Microsoft Corporation has been working closely with VeriSign, Inc., one of the leading certification authorities, to define technologies for use in authentication and validation. They have a vision that goes beyond the idea of a single, trusted party providing authentication services. Clearly, a single authority could not handle the workload involved with maintaining verification information on every user of the Internet. Therefore, Microsoft and VeriSign envision a hierarchical approach to the verification process (see Figure 25.8). It makes sense for small groups (such as departments, corporations, and government agencies) to maintain records on their own users and validate them appropriately.

Fig. 25.8

The certificate hierarchy could be utilized for authentication when parties associated with different local entities want to exchange secure information. Instead of direct authentication of the parties, the lower-level authorities authenticate the parties and high-level authorities verify the low-level authorities.

However, communicating between those groups requires a higher-level authority to perform the necessary validation. A hierarchy would exist in which local authorities would be validated by high-level authorities. Those, in turn, would be validated by still higher-level authorities. Ultimately, a single, topmost authority would maintain records on the other authorities. In this way, no matter how wide an area the transmission covers, the recipient simply traverses up the hierarchy until he reaches a level that provides the required measure of trust. In addition, it seems likely that multiple, parallel hierarchies would exist in order to provide different types of authentication.

Code Signing

Among the current predictions about what the future of the Internet will bring, many people talk about the potential for full electronic software distribution. Currently, software vendors develop their software, create disks and manuals, place them into boxes, stamp a logo on the box, shrink wrap the whole package, and ship countless numbers of the packages to various resellers around the world. The consumer then travels to the local computer store, purchases the software, takes it home, and installs it. The expectation for the future, much to the dismay of the software resellers, is for the user to simply download an electronic copy of the software and manuals directly from a server on the Internet. In all likelihood, the software would also install itself, making this method of distribution much easier for the consumer.

Until recently, this scenario would be unthinkable due to the risk of viruses, software piracy, and the lack of a secure means of transferring payment to the vendor. However, the public-key encryption, digital signature, and digital certificate technologies come together in a powerful technique referred to as code signing. In this process, the software vendor obtains a certificate from a certification authority. The vendor then signs the software along with a copy of the certificate using his private key. This way, the entire package is secure and verified by the CA. To use Microsoft's analogy, encryption with the private key equates to shrink wrapping the package, and the digital certificate equates to a notary seal. The one item not addressed in this scenario is the electronic payment method. This is discussed in the "Secure Electronic Transactions" section later in this chapter.
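
Stripped of the certificate and packaging details, the core of code signing is the same sign-and-verify operation shown earlier, applied to the program file. The sketch below uses a detached signature and a hypothetical file name, whereas Authenticode actually embeds the signature within the file itself:

  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives import hashes

  vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  # Publisher: read the finished installer and sign its contents.
  program = open("setup.exe", "rb").read()        # hypothetical file name
  signature = vendor_key.sign(
      program,
      padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH),
      hashes.SHA256())
  # The signature and the vendor's certificate are published with the file.

  # Customer: verify before installing.  A virus attached in transit would
  # change the digest, and verify() would raise InvalidSignature.
  vendor_key.public_key().verify(
      signature, program,
      padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH),
      hashes.SHA256())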

The Microsoft Security Framework

Given that brief overview of the technologies involved, it is now time to discuss Microsoft's plan to use their Internet Security Framework to support secure computing over the Internet. This framework is a comprehensive set of technologies, both public-key and password-based, that cover the entire range of secure Internet computing requirements. According to Microsoft, this framework is meant to enable the user to do the following:

In this framework, Microsoft supports many of the current standards for secure communications on the Internet, as well as supplementing and enhancing a few of them. Its goal is to facilitate a migration to secure network computing without requiring existing security systems to be discarded. In addition, many of the network security features, such as digital certificates, integrate into the existing security model for Windows NT so that network administrators can use the tools with which they are already familiar. This also serves to further blur the distinction between the Internet and the intranet.

Authenticode

The Internet Security Framework includes a set of utilities, dubbed Authenticode, that enable the user to digitally sign files and then verify that the files were signed correctly. These utilities are part of the ActiveX Software Development Kit (SDK) and are freely available from Microsoft. The utilities included are the following:

  MakeCert, which generates a key pair and a test X.509 certificate

  Cert2SPC, which wraps an X.509 certificate into a software publishing certificate (SPC)

  SignCode, which creates a digest of a file and signs the digest

  PeSigMgr, which confirms that an SPC is correctly embedded in a signed file

  ChkTrust, which verifies the signature and integrity of a signed file


NOTE: In order to use the Authenticode utilities, the CryptoAPI must already be installed and running. To check this, go to a command prompt and type api *. This command will generate a series of success messages. If no messages are generated, the CryptoAPI is either not installed or not running.

Microsoft recommends the following steps for using these utilities to sign a file:

  1. Run MakeCert to generate a public/private key pair, associate the keys with a specified publisher's name, and create an X.509 certificate signed by a root key. If no root key is specified, it generates one for you. (A root key is the public key of a certification authority.)

  2. Run Cert2SPC to create a software publishing certificate (SPC), which wraps the X.509 certificate into a signed data object.


CAUTION: The software publishing certificate generated by Cert2SPC is for test purposes only and is not for use in signing files or software intended for publication. To obtain a valid SPC, contact a certification authority.

  3. Run SignCode to create a digest of the file and sign the digest using the private key generated in Step 1 and the SPC generated in Step 2.

Once the file is signed, it is a good idea to verify both the signature and the integrity of the file, as in the following process:

  1. Run PeSigMgr to confirm that the valid SPC was correctly embedded into the file.

  2. Run ChkTrust to create a new digest of the information stored within the file and compare it to the encrypted digest that is stored within the signed data object in the file.

Qualifications for Software Publishing Certificates

In order for the system to work as promised, it is vital for the public to maintain confidence in the assurances offered by the software publishing certificates. Toward this end, Microsoft has put forth some suggested criteria to be applied consistently to each SPC applicant. This would ensure that certain standards are met by the applicants, regardless of the authority supplying the certificate.

In their criteria, Microsoft differentiates between commercial and individual software publishers. The qualifications for these two types of publishers differ somewhat.

Individual Qualifications

To obtain an individual SPC, the user must meet the following qualifications:

Commercial Qualifications

In order to obtain a commercial SPC, the applicant must meet the qualifications specified for the individual publisher plus the two following additional requirements:

Wallet

The wallet resides on the user's computer or hardware device, such as a hard card, and provides a mechanism for secure storage of private information, such as digital IDs, certificates, electronic receipts, credit card numbers, and private keys. The wallet is a service accessible through a standard programming interface and available based on an access control policy. Some information, such as private keys, will be made available for use programmatically, but will not be directly accessible. The user can transport the wallet from one device to another by means of the Personal Information Exchange protocol.

Personal Information Exchange

Personal Information Exchange (PFX) is a set of platform-independent protocols that enable the user to transport personal, secure, sensitive information across an unsecured medium. It enables users to maintain a single set of personal ID information and import or export it whenever they need to move to a different machine. For example, if a user is working at the office and must go home to complete the work, that individual can use PFX to transport the information to a home computer. This is true even if the two computers are based on different platforms.


NOTE: To transfer personal information between different platforms, PFX must be implemented on both platforms.

PFX defines different modes of operation, depending on the circumstances of the transfer and the security required. It describes the following two secure protocols for multi-platform exchanges:

For the purposes of discussion in this chapter, emphasis is given to the Direct Exchange protocol. This protocol defines a protocol data unit (PDU) referred to as a safe. A safe is a secure "container" for holding private data during transport, and can be imported or exported on any compliant platform. The safe is segmented into four "compartments" to handle the following different types of data:

The PFX protocol also defines something referred to as baggage, which can hold private key information that has already been protected with an encryption algorithm. The purpose of carrying the baggage outside the safe is to avoid superencryption, in which certain pieces of data are encrypted multiple times. The entire safe and baggage, along with a version tag, are included in an integrity-mode wrapper to preserve the integrity of the data. This wrapper cannot keep anyone from tampering with the data, but it will enable detection of such tampering.

To exercise the direct exchange of information, the user must decide between two types of privacy modes and two types of integrity modes, based on the kind of transfer to be made. All four modes are as follows:

As discussed earlier, the privacy modes exist to ensure the secrecy of the data being transferred, and the integrity modes allow for detection of any damage or alteration of the data. The public-key privacy mode performs standard public-key encryption using a single-purpose public key for the destination platform. This privacy mode is the most secure and provides results suitable for transporting the data across a public communication channel. The public-key integrity mode uses a private key from the source platform to create a trusted digital signature (signed by a CA) to protect the integrity of the package.


NOTE: The keys used for the PFX protocol are platform-specific and are dedicated for use with PFX. They are not associated with any user.

The password privacy mode is interesting in that it does not actually use a specified password to protect the data. Instead, it uses a combination of the username and password to generate a reproducible private key used for symmetric encryption (private-key encryption) of the data. The key is then regenerated at the destination utilizing the same process, and that key is used to decrypt the data. The strength of this mode is determined by the length of the password (which should be long and difficult to guess), but it is not suitable for enabling public transport of the data.
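
The following Python sketch illustrates the idea, not the actual PFX algorithm (which the chapter does not detail): it derives a reproducible key from the username and password with PBKDF2 from the cryptography package, then uses that key for symmetric encryption. Using the username as the salt is purely an illustrative simplification.

  import base64
  from cryptography.fernet import Fernet
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

  def derive_key(username: str, password: str) -> bytes:
      # Both ends run the same derivation, so the same key is reproduced
      # at the destination without ever being transmitted.
      kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                       salt=username.encode(), iterations=480_000)
      return base64.urlsafe_b64encode(kdf.derive(password.encode()))

  # Source platform
  key = derive_key("jsmith", "a long, hard-to-guess passphrase")
  protected = Fernet(key).encrypt(b"private keys and certificates...")

  # Destination platform: same username and password, hence the same key
  key2 = derive_key("jsmith", "a long, hard-to-guess passphrase")
  recovered = Fernet(key2).decrypt(protected)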

The password integrity mode works in much the same way. A unique, integrity key is generated based on the password supplied. Then, a message authentication code (MAC) is created as a unique function of the data and integrity key. The MAC is then regenerated on the destination platform using the same password and transmitted data. The newly generated MAC is then compared to the transmitted MAC, verifying integrity.
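
A minimal sketch of the same idea using Python's standard hmac module follows; HMAC-SHA-256 is an assumed stand-in for whatever MAC function PFX actually specifies:

  import hmac, hashlib

  integrity_key = b"key derived from the password, as above"
  data = b"contents of the PFX safe"

  # Source platform computes the MAC and sends it along with the data.
  mac = hmac.new(integrity_key, data, hashlib.sha256).digest()

  # Destination platform regenerates the MAC from the same password-derived
  # key and the received data, then compares the two values.
  received_mac = mac                  # stands in for the transmitted value
  ok = hmac.compare_digest(
      hmac.new(integrity_key, data, hashlib.sha256).digest(),
      received_mac)
  print("Integrity verified" if ok else "Data was altered in transit")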

In general, it is much better to use the public-key modes when possible. However, using these modes requires advance knowledge of the destination platform as well as the keys for that platform. If access to the keys exists, use the public-key privacy mode. It allows for public transport of the data. If not, password mode can be used, but this requires physical protection of the data during transit. An example of this would be a floppy disc stored in a safe. The integrity modes are not as critical because each will protect the data from tampering. However, it is always better to provide the extra security of public-key encryption whenever possible.

Certificate Server

As discussed in the "Network Credentials" section, certificates afford the capability of authenticating a user's digital signature by providing the digital equivalent to a notary seal supplied by a trusted party. Use of these certificates makes it easy to authenticate the parties involved in network connections and authors of electronic documents. Microsoft's certificate server enables control to be exercised over certificate management. Its primary functions are as follows:

The use of the certificate server enables organizations to take control of authentication on a group, department, or enterprise level. In addition, the organization maintains total control over the policies and procedures in effect regarding the issuance of certificates. Also, the server is policy independent and has no predefined criteria for determining certificate recipients. This ensures that the organization has flexibility in adapting its procedures over time. Moreover, it enables the certificates to be mapped to Windows NT permission groups, providing powerful use of current security settings as well as familiar tools.

Thanks to its use of the CryptoAPI, the certificate server is isolated from the encryption keys themselves. This provides added security as well as enabling the use of any key management system according to individual needs. It also maintains transport independence, which means that requests for certificates can be received from the following variety of sources:

Finally, the certificate server maintains adherence to standards, such as the X.509 certificate format. As such, it works with non-Microsoft clients, such as Netscape's Navigator Web browser. It also supports alternate certificate formats, allowing for use with third-party security systems as well as future changes in the certificate specifications.

Secure Channel Services

No security framework would be complete without a specification for private communications between two points. Microsoft, therefore, defines secure channel services, which are responsible for establishing and maintaining a secure point-to-point connection with the following properties:

A private connection means simply that no unauthorized party can view the information passed between the two points. Actually, it is possible for a third party to "eavesdrop" on the communications due to the structure of the Internet. However, anything seen would be unintelligible because of the encryption performed on the data prior to transmission. Therefore, the data passed from application to application along the connection is kept private.

A reliable connection is one in which errors are detected if they exist. This is accomplished using a digital signature on a hash made from the text of the data, then transmitting this signed hash along with the data. On the receiving end, if the signed hash (which cannot be successfully modified) matches what the hash of the data should look like, the integrity of the data is demonstrated with confidence. Should an error be detected, it may be possible to correct the error through retransmission. If not, the secure connection is terminated.

An authenticated connection is one in which at least the server is proven to be authentic. Optionally, the server may request the client to authenticate itself as well. The authentication is performed by means of a digital certificate signed by a trusted third party. Authenticating the connection makes it virtually impossible for anyone to masquerade as the server and attempt to get the client to reveal its secrets.

To achieve this secure connection, Secure Channel Services defines three protocols with which it will operate. These are Secure Sockets Layer (SSL) 2.0, SSL 3.0, and Private Communications Technology (PCT) 1.0. In addition, Transport Layer Security (TLS) will be supported upon its release. The underlying mechanisms for all of these protocols are quite similar. The basic process involves the following actions:

The goal of the Secure Channel Services is to provide a secure socket connection without incurring a major performance penalty. It provides quick reconnect times in addition to encryption and message authentication. It uses public-key encryption for the handshake and key exchange phase, then switches to private-key encryption for all remaining communications.

All data flowing across the secured channel is broken into records of a manageable size. A MAC is added to the record to ensure integrity. The record and MAC are encrypted using a symmetric encryption algorithm and transmitted across the channel. On the receiving end, the packet is decrypted using the same symmetric key. The MAC is then compared with one newly generated from the received data. If the integrity is confirmed, the record is then forwarded to the receiving application.
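
The following Python sketch mimics that record processing with stand-in primitives (Fernet for the negotiated symmetric cipher, HMAC-SHA-256 for the MAC); the actual SSL and PCT record formats differ in detail:

  import hmac, hashlib, os
  from cryptography.fernet import Fernet

  session_key = Fernet.generate_key()     # agreed on during the handshake
  mac_key = os.urandom(32)                # also agreed on during the handshake
  cipher = Fernet(session_key)

  RECORD_SIZE = 16_384                    # illustrative record length

  def send(data: bytes):
      # Break the stream into records, append a MAC, encrypt, transmit.
      for i in range(0, len(data), RECORD_SIZE):
          record = data[i:i + RECORD_SIZE]
          mac = hmac.new(mac_key, record, hashlib.sha256).digest()
          yield cipher.encrypt(record + mac)

  def receive(packets):
      # Decrypt each record, regenerate the MAC, and compare before use.
      for packet in packets:
          plaintext = cipher.decrypt(packet)
          record, mac = plaintext[:-32], plaintext[-32:]
          if not hmac.compare_digest(
                  hmac.new(mac_key, record, hashlib.sha256).digest(), mac):
              raise ValueError("record failed integrity check")
          yield record

  payload = b"GET /private/report.htm HTTP/1.0\r\n" * 1000
  assert b"".join(receive(send(payload))) == payload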

To enable all of the protocols to interact, a universal client hello format was devised. This allows great flexibility in terms of connecting clients running one protocol with servers running another, and provides for backward compatibility. For instance, it enables an SSL 3.0 client to connect with an SSL 2.0 server, or a PCT 1.0 client to connect with an SSL 2.0 (or 3.0) server.

Secure Sockets Layer (SSL)

SSL 2.0 operates almost exactly as in the process described in the previous section. It is the first of the protocols supported by Secure Channel Services, was defined by Netscape Communications, and appeared originally in late 1994. SSL has gained significant popularity with the support of many of the large publishers of World Wide Web server and client software. However, there are a few weaknesses to version 2.0. Critics complain that it requires too many handshake rounds to establish a connection. It also contains a possible security hole during the client authentication phase.

To address some of these problems, Netscape revised SSL with a new 3.0 version. Among the most important improvements provided in this new version is the reduction in the number of rounds required in the initial handshake phase. This significantly reduces the overhead of establishing a new connection. SSL 3.0 is also backward compatible with version 2.0.

Private Communications Technology (PCT)

PCT was developed by Microsoft in late 1995. It is based heavily on the SSL protocols and uses essentially the same record format. Its key improvements revolve around speed. It requires fewer messages and shorter message structures than SSL 2.0. The exchange of messages during the handshake phase has been shortened so much that establishing an initial connection without authenticating the client requires only one message in each direction, and no connection requires more than two messages in each direction.

The primary differences between SSL and PCT are as follows:

Transport Layer Security (TLS)

Moving toward a unified, open standard, Microsoft has offered a discussion draft called Secure Transport Layer Protocol (STLP). It has become known as TLS, and it attempts to combine the SSL 3.0 and PCT protocols. It starts essentially with SSL 3.0 and adds features from PCT. The goal is to provide a single protocol that is simpler, more robust, and more scalable than either SSL or PCT.

CryptoAPI

To make it easier for developers to take advantage of the technologies discussed in this chapter, Microsoft has produced the CryptoAPI. This application programming interface (API) provides services to enable developers to readily add cryptographic as well as certificate management functions to their 32-bit applications without requiring them to have in-depth knowledge of the underlying implementations or algorithms. It also protects the sensitive private key information from direct access by the applications utilizing the cryptographic functions.

Currently there are two versions of this API. Version 1.0 provides all of the basic encryption and decryption functionality in addition to key management facilities. Version 2.0 implements all of the version 1.0 functionality and adds functionality for using and managing certificates. Version 2.0 provides calls broken into the following four functional areas:

Cryptographic Service Provider

The CryptoAPI functions abstract out a security layer and insulate the developer from the details of that layer. They also take a "black box" approach, enabling the functionality of the security layer to be implemented as separate modules called cryptographic service providers (CSPs). Each CSP provides its own implementation of the CryptoAPI. The underlying algorithms and key sizes may differ as well as the methods of implementation. For example, the CSP bundled with the system is called the Microsoft RSA Base Provider. This CSP implements the CryptoAPI using RSA encryption algorithms and key lengths suitable for export out of the United States. Another CSP may implement strong encryption (long keys) not eligible for export. Still another may require use of a smart card for verification of the user. Multiple CSPs can reside on a system simultaneously. The application, prior to exercising any of the cryptographic functions, first acquires a handle to the desired CSP.
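
As a rough illustration of acquiring such a handle, the following Python fragment calls the documented Win32 entry point CryptAcquireContext through ctypes. It runs only on a Windows system with the CryptoAPI present, and it is a sketch of the idea rather than production code:

  import ctypes
  from ctypes import wintypes

  advapi32 = ctypes.WinDLL("advapi32")      # Windows only

  PROV_RSA_FULL = 1                         # provider type of the RSA Base Provider
  CRYPT_VERIFYCONTEXT = 0xF0000000          # no named key container required

  advapi32.CryptAcquireContextW.argtypes = [
      ctypes.POINTER(wintypes.HANDLE), wintypes.LPCWSTR, wintypes.LPCWSTR,
      wintypes.DWORD, wintypes.DWORD]
  advapi32.CryptAcquireContextW.restype = wintypes.BOOL

  hprov = wintypes.HANDLE()
  if advapi32.CryptAcquireContextW(ctypes.byref(hprov), None, None,
                                   PROV_RSA_FULL, CRYPT_VERIFYCONTEXT):
      print("Acquired a handle to the default CSP")
      advapi32.CryptReleaseContext(hprov, 0)
  else:
      print("No CSP of this type is installed")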

To install a CSP, first obtain a copy of the CSP on whatever distribution media is supplied by the vendor. Then execute the setup procedure provided with the distribution. This procedure should copy all necessary files to their appropriate places and perform any registry modifications required.

Security Support Provider Interface (SSPI)

The security support provider interface is a layer residing between an application and the CryptoAPI. It provides applications with a very high-level interface to security functions and enables any application using SSPI to connect to any security modules using SSPI. These security modules provide applications with an authenticated connection. They are usually implemented using the CryptoAPI and provide a level of abstraction and extensibility to the system.

Secure Electronic Transactions (SET)

In order to fund many of the ventures now appearing on the Internet, it is clear that some type of secure electronic transfer of funds will need to be possible. In addition, it will need to be such an easy process that the average consumer with a computer at home will feel just as comfortable purchasing an item from the Internet Shopping Network as from the Cable Shopping Network. One might feel that all of the encryption and authentication technology just discussed would be fine for making online purchases with credit cards. In fact, that may be a fair assumption. However, there are a couple of limitations associated with making electronic transactions using only those technologies:

To answer this need, dozens of corporations have been working on protocols for implementing an electronic payment system. One protocol has begun to emerge from a combination of two previous front runners. Secure electronic transactions (SET) is a protocol defined jointly by VISA and MasterCard, with the help of a partnership of companies, including GTE, IBM, Microsoft, Netscape, SAIC, Terisa, and VeriSign. With the support of these industry giants, it is not difficult to see why SET has become the leading contender for an electronic transaction standard.

SET is an open specification which provides for the protection of payment card purchases on any type of network. The primary goals of SET include the following:

The steps involved in making an online purchase using SET are demonstrated by the following sample session:

  1. The customer, using Internet Explorer (or Netscape Navigator), browses the site of an online merchant.

  2. When the customer decides to make a purchase, an encrypted SET charge slip is transmitted to the merchant along with a credit draft for the specified purchase amount.

  3. The merchant contacts a processing bank to obtain approval for the transfer. The charge slip (still encrypted) and a copy of the draft are sent to the bank (the charge slip never having been seen by the merchant).

  4. The bank contacts VISA to authorize and settle the transfer.

  5. VISA approves the transfer for the processing bank and notifies the issuing bank of the transfer.

  6. The processing bank notifies the merchant that the transfer was approved.

  7. The merchant sends an electronic receipt to the customer. The receipt is electronically signed by the merchant using the merchant's private key and is legally binding.

  8. At the end of the month, the issuing bank sends the customer a credit card statement containing (among other things) the purchase from the merchant.

  9. The customer (presumably) pays the issuing bank (maybe electronically).

Although this may seem like a long and complicated process, it is not that different from the process taking place today. However, in the scenario above, no credit card was ever physically handed to a merchant and no credit card number was given over the phone.

Securing Internet Information Server

Whether you are using Internet Information Server (IIS) in an Internet or intranet environment, you are opening your server to access by a number of users. In most cases, these individuals will be unknown to you. Therefore, it is absolutely essential to take every possible precaution in securing the server against unauthorized access. In order to accomplish this, it is necessary to understand the security process implemented by IIS to restrict server access.

IIS Security Process

The security process implemented by IIS involves traversing a set of security layers, each of which must be successfully passed in order to move to the next layer and eventually to the requested resource. There are essentially four such security layers standing between a user making a request for a resource through IIS and the requested resource, as follows:

  1. IIS first checks the Internet Protocol (IP) address of the user against a list of allowed IP addresses. If the IP address is rejected, the security check fails.

  2. IIS validates the username and password against the valid Windows NT user accounts. Even users accessing the IIS services by anonymous access have a user account set up specifically for the anonymous user.

  3. Any resource requested from IIS must reside in a directory (or directory tree) that has been identified to IIS. A request for any resource residing outside one of these predefined (virtual) directories will be denied. Also, IIS enables the setting of permissions on the defined directories. These permissions are imposed by IIS and are in addition to any NTFS permissions found.

  4. Any user requesting a resource must have the appropriate NTFS permissions for that resource. For instance, if a file is requested by a user, that user's Windows NT account must have read access to the file, or the request is denied.

Given this list of security layers, it seems prudent to address each layer and apply the most rigid requirements possible, according to the type of access you want to allow. This way, unnecessary security holes can be eliminated and the risks of unauthorized access to your server minimized.

Filtering IP Addresses

The first line of defense for IIS is to filter out IP addresses for any unauthorized clients. This may or may not be feasible for your particular case, as it depends a great deal on whether IIS is being used for an Internet or intranet server. If this is an Internet server, it seems likely that anyone is welcome and no filtering will be used unless you want to single out individual addresses from which users may be harassing your server. However, an intranet is a much more limited system, and IP filtering may be a good fit. In fact, subnet masks provide a good means to block out (or allow) large groups of IP addresses.

To set up IP filtering, you have two basic choices. You can either grant access to everyone except those specifically listed as blocked, or deny access to everyone except those specifically listed as granted. The following are the steps for blocking an IP address:

  1. Start the Internet Service Manager.

  2. Click the Advanced tab.

  3. Click the Granted Access option button. This specifies that all users will be granted access unless explicitly listed in the denied access list.

  4. Click the Add button.

  5. If you want to enter a single address to block, click the Single Computer option button. In the IP Address box, type the IP address to be blocked. If you would rather use a name, click the button next to the IP Address box, and type the name of the computer to be blocked (e.g., www.gasullivan.com).

  6. If you want to block a range of addresses, click Group of Computers. In the IP Address box, enter the first IP address in the range. In the Subnet Mask box, type the subnet corresponding to the group to be blocked.

  7. Click the OK button.

  8. In the Advanced tab, click the OK button.

The following is the process for granting access to an IP address:

  1. Start the Internet Service Manager.

  2. Click the Advanced tab.

  3. Click the Denied Access option button. This specifies that all users will be denied access unless explicitly listed in the granted access list.

  4. Click the Add button.

  5. If you want to enter a single address to allow access to, click the Single Computer option button. In the IP Address box, type the IP address to be granted access. If you would rather use a name, click the button next to the IP Address box, and type the name of the computer to be granted access (e.g., www.gasullivan.com).

  6. If you want to grant access to a range of addresses, click Group of Computers. In the IP Address box, enter the first IP address in the range. In the Subnet Mask box, type the subnet corresponding to the group to be granted access.

  7. Click the OK button.

  8. In the Advanced tab, click the OK button.

Windows NT User Accounts

Anyone who has browsed the World Wide Web probably knows that the vast majority of sites do not require a username and password. They use an anonymous account that works for anyone accessing the site. That will probably also be the case for you if you are running a general Internet site. However, an intranet site could operate either way depending upon the type of information being shared. For sharing general information, such as memos or postings, it may be preferable to allow anonymous access. However, for something like a departmental server or a Web application that has a small, known user base, it may be wise to create Windows NT user accounts for each user and to challenge users with username and password authentication before allowing access.

There are a couple of points to consider before making this decision. First, since the gopher service is always an anonymous protocol, the discussion here centers on File Transfer Protocol (FTP) and Hypertext Transfer Protocol (HTTP) access. With FTP, it is generally safer to allow anonymous access rather than authenticated. That may sound backwards, but the reason is simple. FTP does not encode password information prior to transmission. Therefore, anyone watching the network with a protocol analyzer would be able to intercept the username and password information in plain text. In other words, by requiring authentication, you are actually putting the account at risk. Therefore, it is better to limit what is available via FTP and to allow anonymous access to that information.


CAUTION: FTP is one of the most difficult of all the Internet protocols to secure. For this reason, it is strongly recommended that you consider carefully your requirements for running the FTP server.

Second, IIS supports both HTTP basic authentication and the Windows NT Challenge/Response protocol for client authentication. Both of these authentication methods enable the client to send username and password information to the server. However, in basic authentication, both the username and password are transmitted in plain text. This creates the same risk of interception as in FTP authentication. Clients implementing the Windows NT Challenge/Response authentication protocol, however, transmit an encrypted username and password. This eliminates the risk of interception and provides much greater security. To date, however, Internet Explorer version 2.0 and higher are the only browsers supporting this protocol.
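
The risk is easy to demonstrate. Strictly speaking, basic authentication base64-encodes the username and password, but base64 offers no secrecy at all, so the effect is the same as plain text, as this small Python sketch shows (the credentials are hypothetical):

  import base64

  # What an HTTP client sends for basic authentication:
  header_value = base64.b64encode(b"jsmith:MyPassword").decode()
  print("Authorization: Basic " + header_value)

  # Anyone capturing the packet recovers the credentials trivially:
  print(base64.b64decode(header_value))    # b'jsmith:MyPassword'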


CAUTION: The plain text authentication information is not a problem if secure channel services are utilized. In this case, all data traveling across the link is encrypted, regardless of the protocol being used.

IIS Directory Access

No directory can be accessed through IIS until a virtual directory is created for it. The virtual directory is a directory name that is mapped to a physical directory on either a local or remote disk. A Uniform Resource Locator (URL) referencing the virtual directory is remapped by IIS to the physical directory to access a requested resource. Therefore, be sure to create virtual directories only when absolutely necessary.

Further, when creating virtual directories with IIS, it is possible to set any of three permission types (depending on the type of service) for the virtual directory and all subdirectories: read, write, and execute. It is important not to overlook these permission settings, as they are separate from any permissions maintained by NTFS and provide another level of security. The general rules for setting these directory permissions are as follows:


CAUTION: In general, it is bad practice to enable both read and write privileges on any given directory. Therefore, it is a good idea to segment a site to keep all relevant documents together and all related executables together, but separate from one another. Also, it is wise to install the publishing services on a completely separate disk partition from the one containing the Windows NT operating system. This provides a fairly good barrier between the server and the operating system files.

To set or modify access permissions for a virtual directory, perform the following steps:

  1. Open Internet Service Manager.

  2. Double-click the WWW service.

  3. Click the Directories tab.

  4. Select the folder for which you want to set permissions.

  5. Click the Edit Properties button.

  6. Select the appropriate check box.

  7. Click OK, and then click OK again.

NTFS File Permissions

The last line of defense lies in NTFS's capability to restrict user access on a file-by-file basis. Ensure that all user and group privileges are appropriately restricted. Follow the guidelines outlined in the previous section, "IIS Directory Access," to determine which permissions to set for files in the IIS publishing directory trees. Also, do not allow the anonymous user account any access to files outside these trees.

Security Policies

The most important thing to remember about security is that any secure system is only as good as its weakest link. More often than not, that weak link is in the enforcement of the security policy. One example of this is a mainframe system which requires user ID/password authentication to log on. If it is possible to call a help desk and get a password changed for a specified user ID without presenting any form of identification, the policy has been circumvented.

It is often difficult to decide what the security policies should be for an organization. There are always tradeoffs in security versus usability, and the policies depend upon the individual needs of the organization involved. However, once the policies are put into place, it is important to follow them consistently, or unexpected security holes may appear.

Implementing Secure Channel Services

When it comes to an Internet application involving electronic commerce or an intranet application enabling access to sensitive information, enabling secure channel services on the server seems prudent. This enables all data flowing between the client and server (both directions) to be encrypted for privacy and integrity. Secure channel services can be enabled on a directory basis so that it is enforced only for data requiring secure transport. This way, a single server can host multiple applications: some secure, some not, and some requiring secure channel services for only portions of their available data.


CAUTION: Use of secure channel services does impose a performance penalty. For this reason, it is a good idea to determine which parts of the site really contain sensitive information and which do not. Then, only implement secure channel services on those portions required.


NOTE: To enable secure channel services on the entire Web site, just enable these services on the root IIS directory. Otherwise, enable services on each directory as desired.

To enable secure channel services on IIS, the following steps must be performed:

  1. Generate a key pair and a request file.

  2. Obtain a certificate.

  3. Install the certificate on IIS.

  4. Activate secure channel services on the desired directories.

Generating Keys

In order to generate keys for use in obtaining certificates, IIS comes with a utility called the Key Manager. This utility can be accessed by either clicking the Key Manager icon in the Internet Information Server program group, or launching the Microsoft Internet Service Manager and selecting Tools, Key Manager. Once Key Manager is open, perform the following procedure:

  1. Choose Key, Create New Key.

  2. Fill in all information in the Create New Key and Certificate Request dialog boxes.


CAUTION: Commas should not be used in the fields of the Create New Key and Certificate Request dialog boxes. They indicate an end-of-field marker and will cause Key Manager to generate an improper certificate request without warning the user.

  3. Click the OK button.

  4. When another dialog box appears prompting you to re-enter the password, enter the password again and click the OK button. This step simply ensures that the password was not typed incorrectly.

Upon completion of the form, the Key Manager will generate a file containing the key pair just created, and a second file containing a certificate request. Also, the name of the new key appears in the Key Manager window. You are now ready to obtain a valid certificate.
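
While the exact contents can vary, the request file is a plain text file that typically contains a few administrative header lines followed by a base64-encoded block resembling the following (the encoded data shown here is abbreviated and purely illustrative):

-----BEGIN NEW CERTIFICATE REQUEST-----
MIIBnTCCAQYCAQAwXzELMAkGA1UEBhMCVVMx ... (base64-encoded request data)
-----END NEW CERTIFICATE REQUEST-----

This is the file you send to (or paste into a form provided by) the certification authority when requesting a certificate.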


NOTE: Do not use the key generated by Key Manager on the Internet until a valid certificate is obtained from a certification authority. This can be either an internal authority or a commercial authority, such as VeriSign.

Obtaining and Installing Certificates

Once you have a certificate request file, you can obtain a valid certificate. The procedure for this may vary from one CA to another, and will depend on the type of certificate requested as well as the type of CA. Eventually, you will receive a valid certificate from the CA. It is now time to install this certificate on IIS. To install the certificate, perform the following steps:

  1. Start Key Manager by either clicking the Key Manager icon in the Internet Information Server program group or launching the Microsoft Internet Service Manager and selecting Tools, Key Manager.

  2. Select the key pair that corresponds to the certificate request file used to obtain the certificate.

  3. Select Key, Install Key Certificate.

  4. Select the certificate file containing the certificate supplied by the CA, and click Open.

  5. When prompted, type the same password that you used when you created the key pair.

  6. Select Servers, Commit Changes Now.

  7. Click OK to commit the changes.

Enabling Secure Sockets Layer

In order to use the newly installed certificate, it is necessary to enable Secure Sockets Layer on at least one virtual directory for IIS. This can be done for the root directory to secure the entire site or on a directory-by-directory basis, but any subdirectories of secure directories are also secure. Simply perform the following procedure:

  1. Start the Internet Service Manager.

  2. Double-click the WWW service to open the Service Properties dialog box.

  3. Click the Directories tab.

  4. Select the folder for which SSL security is desired, then click Edit Properties.

  5. Select the Require secure SSL channel option, and then click OK.


NOTE: Once a directory is set to require SSL, a URL referencing a document in that directory must use HTTPS rather than HTTP, indicating the secure connection.
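
For example, a hyperlink that points into a directory requiring SSL might look like the following (the path shown is hypothetical):

<A HREF="https://www.gasullivan.com/orders/order.htm">Place a secure order</A>

Requesting the same document with an http:// URL fails once the directory is set to require a secure channel.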

Security and Internet Explorer

Internet Explorer 3.0 provides facilities for implementing most of the security mechanisms described in this chapter. These features make it possible to complete the client side of the security framework when implementing an intranet application, as well as to provide client authentication in a public Internet environment. In addition, Internet Explorer affords protection against false server sites and malicious applications through authentication of the server and validation of an application's author.

Certificates

Internet Explorer manages certificates that enable authentication to a server, identifying you uniquely as you connect to the server. It also enables you to specify both trusted publishers and certification authorities. If you access a site from which you begin to download active content, Internet Explorer presents you with a notification of the publisher's certificate if one exists. You can then proceed to download the content or block it. Also, you can specify that all future software published by the same vendor or any publisher with credentials from the same certification authority be considered safe, and that no such future notification is needed (see Figure 25.9).

Fig. 25.9

Microsoft's Internet Explorer 3.0 displays the certificate accompanying downloaded software and prompts the user for the appropriate action.

Once a publisher or certification authority has been added to the list of trusted publishers, it is possible to view and delete them from the list by performing the following steps:

  1. Select View, Options.

  2. Select the Security tab.

  3. Press the Publishers button to view all trusted publishers and certification authorities.

  4. View or remove any software publisher or CA from the list.

  5. Press the OK button.

In addition to software publishing, there is a similar mechanism for the recognition of trusted servers. More specifically, it is possible to indicate that any server possessing a certificate from a trusted CA should be trusted. To view and/or delete a CA from this list, perform the following steps:

  1. Select View, Options.

  2. Select the Security tab.

  3. Press the Sites button to view all trusted certification authorities.

  4. View or remove any CA from the list.

  5. Press the OK button.

Finally, it is possible to view personal certificates as well. These personal certificates are used for client authentication to a server and uniquely identify the user. They are used to certify anything from e-mail addresses to credit cards and bank accounts. The procedure for obtaining and installing a certificate varies from one CA to another. Generally, a request is made containing all personal information pertaining to the type of certificate desired. After verifying all of the information, the CA delivers the digital ID using an appropriate means. The certificate is then installed into Internet Explorer, usually in some automated process.

To view any installed personal certificate, perform the following steps:

  1. Select View, Options.

  2. Select the Security tab.

  3. Press the Personal button.

  4. Select the certificate from the list, and press the View Certificate button.

  5. Press the OK button.

Active Content

In addition to utilizing certificates for authentication, Internet Explorer also provides protection from active content, such as ActiveX controls and Java applets. This protection integrates quite closely with digital certificates because much of the protection offered rests on running only applications whose source is trusted.

Both Java applets and ActiveX controls are appearing on a growing number of sites across the Internet. It is important to understand exactly what risks are involved in running these executables downloaded from the Internet.

Java applications do not actually exist as executable code, but rather as a set of operations to be performed by the Java Virtual Machine. Alternatively, a just-in-time (JIT) compiler will compile the Java byte codes into local machine code, which then gets executed directly on the processor. The important point to remember is that in both cases, there is a layer between the downloaded program and the processor. This layer isolates the machine from the Java program and provides protection for the computer. Because of this, Java programs are tightly restricted in their capability to access the local computer's hardware and are, therefore, generally safer to run.

In contrast, ActiveX components are compiled executable modules that run directly on the local hardware and can access nearly anything they want, constrained only by the specific operating system safeguards. For instance, it has been demonstrated that an ActiveX component could shut down a Windows 95 computer without the permission of the user. Fortunately, this demonstration did nothing malicious and warned the user prior to performing the shutdown, but it was a grave reminder of the type of havoc that could be wreaked if malicious intent existed.

It is valid to point out that software downloaded from a trusted source probably will not contain malicious code, especially since its integrity is also checked, disallowing any tampering. However, errant code is not detected and could accidentally produce similar results. This is really no different from any software purchased and loaded in a traditional manner. Therefore, the same judgment should be applied when deciding which ActiveX components are allowed to run.

To access the settings for allowing and disallowing ActiveX and Java content, perform the following steps:

  1. Choose View, Options.

  2. Select the Security tab.

  3. In the Active Content frame, select the desired security settings.

  4. Press the Safety Level button.

  5. Select the security level desired. The recommended settings are generally a good choice.

  6. Press the OK button to exit the Safety Level dialog box.

  7. Press the OK button to exit the Options dialog box, or press the Apply button to apply the changes without closing it.

Secure Communications

Obviously, secure communications would be somewhat useless if only the server supported them. Therefore, Microsoft included support for secure communications in Internet Explorer 3.0. In fact, Internet Explorer 3.0 supports SSL 2.0 and 3.0, as well as PCT. These protocols run fairly seamlessly and require very little input from the user. Indeed, it is sometimes difficult to tell that a secure page is being accessed, except for the HTTPS in the URL and the lock icon that appears at the bottom right of the Internet Explorer window. The user does have some control, however, over which protocols are run and which warnings are displayed. To modify these settings, perform the following steps:

  1. Select View, Options.

  2. Select the Advanced tab.

  3. In the Warnings frame, select any warnings you want to receive.

  4. To select which security protocols are allowed, press the Cryptography Settings button.

  5. Select any protocol to be allowed.

  6. Press the OK button to close the Cryptography Protocols dialog box.

  7. Press the OK button to exit the Options dialog box, or press the Apply button to apply the changes without closing it.

Web Ratings

Anyone who has surfed the Internet, even for a short time, will probably admit to being just a little overwhelmed by the sheer enormity of it. It is not at all difficult to become lost, following link after link with little hope of ever retracing the exact path you took to get to a destination. The recent proliferation of search engines has made it much easier to find information relating to a given topic without relying on luck. It is now possible to perform a search on a subject by simply keying in a word or group of words and being presented with anywhere from zero to ten million documents pertaining in some way to those words. It is up to the individual user to decide what is relevant and what is not.

In addition to all of the research papers published by universities, information offered by various organizations, and personal items posted by individuals, the Web has become home to a growing number of sites offering mature subject matter. It is up to the individual to determine what is offensive. There has been a recent outcry against the ease with which anyone, no matter what age, can access this material via the Internet. Attempts have been made on various regional and national levels to restrict what may be published, but to no avail. Many people simply do not realize that censorship by a single country could not work unless that country was also willing to restrict connections to any country with contrary policies.

The Internet is truly a global entity. How then can individuals restrict small children from gaining access to these sites or filter what they themselves see? The answer lies in ratings. By rating Web sites, it becomes possible for the user to filter those sites based on individual criteria for the ratings. In this way, access can be restricted, or filtered, based on preset criteria, saving precious time from manually sifting through the list of sites returned from your query.

Platform for Internet Content Selection (PICS)

One standard put forth by the World Wide Web Consortium has caught on very quickly. The PICS specification has gained the acceptance of many of the larger Internet software vendors. PICS defines an infrastructure for associating labels with content. It consists of two basic components: a format for describing rating systems and the services that assign them, and a label format for attaching ratings to individual documents or sites.

PICS does not actually define any rating system, but allows for definitions to be made. In fact, any group or individual can create a rating system to meet specific needs. Once a rating system is defined and a page is rated using the appropriate labels, a browser using the rating system will filter the page according to the user's criteria. If the criteria are met, the browser displays the page; otherwise, it only displays a message indicating why the page could not be viewed.

When talking about rating systems, most people automatically think about sexual or violent content. However, it would be just as appropriate to devise a system based on something else entirely, such as the amount and type of technical content in a publication. In this way, users on a corporate intranet could filter out documents based on their technical level in any given area.


NOTE: It is theoretically possible to incorporate multiple ratings into a single document. This means documents on an intranet could be organized into different categories, with each document carrying a rating in one or more categories. The documents could then be filtered by the user based on multiple criteria.
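
For instance, a single META tag can carry labels from more than one rating service. The following sketch combines the RSACi label shown later in this chapter with a label from a hypothetical intranet service that rates technical level (the second service URL and its category name are invented for illustration):

<META http-equiv="PICS-Label" content='(PICS-1.0
"http://www.rsac.org/ratingsv01.html" l r (n 0 s 0 v 0 l 1)
"http://intranet.example.com/techlevel.html" l r (tech 3))'>

A browser that recognizes either rating system can then filter the document against the user's criteria for that system.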

Recreational Software Advisory Council (RSACi)

One of the most popular of the early rating systems defined for the Internet comes from the Recreational Software Advisory Council and is called RSACi. This system ships with Internet Explorer 3.0 and is likely to be popular with parents of small children. It is an easy-to-understand, head-on approach to rating. It essentially defines four categories of ratings, shown in Table 25.1, each of which is then divided into levels that define the rating.

Table 25.1 Categories in the RSACi Internet Rating System

Level Description

Violence

1 Creatures killed; creatures injured; damage to realistic objects; fighting (no injuries)
2 Humans killed; humans injured; rewards injuring non-threatening creatures
3 Blood and gore; rewards injuring non-threatening humans; rewards killing non-threatening creatures; accidental injury with blood and gore
4 Wanton and gratuitous violence; rape

Language

1 Mild expletives
2 Expletives; non-sexual anatomical references
3 Strong, vulgar language; obscene gestures
4 Crude or explicit sexual references

Nudity

1 Revealing attire
2 Partial nudity
3 Frontal nudity; non-sexual frontal nudity
4 Provocative frontal nudity

Sex

1 Passionate kissing
2 Clothed sexual touching
3 Non-explicit sexual activity; sexual touching
4 Sex crimes; explicit sexual activity

To utilize the RSACi system, the user sets the desired level for each category. Any document with a rating in any category exceeding the allowable level for that category will not be permitted. To obtain a rating, the publisher fills out a questionnaire and submits it along with a processing fee to RSAC. The application can cover a single document, a branch or directory of a Web site, or an entire site. RSAC then responds with a rating that is incorporated into the HTML documents to be published.

Incorporating Ratings into HTML Documents

Incorporating ratings into an HTML document is a relatively straightforward process. HTML authoring tools will probably aid in this task eventually, but for now it is done with a standard text editor, such as Notepad. First, obtain a rating for your document based on the rating system you want to incorporate. In the case of RSACi, submit a questionnaire, and you will receive a set of rating labels. A sample rating tag looks something like this:

<META http-equiv="PICS-Label" content='(PICS-1.0
"http://www.rsac.org/ratingsv01.html" l gen true comment "RSACi North America
Server" by "RSAC " for "http://www.gasullivan.com" on "1996.09.27T08:15-0500"
exp "1997.09.27T08:15-0500" r (n 0 s 0 v 0 l 1))'>

This tag uses the RSACi rating system to rate a fictional document on G.A.Sullivan's Web site. The rating is valid between the dates of 9/27/96 and 9/27/97. The actual rating values are shown last and are simply indicated by a set of label/value pairs. For instance, n 0 corresponds to a rating of no nudity. In fact, this tag indicates no nudity, sex, or violence. However, the language rating of 1 indicates that mild expletives are used.

Once you have the rating labels, open the HTML document in a text editor and insert the tag into the header section of the document. When the tag is in place, save the document and exit. Listing 25.1 is an example of the resulting document.

Listing 25.1 Simple HTML Document Containing RSACi Rating

<HTML>
<HEAD>
<META http-equiv="PICS-Label" content='(PICS-1.0
"http://www.rsac.org/ratingsv01.html" l gen true comment "RSACi North America
Server" by "RSAC " for "http://www.gasullivan.com" on "1996.09.27T08:15-0500"
exp "1997.09.27T08:15-0500" r (n 0 s 0 v 0 l 1))'>
</HEAD>
<BODY>
This is a page containing MILD EXPLETIVES !
</BODY>
</HTML>

Controlling Access to Documents with Internet Explorer

With all of the information about rating systems on the Internet, it would be nice to be able to use that information to restrict access to certain types of material for children, students, and possibly employees. Internet Explorer 3.0 provides a simple means to accomplish this by incorporating a control center called Content Advisor (see Figure 25.10).

Fig. 25.10

Internet Explorer's Content Advisor enables an administrator to restrict access to content based on document ratings.

To access Content Advisor, follow these steps:

  1. Select View, Options. This brings up the Options dialog box.

  2. Click the Security tab.

There are two buttons associated with Content Advisor. One enables or disables the use of any rating settings, and the other launches Content Advisor to enable modification of the settings. The first time either button is selected, a dialog box prompts you to enter a new supervisor password to be used when changing settings. Subsequent selections require you to enter the supervisor password to proceed. Without this password, no one can enable, disable, or modify any of the settings.


CAUTION: Make sure you remember the supervisor password. If you forget it, you will have to reinstall Internet Explorer to change or override any settings made with Content Advisor. If you write this password down, make sure it is not available to those who are restricted.

To enable or disable the Content Advisor settings, select the leftmost button in the Content Advisor section of the Security tab. This button will be labeled either Enable Ratings or Disable Ratings, depending on the current state of the system. Once the button is pressed, Internet Explorer prompts for the supervisor password. If it is entered correctly, Internet Explorer displays a dialog box indicating the success of the action.

To modify the actual rating settings, click the button labeled Settings. Again, you are prompted for the supervisor password. Just enter it and press OK. The Content Advisor dialog box appears containing three tabs: Ratings, General, and Advanced. The Ratings tab enables you to set specific values for a selected rating system and category. To change settings, go to the category list and click the category you want to change. A slider control appears below the category list. Just slide the control left or right until you achieve the desired level as described just below the slider. When you are satisfied with the settings of all categories, press the Apply button to save the changes (or OK to save and leave the Content Advisor). If you feel that you accidentally changed some settings, pressing Cancel will discard changes and exit.

The General tab enables you to change the supervisor password and set two options. The first option, Users can see sites which have no rating, means just that. If this option is selected, any page containing no rating information is allowed to be viewed. Currently, a large percentage of sites carry no rating information; if those were all blocked, browsing might be too limiting for the user, depending on the circumstances involved. The second option, Supervisor can type a password to allow users to view restricted content, provides an override mechanism. If this option is set and a restricted page is encountered, Internet Explorer notifies the user of the restriction and displays a dialog box prompting for the supervisor password. If the password is entered correctly, the page is displayed. This allows parents to view material that would otherwise be restricted for their children.

Finally, the Advanced tab enables you to add additional rating systems and delete existing ones. Adding a new rating system requires a file that defines the system. This file should have the .RAT extension and should be obtained from the group or organization responsible for defining the system. Once you have this file, click the Rating Systems button. This brings up a dialog box that lists all existing systems. Click the Add button and browse to the location of the .RAT file. Select the file and click the Open button. This installs the new rating system. Now click the OK button to close the Rating Systems dialog box. To change the settings for the new system, go back to the Ratings tab and select the appropriate levels. Then click OK to save and close, or Cancel to discard changes.
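
A .RAT file is simply a text file containing a PICS rating service description. As a rough sketch only (every name, URL, and category in this fragment is invented, and the exact syntax is governed by the PICS rating service description specification), a minimal system for rating technical level might look something like this:

((PICS-version 1.1)
 (rating-system "http://intranet.example.com/techlevel/")
 (rating-service "http://intranet.example.com/techlevel.html")
 (name "Technical Level")
 (description "Rates intranet documents by technical depth")
 (category
  (transmit-as "tech")
  (name "Technical Level")
  (label (name "Introductory") (value 0))
  (label (name "Advanced") (value 3))))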


TIP: Although it is not required, it is a good idea to place new .RAT files into the windows\system directory where the original RSACi file resides. This enables all such files to be similarly located and should avoid future confusion or accidental deletion of one or more of these files.

Internet Ratings API

Microsoft is currently working on an Internet Ratings API to provide developers with support for PICS-based rating systems and related services. This API is not available at the time of this writing, but when it ships it should give applications a standard way to query rating labels and filter content against the user's settings.

The introduction of this API should encourage the development of many new applications with innovative new uses for the rating systems.

From Here...

In this chapter, you learned about the basic cryptographic technologies employed in the secure Internet communication protocols. You also saw an outline of Microsoft's Internet Security Framework, which lists the protocols that will form the basis for secure network computing on the Microsoft operating systems and how they interact with other systems on the Internet. In addition, you learned how to apply these technologies to your Internet/intranet applications utilizing the features available in Internet Information Server and Internet Explorer.

For more information on some of the topics addressed in this chapter, see the following chapters:

To learn more about the features of Internet Explorer and some of the other browsers available, see Chapter 21, "Web Browsers."




Macmillan Computer Publishing USA

© Copyright, Macmillan Computer Publishing. All rights reserved.