Java 1.1 Unleashed
Type of Attack | Description
--- | ---
Theft of information | Nearly every computer contains some information that the owner or primary user of the machine would like to keep private.
Destruction of information | In addition to data that is private, most of the data on typical computers has some value, and losing it would be costly.
Theft of resources | Computers contain more than just data. They have valuable, finite resources that cost money: disk space and a CPU are the best examples. A Java applet on a Web page could quietly begin doing some extensive computation in the background, periodically sending intermediate results back to a central server, thus stealing some of your CPU cycles to perform part of someone else's large project. This would slow down your machine, wasting another valuable resource: your time.
Denial of service | Similar to theft of resources, denial-of-service attacks involve using as much as possible of a finite resource, not because the attacker really needs the resource, but simply to prevent someone else from being able to use it. Some computers (like mail servers) are extremely important to the day-to-day operations of businesses, and attackers can cause a lot of damage simply by keeping those machines so busy with worthless tasks that they can't do their real jobs.
Masquerade | By pretending to be from another source, a malicious program can persuade a user to reveal valuable information voluntarily.
Deception | If a malicious program were successful in interposing itself between the application and some important data source, the attacker could alter data--or substitute completely different data--before giving it to the application or the user. The user would take the data and act on it, assuming it to be valid.
Now that you understand why security features are important and what kinds of threats exist, it's time to learn how Java's security features work and how they protect against those threats.
The Java security model is composed of three layers, each dependent on those beneath it. The following sections cover each of the layers, describing how the security systems work from bottom to top.
The first line of defense against untrusted programs in a Java application is a part of the basic design of the language: Java is a safe language. When programming language theorists use the word safety, they aren't talking about protection against malicious programs. Rather, they mean protection against incorrect programs. Java achieves this in several ways:

- The language is strongly typed, so illegal conversions between unrelated types are rejected rather than silently permitted.
- Every array access is checked against the bounds of the array, so a program can never read or write past the end of an array.
- There is no pointer arithmetic, so a program cannot manufacture a reference to an arbitrary location in memory.
- Memory management is automatic: objects are reclaimed by the garbage collector rather than freed explicitly, so there are no dangling references.
All these qualities make Java a "safe" language. Put another way, they ensure that code written in Java actually does what it appears to do, or it fails. The surprising things that can happen in C (such as continuing to read data past the end of an array as though it were valid) cannot happen. In a safe language, the behavior of a particular program with a particular input should be entirely predictable--no surprises.
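A small example makes the contrast with C concrete. In this sketch (the class name is ours), an out-of-bounds array access raises an exception instead of quietly returning whatever happens to be in memory:

```java
public class BoundsDemo {
    public static void main(String[] args) {
        int[] data = new int[4];          // valid indexes are 0 through 3
        try {
            int x = data[4];              // one past the end
            System.out.println("read succeeded: " + x);   // never reached
        } catch (ArrayIndexOutOfBoundsException e) {
            // The virtual machine refuses the access at runtime.
            System.out.println("out-of-bounds read rejected");
        }
    }
}
```

The same read in C would compile and run, silently yielding garbage; in Java, the behavior is fully defined.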
The second layer of Java security involves careful verification of Java class files--including the virtual machine bytecodes that represent the compiled versions of methods--as they are loaded into the virtual machine. This verification ensures that a garbled .class file won't cause an error within the Java interpreter itself, and it also ensures that the basic language safety is not violated. The rules about proper language behavior written into the language specification are good, but it's also important to make sure that those rules aren't broken. Checking everything in the compiler isn't good enough, because it's possible for someone to write a completely new compiler that omits those checks. For that reason, the Java library carefully checks and verifies the bytecodes of every class loaded into the virtual machine to make sure that those bytecodes obey the rules.

Some of the rules, such as bounds checking on references to array elements, are enforced by the virtual machine itself at runtime, so no separate verification-time checks are necessary for them. Other rules, however, must be checked carefully. One particularly important rule that is verified rigorously is that objects must be true to their type: an object that is created as a particular type must never be able to masquerade as an object of some incompatible type. Otherwise, there would be a serious loophole through which explicit security checks could be bypassed.
This verification process doesn't mean that Java code can't be compiled to native machine code. As long as the validation is performed on the bytecodes first, a native compiled version of a class is still secure. "Just in time" (JIT) compilers run within the Java virtual machine, compiling bytecodes to native code as classes are loaded, just after the bytecode-verification stage. This compilation step doesn't usually take much time, and the resulting code runs much faster.
The third and final layer of the Java security model is the implementation of the Java class library. Classes in the library provide Java applications with their only means of access to sensitive system resources, such as files and network connections. Those classes are written so that they always perform security checks before granting access.
This third layer is the portion of the security system that an application can control--not by changing the library implementation, but by supplying the objects that actually make the decisions about whether to grant each request for access. Those objects--the security manager and the class loaders--are the core of an application's security policy, and you'll read more about them (including how to implement them) a little later in this chapter.
The first two layers of the Java security model are primarily concerned with protecting the security model itself. It's the third layer, the library implementation, in which explicit measures are taken to protect against the kinds of attacks listed in Table 34.1. To help thwart those attacks, Java checks each and every attempt to access particular system resources that could be used in an attack. Those resources fall into six categories, as listed in Table 34.2.
Resource | Description
--- | ---
Local file access | The ability to read or write files and directories. These capabilities can be used to steal or destroy information, as well as to deny service by destroying important system files or writing a huge file that fills the remaining space on your disk. Applets can also use local file access to deceive you by writing an official-looking file somewhere that you will find later and believe to be trustworthy.
System access | The ability to execute programs on the local machine, plus access to system properties and other system resources such as the clipboard, keyboard and mouse input events, and printer queues. These capabilities can be used for theft or destruction of information or denial of services in much the same way that direct file access can.
Network access | The ability to create network connections, both actively (by connecting to some machine) and passively (by listening and accepting incoming connections). Applets that actively create connections may be trying to usurp the user's identity, exploiting the trust that other machines place in him or her. Applets that try to listen for incoming connections may be taking over the job of a system service (such as a Web server).
Thread manipulation | The ability to start, stop, suspend, resume, or destroy threads and thread groups, as well as other sorts of thread manipulation such as adjusting priorities, setting names, and changing the daemon status. Without restrictions on such capabilities, applets can destroy work by shutting down or disabling other components of the applications within which they run, or can do so to other applets. Rogue applets can also mount denial-of-service attacks by raising their own priority while lowering the priorities of other threads (including the system threads that may be able to control the errant applets).
Library manipulation | The ability to create factory objects that find and load extension classes from the network or other sources. An untrustworthy factory object can garble user data, transparently substitute incoming data from a completely different source, or even steal outgoing data--without the user of the application realizing what's happening. See "Further Reading," later in this chapter, for pointers to more information about factory objects.
Window creation | The ability to create new top-level windows. New top-level windows may appear to be under the control of a local, trusted application rather than an applet, and they can prompt unwary users for important information such as passwords. The Java security system permits applications to forbid applets from creating new windows, and it also permits tagging applet-owned windows with a special warning for users.
Let's look at an example to see how the security model works in practice. This example concentrates on what happens in the third layer, for two reasons: The lower two layers sometimes deal with some rather esoteric issues of type theory, and they are not within the programmer's control. The top layer, on the other hand, is relatively straightforward and can be controlled by Java application programmers.
Suppose that the Snark applet has been loaded onto your system and wants to read one of your files--say, diary.doc. To open the file for reading, Snark must use one of the core Java classes--in particular, FileInputStream or RandomAccessFile in the java.io package. Because those core classes are a part of the security model, before they allow reading from that particular file, they ask the system security manager whether it's okay. Those two classes make the request in their constructors; FileInputStream uses code like this:
// Gain access to the system security manager.
SecurityManager security = System.getSecurityManager();
if (security != null) {
    // See if reading is allowed. If not, the security manager will
    // throw a SecurityException. The variable "name" is a String
    // containing the file name.
    security.checkRead(name);
}
// If there is no security manager, anything goes!
The security manager is found using one of the static methods in the System class. If there is no security manager, everything is allowed; if there is a security manager, it is queried to see whether this access is permitted. If everything is fine, the SecurityManager.checkRead() method returns; otherwise, it throws a SecurityException. Because this code appears in a constructor, and because the exception isn't caught, the constructor never completes, and the FileInputStream object can't be created.
How does the security manager decide whether the request should be allowed or not? The SecurityManager class, an abstract class from which all application security managers are derived, contains several native methods that can be used to inspect the current state of the Java virtual machine. In particular, the execution stack--the methods in the process of executing when the security manager is queried--can be examined in detail. The security manager can thus determine which classes are involved in the current request, and it can decide whether all those classes can be trusted with the resource being requested.
In the Snark example, the security manager examines the execution stack and sees several classes, including Snark. That means something to us, but it probably doesn't mean a lot to the security manager. In particular, the security manager has probably never heard of a class called Snark, and presumably it doesn't even know that Snark is an applet. Yet that's the really important piece of information: if one of the classes currently on the execution stack is part of an applet or some other untrusted, dynamically loaded program, granting the request could be dangerous.
At this point, the security manager gets some help. For each class on the execution stack, it can determine which class loader is responsible for that class. Class loaders are special objects that load Java bytecode files into the virtual machine. One of their responsibilities is to keep track of where each class came from and other information that can be relevant to the application security policy. When the security manager consults Snark's class loader, the security manager learns (among other things) that Snark was loaded from the network. At last, the security manager knows enough to decide that Snark's request should be rejected.
Before we plunge ahead into the deeper details of how security managers and class loaders work, let's step to the other side of the security model for a moment and see what it looks like to untrusted classes. We've seen what happens backstage--but what does it look like if you don't have a backstage pass?
Applets and other untrusted (or partially trusted) classes, such as "servlets" in Java-based Web servers, or protocol handlers and content type handlers in HotJava, run within the confines of the application security policy. (In fact, depending on the security policy itself, it's possible that all classes except for the security manager and class loaders run under some security restrictions.) Such "unprivileged" classes are the kind that most Java programmers will be writing, so it's important to understand what the Java security facilities look like from the point of view of ordinary code.
Security violations are signaled when the security manager throws a SecurityException. It is certainly possible to catch that SecurityException and ignore it, or try a different strategy, so an attempt to access a secured resource doesn't have to mean the end of your applet. By trying different things and catching the exception, applets can build a picture of what they are and are not allowed to do. It's even possible to call the security manager's access-checking methods directly, so that you can find out whether a certain resource is accessible before actually attempting to access it.
NOTE: Such probing may seem sneaky, but there are plenty of legitimate uses for it. If you are writing a word processing applet, for example, it is a good idea to make the applet probe first to determine whether it is allowed to write a file to the local disk--before allowing the user to spend three hours typing an important document. As a user, wouldn't you much rather know from the beginning that saves are not allowed?
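A probe of that sort might look like the following sketch. The SavePolicyProbe class, the canSave() method, and the file name are invented for illustration; the pattern of calling a check method and catching SecurityException is the point:

```java
public class SavePolicyProbe {
    /**
     * Ask the security manager, in advance, whether writing the given
     * file would be allowed. Returns true if a later write should succeed.
     */
    public static boolean canSave(String fileName) {
        SecurityManager security = System.getSecurityManager();
        if (security == null) {
            return true;       // no security manager: everything is allowed
        }
        try {
            security.checkWrite(fileName);
            return true;       // the check returned normally: write permitted
        } catch (SecurityException e) {
            return false;      // the check threw: write forbidden
        }
    }

    public static void main(String[] args) {
        System.out.println("canSave: " + canSave("diary.doc"));
    }
}
```

A word processor applet would call canSave() once at startup and warn the user immediately if saving is impossible.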
This section delves a little deeper into the implementation of security managers and class loaders--deep enough to help you implement such classes yourself, should you have to. If you are building Java applets or other dynamic extensions, class libraries, or even standalone Java applications that don't need dynamic network extensibility, you may want to skip this section and proceed to "How Good Is Java Security?" later in this chapter. But if you are building a Java-based application that has to host applets or other untrusted classes, this section is important for you because those are the kinds of applications that need a security policy.
Unfortunately, the JDK doesn't come with a working security policy mechanism that's ready for an application to use. The SecurityManager class that comes with the JDK is an abstract class, so no instances can be created. It wouldn't be useful anyway, because every access-checking method throws a SecurityException immediately, in every case--whether any untrusted classes are active or not! Clearly, no program could accomplish anything useful with that security manager on watch.
Therefore, if your application plans to host untrusted classes, you need a new SecurityManager and one or more ClassLoader implementations. You'll probably have to build them yourself, because nobody is yet offering reusable security policy implementations that Java programmers can use "out of the box." I expect such third-party security support to be available at some point, but until that happens, the next sections explain how to do it yourself.
Unlike most other portions of an application, class loaders must work both sides of the security fence. They must take care to consult the security manager before allowing certain operations, and they must cooperate with the security manager to help it learn about classes and make decisions about access requests. They must also avoid breaking any of the assumptions about classes on which the security manager relies.
When defining a class, the class loader must identify the package in which the class belongs and call SecurityManager.checkPackageDefinition() before actually loading the class into that package. Membership in a package gives a class special access to other classes in the package and can provide a way to circumvent security restrictions.
When the class loader defines a class, it must also resolve the class. Resolving a class involves locating and loading (if necessary) other classes that the new class requires. This is done by calling a native method called ClassLoader.resolveClass(Class). If other classes are needed during the resolution process, the Java runtime system calls the loadClass(String, boolean) method in the same ClassLoader that loaded the class currently being resolved. (If the boolean parameter is true, the newly loaded class must be resolved, also.)
The class loader must be careful not to load a class from an untrusted source that masquerades as a trusted class. The CLASSPATH should be searched first, so that system classes always take precedence over classes with the same names from other sources. This is especially important during the resolution process.
Additionally, the class loader should check with the security manager about whether the class being resolved is even allowed to use the classes in the requested package. The security manager may want to prevent untrusted code from using entire packages.
Listing 34.1 gives an example of the steps you can take to load a class securely.
protected Class loadClass(String cname, boolean resolve)
        throws ClassNotFoundException {
    // Check to see if I've already loaded this one from my source.
    Class c = (Class) myclasses.get(cname);
    if (c == null) {
        // If not, then I have to do security checks.
        // Is the requestor allowed to use classes in this package?
        SecurityManager security = System.getSecurityManager();
        if (security != null) {
            int pos = cname.lastIndexOf('.');
            if (pos >= 0) {
                security.checkPackageAccess(cname.substring(0, pos));
            }
        }
        try {
            // If there's a system class by this name, use it.
            return findSystemClass(cname);
        } catch (Throwable e) {
            // Otherwise, go find it and load it.
            c = fetchClass(cname);
        }
    }
    if (c == null)
        throw new ClassNotFoundException(cname);
    if (resolve)
        resolveClass(c);
    return c;
}
In the preceding listing, the real work of actually retrieving a class and defining it is done in the fetchClass() method. The primary security responsibility of that method is to call SecurityManager.checkPackageDefinition(package) before actually defining the class, as described earlier.
The way this resolution process works (with the ClassLoader that loaded the class being responsible for resolving class dependencies) is one reason why applications typically define one class loader for each different source of classes. When a class from one source has a dependency on some class named MyApplet, for example, it would probably be a mistake to resolve the dependency using a class with the same name from another source.
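Listing 34.1 leaves fetchClass() undefined, so it can't be run as-is. Here is a minimal, runnable skeleton of the same per-source pattern; the SourceClassLoader name, the load() convenience method, and the fetchClass() stub (which simply fails, standing in for a real network fetch) are our own inventions:

```java
import java.util.Hashtable;

public class SourceClassLoader extends ClassLoader {
    // Classes already loaded from this loader's source.
    private final Hashtable myclasses = new Hashtable();

    protected synchronized Class loadClass(String cname, boolean resolve)
            throws ClassNotFoundException {
        Class c = (Class) myclasses.get(cname);
        if (c == null) {
            // Ask the security manager (if any) about package access first.
            SecurityManager security = System.getSecurityManager();
            if (security != null) {
                int pos = cname.lastIndexOf('.');
                if (pos >= 0) {
                    security.checkPackageAccess(cname.substring(0, pos));
                }
            }
            try {
                c = findSystemClass(cname);   // system classes always win
            } catch (ClassNotFoundException e) {
                c = fetchClass(cname);        // otherwise, go to our source
            }
            myclasses.put(cname, c);
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }

    // Stub: a real loader would retrieve bytes here, call
    // SecurityManager.checkPackageDefinition(), and then defineClass().
    protected Class fetchClass(String cname) throws ClassNotFoundException {
        throw new ClassNotFoundException(cname);
    }

    // Public entry point, since loadClass(String, boolean) is protected.
    public Class load(String cname) throws ClassNotFoundException {
        return loadClass(cname, true);
    }

    public static void main(String[] args) throws Exception {
        SourceClassLoader loader = new SourceClassLoader();
        System.out.println(loader.load("java.lang.String") == String.class);
    }
}
```

Each distinct source of classes would get its own SourceClassLoader instance, so that dependencies resolve against the right source.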
The other side of the class loader's responsibility for security is to maintain information about classes and provide that information to the security manager. The type of information that is important to the security manager depends on the application. Currently, most Java applications base security decisions on the network host from which a class was loaded, but other information may soon be used instead. With digital signature facilities available, it will be feasible to allow certain classes special privileges based on the organization or authority that signed those classes.
Implementing a security manager can involve a lot of work, but if you have a coherent security policy, the process isn't particularly complicated. Most of the work involved stems from the fact that the SecurityManager class has a lot of methods you must override with new, more intelligent implementations that make reasonable access decisions instead of automatically disallowing everything.
Once the security manager has decided to allow an operation, all it has to do is return. Alternatively, if the security manager decides to prohibit an operation, it just has to throw a SecurityException. Communicating the decision is easy--the hard part is deciding.
The section, "Example: Reading a File," earlier in this chapter, contains a simple example of the workings of the Java security system. That example omitted a few details for the sake of simplicity, but now you need to know the whole story. The security manager can examine the execution stack to find out which classes have initiated an operation. If an object's method is being executed at the time the security manager is called, the class of that object is requesting the current operation (either directly or indirectly). The important thing about the objects on the stack, from the security manager's point of view, is not the objects themselves but their classes and those classes' origins. In Java, each object contains a pointer to its Class object, and each class can return its class loader through the getClassLoader() method. The implementation of SecurityManager uses those facts, along with native methods that can find the objects on the stack itself, to find out the classes and class loaders that have objects on the execution stack. Figure 34.1 shows the Java execution stack while the security manager is executing.
The security manager and the Java execution stack.
The JDK applet viewer application and Netscape Navigator 3.0 have simple security models: If a class is not a system class (that is, if it wasn't loaded from CLASSPATH), it isn't trusted and isn't allowed to do very much. If your security model is that simple, your security manager will be simple, too. Calling SecurityManager.inClassLoader() tells you whether the operation is being requested by untrusted code. It returns true if there is any class loader at all on the stack. System classes loaded from CLASSPATH don't have a class loader, so if there's a class loader on the stack anywhere, there's an untrusted class in control.
If an operation is to be prohibited in general, but allowed if it comes from a particular trusted class, you can investigate further. SecurityManager.classLoaderDepth() tells you how deep on the stack the first class loader is. When classLoaderDepth() is used with SecurityManager.classDepth(String), it's possible to determine whether a particular trusted class is really in control. Imagine a distributed calendar-management system that makes use of applets. Such a system may include a trusted class, Invite, that records an invitation of some sort in your local calendar file. Applets can use the Invite class to issue invitations. Although you don't want an untrusted applet to write directly to your calendar file, you can trust Invite to write only invitations and not do any damage or reveal any private information. In such an application, the security manager's checkWrite() method might contain code like this:
if (classDepth("Invite") < classLoaderDepth()) {
    // The Invite class is in control, so we can allow the request.
    return;
} else {
    throw new SecurityException("attempted to write file " + filename);
}
The inClass(String) method can also be helpful in this situation, if you're confident that the class you're interested in doesn't call any untrusted classes along the way. Be careful, however, because inClass() simply tells you that the specified class is on the stack somewhere. It says nothing about how deep the class is or what classes lie above it on the stack.
Now that digital signature technology is available for Java classes, it will be possible to verify the source of Java classes and loosen the security restrictions appropriately. If you are designing an application with such capability, you may need more information about the class loader responsible for the object requesting an operation. The currentClassLoader() method returns the ClassLoader object highest on the stack. You can query that object for application-specific information about the source of the class. (You learn more about authenticated, digitally signed classes later in this chapter.)
Finally, if all those other methods aren't enough to implement your security policy, SecurityManager provides the getClassContext() method. It returns an array of Class objects, in the order they appear on the stack from top to bottom. You can use any Class methods on these objects to learn various things: getName(), getSuperclass(), and getClassLoader(), among others.
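As a sketch of what getClassContext() gives you, a security manager subclass can expose the execution stack for examination. The StackInspector name and the output format below are our own, not part of the JDK:

```java
public class StackInspector extends SecurityManager {
    // getClassContext() is protected in SecurityManager, so only a
    // subclass can call it. Element 0 is the class nearest the top
    // of the stack (this class itself).
    public Class[] context() {
        return getClassContext();
    }

    public static void main(String[] args) {
        Class[] stack = new StackInspector().context();
        for (int i = 0; i < stack.length; i++) {
            System.out.println(i + ": " + stack[i].getName()
                + " (loader: " + stack[i].getClassLoader() + ")");
        }
    }
}
```

A real security manager would walk this array looking for classes whose class loaders mark them as untrusted.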
Building your application's security manager takes work, and it can be complicated, but it doesn't have to be a nightmare. Just be sure to design a coherent security policy first.
We've talked about implementing class loaders and security managers, but there's one crucial question left: how are those new implementations installed so that they are called when classes have to be loaded from the network and when security decisions have to be made?
To answer that question, let's start with the security manager. One particular instance of SecurityManager serves as the security manager for an application. That instance is installed by using System.setSecurityManager(). Here's how to install a security manager in your application:
System.setSecurityManager(new MySecurityManager());
Likewise, the security manager can be accessed by using System.getSecurityManager(). Any Java method can query the security manager, but it's crucial that the methods that provide access to sensitive system resources query the security manager before they permit the access. Such checks are very simple, and all the Java library classes that consult the security manager use nearly identical code to do it. For example, here's the File.delete() method:
/**
 * Deletes the specified file. Returns true
 * if the file could be deleted.
 */
public boolean delete() {
    SecurityManager security = System.getSecurityManager();
    if (security != null) {
        security.checkDelete(path);
    }
    return delete0();
}
The delete0() method is what really does the work of deleting the file. (It's declared private, so other classes can't call it directly.) Before calling it, the delete() method checks with the security manager to see whether the operation is permitted. If everything is fine, the security manager's checkDelete() method simply returns, and the delete0() method is called. If the operation is not allowed, checkDelete() throws a SecurityException. Because delete() makes no attempt to catch the exception, it propagates up to the caller, and delete0() is never called.
In the delete() method, if there is no security manager defined, access is always granted. The same is true for all the methods that perform security checks. If no security manager is defined, everything is allowed. Thus, it's important that any application that is going to be loading untrusted classes create a security manager before the first untrusted class is loaded into the virtual machine.
What about the class loaders? How are they called to load classes from the network?
Classes are either loaded explicitly by the application or automatically by objects called factory objects. Some applications may have optional functionality that is loaded on demand. In that case, it may make sense for the application to have built-in knowledge of the classes required for those optional features. In other situations, the classes are supplied by third parties and should be loaded in response to some of the data being handled by the application. In such cases, it makes sense to have factory objects that search for a class that can handle the situation and load it. The core Java library can use factory objects to load protocol handlers and content type handlers in conjunction with the URL class. See "Further Reading," later in this chapter, for pointers to more information about factory objects.
In any case, a class loader is called explicitly by application code, whether it is a factory object or some other part of the application, so no "installation" of ClassLoader objects is usually necessary.
Earlier in this chapter, you learned about the system resources that are protected by the Java security facilities. The security system must also take care to protect certain parts of the security model itself, so that applets cannot subvert the security system and slip in through the back door.
The first line of special protection for the security system is the mechanism for installing the security manager. Obviously, if an applet could install its own security manager, it could do anything it pleased. Therefore, installation of the security manager must be protected. But there's a "chicken and egg" problem here: when the security manager is first installed, there's no security manager to rule on whether it should be allowed!
The resolution of this problem is simple. If no security manager is installed, any class can install one. Once a security manager is active, however, it is always a security violation to attempt to replace it. This implies that applications must take care to establish their security policies before any untrusted code is brought into the virtual machine. Another implication is that during any individual execution, an application can have only one security manager. Thus, if it's desirable to adjust the security policy of an application while it is running, those adjustments must be handled dynamically by the security manager itself; they can't be accomplished by replacing the security manager.
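One straightforward way to support such dynamic adjustment is to give the security manager mutable policy state of its own. In this sketch, the class name, the single write-permission flag, and the allowWrites() method are invented for illustration; a real policy would track far more than one boolean:

```java
public class AdjustableSecurityManager extends SecurityManager {
    private boolean writesAllowed = false;   // policy state, adjustable at runtime

    // The application (not untrusted code!) calls this to loosen the policy.
    public void allowWrites(boolean allowed) {
        writesAllowed = allowed;
    }

    public void checkWrite(String file) {
        if (!writesAllowed) {
            throw new SecurityException("writing " + file + " is not allowed");
        }
        // Returning normally grants the request.
    }
}
```

Because the manager itself holds the state, the policy can change during execution even though the manager object can never be replaced.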
The other defenses for the security system are more conventional in that they involve library routines that consult the security manager for access decisions. The protected aspects of the security system fall into two categories:
Java 1.1 includes important new features for building flexible security policies. Authentication is the process of verifying the identity or origin of someone or something. If you can verify that an applet originated with a particular person or company, you might be willing to trust it with additional privileges beyond those which you ordinarily grant to applets. The following sections discuss these facilities as well as some problems and limitations that currently apply.
Before going into the details of cryptographic security as it relates to Java, you must know a few basics about cryptography in general. Because this chapter isn't about cryptography, I won't go into great depth--and I will certainly stay far away from the complex math involved. The java.security package hides all these details anyway, so the level of discussion presented here is sufficient for most developers.
Encryption is the process of transforming a message in such a way that it cannot be read without authorization. With the proper authorization (the message's key), the message can be decrypted and read in its original form. The theories and technologies of encryption and decryption processes are called cryptography.
Modern cryptography has its basis in some pretty heavy mathematics. Messages are treated as very large numbers, and an original, readable message (the plain text) is transformed into an encrypted message (the cipher text) and back again by means of a series of mathematical operations using appropriate keys. The keys are also large numbers. All this math means that cryptography is a somewhat specialized field, but it also means that computers are pretty good cryptographic tools. Because computers treat everything as numbers (at least at some level), cryptography and computers go together well.
The obvious use for encryption is to keep secrets. If you have a message that you need to save or send to a friend, but you don't want anyone else to be able to read it, you can encrypt it and give the key to only the people you want to trust with the secret message.
Less obvious, but just as important, is the possibility of using cryptography for authentication: that is, verifying someone's identity. After you know how to keep secrets, authentication comes naturally. For centuries, people have proved their identities to each other by means of shared secrets (secret handshakes, knocks, or phrases, for example). If you were to meet someone who claimed to be a childhood friend, but who had changed so much that you didn't recognize him or her, how would he or she go about convincing you? Probably by telling you details of memorable experiences that you two alone shared. The more personal, the better--the more likely that both of you would have kept the secret through the years. Cryptographic authentication works the same way: Alice and Bob share a key, which is their shared secret. To prove her identity, Alice encrypts an agreed-on message using that key and passes the encrypted message to Bob. When Bob decrypts it successfully, it is proof that the message originated from someone who shares the secret. If Bob has been careful to keep the secret and trusts Alice to do the same, he has his proof.
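The shared-secret exchange between Alice and Bob can be sketched as a challenge-response protocol using java.security.MessageDigest (a class that has been in the java.security package since Java 1.1). Bob sends Alice a fresh random challenge; Alice returns a digest of the shared secret combined with the challenge; Bob recomputes the same digest and compares. This is a simplified illustration only; real protocols use keyed constructions such as HMAC rather than a bare digest.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

public class ChallengeResponse {
    // Digest the shared secret followed by the challenge. Only someone
    // who knows the secret can produce the correct answer.
    static byte[] respond(byte[] secret, byte[] challenge) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(secret);
        md.update(challenge);
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "our shared secret".getBytes();

        // Bob generates a random challenge and sends it to Alice.
        byte[] challenge = new byte[16];
        new SecureRandom().nextBytes(challenge);

        // Alice answers the challenge using the shared secret.
        byte[] answer = respond(secret, challenge);

        // Bob recomputes the expected answer and compares.
        boolean authentic = MessageDigest.isEqual(answer, respond(secret, challenge));
        System.out.println(authentic ? "Alice verified" : "verification failed");
    }
}
```

Using a fresh random challenge each time prevents an eavesdropper from simply replaying an old answer.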
You may have noticed in the preceding two paragraphs that keeping secrets and proving identity both depend on keeping other secrets: the keys. If some enemy can steal a key, he or she can read the secret messages or pretend to be someone else. Thus, key security is very important. Worse still, for most uses of cryptography, keys must be traded between people who want to communicate securely; this key exchange represents a prime opportunity for the security of the keys to be compromised.
Conventional cryptographic algorithms are symmetric: that is, the same key is used for both encryption and decryption. More recently, researchers have developed asymmetric, public-key cryptographic algorithms that use key pairs: If a message is encrypted with one key, it must be decrypted with the other key in the pair. The two keys are related mathematically, but in such a complex way that it's not feasible (that is, it's too costly or time consuming) to derive one key from the other, given sufficiently long keys.
Public-key cryptography simplifies key management immensely. You can treat one of the keys in the pair as your public key and distribute it widely, keeping the other as your secret key, known only to you. If Bob wants to create a message that only Alice can read, he can encrypt it using her public key. Because the public key can't be used to decrypt the message, others who also know Alice's public key can't read it, but Alice, using her secret key, can. Then, if Alice wants to prove her identity to Bob, she can encrypt an agreed-on message with her secret key. Bob (or anyone else) can decrypt it with her public key, thus demonstrating that it must have been encrypted with her secret key. Because only Alice knows her secret key, the message must really have come from her.
Public-key cryptography sounds unlikely and almost magical when you first encounter it, but it's not such an uncommon idea. Your own handwritten signature is somewhat like a key pair. Many of the people and organizations you deal with regularly probably recognize your signature (or have a copy on file for comparison), making the appearance of your signature a sort of public key. Actually placing your signature on a new piece of paper, however, is a skill that only you have: that's the secret key. Of course, signatures can be forged, but the point is that creating the signature is difficult for everyone but one person, while verifying it is easy for anyone. Public-key cryptography makes possible the creation of digital signatures that work in much the same way, except that forging a digital signature is much more difficult than forging a written signature.
If Alice wants to apply a digital signature to a document before sending it to Bob, a simple way for her to do so is to encrypt the document with her secret key. Because many people know her public key, the document isn't private--anyone with Alice's public key can decrypt it and read the contents (applying another layer of encryption with another key is possible; doing so produces a document that is both signed and private). When Bob successfully decrypts the message with Alice's public key, that action indicates that the message must have originally been encrypted with her secret key. What makes this effective as a signature is that, because only Alice knows her secret key, only she could have encrypted it in the first place.
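The java.security classes make this pattern concrete. The sketch below (in modern Java syntax) uses DSA, the signature algorithm that shipped with Java 1.1: Alice signs a document with her private key, and Bob verifies the signature with her public key. Note that the Signature class actually signs a digest of the document rather than encrypting the whole document, which is one of the reasons practical digital signatures are not quite as simple as the description above.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        // Alice generates her key pair once; the public key is published.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("DSA");
        gen.initialize(1024);
        KeyPair alice = gen.generateKeyPair();

        byte[] document = "Pay Bob 100 dollars. --Alice".getBytes();

        // Alice signs the document with her private key.
        Signature signer = Signature.getInstance("SHA1withDSA");
        signer.initSign(alice.getPrivate());
        signer.update(document);
        byte[] sig = signer.sign();

        // Bob verifies the signature with Alice's public key.
        Signature verifier = Signature.getInstance("SHA1withDSA");
        verifier.initVerify(alice.getPublic());
        verifier.update(document);
        System.out.println(verifier.verify(sig) ? "signature valid" : "signature invalid");
    }
}
```

If even one byte of the document is altered after signing, verify() returns false, which is what makes the signature useful as evidence.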
Many other details enter into the practical use of cryptography, of course. For several reasons, practical digital signatures are not as simple as the preceding example. Even with public-key cryptography, key management and security are important (and tricky) issues. For example, once Bob has Alice's public key, how does he know for sure that it really is her key? If the key is actually a fake provided by an attacker, signed documents that appear to be from Alice would really be from the attacker, and the attacker would be able to read documents that Bob intended for Alice's eyes only. To reduce this risk, most systems provide a mechanism by which a third party can certify a key, attesting to the key's proper ownership.
Another complication is that public-key (asymmetric) cryptography is far more computationally expensive (and thus much slower) than symmetric cryptography, so symmetric cryptography still has an important role to play. One serious problem is that, unlike most computer algorithms, most good cryptographic algorithms come with legal entanglements. Many are protected by patents, so they must be licensed from the patent holders. The United States government considers implementations of strong encryption algorithms to be in the same category as munitions, and it places heavy restrictions on their export (even though many of the best algorithms were invented outside the United States, and even though it's legal to export books containing cryptographic code). Some other governments prohibit the use of strong cryptography except for purposes of authentication, and a few governments ban it entirely. There are bills currently pending in the U.S. Congress to lift the export restrictions, but as I write this, those bills haven't yet become law, and the U.S. government's cryptography export policy is one of the factors currently delaying the release of some of the facilities planned for the java.security package.
Fortunately, the package hides most of the technical complications, and the Java license explains all the legal and political details. The rest of this chapter covers the basics of how you can use the java.security package with the rest of the Java library to make it possible for applets to do really useful work.
The java.security package, along with the javakey tool, provides facilities for digitally signing classes (actually, for signing JAR bundles) so that the identity of the party responsible for the classes can be verified. It's the identity of the signer of the classes that is later used to make access decisions.
The javakey tool is somewhat difficult to use. It is a command-line tool that supports a wide variety of options directing it to perform different tasks. It has two primary jobs: managing a database of entities (individuals or groups that may want to sign or be associated with some applets or Java classes) and applying digital signatures to JAR bundles.
When an application loads an applet from a signed JAR bundle, if the application has access to a certificate for the signer, the signature can be verified, thus providing fairly reliable information about the origin of the applet. Signatures are represented by special signature files in the bundle, and verification is done using a suite of classes in the java.security package--the most important classes being Signature, MessageDigest, and Identity. Once the signatures have been verified, the identities of the signers can be recorded using the class loader's setSigners() method. When the time comes to make access decisions, the security manager can learn about the signers by using the Class.getSigners() method.
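Once signers have been recorded, a security manager's access decision might look something like the following sketch (a hypothetical helper in modern Java syntax, not a complete implementation; as noted elsewhere in this chapter, Java 1.1 doesn't supply all the plumbing needed to build this yourself):

```java
public class SignerCheck {
    // Decide whether a class's recorded signers include a trusted party.
    static boolean isTrusted(Class<?> c, Object trustedSigner) {
        Object[] signers = c.getSigners();  // null if the class is unsigned
        if (signers == null) {
            return false;
        }
        for (Object s : signers) {
            if (s.equals(trustedSigner)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Core library classes are not signed, so they report no signers.
        System.out.println(String.class.getSigners() == null);  // prints "true"
    }
}
```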
Unfortunately, the supported Java 1.1 classes don't provide all the facilities you need to build such an application yourself. In particular, the classes that understand the format of the signature files are a part of the sun.* packages, which are not supported (or documented, for that matter) for use outside of Sun Microsystems. As of this writing, the java.security package is useful for manipulating the identity database, and the javakey tool is useful for signing JAR bundles, but Java is still missing the features required to build applications that exploit digital signatures.
Assuming that you could verify the signer of a class, there would still remain the problem of making the access decision. How does the security manager know how much trust the user places in the provider of the class?
The other important new security feature in Java 1.1 is the access control list (ACL). ACLs are a useful way of representing information about permissions and privileges. Given an entity and a particular permission to be checked, the java.security.acl.Acl interface can check whether the entity holds that permission by using the checkPermission() method.
ACLs have a structure that seems rather complex at first. An ACL consists of a set of ACL entries, each of which associates a set of permissions with a single principal (an individual entity or a group). An individual can be a member of multiple groups, and the effective permissions held by the individual are a composite of the directly associated individual permissions and those of all of the groups of which he or she is a member. (Individual permissions take precedence over group permissions.)
Another twist is that ACL entries come in both positive and negative varieties. A negative permission cancels an equivalent positive permission, so that permissions are not strictly additive. By using the relationship between negative and positive permissions, you can create special exceptions; for example, taking away a permission from someone who otherwise (by virtue of group membership) would hold the permission, or giving special privileges to individuals who are trusted more than others in a group.
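These composition rules can be modeled directly. The sketch below is a simplified stand-in for the java.security.acl interfaces (a hypothetical class, not the real API), written in modern Java syntax. It shows the two rules just described: a negative entry cancels a positive one, and individual entries take precedence over group entries.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MiniAcl {
    private final Map<String, List<String>> groups = new HashMap<>();  // principal -> groups
    private final Map<String, Boolean> individual = new HashMap<>();   // "who:perm" -> positive?
    private final Map<String, Boolean> group = new HashMap<>();        // "group:perm" -> positive?

    void addMember(String who, String groupName) {
        groups.computeIfAbsent(who, k -> new ArrayList<>()).add(groupName);
    }
    void setIndividual(String who, String perm, boolean granted) {
        individual.put(who + ":" + perm, granted);
    }
    void setGroup(String groupName, String perm, boolean granted) {
        group.put(groupName + ":" + perm, granted);
    }

    boolean checkPermission(String who, String perm) {
        // An individual entry, positive or negative, takes precedence.
        Boolean ind = individual.get(who + ":" + perm);
        if (ind != null) {
            return ind;
        }
        // Otherwise compose group entries: negative cancels positive.
        boolean granted = false;
        for (String g : groups.getOrDefault(who, new ArrayList<>())) {
            Boolean entry = group.get(g + ":" + perm);
            if (entry != null) {
                if (!entry) {
                    return false;  // a negative group entry cancels any positive one
                }
                granted = true;
            }
        }
        return granted;
    }

    public static void main(String[] args) {
        MiniAcl acl = new MiniAcl();
        acl.addMember("alice", "staff");
        acl.addMember("bob", "staff");
        acl.setGroup("staff", "read", true);       // the whole group may read
        acl.setIndividual("bob", "read", false);   // except Bob: negative entry wins

        System.out.println(acl.checkPermission("alice", "read"));  // true, via group
        System.out.println(acl.checkPermission("bob", "read"));    // false, individual negative
    }
}
```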
ACLs are powerful, flexible, and simple to use. They are also general purpose and can be used for more than just trusted applets. Unfortunately, just as with the code-signing features, the Java 1.1 ACL implementation omits some important facilities. The classes in the java.security.acl package form a nice interface to ACLs that already exist, but they provide no way to create ACLs or to initialize them from a properties file or some other external resource.
There remains one big topic to cover in this chapter: just how good is Java security? Does it really do all that it claims to do, and is it really the best thing out there, or does it have some serious weaknesses? Should we really be trusting our systems to Java applets?
Of course, the precise answer depends on who you ask. Network security is a complex topic, and we don't yet fully understand every detail of it. Nevertheless, some rough consensus is beginning to emerge about the strength of Java's security facilities. In the following sections, I attempt to answer the questions objectively, based on my own analysis as well as on the opinions of others, including some experts in the security community.
Java security doesn't try to solve every security problem. There are some potential attacks that Java doesn't currently attempt to prevent (although work is in progress to extend the security facilities into these areas as well). These attacks involve the abuse of resources that aren't necessarily sensitive, such as CPU cycles, memory, and windows.
The current Java security model takes a yes-or-no approach to security: a class is either allowed access to a resource or it is not. There is no provision for allowing an untrusted class to use only a certain amount of some resource. Java does not enforce resource quotas; it doesn't even track resource usage, so an application cannot enforce quotas on its own.
Applets can exploit this fact to mount attacks of varying severity: denial-of-service attacks that render your machine unusable by allocating all your memory or some other finite resource; annoyances that make noises or pop up big, bouncing windows on your screen that flash and then disappear before you can destroy them; or resource theft attacks, where an applet stealthily lurks in the background, using your machine to perform part of some large calculation while you wonder why your computer seems a little slower than usual today.
Java's designers didn't simply forget about these issues. They decided not to deal with them at the time because they are extremely difficult to handle correctly. Especially when it comes to CPU time, it's difficult (and often impossible) to determine whether a class is doing useful work or simply wasting the resource. Researchers are currently investigating ways to prevent (or limit) these kinds of attacks, and future versions of Java will attempt to deal with them as well as possible.
In addition to the kinds of attacks that were intentionally excluded from the scope of the Java security facilities, there have been a few accidental flaws. Several have been found by Drew Dean, Ed Felten, and Dan Wallach, a team of researchers at Princeton University. Others were found by David Hopwood, an Oxford researcher. Wallach and Hopwood have both consulted with Netscape, helping to strengthen Java security in Netscape products.
All these security holes permitted very serious attacks that could result in loss or theft of valuable data. It's comforting, however, to know that each of those holes was the result of a simple bug in the code that implemented part of the security architecture, rather than any fundamental flaw in the architecture itself. In each case, the bugs were fixed within a few days of discovery. In fact, the process in which researchers carefully examine Java for security weaknesses is one of the important reasons why Sun made the Java source code available for study. Java security is now much stronger because several security bugs have been found and eliminated.
That holes have been found in Java security is a reminder that we should be cautious, but there's no reason for panic. In truth, all network-oriented applications are candidates for serious security holes, and such problems are actually quite common. It's also common for security considerations to be almost an afterthought. Java actually seems to be stronger than many other programs because certain aspects of its security architecture have been central to Java's design from the beginning, and because all the Java hype has prompted some intense scrutiny.
What are the chances that someone will discover a really fundamental security flaw in Java--a serious flaw in the security architecture itself?
It's impossible to say for sure, of course, but we do know enough about security to be able to analyze the Java security model and make some guesses. It turns out that the Java security architecture is not perfect, and there are some things to watch out for. Dean, Felten, and Wallach, the Princeton researchers mentioned earlier, have expressed some concern about the complexity of Java's security architecture, but most who have studied the matter seem to believe that the outlook is good.
The biggest weakness in the Java security model is its complexity. Complexity begets mistakes. The problem is exacerbated by the fact that application authors currently must implement their own security manager and class loaders. Hopefully, a few flexible, configurable security policy implementations will be made available for reuse, so that they can be carefully tested and debugged once, and used many times. Such a development would greatly reduce the potential for new security holes.
Another weakness is that the security responsibility is split between the security manager and the class loaders. It would be better if the job were localized in one class.
Although we may wish things were simpler, as a whole, the Java security architecture appears to be strong enough, and we can expect it to grow stronger with time.
Microsoft has its own proposal for dynamically loaded code on the Internet: ActiveX. ActiveX is essentially a way to leverage the large body of OLE controls that already exists for the Windows environment. Questions have been raised about the security implications of the proposal, however. ActiveX certainly has considerable value in some situations, but there are good reasons to be worried about potential security problems.
ActiveX controls can be written in Java, and you can bet that Java-based controls will begin appearing in the near future. However, most of the large existing body of controls has been written in C or C++ and compiled into native Intel machine code. The ActiveX answer to securing these controls is based on a digital signature technology called Authenticode. Developers and software vendors sign their controls, and the ActiveX system verifies that signature against a registry of signature authorities, thereby verifying the origin of the control. Users can configure ActiveX to allow only controls from certain vendors to run on their system; for example, if you trust Microsoft and Corel, but nobody else, you can configure ActiveX to automatically fetch and run controls from those two companies, but to refuse all others.
That sounds okay, right? After all, it sounds a lot like the code-signing system for Java classes that is intended to help loosen the rigidly conservative Java security policy.
Actually, however, the two systems aren't all that similar. To understand why, you have to look at what a signature on a piece of software really means. A signature really only means "I signed this and it hasn't been changed since I did." The person or authority who applies the signature might actually mean more than that, but the Authenticode proposal doesn't require any guarantees. Even if guarantees were present, it's unlikely that they would be very strong, based on the weak (or nonexistent) warranties that are found on most software products today.
So a signature is not a guarantee of safety. If that's the case, what makes the Java code-signing technology better than Authenticode? The difference is that ActiveX security is all-or-nothing; Java security can be extremely fine grained and flexible. If you (or the ActiveX system) choose to allow a native-code ActiveX control to run on your system, that control has complete access to your system. It can read any file you can read, delete or change any file you can, make any network connection it chooses--even format your hard disk. Java, on the other hand, can grant access to classes selectively, even choosing which files a particular class is allowed to read. A word processing applet, invoked specifically to edit one particular document, can be allowed to write to that document and no other.
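The kind of per-class, per-file decision described above can be sketched as a simple policy table (a hypothetical helper class for illustration, not Java's actual SecurityManager API):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class FilePolicy {
    // Maps a code source (e.g., a signer or applet name) to the files it may write.
    private final Map<String, Set<String>> writable = new HashMap<>();

    void allowWrite(String codeSource, String file) {
        writable.computeIfAbsent(codeSource, k -> new HashSet<>()).add(file);
    }

    // Grant access only to files explicitly listed for this code source;
    // everything else is refused by default.
    boolean mayWrite(String codeSource, String file) {
        Set<String> files = writable.get(codeSource);
        return files != null && files.contains(file);
    }

    public static void main(String[] args) {
        FilePolicy policy = new FilePolicy();
        // The word processing applet may edit only the document it was invoked on.
        policy.allowWrite("wordproc-applet", "/home/user/report.txt");

        System.out.println(policy.mayWrite("wordproc-applet", "/home/user/report.txt")); // true
        System.out.println(policy.mayWrite("wordproc-applet", "/etc/passwd"));           // false
    }
}
```

The deny-by-default structure is the essential difference from the all-or-nothing model: a control that is merely "trusted to run" gets everything, while a class under this kind of policy gets only what was explicitly granted.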
Authenticode is not really a security architecture; it's a trust management architecture. Furthermore, it's a very limited trust management architecture because it doesn't have any understanding of partial trust.
Some people say that ActiveX will probably be widely used on corporate intranets, with Java ruling the roost on the Internet at large. That's probably a good prediction--certainly ActiveX will be very useful on intranets because the security issues aren't quite as important there. However, even that thought makes me uneasy because security studies have shown that most security problems originate within an organization--and because of a prediction of my own: I believe that, over the next five to ten years, the walls between corporate intranets and the Internet will become more porous, often coming down altogether. Companies that rely heavily on intranet applications built with ActiveX will be at a disadvantage when that time comes.
Several Web pages, and at least one other book, provide more in-depth treatment of Java security topics. JavaSoft maintains a FAQ page on applet security issues (http://java.sun.com/sfaq/); the Princeton researchers who have found several serious Java security bugs maintain a similar page with an outsider's perspective (http://www.cs.princeton.edu/sip/java-faq.html). For more information about the second layer of the security model (validation and verification of Java class files and bytecodes as they are loaded into the interpreter), read The Java Virtual Machine Specification, by Tim Lindholm and Frank Yellin. The book is a technical description of the architecture of the virtual machine and contains a precise description of the steps taken by the class verifier. To see examples of applets that exploit some of the weaknesses in the current Java security facilities, take a look at Mark LaDue's Hostile Applets page (http://www.math.gatech.edu/~mladue/HostileApplets.html). (Don't worry, you can go at least that far without fear; Mark keeps the hostile applets off the main page.) For more information about programming with the Java security system, read Maximum Java (published by Sams.net), which provides deeper coverage of several topics that readers of this chapter might find interesting.
Java's security model is possibly the least understood aspect of the Java system. Because it's unusual for a language environment to have security facilities, some people have been bothered by the danger; at the same time, because the security restrictions prevent some useful things as well as some harmful things, another group of people has wondered whether security is really necessary.
Java security is important because it makes exciting new things possible with very little risk. Early security holes caused by implementation bugs are being closed, and technology is being fielded that permits the strict security policy to be relaxed carefully and selectively. Resources that can be used to destroy or steal data are being protected, and researchers are examining ways to prevent applets from using other resources to cause annoyance or inconvenience.
Application developers can design their own security policies and supply parts of the third layer of the Java security model to implement those policies in their applications.
The Java security architecture is sound. Early weaknesses and bugs are not a surprise, and the process that has exposed those flaws has also helped remove them.
©Copyright, Macmillan Computer Publishing. All rights reserved.