by Glenn Vanderburg
Java's popularity and high profile are the product of several different characteristics, but none is more important than Java's claim to be a "secure" language. Java isn't the first language to have built-in security features, but it is one of the first; the others have been research projects or special-purpose languages. Certainly none can come close to Java's general-purpose, mainstream appeal.
This chapter presents a brief survey of Java security. First, you'll learn why security is important, and what kinds of threats Java protects against. You'll also learn the details of how Java security works, and how applications make use of the security features to implement their own security policies. Finally, the chapter discusses the quality of Java's security facilities: how strong they are, what weaknesses remain, and how Java security compares with other systems.
Since the introduction of Java nearly a year ago, it's been amusing to watch the varying reactions to Java's security claims. Java's creators recognize that security is important, and believe that they have devised a good solution. There are many who agree with them, but there are also many who don't, and the dissenters range all over the spectrum. Some decry the whole idea of indiscriminately bringing untrusted, possibly rogue programs onto your machine from the Internet, claiming that no language security model can be solid enough to hold up against clever crackers. Others wonder what all the fuss is about, and ask whether security is needed at all-wouldn't it be better if applets could actually read and write your files and do useful work for you? Many developers take a middle view; they understand the need for security in general, but they're frustrated when their own Java programs aren't allowed to do benign, helpful things.
All these groups have some good points, but they miss other important points about Java security, or Internet security in general.
If your computer and the data stored in it are important to you, it's easy to understand why you may not want to run Java applets at all. Java's creators claim that it's secure, but what if they're wrong? You would be opening the door for some malevolent programmer to enter your computer and steal, destroy, or alter important data.
But the same is true of software that you manually load from the Internet and install on your machine. Even if you never do that, you also risk your data when you purchase a shrink-wrapped program from a store and install it from disks or CD. Those programs could destroy or alter your data immediately, or steal it the next time you are connected to the Internet. Although this may sound unlikely to you, it has happened. Several companies have accidentally shipped viruses to their customers along with products. Early beta versions of the Microsoft Network software performed some data collection that many users considered to be an invasion of privacy.
Java may actually improve the overall security of our computers because it guards against accidents almost as much as it guards against deliberate damage. Most of us have had the experience of installing a buggy new program that caused problems on our computers so severe that we had to reinstall the operating system. In many cases, such experiences result in the loss of important data. Although Java may not completely eliminate such problems, Java applets won't be able to inflict such damage, even accidentally.
In fact, the only perfectly secure computer is one that is unplugged. To do useful things, you have to take some risks, and the essence of security is to have acceptable levels of risk and inconvenience. This is true of all security, not just computer security. That's not to say that good security has to be a lot of trouble, but perfect security is always more trouble than it's worth.
On the other extreme, if what's on your computer isn't vitally important to you, or if you're simply not convinced anyone would want to steal or destroy it, you may see Java's security restrictions as a nuisance. It would be great if applets, fetched on demand from Web pages (and free of charge) could actually be useful applications that you could use to accomplish your work.
That certainly would be great, even if the applets weren't free. Unfortunately, there are a couple things about this scenario that should make you uneasy. Traditional software requires your conscious intervention before it can do anything on your computer: you have to acquire and install it first, and then actually run it. An applet, on the other hand, can be invoked without forethought when you browse to a Web page that contains it. Furthermore, you may never know the applet is there-it may not take up any visible space on the Web page. So an applet can run on your system without any initiative on your part, and it may not leave a trace behind. With an applet, you have no choice, and the applet has no accountability.
You may wonder why anyone would want to attack your system. The unfortunate fact is that many crackers choose their targets randomly. They aren't searching for any particular data, or seeking to harm any particular individual. Instead, they are vandals, or they just want to practice their skills or find some random computer they can use to hide their tracks while they attack their true target. They search for vulnerable systems and zero in on them. Historically, most cracking incidents have focused on UNIX systems, but with the increasing prevalence of Windows and Macintosh systems on the Internet, things are bound to change.
Programs automatically fetched to your system and run locally really are different from those you install deliberately, and their security needs are different. Think of workers who come to your home: it's fine to let selected people in when you're there and expecting them, but you wouldn't leave all your doors open all the time just in case someone came by to do some work on your house.
Many users and applet developers understand the need for security, but wish it weren't so strict and inflexible. They say that users should be able to disable or weaken Java's security if they want to. To a large degree, they're right, and you can expect Java applications to have more flexibility in the future.
Java security isn't all or nothing. The application can grant or deny access to applets based on a wide variety of criteria: the name of the applet, where it came from, the type of resource it's trying to access, even the particular resource. An application can choose to let applets read some files, but not others, for example.
If that's the case, then why is Java security in early applications (such as Netscape Navigator and Microsoft Internet Explorer) so inflexibly strict? There are two reasons.
The first is that such configurable, flexible security is difficult to get right. It is usually easy to build a high, impenetrable wall with no holes at all, but a wall with selective holes is much harder to develop. Flexible, selective schemes involve a lot of extra complexity, and with complexity comes the potential for mistakes. In addition, there are subtle, difficult questions surrounding flexible security schemes. For example, employees may have very different ideas about acceptable security than their employer does-how much control should be given to the users and how much to the site security administrator? Faced with numerous tough questions like that, the people behind Java decided to be careful at first. They are starting with an extremely conservative security model, which will become less rigid as time goes on. This is probably a good strategy because a big security scare early in Java's lifetime would have really dampened enthusiasm for the language.
The other problem is that there isn't yet a good criterion for deciding which applets should be trusted and which should not. The best solution is probably to trust applets based on who wrote them, but that's difficult to verify. JavaSoft has developed a model based on digital signatures, but its release has been delayed by U.S. government export restrictions: current regulations permit the export of only a particularly weak form of signature system, while JavaSoft wants to ship something strong enough to provide real security. Hopefully, these issues will be resolved and a system for signing Java classes will be available by early 1997.
Later in this chapter, you'll read more about the details of the security model and how the application security manager can make fine-grained decisions about access to system resources.
To really understand the Java security model-why it's important, how it works, and how to work with it-you should have a good idea about the kinds of security attacks that are possible and which system resources can be used to mount such attacks. Java takes care to protect these resources from untrusted code. If you are using a Java application that allows you to configure applet security, or if you are writing a Java application that loads classes from the Net, it helps to understand just what doors you might be opening when you give an applet access to a particular resource.
There are several different kinds of security attacks that can be mounted on a computer system. Some of them are surprising to people who are new to computer security issues, but they are very real and can be devastating under the right circumstances. Table 35.1 lists some common types of attacks.
Type of Attack | Description |
Theft of information | Nearly every computer contains some information that the owner or primary user of the machine would like to keep private. |
Destruction of information | In addition to data that is private, most of the data on typical computers has some value, and losing it would be costly. |
Theft of resources | Computers contain more than just data. They have valuable, finite resources that cost money: disk space and CPU time are the best examples. A Java applet on a Web page could quietly begin doing some extensive computation in the background, periodically sending intermediate results back to a central server, thus stealing some of your CPU cycles to perform part of someone else's large project. This would slow down your machine, wasting another valuable resource: your time. |
Denial of service | Similar to theft of resources, denial-of-service attacks involve using as much as possible of a finite resource, not because the attacker really needs the resource, but simply to prevent someone else from being able to use it. Some computers (like mail servers) are extremely important to the day-to-day operations of businesses, and attackers can cause a lot of damage simply by keeping those machines so busy with worthless tasks that they can't do their real jobs. |
Masquerade | By pretending to be from another source, a malicious program can persuade a user to reveal valuable information voluntarily. |
Deception | If a malicious program were successful in interposing itself between the application and some important data source, the attacker could alter data-or substitute completely different data-before giving it to the application or the user. The user would take the data and act on it, assuming it to be valid. |
In addition to these common attacks, Java applets can try another kind of attack. Because applets are fetched to your machine and run locally, they can try to assume your identity and do things while pretending to be you. For example, machines behind corporate firewalls often trust each other more than they trust machines on the wider Internet, so once an applet has started running on your machine behind the firewall, it may try to access other machines, exploiting that trust. (That's the reason current Java applications only let applets make network connections to the machine from which they were fetched.) Another example is mail forging: once on your machine, an applet may attempt to send threatening or offensive mail which appears to be from you. Of course, Internet mail can be forged from other machines besides your own, but doing it from your own machine makes it a little more convincing.
Now that you understand why security features are important and what kinds of threats exist, it's time to learn how Java's security features work and how they protect against those threats.
The Java security model is composed of three layers, each dependent on those beneath it. The following sections cover each of the layers, describing how the security system works.
The first line of defense against untrusted programs in a Java application is a part of the basic design of the language: Java is a safe language. When programming language theorists use the word safety, they aren't talking about protection against malicious programs. Rather, they mean protection against incorrect programs. Java achieves this in several ways: there is no pointer arithmetic, so programs cannot fabricate references to arbitrary memory; memory is reclaimed automatically by the garbage collector, so storage cannot be freed while it is still in use; every array access is checked against the bounds of the array; and casts between incompatible types are forbidden, so the type system cannot be subverted.
All these qualities make Java a "safe" language. Put another way, they ensure that code written in Java actually does what it appears to do, or fails. The surprising things that can happen in C (such as continuing to read data past the end of an array as though it were valid) cannot happen. In a safe language, the behavior of a particular program with a particular input should be entirely predictable-no surprises.
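The difference is easy to demonstrate with a small, purely illustrative sketch (the class and method names here are hypothetical):

```java
// A small illustration of language safety: an out-of-bounds array
// access in Java fails predictably instead of reading stray memory.
class SafetyDemo {
    static String readElement(int[] a, int i) {
        try {
            return "value: " + a[i];
        } catch (ArrayIndexOutOfBoundsException e) {
            // In C, reading a[10] from a 4-element array silently returns
            // whatever bytes follow the array; in Java, the runtime
            // refuses and raises this error instead.
            return "out of bounds";
        }
    }
}
```

Whatever the input, the behavior is predictable: the method either returns the element or reports the error, never garbage.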
The second layer of Java security involves careful verification of Java class files-including the virtual machine bytecodes that represent the compiled versions of methods-as they are loaded into the virtual machine. This verification ensures that a garbled class file won't cause an error within the Java interpreter itself, but it also ensures that the basic language safety is not violated. The rules about proper language behavior that were written into the language specification are good, but it's also important to make sure that those rules aren't broken. Checking everything in the compiler isn't good enough, because it's possible for someone to write a completely new compiler that omits those checks. For that reason, the Java library carefully checks and verifies the bytecodes of every class that is loaded into the virtual machine to make sure that those bytecodes obey the rules. Some of the rules, such as bounds checking on references to array elements, are enforced by the virtual machine itself as the code runs, so the verifier need not check them separately. Other rules, however, must be checked carefully. One particularly important rule that is verified rigorously is that objects must be true to their type-an object that is created as a particular type must never be able to masquerade as an object of some incompatible type. Otherwise, there would be a serious loophole through which explicit security checks could be bypassed.
This verification process doesn't mean that Java code can't be compiled to native machine code. As long as the validation is performed on the bytecodes first, a native compiled version of a class is still secure. "Just-in-time" (JIT) compilers run within the Java virtual machine, compiling bytecodes to native code as classes are loaded, just after the bytecode verification stage. This compilation step doesn't usually take much time, and the resulting code runs much faster.
The third and final layer of the Java security model is the implementation of the Java class library. Classes in the library provide Java applications with their only means of access to sensitive system resources, such as files and network connections. Those classes are written so that they always perform security checks before granting access.
This third layer is the portion of the security system that an application can control-not by changing the library implementation, but by supplying the objects that actually make the decisions about whether to grant each request for access. Those objects-the security manager and the class loaders-are the core of an application's security policy, and you'll read more about them (including how to implement them) a little later in this chapter.
The first two layers of the Java security model are primarily concerned with protecting the security model itself. It's the third layer, the library implementation, in which explicit measures are taken to protect against the kinds of attacks listed in Table 35.1. To help thwart those attacks, Java checks each and every attempt to access particular system resources that could be used in an attack. Those resources fall into six categories, as listed in Table 35.2.
Resource | Description |
Local file access | The capability to read or write files and directories. These capabilities can be used to steal or destroy information, as well as to deny service by destroying important system files or writing a huge file that fills the remaining space on your disk. Applets can also use local file access to deceive you by writing an official-looking file somewhere that you will find later and believe to be trustworthy. |
System access | The capability to execute programs on the local machine, plus access to system properties. These capabilities can be used for theft or destruction of information or denial of service in much the same way that direct file access can: by executing commands that manipulate your files. Additionally, system properties may contain information that you view as private or that can help an attacker break into your system using other means. |
Network access | The capability to create network connections, both actively (by connecting to some machine) and passively (by listening and accepting incoming connections). Applets that actively create connections may be trying to usurp the user's identity, exploiting the trust that other machines place in him or her. Applets that try to listen for incoming connections may be taking over the job of a system service (such as a Web server). |
Thread manipulation | The capability to start, stop, suspend, resume, or destroy threads and thread groups, as well as other sorts of thread manipulation such as adjusting priorities, setting names, and changing the daemon status. Without restrictions on such capabilities, applets can destroy work by shutting down or disabling other components of the applications within which they run, or do so to other applets. Rogue applets can also mount denial of service attacks by raising their own priority while lowering the priorities of other threads (including the system threads that may be able to control the errant applets). |
Factory object creation | The capability to create factory objects that find and load extension classes from the network or other sources. An untrustworthy factory object can garble user data, transparently substitute incoming data from a completely different source, or even steal outgoing data-without the user of the application realizing what's happening. See "Further Reading," later in this chapter, for pointers to more information about factory objects. |
Window creation | The capability to create new top-level windows. New top-level windows may appear to be under the control of a local, trusted application rather than an applet, and they can prompt unwary users for important information such as passwords. The Java security system permits applications to forbid applets from creating new windows, and it also permits tagging applet-owned windows with a special warning for users. |
The third layer of the security model isn't just concerned with protecting system resources; it also provides protection for some Java runtime resources, to protect the integrity of the security model itself. You'll learn more about that kind of protection in "Protection for Security Facilities," later in this chapter.
Let's look at an example to see how the security model works in practice. This example concentrates on what happens in the third layer, for two reasons: The lower two layers sometimes deal with some rather esoteric issues of type theory, and they are not within the programmer's control. The top layer, on the other hand, is relatively straightforward and can be controlled by Java application programmers.
Suppose that the Snark applet has been loaded onto your system and wants to read one of your files-say, diary.doc. To open the file for reading, Snark must use one of the core Java classes-in particular, FileInputStream or RandomAccessFile in the java.io package. Because those core classes are a part of the security model, before they allow reading from that particular file, they ask the system security manager whether it's okay. Those two classes make the request in their constructors; FileInputStream uses code like this:
// Gain access to the system security manager.
SecurityManager security = System.getSecurityManager();
if (security != null) {
    // See if reading is allowed. If not, the security manager will
    // throw a SecurityException. The variable "name" is a String
    // containing the file name.
    security.checkRead(name);
}
// If there is no security manager, anything goes!
The security manager is found using one of the static methods in the System class. If there is no security manager, everything is allowed; if there is a security manager, it is queried to see whether this access is permitted. If everything is fine, the SecurityManager.checkRead() method returns; otherwise, it throws a SecurityException. Because this code appears in a constructor, and because the exception isn't caught, the constructor never completes, and the FileInputStream object can't be created.
How does the security manager decide whether the request should be allowed or not? The SecurityManager class, an abstract class from which all application security managers are derived, contains several native methods that can be used to inspect the current state of the Java virtual machine. In particular, the execution stack-the methods in the process of executing when the security manager is queried-can be examined in detail. The security manager can thus tell which classes are involved in the current request, and it can decide whether all those classes can be trusted with the resource being requested.
In the Snark example, the security manager examines the execution stack and sees several classes, including Snark. That means something to us, but it probably doesn't mean a lot to the security manager. In particular, the security manager has probably never heard of a class called Snark, and presumably it doesn't even know that Snark is an applet. Yet that's the really important piece of information: if one of the classes currently on the execution stack is part of an applet or some other untrusted, dynamically loaded program, then granting the request could be dangerous.
At this point, the security manager gets some help. For each class on the execution stack, it can determine which class loader is responsible for that class. Class loaders are special objects that load Java bytecode files into the virtual machine. One of their responsibilities is to keep track of where each class came from and other information that can be relevant to the application security policy. When the security manager consults Snark's class loader, the security manager learns (among other things) that Snark was loaded from the network. At last, the security manager knows enough to decide that Snark's request should be rejected.
Before we plunge ahead into the deeper details of how security managers and class loaders work, let's step to the other side of the security model for a moment and see what it looks like to untrusted classes. We've seen what happens backstage-but what does it look like if you don't have a backstage pass?
Applets and other untrusted (or partially trusted) classes, such as "servlets" in Java-based Web servers, or protocol handlers and content type handlers in HotJava, run within the confines of the application security policy. (In fact, depending on the security policy itself, it's possible that all classes except for the security manager and class loaders run under some security restrictions.) Such "unprivileged" classes are the kind that most Java programmers will be writing, so it's important to understand what the Java security facilities look like from the point of view of ordinary code.
Security violations are signaled when the security manager throws a SecurityException. It is certainly possible to catch that SecurityException and ignore it, or try a different strategy, so an attempt to access a secured resource doesn't have to mean the end of your applet. By trying different things and catching the exception, applets can build a picture of what they are and are not allowed to do. It's even possible to call the security manager's access checking methods directly, so that you can find out whether a certain resource is accessible before actually attempting to access it.
Note: Such probing may seem sneaky, but there are plenty of legitimate uses for it. If you are writing a word processing applet, for example, it is a good idea to make the applet probe first to determine whether it is allowed to write a file to the local disk, before allowing the user to spend three hours typing an important document. As a user, wouldn't you much rather know from the beginning that saves are not allowed?
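A minimal sketch of such a probe for the word processor scenario (SaveProbe and canSave are hypothetical names, not part of any Java library):

```java
// Hypothetical helper: ask the security manager whether writing a file
// would be allowed, without actually attempting to open the file.
class SaveProbe {
    static boolean canSave(String filename) {
        SecurityManager security = System.getSecurityManager();
        if (security == null) {
            return true; // no security manager means no restrictions
        }
        try {
            // Call the access check directly; it returns if writing is
            // permitted and throws a SecurityException if it is not.
            security.checkWrite(filename);
            return true;
        } catch (SecurityException e) {
            return false;
        }
    }
}
```

An applet could call canSave() once at startup and gray out its Save button if the answer is no.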
This section delves a little deeper into the implementation of security managers and class loaders-deep enough to help you implement such classes yourself, should you need to. If you are building Java applets or other dynamic extensions, class libraries, or even standalone Java applications that don't need dynamic network extensibility, you may want to skip this section and proceed to "How Good Is Java Security?" later in this chapter. But if you are building a Java-based application that needs to host applets or other untrusted classes, this section is important for you because those are the kinds of applications that need a security policy.
Unfortunately, the JDK doesn't come with a working security policy mechanism that's ready for an application to use. The SecurityManager class that comes with the JDK is an abstract class, so no instances can be created. It wouldn't be useful anyway, because every access-checking method throws a SecurityException immediately, in every case-whether any untrusted classes are active or not! Clearly, no program could accomplish anything useful with that security manager on watch.
Therefore, if your application plans to host untrusted classes, you need a new SecurityManager and one or more ClassLoader implementations. You'll probably have to build them yourself, because nobody is yet offering reusable security policy implementations that Java programmers can use "out of the box." I expect such third-party security support to be available at some point, but until that happens, the next sections explain how to do it yourself.
Unlike most other portions of an application, class loaders must work both sides of the security fence. They must take care to consult the security manager before allowing certain operations, and they must cooperate with the security manager to help it learn about classes and make decisions about access requests. They must also avoid breaking any of the assumptions about classes on which the security manager relies.
When defining a class, the class loader must identify the package in which the class belongs and call SecurityManager.checkPackageDefinition() before actually loading the class into that package. Membership in a package gives a class special access to other classes in the package and can provide a way to circumvent security restrictions.
When the class loader defines a class, it must also resolve the class. Resolving a class involves locating and loading (if necessary) other classes that the new class requires. This is done by calling a native method called ClassLoader.resolveClass(Class). If other classes are needed during the resolution process, the Java runtime calls the loadClass(String, boolean) method in the same ClassLoader that loaded the class currently being resolved. (If the boolean parameter is true, the newly loaded class must be resolved also.)
The class loader must be careful not to let a class from an untrusted source masquerade as a trusted class. The CLASSPATH list should be searched first, so that system classes always take precedence. This is especially important during the resolution process.
Additionally, the class loader should check with the security manager about whether the class being resolved is even allowed to use the classes in the requested package. The security manager may want to prevent untrusted code from using entire packages.
Listing 35.1 gives an example of the steps you can take to load a class securely.
Listing 35.1. Loading a class securely.
protected Class loadClass(String cname, boolean resolve)
        throws ClassNotFoundException {
    // Check to see if I've already loaded this one from my source.
    Class cls = (Class) myclasses.get(cname);
    if (cls == null) {
        // If not, then I have to do security checks.
        // Is the requestor allowed to use classes in this package?
        SecurityManager security = System.getSecurityManager();
        if (security != null) {
            int pos = cname.lastIndexOf('.');
            if (pos >= 0) {
                security.checkPackageAccess(cname.substring(0, pos));
            }
        }
        try {
            // If there's a system class by this name, use it.
            return findSystemClass(cname);
        } catch (Throwable e) {
            // Otherwise, go find it and load it.
            cls = fetchClass(cname);
        }
    }
    if (cls == null)
        throw new ClassNotFoundException(cname);
    if (resolve)
        resolveClass(cls);
    return cls;
}
In the preceding listing, the real work of actually retrieving a class and defining it is done in the fetchClass() method. The primary security responsibility of that method is to call SecurityManager.checkPackageDefinition(package) before actually defining the class, as described previously.
The way this resolution process works (with the ClassLoader that loaded the class being responsible for resolving class dependencies) is one reason why applications typically define one class loader for each different source of classes. When a class from one source has a dependency on some class named MyApplet, for example, it would probably be a mistake to resolve the dependency using a class with the same name from another source.
The other side of the class loader's responsibility for security is to maintain information about classes and provide that information to the security manager. The type of information that is important to the security manager depends on the application. Currently, most Java applications base security decisions on the network host from which a class was loaded, but other information may soon be used instead. With digital signature facilities available, it will be feasible to allow certain classes special privileges based on the organization or authority that has signed those classes.
Implementing a security manager can involve a lot of work, but if you have a coherent security policy, the process isn't particularly complicated. Most of the work involved stems from the fact that SecurityManager has a lot of methods you must override with new, more intelligent implementations that make reasonable access decisions instead of automatically disallowing everything.
Once the security manager has decided to allow an operation, all it has to do is return. Alternatively, if the security manager decides to prohibit an operation, it just has to throw a SecurityException. Communicating the decision is easy-the hard part is deciding.
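The pattern looks like this in a hypothetical security manager that permits reads only beneath a single directory; the class and its one-directory policy are illustrative, not something from the JDK:

```java
import java.io.File;

// A sketch of the "return or throw" pattern: this hypothetical manager
// allows reads only under one directory ("allowedDir" is illustrative).
class DirectorySecurityManager extends SecurityManager {
    private final String allowedDir;

    DirectorySecurityManager(String allowedDir) {
        this.allowedDir = allowedDir;
    }

    public void checkRead(String file) {
        if (new File(file).getAbsolutePath().startsWith(allowedDir)) {
            return; // allowing the operation takes nothing more than returning
        }
        // Prohibiting it takes nothing more than throwing.
        throw new SecurityException("read of " + file + " not permitted");
    }
}
```

The mechanics really are that simple; all of the real work is in deciding which branch to take.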
The section, "Example: Reading A File," earlier in this chapter, contains a simple example of the workings of the Java security system. That example omitted a few details for the sake of simplicity, but now you need to know the whole story. The security manager can examine the execution stack to find out which classes have initiated an operation. If an object's method is being executed at the time the security manager is called, the class of that object is requesting the current operation (either directly or indirectly). The important thing about the objects on the stack, from the security manager's point of view, is not the objects themselves but their classes and those classes' origins. In Java, each object contains a pointer to its Class object, and each class can return its class loader through the getClassLoader() method. The implementation of SecurityManager uses those facts, along with native methods that can find the objects on the stack itself, to find out the classes and class loaders that have objects on the execution stack. Figure 35.1 shows the Java execution stack while the security manager is executing.
Figure 35.1: The Security manager and the Java execution stack.
Note: Because the security manager really doesn't care about the objects themselves-just the classes and class loaders-the documentation for the SecurityManager class blurs the distinction a bit. It refers to "the classes on the execution stack" and "the class loaders on the execution stack." This chapter uses the same phrases. Strictly speaking, the classes in question aren't actually on the stack, but they have instances that are. Likewise, the class loaders aren't really on the stack, but they are responsible for classes that are. It's just a lot easier to talk about "a ClassLoader on the stack" than "an object on the stack that is an instance of a class that was loaded by a ClassLoader."
The JDK applet viewer application and Netscape Navigator 2.0 have simple security models: if a class is not a system class (that is, if it wasn't loaded from CLASSPATH), it isn't trusted and isn't allowed to do very much. If your security model is that simple, your security manager will be simple, too. Calling SecurityManager.inClassLoader() tells you whether the operation is being requested by untrusted code. It returns true if there is any class loader at all on the stack. System classes loaded from CLASSPATH don't have a class loader, so if there's a class loader on the stack anywhere, there's an untrusted class in control.
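The null-class-loader convention is easy to observe directly through getClassLoader(). One caveat: in today's JVMs only classes from the boot class path report a null loader, while application classes loaded from the class path have a loader of their own, unlike in JDK 1.0. This small probe illustrates the distinction:

```java
// Core classes come from the boot class path and report no class loader;
// classes loaded by any real ClassLoader report a non-null one.
public class LoaderProbe {
    public static void main(String[] args) {
        System.out.println(String.class.getClassLoader() == null);
        System.out.println(LoaderProbe.class.getClassLoader() != null);
    }
}
```

A simple security manager can use exactly this kind of test: any class that answers with a real ClassLoader came from somewhere other than the trusted system classes.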
If an operation is to be prohibited in general, but allowed if it comes from a particular trusted class, you can investigate further. SecurityManager.classLoaderDepth() tells you how deep on the stack the first class loader is. When classLoaderDepth() is used with SecurityManager.classDepth(String), it's possible to determine whether a particular trusted class is really in control.
Imagine a distributed calendar management system that makes use of applets. Such a system might include a trusted class, Invite, which records an invitation of some sort in your local calendar file. Applets can use the Invite class to issue invitations. You wouldn't want an untrusted applet to write directly to your calendar file, but Invite can be trusted to write only invitations and not to do any damage or reveal any private information. In such an application, the security manager's checkWrite() method might contain code like this:
    if (classDepth("Invite") < classLoaderDepth()) {
        // The Invite class is in control, so we can allow the request.
        return;
    } else {
        throw new SecurityException("attempted to write file " + filename);
    }
The inClass(String) method can also be helpful in this situation, if you're confident that the class you're interested in doesn't call any untrusted classes along the way. Be careful, however, because inClass() simply tells you that the specified class is on the stack somewhere. It says nothing about how deep the class is or what classes lie above it on the stack.
Currently, Java applications typically don't support multiple levels of trust-a class is either trusted or it's not. In the future, when digital signature technology is available for Java classes, it will be possible to verify the source of Java classes and loosen the security restrictions appropriately. If you are designing an application with such a capability, you may need more information about the class loader responsible for the object requesting an operation. The currentClassLoader() method returns the ClassLoader object highest on the stack. You can query that object for application-specific information about the source of the class.
Finally, if all those other methods aren't enough to implement your security policy, SecurityManager provides the getClassContext() method. It returns an array of Class objects, in the order that they appear on the stack, from top to bottom. You can use any Class methods on these objects to learn various things: getName(), getSuperclass(), and getClassLoader(), among others.
Building your application's security manager takes work, and it can be complicated, but it doesn't have to be a nightmare. Just be sure to design a coherent security policy first.
We've talked about implementing class loaders and security managers, but there's one crucial question left: how are those new implementations installed so that they are called when classes need to be loaded from the network and when security decisions need to be made?
To answer that question, let's start with the security manager. One particular SecurityManager instance serves as the security manager for an application. That instance is installed by using System.setSecurityManager(). Here's how to install a security manager in your application:
    System.setSecurityManager(new MySecurityManager());
Likewise, the security manager can be accessed by using System.getSecurityManager(). Any Java method can query the security manager, but it's crucial that the methods that provide access to sensitive system resources query the security manager before they permit the access. Such checks are very simple, and all the Java library classes that consult the security manager use nearly identical code to do it. For example, here's the File.delete() method:
    /**
     * Deletes the specified file. Returns true
     * if the file could be deleted.
     */
    public boolean delete() {
        SecurityManager security = System.getSecurityManager();
        if (security != null) {
            security.checkDelete(path);
        }
        return delete0();
    }
delete0() is the method that really does the work of deleting the file. (It's declared private so that other classes can't call it directly.) Before calling it, the delete() method checks with the security manager to see whether the operation is permitted. If everything is fine, the security manager's checkDelete() method simply returns, and the delete0() method is called. If the operation is not allowed, checkDelete() throws a SecurityException. Because delete() makes no attempt to catch the exception, it propagates up to the caller, and delete0() is never called.
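Application code can reproduce the same check-then-act pattern for its own sensitive resources. In this sketch, the GuardedFile class and its delete() wrapper are hypothetical, and java.io.File.delete() stands in for the private delete0() native method:

```java
import java.io.File;

// Hypothetical wrapper reproducing the library's check-then-act pattern.
public class GuardedFile {
    public static boolean delete(String path) {
        SecurityManager security = System.getSecurityManager();
        if (security != null) {
            security.checkDelete(path);  // throws SecurityException if denied
        }
        return new File(path).delete();  // the real work (the "delete0" step)
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("guarded", ".tmp");
        // With no security manager installed, everything is allowed.
        System.out.println(delete(f.getPath()));
    }
}
```

Because the security check comes before the work, a thrown SecurityException guarantees that the protected operation never even starts.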
In the delete() method, if there is no security manager defined, access is always granted. The same is true for all the methods that perform security checks. If no security manager is defined, everything is allowed. Thus, it's important that any application that is going to be loading untrusted classes create a security manager before the first untrusted class is loaded into the virtual machine.
What about the class loaders? How are they called to load classes from the network?
Classes are loaded either explicitly by the application or automatically by objects called factory objects. Some applications may have optional functionality that is loaded on demand. In that case, it may make sense for the application to have built-in knowledge of the classes required for those optional features. In other situations, the classes are supplied by third parties and should be loaded in response to some of the data that is being handled by the application. In such cases, it makes sense to have factory objects that will search for a class that can handle the situation and load it. The core Java library can use factory objects to load protocol handlers and content type handlers in conjunction with the URL class. See "Further Reading," later in this chapter, for pointers to more information on factory objects.
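The heart of a factory object is a name-to-class lookup followed by dynamic loading and instantiation. This sketch is hypothetical (the HandlerFactory class is invented, and java.util.ArrayList merely stands in for a handler class whose name is computed at run time), but it shows the mechanism:

```java
// Hypothetical factory sketch: compute a class name from the data at hand,
// ask a class loader for the class, and instantiate it reflectively.
public class HandlerFactory {
    public static Object createHandler(String className) throws Exception {
        Class<?> cls = Class.forName(className);  // delegates to a class loader
        return cls.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        // In a real factory the name would be derived from, say, a MIME type.
        Object handler = createHandler("java.util.ArrayList");
        System.out.println(handler.getClass().getName());
    }
}
```

Class.forName() here uses the caller's own class loader; a factory that fetches handlers from the network would instead invoke its custom ClassLoader explicitly.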
In either case, a class loader is called explicitly by application code, whether a factory object or some other part of the application, so no "installation" of ClassLoader objects is usually necessary.
Earlier in this chapter, you learned about the system resources that are protected by the Java security facilities. The security system must also take care to protect certain parts of the security model itself, so that applets cannot subvert the security system and slip in through the back door.
The first line of special protection for the security system is the mechanism for installing the security manager. Obviously, if an applet could install its own security manager, it could do anything it pleased. Therefore, installation of the security manager must be protected. But there's a "chicken and egg" problem here: when the security manager is first installed, there's no security manager to rule on whether it should be allowed!
The resolution of this problem is simple. If no security manager is installed, any class can install one. Once a security manager is active, however, it is always a security violation to attempt to replace it. This implies that applications must take care to establish their security policies before any untrusted code is brought into the virtual machine. Another implication is that during any individual execution, an application can have only one security manager. Thus, if it's desirable to adjust the security policy of an application while it is running, those adjustments must be catered to in the security manager itself-they can't be accomplished by changing the security manager.
The other defenses for the security system are more conventional in that they involve library routines that consult the security manager for access decisions. The protected aspects of the security system fall into two categories:
There remains one big topic to cover in this chapter: just how good is Java security? Does it really do all that it claims to do, and is it really the best thing out there, or does it have some serious weaknesses? Should we really be trusting our systems to Java applets?
Of course, the precise answer depends on who you ask. Network security is a complex topic, and we don't yet fully understand every detail of it. Nevertheless, some rough consensus is beginning to emerge about the strength of Java's security facilities. In the following sections, I attempt to answer the questions objectively, based on my own analysis as well as on the opinions of others, including some experts in the security community.
Java security doesn't try to solve every security problem. There are some potential attacks that Java doesn't currently attempt to prevent (although work is in progress to extend the security facilities into these areas as well). These attacks involve the abuse of resources that aren't necessarily sensitive, such as CPU cycles, memory, and windows.
The current Java security model takes a "yes or no" approach to security: a class is either allowed access to a resource or it is not. There is no provision for allowing an untrusted class to use only a certain amount of some resource. Java does not enforce resource quotas. It doesn't even keep track of resource usage so that the application can enforce quotas.
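The point is easy to demonstrate: no checkXXX method guards memory allocation, so nothing stops a class from allocating as much as it likes. The loop below is deliberately bounded so that it terminates; a hostile applet would simply never stop.

```java
import java.util.ArrayList;
import java.util.List;

// Allocation is never routed through the security manager, so this loop
// proceeds without any SecurityException; Java enforces no resource quota.
public class QuotaFree {
    public static void main(String[] args) {
        List<byte[]> hoard = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            hoard.add(new byte[1024 * 1024]);  // 1 MB at a time, unchecked
        }
        System.out.println("allocated " + hoard.size() + " MB without a check");
    }
}
```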
Applets can exploit this fact to mount attacks of varying severity: denial-of-service attacks that render your machine unusable by allocating all your memory or some other finite resource; annoyances that make noises or pop up big, bouncing windows on your screen, flash, and then disappear before you can destroy them; or resource theft attacks, where an applet stealthily lurks in the background, using your machine to perform part of some large calculation while you wonder why your computer seems a little slower than usual today.
Java's designers didn't simply forget about these issues. They decided not to deal with them at the time because they are extremely difficult to handle correctly. Especially when it comes to CPU time, it's difficult (and often impossible) to determine whether a class is doing useful work or simply wasting the resource. Researchers are currently investigating ways to prevent (or limit) these kinds of attacks, and future versions of Java will attempt to deal with them as well as possible.
In addition to those attacks that were intentionally excluded from the scope of the Java security facilities, there have been a few accidental flaws. Several have been found by Drew Dean, Ed Felten, and Dan Wallach, a team of researchers at Princeton University. Others were found by David Hopwood, an Oxford researcher who is now working for Netscape, helping to strengthen Java security in Netscape products.
All these security holes permitted very serious attacks that could result in loss or theft of valuable data. It's comforting, though, that each of the holes was the result of a simple bug in the code that implemented part of the security architecture, rather than any fundamental flaw in the architecture itself. In each case, the bugs were fixed within a few days of discovery. In fact, the process in which researchers carefully examine Java for security weaknesses is one of the important reasons why Sun made the Java source code available for study. Java security is now much stronger because several security bugs have been found and eliminated.
That holes have been found in Java security is a reminder that we should be cautious, but there's no reason for panic. In truth, all network-oriented applications are candidates for serious security holes, and such problems are actually quite common in network applications, where security considerations are often almost an afterthought. Java's security actually seems to be stronger than that of many other programs because certain aspects of its security architecture have been central to Java's design from the beginning, and because all the Java hype has prompted some intense scrutiny.
What are the chances that someone will discover a really fundamental security flaw in Java-a serious flaw in the security architecture itself?
It's impossible to say for sure, of course, but we do know enough about security to be able to analyze the Java security model and make some guesses. It turns out that the Java security architecture is not perfect, and there are some things to watch out for. Dean, Felten, and Wallach, the Princeton researchers mentioned earlier, have expressed some concern about the complexity of Java's security architecture, but most who have studied the matter seem to believe that the outlook is good.
The biggest weakness in the Java security model is its complexity. Complexity begets mistakes. The problem is exacerbated by the fact that application authors currently must implement their own security manager and class loaders. Hopefully, a few flexible, configurable security policy implementations will be made available for reuse, so that they can be carefully tested and debugged once, and used many times. Such a development would greatly reduce the potential for new security holes.
Another weakness is that the security responsibility is split between the security manager and the class loaders. It would be better if the job were localized in one class.
Although we may wish things were simpler, as a whole, the Java security architecture appears to be strong enough, and we can expect it to grow stronger with time.
Microsoft has its own proposal for dynamically loaded code on the Internet: ActiveX. ActiveX is essentially a way to leverage the large body of OLE controls that already exists for the Windows environment. Questions have been raised about the security implications of the proposal, however. ActiveX certainly has considerable value in some situations, but there are good reasons to be worried about potential security problems.
ActiveX controls can be written in Java, and you can bet that Java-based controls will begin appearing in the near future. However, most of the large existing body of controls have been written in C or C++, and have been compiled into native Intel machine code. The ActiveX answer to securing these controls is based on a digital signature technology called Authenticode. Developers and software vendors sign their controls, and the ActiveX system verifies that signature against a registry of signature authorities, thereby verifying the origin of the control. Users can configure ActiveX to allow only controls from certain vendors to run on their system; for example, if you trust Microsoft and Corel, but nobody else, you can configure ActiveX to automatically fetch and run controls from those two companies, but refuse all others.
That sounds okay, right? After all, it sounds a lot like the code-signing system for Java classes, which will be used to help loosen the rigidly conservative Java security policy.
Actually, however, the two systems aren't very similar. To understand why, you have to look at what a signature on a piece of software really means. A signature really only means "I signed this and it hasn't been changed since I did." The person or authority who applies the signature might actually mean more than that, but the Authenticode proposal doesn't require any guarantees. Even if guarantees were present, it's unlikely that they would be very strong, based on the weak (or nonexistent) warranties that are found on most software today.
So a signature is not a guarantee of safety. If that's the case, what makes the Java code-signing technology better than Authenticode? The difference is that ActiveX security is all-or-nothing; Java security can be extremely fine-grained and flexible. If you (or the ActiveX system) choose to allow a native-code ActiveX control to run on your system, that control has complete access to your system. It can read any file you can read, delete or change any file you can, make any network connection it chooses-even format your hard disk. Java, on the other hand, can grant access to classes selectively, even choosing which files a particular class is allowed to read. A word processing applet, invoked specifically to edit one particular document, can be allowed to write to that document and no other.
Authenticode is not really a security architecture; it's a trust management architecture. Furthermore, it's a very limited trust management architecture because it doesn't have any understanding of partial trust.
Some people are saying that ActiveX will probably be widely used on corporate intranets, with Java ruling the roost on the Internet at large. That's probably a good prediction-certainly ActiveX will be very useful on intranets because the security issues aren't quite as important there. However, even that thought makes me uneasy, because of a prediction of my own. I believe that, over the next five to ten years, the walls between corporate intranets and the Internet will become more porous, often coming down altogether. Companies that rely heavily on intranet applications built with ActiveX will be at a disadvantage when that time comes.
Further Reading
Several Web pages, and at least one other book, provide more in-depth treatment of Java security topics. JavaSoft maintains an FAQ page on applet security issues (http://java.sun.com/sfaq/); the Princeton researchers who have found several serious Java security bugs maintain a similar page with an outsider's perspective (http://www.cs.princeton.edu/sip/java-faq.html). For more information about the second layer of the security model (validation and verification of Java class files and bytecodes as they are loaded into the interpreter), read The Java Virtual Machine Specification, by Tim Lindholm and Frank Yellin. The book is a technical description of the architecture of the virtual machine and contains a precise description of the steps taken by the class verifier. To see examples of applets that exploit some of the weaknesses in the current Java security facilities, take a look at Mark LaDue's Hostile Applets page (http://www.math.gatech.edu/~mladue/HostileApplets.html). (Don't worry, you can go at least that far without fear; Mark keeps the hostile applets off the main page.) For more information about programming with the Java security system, read Tricks of the Java Programming Gurus (published by Sams.net). It provides deeper coverage of several topics that readers of this chapter might find interesting, including:
Java's security model is possibly the least understood aspect of the Java system. Because it's unusual for a language environment to have security facilities, some people have been bothered by the danger; at the same time, because the security restrictions prevent some useful things as well as harmful things, some people have wondered whether security is really necessary.
Java security is important because it makes exciting new things possible with very little risk. Early security holes caused by implementation bugs are being closed, and technology is being fielded that permits the strict security policy to be carefully and selectively relaxed. Resources that can be used to destroy or steal data are protected, and researchers are examining ways to prevent applets from using other resources to cause annoyance or inconvenience.
Application developers can design their own security policies and supply parts of the third layer of the Java security model to implement those policies in their applications.
The Java security architecture is sound. Early weaknesses and bugs are not a surprise, and the process that has exposed those flaws has also helped remove them.