The poet Shelley wrote, "On the pedestal, these words appear: 'My name is Ozymandias, King of Kings. Look on my works, ye Mighty, and despair!' Nothing beside remains." (Shelley's Poetry and Prose, editors Donald Reiman and Sharon Powers, W.W. Norton and Co., 16th ed., 1992).
At least that old monarch had an empire built of stone. Using the Virtual Reality Modeling Language (VRML) we purport to build entire worlds out of nothing more substantial than 1s and 0s, stored on highly refined and rather well-structured bits of sand.
VRML represents a first attempt at allowing end users to build three-dimensional models that can be quickly sent over the Net and explored using readily available browsers. The initial standard was defined by a small group, with the input of an online discussion group. The second generation of the VRML standard is being developed with even more widespread input, and promises to be both stable and powerful-the prerequisites to widespread adoption.
VRML is eight parts technology and two parts "true believer." Since very few people are moved to even mild interest by a file format, let alone a format about applied geometry, any such format that can move thousands of people to passion and enthusiasm bears examining.
To understand the emotional appeal behind VRML, it is necessary to understand a bit of history and literature. Since the 1980s, both serious science writers and writers of science fiction have begun to converge on a dream-the simulated world. It has gone by many names and had many forms: The computer-generated "consensual hallucination" of William Gibson's Neuromancer. The Matrix. The Metaverse. The holodeck.
A generation of the brightest programming minds has shared the vision of a world where people interact with computer-generated artifacts without clumsy typed commands-a world where you see things instead of having to ask the computer what it sees-a world where you do things with a mouse or a trackball or a joystick instead of asking the computer to carry out some command. A brave new world where human beings and computers can together... Ahem. I digress.
The effort that led most directly to VRML was the development of a system at Silicon Graphics (SGI) called Open Inventor. Inventor provides a language in which to describe three-dimensional (3-D) objects. It is a large, general language that took years to develop and is fairly expensive to run in terms of computer power.
SGI makes some of the faster computers in the world and it has carved out a substantial niche in the graphics market, so Inventor gives it a workbench on which to explore the next generation of "things to do with a computer."
Along the way, the Inventor team at SGI got interested in developing a version of Inventor that could be served over the Web and that would run efficiently on desktop computers. Its proposal is available at www.sgi.com/Technology/Inventor/VRML/VRMLDesign.html and makes for pretty interesting reading (at least as proposals go).
Three key developers (Gavin Bell, Anthony Parisi, and Mark Pesce) started a mailing list to solicit comments. The result was a language specification. (To learn more about the mailing list, visit http://vrml.wired.com/listfaq.html. The VRML 1.0 spec is available from the same site, at http://vrml.wired.com/vrml.tech/vrml10-3.html.)
A number of free browsers are circulating around the Web and even one commercial product, Virtus Walkthrough, has incorporated VRML into its feature set. New browsers are announced regularly.
A fairly complete list is maintained at http://www.sdsc.edu/SDSC/Partners/vrml/repos_software.html.
The trend is to integrate VRML browsers into HTML browsers (though the two languages are quite different and there are no plans to merge them).
Note: Netscape has announced that VRML will be integrated into the next major release of Navigator. Considering Netscape's market share, this announcement will focus a great deal of attention on VRML. To get a feel for that new browser, visit http://www.netscape.com/ and look for the beta release of Netscape Navigator 3.0, code-named Atlas.
Now that VRML 1.0 is launched, the development community is turning its attention to VRML 2.0. The draft specification was released for review in April 1996 and is expected to be sanctioned by May. The principal extensions are in the areas of object behavior, physics, and networking.
A summary of the ideas that contributed to Version 2 is given at http://www.bluerock.com/. The latest information on VRML 2.0 will be found at the VRML home page at http://vrml.wired.com/, the VAG home page at http://vag.vrml.org/ and, of course, on the mailing list. Details of the Version 2.0 process are given at http://vag.vrml.org/vrml20info.html.
Rather than the "three men and a language" approach used to birth VRML 1.0, the process is moving toward something the IETF might approve: making VRML an official open standard. (As it stands today, VRML is an open standard in fact and practice, but has no official standing with any standards-setting body.)
VRML is best illustrated by example. Figure 36.1 shows "The House of Immersion," a model developed by Sandy Ressler and Christine Piatko at the National Institute of Standards and Technology (NIST). Using various browser controls, the user can walk around the house, look in through the windows, or go inside.
Figure 36.1 : On the front path at The House of Immersion.
Once inside, the visitor finds interesting things to explore. There is a desk, for example, in the room to the left of the entryway, and a piano a little farther in (see Figs. 36.2 and 36.3).
Figure 36.2 : The desk in the House of Immersion is linked back to the NIST site.
Figure 36.3 : The House's Piano is a hyperlink to HyperReal's page on music machines.
Selecting these anchor objects activates a link. The desk, for example, is linked to http://www.nist.gov/itl/div878/ovrt/OVRThome.html. The piano is linked to http://www.hyperreal.com/music/machines/. When users follow one of these links, many browsers send a message to the HTML browser directing it to show the user the associated page.
The developer can also link to another VRML file (by convention, file extension ".wrl"). When users follow such a link, they stay in the VRML browser but change worlds.
Still another construct, WWWInline, allows the world-builder to include other VRML files in much the same way as a GIF or JPEG can be embedded in an HTML document.
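For instance, a world might pull in a piece of furniture stored in its own file. The following is a minimal sketch; the file name is hypothetical, and the optional bboxSize and bboxCenter fields simply give the browser a hint about how much space the inlined model occupies before the file arrives.

WWWInline {
  name "furniture/desk.wrl"    # hypothetical file holding the desk model
  bboxSize 2 1.5 1             # rough bounding box, in meters
  bboxCenter 0 0.75 0
}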
The House of Immersion was built as part of an experiment at NIST. College students were presented with information about a community center in textual form, in two-dimensional form, in a three-dimensional model, and in this VRML model. The objective was to see which medium afforded the students the highest retention of the material.
For details of the experiment and to get a copy of the files used, visit http://www.nist.gov/itl/div878/ovrt/projects/imm/immerse.html and http://www.nist.gov/itl/div878/ovrt/projects/vrml/vrmlfiles.html.
VRML files are ASCII text files. Although some authoring systems are available, it is best to learn VRML at the text level, then move to an authoring system to build complex files.
Tip: Just as validation is important for HTML, it is now possible (and just as important) to validate VRML. Visit Daeron Meyer's VRML Authenticator at http://www.geom.umn.edu/~daeron/docs/vrml.html.
The first line in a VRML file must be:
#VRML V1.0 ascii
Note that VRML is case-sensitive. Be sure to get the case and spacing exactly as shown above.
After that first line, VRML permits exactly one node. That node is nearly always a Separator node (discussed in the next section), which holds multiple nodes that hold multiple nodes and so forth, as the virtual world is built up object by object.
VRML uses a Cartesian, right-handed, 3-D coordinate system. Units of length measure are meters; units of angle measure are in radians (where 1 radian is about 57 degrees). To see what this description means, open the simple model described in Listing 36.1 and examine it with a VRML browser.
Listing 36.1 one.wrl-A Simple World in VRML
#VRML V1.0 ascii
Separator {
  Cylinder {
    radius 1
    height 2
  }
  Translation {
    translation 3 0 0
  }
  Cone {
    bottomRadius 1
    height 2
  }
}
The resulting scene is shown in Figure 36.4. In this example, both the cylinder and the cone are 2 meters tall and have a radius of 1 meter. The cone is translated 3 meters along the X axis in the positive direction-to the viewer's right. Since both objects have a radius of 1 meter, there is a 1-meter gap between them.
Figure 36.4 : The simple scene described in Listing 36.1.
The VRML Version 1.0 specification lists 36 different types of nodes, organized into three groups: shape nodes, property nodes, and group nodes.
There is also one node, WWWInline, that does not fit well into any other category.
Every node has up to four pieces of information associated with it: its type (what kind of object it is), its fields (the parameters that distinguish it from other nodes of the same type), an optional name, and optional child nodes.
If a shape node with no fields appears in the file, the VRML browser supplies default values. For example, the node
Cone { }
puts up a cone with a bottom radius of 1 meter and a height of 2 meters. The node
Cone { bottomRadius 2 height 4 parts SIDES }
puts up a cone with a radius of 2 meters and a height of 4. In addition, the bottom of the cone is left open-only the sides are displayed.
Other shape nodes include AsciiText, Cube, Cylinder, Sphere, IndexedFaceSet, IndexedLineSet, and PointSet.
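As a quick sketch of how a few of the simpler primitives look in a file (the field values here are arbitrary, chosen only for illustration):

Separator {
  Cube { width 2 height 1 depth 0.5 }
  Translation { translation 4 0 0 }
  Sphere { radius 1.5 }
  Translation { translation 4 0 0 }
  AsciiText { string "Hello, world" justification CENTER }
}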
The last three shapes in that list-IndexedFaceSet, IndexedLineSet, and PointSet-should be preceded by a Coordinate3 node, like this:
Separator {
  Coordinate3 {
    point [ 0 75 25,
            12.5 62.5 12.5,
            12.5 62.5 37.5,
            -12.5 62.5 37.5,
            -12.5 62.5 12.5,
            0 50 25, ]
  }
  IndexedFaceSet {
    coordIndex [ 0, 1, 2, -1,
                 0, 1, 4, -1,
                 0, 4, 3, -1,
                 0, 3, 2, -1,
                 5, 1, 2, -1,
                 5, 1, 4, -1,
                 5, 4, 3, -1,
                 5, 3, 2, -1, ]
  }
}
Figure 36.5 shows the result. The Coordinate3 node declares a set of points. Each point is given in x, y, z notation; decimal points are permitted. A PointSet node just makes the points visible. The IndexedFaceSet and IndexedLineSet play "connect the dots." Each number refers to a point in the Coordinate3 point set, with the count starting at 0. Each sequence ends in a -1.
Figure 36.5 : The IndexedFaceSet node is used to build objects of arbitrary complexity.
In the case of an IndexedFaceSet, each sequence results in a facet. If the node is an IndexedLineSet, the sequence defines an open polygon. With some browsers, it is difficult to see PointSets and IndexedLineSets. The most common node from this set is the IndexedFaceSet.
Most scenes quickly go beyond the capabilities of the Sphere, Cone, Cylinder, and Cube nodes. Most VRML files contain a large number of IndexedFaceSet nodes.
Property nodes affect how shape nodes draw themselves. The Coordinate3 node from the example above is a property node. Other property nodes include FontStyle, Info, LOD, Material, MaterialBinding, Normal, NormalBinding, Texture2, Texture2Transform, TextureCoordinate2, and ShapeHints.
The list of property nodes also includes nodes about transforms: MatrixTransform, Rotation, Scale, Transform, and Translation; nodes about cameras: OrthographicCamera and PerspectiveCamera; and nodes about lights: DirectionalLight, PointLight, and SpotLight.
The Info node provides a way to embed comments in the model. Comments can appear in the file by putting a pound sign (#) ahead of them, but servers are free to strip those comments before sending the model. An Info node will survive that process and can be used for titles, copyrights, and other important information.
Some browsers look for Info nodes with special names. For example, if WebSpace (http://webspace.sgi.com) sees an Info node named Viewer with the string value "walk", it sets up the Walk viewer by default. Otherwise, it sets up the Examiner viewer. Listing 36.2 shows the code that tells WebSpace to use the "walk" Viewer.
Listing 36.2 This Info Node Tells WebSpace to Use the "walk" Viewer
#VRML V1.0 ascii
Separator {
  DEF Viewer Info {
    string "walk"
  }
  Sphere { }
}
The LOD node provides a measure of simplification for both the browser software and the user. The developer specifies a set of ranges and child nodes. For example,
Separator {
  LOD {
    range [ 150 ]
    Separator {
      WWWInline {
        name "one.wrl"
      }
      Translation { translation 0 10 0 }
      AsciiText {
        string "This is the original demo world."
      }
    }
    Cube { }
  }
}
says that if the user's point of view is more than 150 meters away from this node, the browser should display a standard cube (2 meters on a side). As the user moves closer, the cube is replaced with the contents of the file "one.wrl"; 10 meters above that world, the text "This is the original demo world" appears.
Using LODs, the developer can implement a hierarchy. Visitors to the site can explore high-level structures (such as rooms) and see objects. As they approach an interesting object, the object acquires more detail.
When visitors actually touch the object, it can show that it is a link (most browsers highlight the edges or flash the color). Visitors can click on the link to bring up a new world or to get HTML-based information.
Materials, textures, and lights work in combination to transform VRML from a simple exercise in geometry to something that approximates the real world. When the Material node is traversed, it sets up a default material for use by subsequent nodes. The following defines the default material:
Material {
  ambientColor 0.2 0.2 0.2
  diffuseColor 0.8 0.8 0.8
  specularColor 0 0 0
  emissiveColor 0 0 0
  shininess 0.2
  transparency 0
}
The ambientColor field regulates how much ambient light is reflected from the object's surface. The diffuseColor field works the same way for light from specific light sources such as a PointLight. SpecularColor sets the color of highlights.
If the object itself glows (rather than just reflecting), it should have nonzero emissive color. The value of the shininess field sets the intensity of the surface highlight. Transparency takes on a value between 0 and 1 (0 is totally opaque, whereas 1 is completely transparent).
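To make those fields concrete, here is a hedged sketch of a material that might suggest glowing, semi-transparent red glass; the exact values are arbitrary and worth tuning in your own browser.

Material {
  diffuseColor 0.8 0.1 0.1     # mostly red under direct light
  emissiveColor 0.3 0 0        # a faint red glow even with no light source
  specularColor 1 1 1          # white highlights
  shininess 0.9                # tight, intense highlight
  transparency 0.4             # partially see-through
}
Sphere { }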
Most browsers implement most of these fields, so they should be included in any scenes where they make sense. MaterialBinding nodes associate materials with objects. For example,
Coordinate3 {
  point [ -1 1 1, -1 -1 1, 1 -1 1, 1 1 1,
          -1 1 -1, -1 -1 -1, 1 -1 -1, 1 1 -1 ]
}
Material {
  diffuseColor [ 1 0 0, 0 1 0, 0 0 1,
                 1 1 0, 1 0 1, 0 1 1 ]
}
MaterialBinding { value OVERALL }
IndexedFaceSet {
  coordIndex [ 0, 1, 2, 3, -1,
               3, 2, 6, 7, -1,
               7, 6, 5, 4, -1,
               4, 5, 1, 0, -1,
               0, 3, 7, 4, -1,
               1, 2, 6, 5, -1 ]
}
Translation { translation 3 0 0 }
MaterialBinding { value PER_FACE_INDEXED }
IndexedFaceSet {
  coordIndex [ 0, 1, 2, 3, -1,
               3, 2, 6, 7, -1,
               7, 6, 5, 4, -1,
               4, 5, 1, 0, -1,
               0, 3, 7, 4, -1,
               1, 2, 6, 5, -1 ]
  materialIndex [ 0, 1, 2, 3, 4, 5 ]
}
puts up two cubes (made from IndexedFaceSets). The material has six diffuseColor values associated with it. The first MaterialBinding has value OVERALL, which causes the first material value to be associated with every face of every part. Consequently, the first cube is red.
The second MaterialBinding puts a different material on each of the six faces, so the second cube is multicolored. It is also possible to use PER_VERTEX_INDEXED and to associate a different material with each vertex of each face. In this case, the material characteristics such as color blend across the face.
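The following sketch (values invented for illustration) assigns a different color to each corner of a single square face; a browser that supports per-vertex binding blends the four colors across the face.

Coordinate3 {
  point [ -1 -1 0, 1 -1 0, 1 1 0, -1 1 0 ]
}
Material {
  diffuseColor [ 1 0 0, 0 1 0, 0 0 1, 1 1 0 ]
}
MaterialBinding { value PER_VERTEX_INDEXED }
IndexedFaceSet {
  coordIndex    [ 0, 1, 2, 3, -1 ]
  materialIndex [ 0, 1, 2, 3, -1 ]
}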
Note that even though Cube is available as a primitive in the language, this example built a cube out of an IndexedFaceSet. Primitive shapes do not have explicit faces or vertices; they interpret PER_FACE (and PER_FACE_INDEXED), and PER_VERTEX (and PER_VERTEX_INDEXED) as OVERALL. To use different materials on the different parts of a primitive shape, use PER_PART or PER_PART_INDEXED bindings, as in
MaterialBinding {
  value PER_PART
}
Translation {
  translation -3 3 0
}
Cylinder { }
Figure 36.6 shows the effect of these material bindings on the shapes defined above.
Figure 36.6 : Various material bindings.
Two kinds of Texture2 node syntax are available. The first is by URL. The second is by image. Thus,
Texture2 {
  filename "http://www.dse.com/ETC/vrml/weave_yellow.gif"
}

tells the renderer to look for a file at the specified URL and wrap it around subsequent shapes. The second form,

Texture2 {
  image "2 4 3 0xFF0000 0xFF00 0 0 0 0 0xFFFFFF 0xFFFF00"
}
tells the renderer to use the texture given in this image field. The image field is interpreted as follows:
The first two numbers give the width and height of the texture image. The third number specifies how many components are in the image. If the number of components is one, it is interpreted as an intensity. If the number of components is two, the high byte is interpreted as intensity and the low byte as transparency.
If the number of components is three, as it is above, each of the three numbers in the component is interpreted as a color value, in RGB order. If the number of components is four, the first three bytes are interpreted as RGB color, and the last byte is interpreted as transparency.
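As a further illustration, here is a minimal sketch of a one-component (grayscale) texture: a 2x2 checkerboard of black and white intensities. The values are invented for illustration.

Texture2 {
  image 2 2 1
        0x00 0xFF
        0xFF 0x00
}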
Within the pattern, values are assigned from left to right, top to bottom. So
image "2 4 3 0xFF0000 0xFF00 0 0 0 0 0xFFFFFF 0xFFFF00"
gives the color pattern shown in Figure 36.7.
Figure 36.7 : The developer can set up custom textures based on an RGB pattern.
A variety of individual transformations are available in VRML. The most common is translation. The syntax is
Translation { translation x y z }
where x, y, and z stand for floating-point translations in each of those axes. Rotation is similar but takes four values. The first three numbers give the axis of rotation and the fourth gives the number of radians of (right-handed) rotation around that axis. Thus, a 180-degree turn around the y axis would be given as
Rotation { rotation 0 1 0 3.14159265 }
Instead of using individual translations, rotations, and so forth, the developer can choose the Transform node, which has fields for translation, rotation, scaleFactor, scaleOrientation, and center.
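Here is a hedged sketch of a Transform node that combines several of those operations at once; the particular values are arbitrary.

Transform {
  translation 0 5 0            # move up 5 meters
  rotation 0 0 1 1.5708        # quarter turn around the z axis
  scaleFactor 2 2 2            # double the size in every direction
  center 0 1 0                 # rotate and scale about this point
}
Cone { }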
Two models of camera are available in VRML. A PerspectiveCamera node defines a viewing volume shaped like a pyramid. From a given viewpoint, users see a "slice" of the world. As the user's point of view moves closer, items in the world become larger and vice versa.
For special purposes, a developer can use an OrthographicCamera node, which works like a drafting projection. Objects do not become smaller or larger as the user's point of view moves.
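A minimal sketch of the two camera nodes, side by side for comparison (in a real file you would normally use only one, or wrap several in a Switch as shown later). The field values are illustrative; note that the orthographic camera takes a height field where the perspective camera takes heightAngle.

PerspectiveCamera {
  position 0 2 10
  orientation 0 0 1 0
  focalDistance 10
  heightAngle 0.785            # roughly 45 degrees
}

OrthographicCamera {
  position 0 2 10
  orientation 0 0 1 0
  focalDistance 10
  height 4                     # viewing volume is 4 meters tall
}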
One use of an orthographic camera, scientific visualization, is described at http://amber.rc.arizona.edu/. There, Marvin Landis of the University of Arizona shows how to use an orthographic camera aimed at an LOD node to introduce the concept of time into the model. As users move toward the LOD node, they are moving forward in time and learning how ink binds at the molecular level in an ink-jet printer. The file, available at http://amber.rc.arizona.edu/vrml/deymier.wrl.gz, is gzipped. If you don't have that utility, don't worry-most VRML browsers (like WebSpace) can read gzipped files directly.
Caution: A few VRML browsers, like WebSpace for AIX, do not correctly implement the LOD node. If Landis's demo does not work correctly on your machine, check out a simple LOD node like the one in this chapter's example to see how your browser handles that code.
Along with materials and textures, lighting is a key element in making a VRML model feel like reality. VRML affords three types of light: the PointLight, the DirectionalLight, and the SpotLight. The PointLight node is often called an "omni." It radiates light uniformly in all directions. The default specification for a PointLight is
PointLight {
  on TRUE
  intensity 1
  color 1 1 1
  location 0 0 1
}
The DirectionalLight casts parallel rays from its location in a specified direction. The DirectionalLight node mimics the effects of sunlight on a scene. It takes the same fields as a PointLight node except that a direction field replaces the location.
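A minimal sketch of a DirectionalLight, here aimed downward and slightly away from the viewer like an afternoon sun; the direction and intensity are arbitrary choices.

DirectionalLight {
  on TRUE
  intensity 0.8
  color 1 1 0.9                # slightly warm white
  direction 0 -1 -0.5          # rays travel down and away from the viewer
}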
The SpotLight is the most advanced lighting source in VRML. It is specified as follows:
SpotLight {
  on TRUE
  intensity 1
  color 1 1 1
  location 0 0 1
  direction 0 0 -1
  dropOffRate 0
  cutOffAngle 0.785398
}
As the parameters suggest, the intensity of the SpotLight node's light drops off exponentially as the ray of light moves away from the specified direction. The rate of drop-off and the angle of the cone are given by their respective fields.
SpotLight nodes are computationally expensive. For many scenes, there is no visible difference between a SpotLight and a DirectionalLight. Experiment with both-if the scene renders well without a SpotLight, leave it out.
In general, once a property is set it stays set for the duration of the model. This effect is often unintentional and undesirable. VRML affords five nodes that allow other nodes to be grouped and separated in various ways: Group, Separator, Switch, TransformSeparator, and WWWAnchor.
The Group node simply contains an ordered list of children. By itself, it is not very useful. Its cousin, the Separator node, is invaluable. What makes the Separator node useful is the fact that it "pushes" and "pops" state as it is traversed. For example,
Group {
  Group {
    PointLight {
      intensity 0.5
      color 1.0 .2 .2
      location 5 5 5
    }
    Sphere { }
  }
  Translation { translation 0 4 0 }
  Cube { }
}
Here, the developer has specified a dim red light inside a group. Perhaps to his or her surprise, that light will still be on when the Cube is rendered. Perhaps what the developer intended was
Separator {
  Separator {
    PointLight {
      intensity 0.5
      color 1.0 .2 .2
      location 5 5 5
    }
    Sphere { }
  }
  Translation { translation 0 4 0 }
  Cube { }
}
Figures 36.8 and 36.9 show the difference in the effect between these two files.
Figure 36.8 : A light inside a Group node propagates out of the node.
Figure 36.9 : A light inside a Separator node stays confined in the node.
Separator nodes allow the developer to build "compartments" within the simulated world. Note that all nodes, including Separators, can be given names with DEF so that the objects inside can be reused (with USE) throughout the file. Thus, the following code produces the scene shown in Figure 36.10:
Separator {
  DEF aLightedSphere Separator {
    PointLight {
      intensity 0.5
      color 1.0 .2 .2
      location 5 5 5
    }
    Sphere { }
  }
  Translation { translation 0 4 0 }
  USE aLightedSphere
  Translation { translation 4 0 0 }
  USE aLightedSphere
  Translation { translation 0 0 4 }
  USE aLightedSphere
  Translation { translation -4 4 -4 }
  USE aLightedSphere
}
A Switch node traverses all, none, or some of its children, depending on the contents of the whichChild field. The WebSpace VRML browser allows a Switch node to control different points of view.
DEF Cameras Switch {
  whichChild 0
  DEF Front PerspectiveCamera {
    position 0 30 240
    orientation 0 0 1 0
    focalDistance 5
    heightAngle .785
  }
  DEF Overview PerspectiveCamera {
    position 0 200 240
    orientation 0 -1 -1 0
    focalDistance 200
    heightAngle .785
  }
  DEF One PerspectiveCamera {
    position 0 0 -80
    orientation 0 0 -1 0
    focalDistance 20
    heightAngle .785
  }
}
The TransformSeparator node lies conceptually between the Group node and the Separator node. Like a Separator node, it saves the state of the transform when it is entered and pops that state when it is exited. Like a Group node, all other state changes survive this node. The TransformSeparator node can be used to position a camera or a light without distorting the entire scene.
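For example, the following sketch raises a light 10 meters without moving anything drawn afterward; only the transform is popped when the TransformSeparator ends, so the light itself still shines on the rest of the scene. The values are illustrative.

TransformSeparator {
  Translation { translation 0 10 0 }
  PointLight {
    intensity 0.8
    color 1 1 1
    location 0 0 0             # placed at the translated origin, 10 meters up
  }
}
Cube { }                       # drawn at the original origin, lit from above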
One of the most useful nodes is WWWAnchor. As we saw earlier in this chapter, a WWWAnchor can lead to another wrl file somewhere on the Net or to an HTML file. As HTML browsers become more tightly integrated with VRML browsers, the potential for this node is enormous.
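A hedged sketch of an anchor follows; the URL is purely illustrative, and the shapes inside the WWWAnchor become clickable in most browsers.

WWWAnchor {
  name "http://www.example.com/another-world.wrl"   # hypothetical target
  description "Visit the other world"
  Cube { }                     # clicking this cube follows the link
}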
The WWWInline node allows one model to include another. When used with the LOD node, the WWWInline node can connect worlds even beyond the power of the WWWAnchor.
For example, imagine an art gallery in which each picture starts out as a simple textured or colored IndexedFaceSet. As users approach, the simple image is replaced by a more complex one; as they come closer still, they pass "into" the picture (now a WWWInline) and into that other world.
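Here is a rough sketch of one such gallery picture, assuming the detailed version lives in a hypothetical file picture1.wrl: beyond 20 meters the visitor sees only a flat colored rectangle; closer than that, the inlined world is loaded in its place.

Separator {
  LOD {
    range [ 20 ]
    WWWInline {                # near view: the full world behind the picture
      name "picture1.wrl"      # hypothetical detailed model
    }
    Separator {                # far view: a simple colored rectangle
      Material { diffuseColor 0.7 0.5 0.2 }
      Coordinate3 { point [ -2 -1 0, 2 -1 0, 2 1 0, -2 1 0 ] }
      IndexedFaceSet { coordIndex [ 0, 1, 2, 3, -1 ] }
    }
  }
}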
It is the business of VRML to allow developers to describe worlds. It is the business of browsers to get those worlds on the screen. Browsers have the harder job. Rendering 3-D graphics is computationally expensive and pushes the limit of even high-end desktop machines. The OpenGL library (or its workalike, Mesa), introduced in Chapter 35, "How to Add Video," forms the basis of many browsers.
Most browsers allow the user to trade fidelity for performance. Typically, you can control whether texture is displayed, or even whether shading is on or off, as well as how lighting computations are performed. Most browsers allow the user to switch to a wireframe model when moving.
Figures 36.11, 36.12, 36.13, and 36.14 show the same scene rendered in wireframe, then with hidden lines removed, then flat (one color per face), and finally "smooth," in which faces that meet at less than a developer-specified "creaseAngle" are blended together.
Figure 36.11 : The Wireframe rendering is simple and fast.
Figure 36.13 : Flat shading makes the scene even more recognizable.
Textures are even more demanding than shading, and most browsers make it possible to run with textures turned off.
Some browsers give users more than one way to move around the simulated world. WebSpace, for example, has both a "walk" viewer, in which the user moves through the world by controlling a joystick, and the "examine" viewer, in which the user stays in one place and moves the world using a globe.
The example from this chapter is shown in Listing 36.3. View it with any of the VRML browsers. Take it apart and change components to see the effect each of the nodes and attributes provides.
Use one of the VRML browsers on the CD-ROM that comes with this book to visit the House of Immersion, also included on the CD-ROM. Examine the VRML source code to see how this model is put together. If the model runs slowly, turn down the complexity using the various controls in the browser.
Now visit the VRML repository at the San Diego Supercomputer Center http://sdsc.edu/vrml/ and enjoy a variety of VRML worlds. In particular, visit http://sdsc.edu/SDSC/Partners/vrml/examples.html, a comprehensive list showing how VRML is being used in fields from architecture to chemistry to commerce.
As you explore, think about how this technology can be used on your site when desktop computers are just a bit faster. If you don't have access to the CD-ROM or a VRML browser, examine the much simpler world below, which puts together the major concepts presented in this chapter.
Tip: There is an excellent tutorial on VRML at http://www.vrml.wired.com/.
Listing 36.3 Complete Example World from This Chapter
#VRML V1.0 ascii
Separator {
  Info {
    string "Example World for Webmasters..."
  }
  DEF Cameras Switch {
    whichChild 0
    DEF Front PerspectiveCamera {
      position 0 30 240
      orientation 0 0 1 0
      focalDistance 5
      heightAngle .785
    }
    DEF Overview PerspectiveCamera {
      position 0 200 240
      orientation 0 -1 -1 0
      focalDistance 200
      heightAngle .785
    }
    DEF One PerspectiveCamera {
      position 0 0 -80
      orientation 0 0 -1 0
      focalDistance 20
      heightAngle .785
    }
  }
  PointLight {
    on TRUE
    intensity 1.0
    color 1 1 1
    location 0 0 120
  }
  PointLight {
    on TRUE
    intensity 1.0
    color 1 1 1
    location 0 100 100
  }
  PointLight {
    on TRUE
    intensity 0.5
    color 0 0 1
    location 50 50 10
  }
  Material {
    diffuseColor .5 .5 1
    shininess 0.75
    transparency 0.5
  }
  MaterialBinding {
    value DEFAULT
  }
  Texture2 {
    filename "http://www.dse.com/ETC/vrml/weave_yellow.gif"
  }
  FontStyle {
    size 15
    family TYPEWRITER
    style NONE
  }
  AsciiText {
    string "This is the demo world!"
    spacing 1
    justification CENTER
    width 0
  }
  Separator {
    Translation { translation 0 0 -100 }
    LOD {
      range [ 150 ]
      center 0 0 0
      Separator {
        WWWInline {
          name "http://www.dse.com/ETC/vrml/Chap38/one.wrl"
        }
        Translation { translation 0 10 0 }
        AsciiText {
          string "This is the original demo world."
        }
      }
      Cube { }
    }
  }
  DEF aCone Separator {
    Translation { translation 0 30 0 }
    Cone {
      parts ALL
      bottomRadius 15
      height 30
    }
  }
  DEF aCube Separator {
    Transform { rotation 0 1 0 .7 }
    Translation { translation -45 30 0 }
    Cube {
      width 30
      height 30
      depth 30
    }
  }
  DEF aCylinder Separator {
    Translation { translation 45 30 0 }
    Cylinder {
      parts ALL
      radius 15
      height 30
    }
  }
  DEF aSphere Separator {
    Texture2 {
      image 2 4 3 0xFF00 0xFF00 0xFF00 0xFF00 0xFF00 0xFF00
    }
    Translation { translation 0 75 0 }
    WWWAnchor {
      name "http://www.dse.com/ETC/vrml/Chap38/one.wrl"
      map NONE
    }
    Sphere { radius 15 }
  }
  DEF FaceDiamond Separator {
    DEF DiamondCoords Coordinate3 {
      point [ 0 75 25,
              12.5 62.5 12.5,
              12.5 62.5 37.5,
              -12.5 62.5 37.5,
              -12.5 62.5 12.5,
              0 50 25, ]
    }
    USE DiamondCoords
    IndexedFaceSet {
      coordIndex [ 0, 1, 2, -1,
                   0, 1, 4, -1,
                   0, 4, 3, -1,
                   0, 3, 2, -1,
                   5, 1, 2, -1,
                   5, 1, 4, -1,
                   5, 4, 3, -1,
                   5, 3, 2, -1, ]
    }
  }
  DEF LineDiamond Separator {
    Translation { translation 0 75 0 }
    USE DiamondCoords
    IndexedLineSet {
      coordIndex [ 0, 1, 2, -1,
                   0, 1, 4, -1,
                   0, 4, 3, -1,
                   0, 3, 2, -1,
                   5, 1, 2, -1,
                   5, 1, 4, -1,
                   5, 4, 3, -1,
                   5, 3, 2, -1, ]
    }
  }
  Separator {
    Translation { translation 0 150 0 }
    USE DiamondCoords
    PointSet {
      startIndex 0
      numPoints -1
    }
  }
}