by Justin Couch
In the preceding chapter, you looked at using various tools to create 3D worlds for the Internet. These tools were based on the original VRML 1.0 standard. In this chapter, I introduce the latest version of the VRML standard: version 2.0.
There are quite a few differences between the two versions. To start with, the new version includes the capability to animate the world within the language and add programmable behaviors, using languages such as JavaScript and Java. To accommodate these additions, the whole language has been reconstructed from the ground up.
Before introducing the differences between the two versions, there are a couple of choices to be made. The first is how you want to create your worlds. The original version of VRML is now fairly mature, and there are many good tools available to create worlds without even looking at source code. The range of products extends from plug-in modules, to traditional 3D modeling tools like 3D Studio, to stand-alone applications like Caligari's Pioneer.
VRML 2.0 was released on August 4, 1996, which means that the variety of tools is just not there yet. As with all new technologies, this means you will be reduced to the most basic of editing tools-the text editor. As usual, though, there will be an array of tools hitting the market within months of the release, so if you are not keen on becoming intimate with Notepad, vi, or emacs, I suggest you wait a few months before diving into creating 2.0 worlds.
The next choice is what you want to get from VRML. If all you need is a static object that people can wander around in, then it does not matter which version of VRML you use. The differences between the two versions are such that the new browsers will support VRML 2.0, but the old browsers will not. Designing for the latest version is probably the best choice, because you can always start with the static world and add the dynamic extras at a later date without a ground-up rewrite.
Once you have decided to move to version 2.0, the final decision is to what extent you want to use it. VRML 2.0 provides more than just 3D scenes. As mentioned in the introduction, VRML now includes the ability to create arbitrary behaviors, and it includes native support for 3D sound and video as well.
Having decided that VRML 2.0 sounds like a good thing and you wish to learn a bit about it, you need to learn how it works. What makes 2.0 so different from 1.0?
The first thing that you will probably notice is that all the field types are different. As with much of the new version, the old types were found to be insufficient to handle behavior and animation, so they were scrapped in favor of new ones. The old MF/SF prefixes still apply, but the rest of the types have changed. Most of them are self-explanatory, so they aren't discussed here.
VRML files are characterized by the first line which states the type of file, the version, and the type of character encoding. The standard VRML 1.0 header looks like the following:
#VRML V1.0 ascii
With the change in specification and the drive towards internationalization of software, version 2.0 followed the same path. VRML is now encoded as UTF-8 (a close relative of the Unicode encoding used in Windows 95/NT). The new header looks like the following:
#VRML V2.0 utf8
In some early files, you may see the words Draft #n in the header as well. This indicated to the browser that the file conformed to a draft version of the specification; it is good to be aware of this in case you find any of these files floating around. The encoding itself should not worry you, because the normal ASCII output of your text editor is a subset of UTF-8.
VRML 1.0, as you may remember, is based on knowing the order in which the nodes are declared to achieve a certain effect. This is no longer the case. Instead, VRML 2.0 uses a tree-type structure in the file. This mimics much of the way the real world works.
Look at your arm and hand, for example. When you decide to move your arm, both the arm and the hand move at the same time. However, you can move the whole hand without moving the arm. The hand represents a child, and the arm is the parent. However, you have two hands, one for each arm, so you only need to declare one hand and include it on both arms.
When you read through a VRML 2.0 file, you will notice that if a node is capable of having children, it includes a field called, naturally enough, children. However, the order in which you declare the nodes within that children field no longer matters. If a node is to affect the properties of another, it must be higher up the hierarchy, rather than simply placed before it in the file.
In VRML 1.0, there is no explicit concept of the parent-child relationship; VRML 2.0 is very strict about it. There is a collection of rules about which nodes are legal where. For instance, a geometry node like Box cannot exist by itself: it must be a child of a Shape node.
There are two broad categories of nodes: Group and Leaf nodes. Group nodes are those that can contain other nodes, including more Group nodes; Leaf nodes cannot contain others. To confuse the issue a bit, a Leaf node is not strictly the end of the hierarchy. A Shape node is classified as a Leaf node, yet it is the only way that geometry nodes can be made visible. The difference is that the Shape node is only allowed two children, and they are of specified types; a Group node can contain any number of children of (almost) any type.
If you are still confused, read on. A few examples will illustrate how this works.
The final difference is how the various parts of the syntax hang together. In VRML 1.0, there is a bit of cross-fertilization of functionality between nodes. This has been completely removed in 2.0.
Each node is now designed for a specific purpose. For example, the geometry nodes contain only information about the geometry itself-what its radius or height is, for example. They contain no information about where they are located in space or what color they are.
In many ways, you can control the scene much more than before because it is easy to locate the source of a particular problem. A wrong position means that you need to fix the transformation node, not the color node.
One of the most common items in a VR world based on a real-world theme is the humble tree. This example shows how to use the basics of VRML: color, geometry, transformations, and the node hierarchy.
The basic tree consists of a brown trunk and a couple of cones to produce the leaves. To start the tree, we need a brown cylindrical trunk. In the previous section, I mentioned that you need a certain relationship between the nodes. Examine the code presented in Listing 40.1.
Listing 40.1. A complete VRML file to produce the trunk of a tree.
#VRML V2.0 utf8

Shape {
  appearance Appearance {
    material Material {
      emissiveColor 0.4 0.4 0.1
    }
  }
  geometry Cylinder {
    height 1
    radius 0.25
  }
}
In this file, you find four nodes used: Shape, Appearance, Material, and Cylinder. The Shape node is the overall controlling parent; none of the other nodes are legal unless they have this parent. Next, you see that the word appearance is written twice. The first is one of the fields of the Shape node, and the second is the declaration of the Appearance node. This may seem strange; however, you will notice that it is common right across the VRML nodes: if a node (for example, Shape) is to have a particular node as the value of a field (for example, Appearance), then the field is named after the node to be used.
It is possible to declare the Shape node without the geometry or Appearance node because defaults have been specified. If you declare it minus the Appearance property, then the cylinder defaults to black in color.
The rest of the description should be fairly straightforward. The Material node defines any color-based properties for the geometry. You will also see shortly that the Appearance node can control other properties as well like texture maps.
Next we need to add the leaves to make it look like a tree. This is done by adding a cone which has been translated to the right position. Anything to do with moving nodes or changing their dimensions is handled by the Transform node, which is illustrated in Listing 40.2.
Listing 40.2. A tree with some leaves.
#VRML V2.0 utf8

# The tree trunk
Shape {
  appearance Appearance {
    material Material {
      emissiveColor 0.41 0.4 0.1
    }
  }
  geometry Cylinder {
    height 1
    radius 0.25
  }
}

# A cone for leaves
Transform {
  translation 0 1.5 0
  children [
    Shape {
      appearance Appearance {
        material Material {
          emissiveColor 0.1 0.6 0.1
        }
      }
      geometry Cone {}
    }
  ]
}
Any translation properties are handled by the translation field, and the geometry that is to be translated is placed in the children field. The translation does not affect any nodes except those declared as its children. In VRML 1.0, you needed to hide everything inside a huge collection of Separators, and even then leakage of state caused problems with parts of the scene lower down in the file.
You should also note that the cone is declared with none of its fields set, which is indicated by the empty pair of braces. If you declare a node this way, it uses the default values-in this case, a cone with a height of 2 and a bottom radius of 1.
All geometry dimensions are relative to the origin. When you look at a box, the default size is written as 2 2 2, which indicates a box extending from -1 to +1 in each of the three directions.
To make our tree look realistic, you need to make it lean a little, as if in a breeze. The final tree has two cones, each of which is tilted slightly. This shows the big difference between the two versions of VRML quite dramatically.
Listing 40.3 shows that to produce a compound sway in the top half of the tree, you can put in an extra cone and transform it as a child of the original. There is no need to work out absolute distances and rotations for each cone; you offset each child relative to its parent only.
Listing 40.3. The final tree bending in the breeze.
#VRML V2.0 utf8

# The trunk
Shape {
  appearance Appearance {
    material Material {
      emissiveColor .41 .40 .1
    }
  }
  geometry Cylinder {
    radius .25
    height 1
  }
}

# The leaves. Firstly the bottom cone.
Transform {
  translation 0 1.5 0
  rotation 0 0 1 0.1
  center 0 -0.75 0
  children [
    Shape {
      appearance Appearance {
        material Material {
          emissiveColor .1 .6 .1
        }
      }
      # Default cone values look good
      geometry Cone {}
    }
    # Now put in the second cone
    Transform {
      translation 0 .75 0
      rotation 0 0 1 .1
      center 0 -.375 0
      children [
        Shape {
          appearance Appearance {
            material Material {
              emissiveColor .1 .6 .1
            }
          }
          geometry Cone {
            bottomRadius .8
            height 1.5
          }
        }
      ]
    }
  ]
}
Besides the translation and rotation fields, which do as their names suggest, there is an extra field: center. This sets the point about which the rotation takes place, rather than the origin of the shape. For these cones, I have set this field to the base of the cone so that the lean looks more authentic.
If you have to type out (or even cut and paste) the tree every time that you want to use it, you will soon get tired. Luckily, VRML includes a mechanism so that you only need to define an object once and then reuse it.
To define a node to be reused, you use the DEF keyword followed by a name and then the node definition. Our tree example then becomes
DEF tree Group {
  children [
    # Rest of tree definition...
  ]
}
Note that we have to put a Group node around it so that all the parts are collected under one name.
To use that node somewhere else in the file, you use the keyword USE <name>. So if we want to create another tree at another location, you put in a transform (to move it to that location) and then USE the tree.
Transform {
  translation 6 0 5
  children USE tree
}
That is all you have to do. There is one thing to watch for: by reusing a node this way, you create only a pointer to the original. If you change any property of the original, the change automatically flows on to all of the copies. For example, if you change the leaf color to red, all of the trees appear with red leaves. Depending on what you are trying to achieve, this can be either good or bad.
You are not just limited to defining whole nodes. You can define parts of them but not individual fields. Say you have a nice yellow color that you want to reuse. You can place a DEF in front of the Appearance node and then reuse it in another place as the following code shows:
Shape {
  appearance DEF gold Appearance {
    material Material {
      emissiveColor 0.4 0.41 0.1
    }
  }
  .....
}

Shape {
  appearance USE gold
  ....
}
DEF can be used for a variety of other circumstances. It gives a node a name that can be referenced later. As you read through the rest of this chapter, you find that it is used for giving access to view points, scripts, and animation as well as a number of other areas.
You can make a scene a lot more realistic by adding a few extra touches. For example, basic color becomes limiting in anything but a modest world, and VRML 1.0 did not include any object collision detection, so you could walk straight through walls. This section shows you how these effects and more are now created. As in the previous section, a lot of the syntax has changed, but the basic ideas have stayed the same.
A texture map is the addition of an image over a piece of geometry. When the tree example was created, the Appearance node was introduced. At this stage all that was demonstrated was the Material node, which allowed you to change the color of the object.
As its name suggests, the Appearance node controls all aspects of how a piece of geometry looks. This takes two forms: color and images. Color is handled by the Material node, which was introduced earlier, and images are handled by a number of nodes. Two fields of the Appearance node control images: texture and textureTransform.
The texture field holds a node that places an image on the geometry. You might notice that I am being very general with this description: VRML can actually handle three different forms of texturing. ImageTexture deals with predefined image formats like JPEG or GIF. PixelTexture contains VRML's own internal uncompressed format (the image is stored directly in the VRML file). The most interesting is the MovieTexture node, which allows you to place an animation on any piece of geometry just by using this node.
A tree is a bit lonely when placed by itself in the middle of a scene, so let's build on it a little. To make the world interesting, I have used an IndexedFaceSet node to create an irregularly shaped island. This island is then covered with a grass texture using the ImageTexture node, as illustrated by the code in Listing 40.4. Combine this with a few trees and you have a nice little virtual woodlands to play in.
Listing 40.4. The VRML code used to create the island.
#VRML V2.0 utf8

Shape {
  appearance Appearance {
    texture ImageTexture {
      url "grass.jpg"
    }
    textureTransform TextureTransform {
      scale 0.1 0.1
    }
  }
  geometry IndexedFaceSet {
    coord Coordinate {
      point [
        5 0 0, 4.5 0 1, 4 0 2, 3 0 2.5, 2 0 3,
        4 0 4, 0 0 5, -1 0 4.5, -2 0 4, -2.5 0 3.2,
        -3 0 2.5, -3.5 0 2.5, -4 0 0.5, -3.5 0 -1, -3.5 0 -2,
        -3.2 0 -3, -3 0 -4, -2 0 -4.2, -1 0 -4.6, 0 0 -4.5,
        1 0 -4, 2 0 -3.5, 3 0 -3, 4 0 -2, -4.5 0 -1
      ]
    }
    coordIndex [
      0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
      13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, -1
    ]
  }
}
Note the use of the TextureTransform node. If left unscaled, the grass texture is stretched so that it fits over the geometry only once. With this node, I can scale and rotate the texture however I like. In this case, I have reduced the scale by a factor of 10 to get a nice grass effect. The woodlands are shown in Figure 40.1.
Figure 40.1 : The completed woodlands.
Next on the list are the light sources. VRML defines three sources: PointLight, SpotLight, and DirectionalLight. The names are fairly self-explanatory. A PointLight is a point in space from which light shines. The SpotLight focuses a beam of light in a certain direction, and the DirectionalLight puts out parallel rays, as though the light came from an infinitely distant source like the sun.
DirectionalLights are different from the other two sources-they only illuminate objects within their own group. Objects that belong to parent or sibling groups are not affected by the light source. PointLight and SpotLight do not suffer this restriction.
One interesting point to note is that VRML does not define shadow behavior: if another object lies between the light source and an object, no shadow is cast. Sometimes this is very frustrating, because some types of lighting effects are simply not possible.
One of the problems facing the original VRML specification was that there was no way to prevent a user from walking through objects in the scene. VRML 2.0 defines the Collision node, which, while not drawn itself, makes its children collidable. That is, you are no longer able to walk straight through them.
There are two fields to use for this behavior. The children field is the list of children for which collision detection is turned on. The proxy field lets you specify some other shape, or a grouping node, to test collisions against instead. Proxy nodes are never drawn, so there is no point associating any color properties with them.
An interesting consequence is that if you specify a proxy but no real geometry (that is, the children field is empty), you can create invisible zones that cannot be passed through. This is a very handy trick if you want to constrain a user to a certain volume of space.
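A sketch of such an invisible barrier follows; the particular box dimensions are only an illustration:

```
Collision {
  children [ ]          # nothing is drawn
  proxy Shape {
    # An invisible wall: wide and tall, but very thin
    geometry Box { size 10 3 0.1 }
  }
}
```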
In both versions of VRML, you can specify a list of positions from which to view the world. In 2.0, these positions are called viewpoints (1.0 called them cameras). A viewpoint uses the same model as VRML 1.0: first you place it somewhere, and then you give it an axis to rotate about. The rotation is always relative to the default view along the minus z axis. For example, the following code makes you look 45 degrees to the right of the minus z axis:
Viewpoint {
  orientation 0 1 0 0.785
}
Tip: Angles are always in radians in VRML. To convert from degrees to radians, divide the angle by 360 and then multiply it by 2π (for example, 45° becomes 45/360 × 2π ≈ 0.785).
The first viewpoint that you declare in a file is the one used when you first enter the world. Any declared after that are placed in a list that the browser can deal with. Early beta versions of SGI's CosmoPlayer allowed you to construct a virtual tour by placing the viewpoints that you want to visit in order; each time you press the Page Down/Page Up keys, you move to the next viewpoint.
Included on the CD is the code for a workshop from Laura Lemay's Web Workshop: 3D Graphics and VRML written by the same author, which illustrates how a virtual tour can be constructed through a virtual art gallery. Just open the top HTML file and the rest of the world is opened for you. It also illustrates a few more points that I will be raising in later sections of this chapter.
One of the great attractions of the Web is that you can place links to any other document anywhere in the world without the user worrying about how to get to it. VRML contains the same capability. Once you have entered the world, you can click on objects to jump to anything that you can place a link to on an HTML page.
The Anchor node is the equivalent of the anchor (<A>) tag in HTML. It provides the link to another place on the Web. Anything that you can do with an HTML <A HREF> tag, you can do with the Anchor node. The link can be a connection to a CGI script, an HTML page, a VRML world, or anything else.
CGI input is interesting because you can create VRML worlds on the fly just like you can with HTML. A good example of this is Besjon Alavandi's Outland world from Terra Vista (http://www.webking.com/Outland/). The introductory page asks you to specify some sizes for the world and then the VRML is generated on the fly. CGI requests are then filled into the anchor fields so that as you travel around the various segments of the world, each bit is dynamically generated.
To use an Anchor node in your world, you simply create your object that you want to put a link on and then put it in the children field of the Anchor.
Anchor { url "http://www.vlc.com.au/" children [ # your children nodes here ] }
One advantage that VRML has over HTML is the ability to construct a total world from a number of smaller ones. The Inline node allows you to specify the URL to other VRML files and then place them within the world.
A very common use of this node is to create a fast-loading world. This is done by creating a skeleton world that consists of, for example, the basic ground plane, and then the rest of the buildings are inlined. The user gets to see the basic world outline very quickly, and then the details are filled in around them as they wander around.
The syntax of the Inline node is very simple:
Inline { url "a_building.wrl" }
The Inline can then be placed anywhere in the world by making it a child of a Transform and translating it. (The bboxCenter field gives the center of the bounding box of the inlined world; it describes the inlined content to the browser rather than repositioning it.)
One field that was not demonstrated in the earlier discussion of the Anchor node is the parameter field. This string field allows you to pass parameters along to the target.
One of the most common uses for the parameter field is to use the VRML world to provide links to HTML documents in a multiframed page. The parameter field then contains the string
"target=name_of_frame"
Tip: Netscape extended the original VRML 1.0 specification by adding a Target field to the Anchor node that does the same thing, but only for multiframed documents.
which then directs the request to that frame. There are also other uses. If the anchor links to a Java applet, it can be used to pass values to it in the same manner as the <PARAM> tag does in HTML.
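Putting the pieces together, an Anchor that loads an HTML page into a named frame might look like the following sketch (the URL here is only a placeholder):

```
Anchor {
  url "some_page.html"
  parameter [ "target=name_of_frame" ]
  children [
    # the clickable geometry goes here
  ]
}
```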
So far, I have compared the differences between VRML 1.0 and 2.0. Now we are heading into areas that are completely new in the latest version.
Animation and its relative, behavior programming, provide true interactive virtual environments. If you feel comfortable with either JavaScript or Java, then you are in luck, because you can use these skills in VRML as well. Because behaviors and animation are very big and complex topics, I only scratch the surface here so that you can explore them further in other books if you desire.
The first step along the way to providing interactive behavior is to learn about animation. But there is an even smaller step than this-learning how VRML passes information between nodes to make it all happen.
To pass a bit of information between two nodes, you create an event. To VRML, this is a way for one node to inform another that something has changed. An event can be anything from a clock tick providing the time to the addition of new geometry to the scene.
When you look at the definition of a node in the VRML 2.0 specification, you will notice that some of the fields are specified as eventIn or eventOut. These are the fields used to pass information between nodes (an exposedField can act as both).
To make two nodes pass events between each other, you must explicitly connect them using the ROUTE statement. ROUTEs connect an eventOut from one node to an eventIn on another and both must be of the same field type. For example, you cannot pass an SFInt32 to an MFNode.
ROUTE from_node.from_event_out TO to_node.to_event_in
ROUTEs can be declared anywhere in the file after the nodes they refer to have been declared; they are not part of the normal scene structure.
When you go to create an animation, you need some sense of time. VRML does not explicitly have time built in to its model. Instead, you use a node that is capable of "sensing" time and passes that as an event to the rest of the world.
TimeSensors are fairly complex, so I won't try to describe them fully here. For the basic worlds that you will first create, you need to know how to set up a continuous time output. Time in a VRML world runs in seconds, and time 0 is midnight GMT on the first of January, 1970. (This is the way time is represented internally on most computer systems.) A TimeSensor has two main fields-startTime and stopTime. A third field, loop, controls whether the time output loops.
To make a TimeSensor create a continuous time output, you set stopTime to less than startTime and set loop to TRUE. To make it drive something in the world, you hook the fraction_changed eventOut to any node that you desire. The fraction is a value that runs between zero and one inclusive. If you are using a continuous output, you can control the rate at which the fraction cycle repeats by setting the appropriate value in the cycleInterval field. When the TimeSensor reaches startTime + n * cycleInterval, it outputs a one, and the next value is a zero.
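Such a continuously running sensor, wired to some other node, might be sketched like this (the node name Clock and the route target are hypothetical):

```
DEF Clock TimeSensor {
  cycleInterval 5     # repeat the 0-to-1 fraction every five seconds
  stopTime -1         # stopTime less than startTime, so it never stops
  loop TRUE
}

ROUTE Clock.fraction_changed TO Mover.set_fraction
```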
The exact implementation of the TimeSensor is up to the browser, so a close reading of the specification is wise. What has been presented so far should enable you to understand the examples in the rest of this chapter.
There are a couple of ways to make an object move. The first is to create a script that generates the movement for you. To do this, you need a fairly good understanding of the math involved. However, if you want to save yourself work, the second option of using the built-in interpolator nodes is the best way to go.
There are a number of different interpolators for different tasks. The one that you are likely to use most often is the PositionInterpolator. All the interpolators take the same set of parameters; only the output is different. The input is a fraction between zero and one, and the output is an interpolated value from your defined set of points. The parameters are a set of key points between zero and one and a matching set of values that should be output when the input fraction reaches that key value. If that sounds a bit confusing, consider the following set of values for a position interpolator:
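A sketch of such a set of values follows; the particular coordinates and the node name are only an illustration:

```
DEF Mover PositionInterpolator {
  key      [ 0, 0.25, 0.5, 0.75, 1 ]
  keyValue [ 0 0 0,  2 0 0,  2 0 2,  0 0 2,  0 0 0 ]
}
```

The keys divide the cycle into four equal legs, and the last value repeats the first so that the loop closes on itself.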
This set of values describes a square path that never ends-the last point is the same as the first so that when the TimeSensor returns to zero at the end of a cycle, you are back where you started.
When you are creating a world with animation in it, you will also want feedback from the user. The problem with 3D worlds is that there is no obvious place to put a collection of buttons. Instead, you make objects in the world respond to user input by placing a sensor on them. VRML contains a reasonably large collection of sensors for many common tasks.
Probably the most common sensor that you will be dealing with is the TouchSensor. This sensor creates an event each time the user touches the object that you have placed it on. The creators of the VRML standard have been thoughtful enough to define this to work not only with standard mouse input, but also with other 3D devices like datagloves.
The sensor nodes operate a bit differently in syntax from the other nodes. Consider the code in Listing 40.5, which creates a red cube that, when clicked, creates an event.
Listing 40.5. How to use a TouchSensor.
#VRML V2.0 utf8

Group {
  children [
    Shape {
      appearance Appearance {
        material Material {
          emissiveColor 0.8 0 0
        }
      }
      geometry Box {}
    },
    TouchSensor {}
  ]
}
Instead of having children to look after, the TouchSensor applies to its siblings. If those siblings have children of their own that also contain touch or other sensors, then the lowest sensor in the tree generates the event.
Apart from the TouchSensors and TimeSensors that you have already learned about, there are a number of sensors known as drag sensors. These take the user's input (for example, pressing the first mouse button and then dragging) and translate it into a motion. For example, the SphereSensor maps the drag onto the surface of a sphere. "So where is this useful?" I hear you ask. Consider trying to create the VR equivalent of a slide switch like the one you might find on your stereo system. These sensors force the mouse (or other input device) to follow a specific path-in the slider case, a straight track in 3D space.
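Such a slider track can be sketched with a PlaneSensor; the minPosition and maxPosition values below, and the idea of using them to clamp the drag to a line, assume the field names in the VRML 2.0 specification:

```
Group {
  children [
    Shape { geometry Box { size 0.2 0.2 0.2 } }   # the slider knob
    DEF Slider PlaneSensor {
      minPosition 0 0
      maxPosition 2 0     # same y limit, so the drag follows a straight track
    }
  ]
}
```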
Basic collections of interpolators can do a fair amount of your animation. However, there comes a point where you need to do more than the built-in nodes provide. In this case, you need to move on to scripting.
Scripts can be anything from a basic addition of two numbers to generating VRML on the fly. After starting to play with TouchSensors, you will notice that when you click down on one, a TRUE event is created, but as soon as you release it, a FALSE event is sent. If you are trying to control an animation, it only runs while you are holding down the mouse button. What you really need is a toggle behavior.
Enter scripting. A script allows you to create almost any behavior that you want. All it requires is a little bit of programming. No doubt you have already started dabbling with either JavaScript or Java to enhance your Web pages so getting some scripts going doesn't require any more learning.
A Script node is fairly simple. All you need to do is declare a Script node, fill in a couple of required fields, and then add whatever other fields you like. The most important of the required fields is url. This specifies the source of the behavior program; its use is explained for each language below. There are two other fields that are not explained here: mustEvaluate and directOutput.
Once you have a basic idea of what you require from your script, you can build the node a piece at a time. Let's examine the toggle switch example. Each time the user presses the switch, it must change the state of the output. However, we want to make it a little more sophisticated than that: it should only trigger when the button is released while the pointer is over the object.
The last part of the previous section is important to understand. We need events both for when the TouchSensor is activated and for when the pointer is over the object (it is possible to click on the object, drag away, and release the button when the pointer is no longer over it). This tells you that you need two input events and one output. We also need to store the state between clicks, so an internal field is needed. Listing 40.6 gives the outline of the VRML side of this script. This can then be wired to the code in Listing 40.5 with some ROUTEs for the input, and the output can be used to control something like an animation.
Listing 40.6. Outline of the script declaration to make a toggle button.
DEF TOGGLE_BUTTON Script {
  url ""   # to be filled in later
  field    SFBool pointer_over FALSE
  eventIn  SFBool isOver
  eventIn  SFBool isActive
  eventOut SFBool value_changed
}
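Assuming the TouchSensor in Listing 40.5 were given a name (TOUCHER here is just an illustrative choice), the wiring would look like this:

```
DEF TOUCHER TouchSensor {}

ROUTE TOUCHER.isOver   TO TOGGLE_BUTTON.isOver
ROUTE TOUCHER.isActive TO TOGGLE_BUTTON.isActive
```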
There is not much difference between using JavaScript in VRML and in a normal Web page. To create a JavaScript behavior, the url field must either point to a file that ends in .javascript, or it can embed the code within the VRML file by using a javascript: URL. With the latter, you can place all of your code within one file, but remember-the bigger the file, the longer it takes to download, and hence the longer a person must wait before they can start using your world.
So you have decided to place the code inline; what should it look like? Listing 40.7 shows the code to produce the toggle behavior. Each eventIn declared in the VRML definition has a corresponding function; every time that eventIn receives an event, the function is called.
Listing 40.7. Completing the toggle button behavior with JavaScript.
url "javascript:
  function isOver(value) {
    pointer_over = value;
  }

  function isActive(value) {
    if (value == false && pointer_over == true)
      value_changed = !value_changed;
  }
"
Of interest here is that you can read the value of the eventOut before assigning a new value to it. In this example, I have only used the first argument. However, it is possible to use two or no arguments for each function. The first argument is the same type as the matching declaration in the VRML code and the second one is the timestamp of when it occurred. If you remember from the TimeSensor description, time is measured in seconds. You can then start an animation five seconds after that event by looking at the timestamp and then adding 5 to it before passing that out as an event to another TimeSensor.
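As a sketch of that idea, the delay calculation is just an addition; the function and variable names below are made up for illustration:

```javascript
// Hypothetical helper: compute a start time a fixed number of seconds
// after an event's timestamp. VRML time is measured in seconds, so a
// plain addition is all that is needed.
function delayedStart(timestamp, delaySeconds) {
  return timestamp + delaySeconds;
}

// Inside a Script node it might be used like this, with animationStart
// declared as an SFTime eventOut routed to a TimeSensor's startTime:
//
//   function isActive(value, timestamp) {
//     if (value == false)
//       animationStart = delayedStart(timestamp, 5);
//   }
```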
For the more adventurous (or demanding) Web sites, scripts written in Java may be your preferred method. Java is more flexible than VRML script and also runs much faster. One of the prime reasons for using it is the additional capabilities in the libraries such as multithreading and networking.
The process of writing a script in Java is different from JavaScript, and you need a much closer understanding of how the event model works. Java scripting has been designed with the idea that a whole browser could be written in Java; if you look closely at the API, you will notice many similarities to the AWT classes.
Because of this design, Java scripting lacks some of the nice features, such as methods named directly after the eventIns as in JavaScript. Instead, one of two methods is used: processEvent() when you are dealing with only one event at a time, and processEvents() for handling multiple events. The reason for the two separate methods is that browsers may optimize performance by batching a whole heap of events and sending them all to the script at once.
A complete description of the Java API takes more room than is available here, so instead I will show you how to implement the toggle switch example in Java. Listing 40.8 shows the Java source file. Notice that much more writing is needed to achieve the same effect. In this case, I have used separate methods that are called for each event; normally, with such a small system, I would write the code inline in the event handler.
Listing 40.8. The toggle button code now implemented in Java.
// Java source for the toggle button example
import vrml.*;
import vrml.field.*;
import vrml.node.*;

class toggle_button extends Script {
    private SFBool pointer_over;
    private SFBool value_changed;

    // initialization method: fetch the VRML field and eventOut
    public void initialize() {
        pointer_over  = (SFBool)getField("pointer_over");
        value_changed = (SFBool)getEventOut("value_changed");
    }

    private void isOver(ConstSFBool value) {
        pointer_over.setValue(value.getValue());
    }

    private void isActive(ConstSFBool value) {
        // toggle only when the button is released over the object
        if (!value.getValue() && pointer_over.getValue())
            value_changed.setValue(!value_changed.getValue());
    }

    // the event handler
    public void processEvents(int count, Event[] events) {
        for (int i = 0; i < count; i++) {
            if (events[i].getName().equals("isOver"))
                isOver((ConstSFBool)events[i].getValue());
            else if (events[i].getName().equals("isActive"))
                isActive((ConstSFBool)events[i].getValue());
        }
    }
}
One method that I have not mentioned yet is initialize(). It is normally the case in VRML that when the class constructor gets called the VRML values, like field defaults, are not yet valid. To solve this problem, the initialize method was added. This method gets called once during the life of that class, just after the VRML world is complete but before the user is allowed to interact with it. In this method, you initialize any Java fields with their VRML equivalent values. This method forms the bridge between the VRML and Java visions of the world.
VRML can have many different uses. However, the one you are most likely to be interested in is enhancing your corporate or personal Web page on the Internet. With this in mind, you may want to incorporate some of the following features into your world.
Not everybody has a high-end workstation on their desk, so you need to be careful about performance. Performance is mainly affected by the complexity of your scene. If you load it up with many texture maps and highly complex objects, it will always run slowly, no matter what sort of machine it is displayed on.
How do you get around this problem? VRML has a node that lets you control the amount of detail shown depending on how far you are from an object. If you look at an object in the real world, you notice that as you get farther away from it, you see less and less of its detail. You can simulate this effect with the Level Of Detail (LOD) node.
In this node, you set a series of distance ranges and what you want to appear within each range. Listing 40.9 shows how to use it. The range field sets the distances at which you want the transitions to occur. You need to define one more child object than the number of values in the range field, because the range values act as the transition points between objects.
It does not really matter what you use for the geometry at each range; the objects do not even need to look the same. As you see from Listing 40.9, the object for the far distance is a box and the close object is a sphere.
Listing 40.9. Demonstration of using LOD.
#VRML V2.0 utf8
LOD {
  range [ 5 ]
  level [
    # highest detail first: shown when closer than 5 meters
    Shape {
      appearance Appearance {
        material Material { emissiveColor 0 0.8 0 }
      }
      geometry Sphere { }
    },
    # lowest detail last: shown beyond 5 meters
    Shape {
      appearance Appearance {
        material Material { emissiveColor 0.8 0 0 }
      }
      geometry Box { }
    }
  ]
}
When you try this out in your browser, you may notice one of two effects. The first is that as you move closer, you go from a cube to a sphere as you would expect. The other possibility is that it only shows the sphere. Why is this so? If you read the specifications closely, it says that the ranges are only a guide to when to change the detail levels. The browser is free to choose what it likes in order to keep the frame rate up. So what ends up happening with a simple model like this is that it always shows the highest detail model (the sphere) because it knows it can do this and still give you nice smooth motion.
Here is another major nasty that you should be aware of. In most of the early VRML browsers, a world with LOD in it actually ran slower than the same world without it, running at the highest detail level. This was due to some poor implementations of the LOD algorithm, and also because the browser would load all of the models into memory at once. If the machine was low on resources, particularly memory, it would suffer a great performance hit. In the finicky world of the Web user, this is not a good thing.
Once you start creating moderately complex worlds, the file size starts increasing dramatically. Complex node types like IndexedFaceSet produce very large files. There are a number of approaches to minimizing file size.
The first approach you can take is to remove all of the excess white space: the indenting that makes a file readable, and the extra spaces at the ends of lines and between values. Modeling programs seem to be very good at putting in excess white space, which in itself is not a bad thing, because it makes the file more readable for you.
Besides removing white space, the next thing to target is excess precision. For just about all worlds, you do not need more than three decimal places. Anything beyond that is, in effect, ignored because the difference is too small to show on the screen. So to reduce your file size even more, specify only as much precision as you need.
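Both savings can be applied mechanically. The sketch below, in plain JavaScript, shows the idea on a single line of a VRML file; the function name is made up, and a real minimizer would need to be careful not to touch quoted strings or comments:

```javascript
// Trim every number on a line to three decimal places and collapse
// runs of white space -- neither change is visible in the browser.
function compactLine(line) {
  return line
    .replace(/-?\d+\.\d+/g, function (n) {
      // round to three decimal places; String() drops trailing zeros
      return String(Math.round(parseFloat(n) * 1000) / 1000);
    })
    .replace(/[ \t]+/g, " ")   // collapse runs of spaces and tabs
    .trim();                   // drop leading indentation
}
```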
Although reducing white space and precision can save you much in file size, there is one more step for those truly huge files. Compress them. VRML 1.0 allows the use of gzip for compressing large files. This is still the same for the latest version. Essentially all the techniques you use for VRML 1.0 can be used in 2.0.
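Compressing is a one-line job with the gzip tool. This sketch creates a tiny stand-in world first so the command has something to work on; real file names will differ:

```shell
# Create a tiny sample world (a stand-in for your real file)
printf '#VRML V2.0 utf8\nShape { geometry Sphere { } }\n' > world.wrl

# Compress at maximum level; -c writes to stdout so the original is kept
gzip -9 -c world.wrl > world.wrl.gz
```

Most VRML browsers detect the gzip header automatically, so many servers simply serve the compressed data under the original .wrl name.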
Looking into the future, one of the forthcoming additions to the specification is the binary file format. This file format uses a binary representation of the VRML world rather than the ASCII text format currently being used. This will reduce file size even more than the methods already mentioned in this chapter. However, it is still in the planning stages at the time of writing.
By now you are very familiar with the Shape node and how it is used; it is basically the core of a VRML file. However, you soon get tired of typing appearance Appearance over and over, and there is a whole collection of standard objects and scripts that you will want to build into your toolkit. This next section looks at how to extend VRML to handle canned behaviors and new node types.
The first thing you may want to do in a file is create a shortcut for a commonly used construct within a world. The Shape node example mentioned in the previous paragraph is one such construct. If you want to create a shortcut node only for that file, use the PROTO declaration. Once you have declared a PROTO, you can use it just like any other standard VRML node within that file. To create a node that can be used by other files as well, you use the EXTERNPROTO declaration, covered shortly.
The basic syntax of a PROTO node is as follows:
PROTO node_name [
  # field and event declarations
]{
  # VRML code to implement it
}
Using this syntax as a base, you can create a simple PROTO for the shape example by providing the geometry and a color; the object is then created from those two values. The shape example is declared in Listing 40.10.
Listing 40.10. A simple shape PROTO node.
#VRML V2.0 utf8
PROTO SimpleShape [
  field SFNode  shapeStyle NULL
  field SFColor shapeColor 0 0 0
]{
  Shape {
    appearance Appearance {
      material Material { emissiveColor IS shapeColor }
    }
    geometry IS shapeStyle
  }
}
In this example, you see the first appearance of the IS keyword. This keyword associates a field in the declaration with one in the VRML implementation part, and you can declare multiple IS relationships for the one field in the PROTO declaration. The node type of the PROTO is determined by the first node in the implementation part, so it is that node that controls where the prototype may be used in a scene.
Now you can use this node however you like in the rest of the VRML file. For the example to work, add the next section of code to the bottom of Listing 40.10.
Transform {
  translation 1 0 0
  children [
    SimpleShape {
      shapeStyle Box { }
      shapeColor 0 0 0.5
    }
  ]
}
Group {
  children [
    SimpleShape {
      shapeStyle Sphere { radius 0.5 }
      shapeColor 0.5 0 0.5
    }
  ]
}
From this you should already see a few nice uses. Notice that the second shape declares a full sphere, fields and all. Anything that you can do with a normal VRML node, you can also do with a PROTO.
Although it is useful to declare a shorthand version of a node, it is more useful to be able to reuse that node across many files. To do this, you use the EXTERNPROTO declaration. The actual implementation of the node must still be declared with a PROTO definition in its own file before other files can use it.
Once you have the basic file done, you can reuse it by declaring an EXTERNPROTO at the top of the file in which you wish to use it. This is essentially the same as the PROTO declaration, with one small difference: instead of the implementation, there is a list of URLs and/or URNs pointing to the file that defines it. To use the SimpleShape definition from Listing 40.10, declare the code given in Listing 40.11.
Listing 40.11. Using the SimpleShape definition in another file.
#VRML V2.0 utf8
EXTERNPROTO ExternShape [
  field SFNode  shapeStyle
  field SFColor shapeColor
]
"shapelibrary.wrl#SimpleShape"
Note the use of #name to get the prototype reference from the file. If you want to create a collection of commonly used prototypes, you can place them all in the one file and reference the individual protos in this way. The name that you use after the # character is the name used in the PROTO definition. This example also demonstrates that you can name an external prototype with a different name than the original definition, which is handy to avoid name clashes if you are using a number of different libraries.
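A library file of this kind is nothing special: just an ordinary VRML file containing several PROTO definitions. The sketch below shows what a hypothetical shapelibrary.wrl might look like; the prototype names and fields are made up for illustration:

```vrml
#VRML V2.0 utf8
# shapelibrary.wrl -- two unrelated prototypes in the one file.
# Other files pick one out with EXTERNPROTO "shapelibrary.wrl#Name".

PROTO RedBox [ field SFVec3f boxSize 1 1 1 ]{
  Shape {
    appearance Appearance { material Material { emissiveColor 0.8 0 0 } }
    geometry Box { size IS boxSize }
  }
}

PROTO BlueBall [ field SFFloat ballRadius 1 ]{
  Shape {
    appearance Appearance { material Material { emissiveColor 0 0 0.8 } }
    geometry Sphere { radius IS ballRadius }
  }
}
```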
You are in the Web publishing game and want to know how to combine VRML with all those other technologies you have spent time learning. As you have seen with scripting, the effort spent learning Java or JavaScript is beginning to pay off. The next step is integrating VRML with your current Web site.
Apart from using anchors and inlines to provide links to other documents, it is useful to incorporate other technologies. The current incarnation of VRML does not allow you to use 2D text within the environment unless you make it up as a bitmap and use it as a texture. However, the opposite approach is available: including 3D graphics in your Web page.
There are two approaches available: embedding VRML into a page or creating a multiframe document that has one frame devoted to VRML. To embed a VRML world into a HTML page, use the standard <EMBED> tag that you use to incorporate other technologies.
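A minimal sketch of the embedded approach follows; the file name and the pixel dimensions are assumptions:

```html
<!-- Embed a VRML world directly in the page; the browser hands the
     file to whichever VRML plug-in is installed -->
<EMBED SRC="world.wrl" WIDTH=400 HEIGHT=300>
```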
The more interesting, and probably more widely used, technique is the multiframed document approach. When we looked at the parameter field a little earlier, you saw how a VRML world can control the contents of other frames. The reverse is also true. However, you do not want to replace the whole VRML world every 30 seconds, because worlds take too long to load. Fortunately, VRML includes one nice feature here.
If you have declared your viewpoints with DEF names, you can refer to these from an external document using the # syntax. This time the name used is the DEF name. In the example shown in Figure 40.2, the upper right frame is used to take you to different viewpoints in the VRML world.
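A sketch of that arrangement is shown below; the file names, frame names, and viewpoint DEF names are all made up for illustration:

```html
<!-- The frameset: the world in one frame, a control panel in another -->
<FRAMESET COLS="70%,30%">
  <FRAME SRC="world.wrl" NAME="world">
  <FRAME SRC="controls.html" NAME="controls">
</FRAMESET>

<!-- In controls.html, each link jumps to a DEF'd Viewpoint by name -->
<A HREF="world.wrl#Entrance" TARGET="world">Front entrance</A>
<A HREF="world.wrl#Balcony" TARGET="world">Balcony view</A>
```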
Figure 40.2 : A complete multiframed document combining VRML and HTML.
Another area that you may wish to keep track of in the near future is an external interface to the VRML world. The VRML development list is working towards getting a standardized external interface so you can do more than just move between camera points. It is likely that the interface will include Java and/or JavaScript variations.
So where does this leave VRML for future development? I have already mentioned that external interfaces and binary file formats are in the works. One area that needs attention is scripting languages: at the moment there is no required scripting language, so browser writers are free to support Java, JavaScript, or any other language they want.
However, one of the biggest missing areas is inherent support for multiuser virtual environments. When you enter a standard world, there will be you and only you in it. There are a number of products, like BlackSun's CyberHub Client (http://www.blacksun.com/), that add multiuser extensions to your existing VRML worlds and your favorite browser (only Live3D and Cosmo are supported in the early releases, but more are in the works). You can even use Java scripts within the VRML world with socket interfaces to connect to another server. It all depends on how adventurous you are.
VRML 2.0 presents a whole new experience for your Web site. Gone are the static worlds and now the interactive 3D environment is ready for your clients. In this chapter, I have only scratched the surface of what you can do with VRML. I suggest that a book fully devoted to VRML 2.0 is a wise investment if you plan to pursue it seriously.