OpenRM General FAQ
Simply put, a scene graph is just a convenient way for graphics applications to organize data for fast rendering. Many developers prefer to code applications to a scene graph API rather than directly to a graphics platform, like OpenGL, since the scene graph API often encapsulates details of the platform as well as provides services and features not present in the underlying platform.
Scene graph software is not a new thing. There are many commercially available packages, as well as numerous Open Source or free scene graph implementations.
Check out this article we wrote for Tom's Hardware a couple of years ago. It provides a more lengthy explanation of what scene graphs are, and why you should care.
Even though many operations, such as rendering and picking, can be performed on an arbitrary scene graph (of nodes that your application creates and links together), rmRootNode() is special because it contains default state data, such as default text properties, surface reflectance properties, and so forth.
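For example, an application typically builds its own subtree and hangs it off rmRootNode() so that those defaults are inherited during traversal. A minimal sketch, assuming rmNodeNew and rmNodeAddChild behave as they do in the demonstration programs (argument order and mask constants should be verified against rm.h):

```c
#include <rm/rm.h>   /* assumed public header name */

/* Build an application subtree and attach it to the default root node,
 * which carries the default text and surface reflectance properties. */
void buildScene(void)
{
    /* Assumed signature: rmNodeNew(name, dimensionality mask, opacity mask). */
    RMnode *appRoot = rmNodeNew("appRoot", RM_RENDERPASS_3D, RM_RENDERPASS_OPAQUE);

    /* ...add primitives and child nodes to appRoot here... */

    /* Children of rmRootNode() inherit its default state during traversal. */
    rmNodeAddChild(rmRootNode(), appRoot);
}
```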
Unfortunately, there is a lot of confusion from implementation to implementation, as terms "pipe", "context", "window" and so forth are inconsistently used in documentation. In OpenRM, a "window" is an object that is under the purview of the host window system; a "context" is an OpenGL rendering context, and a "pipe" is an RM object that is the aggregation of window, context, display device and related parameters. During the course of an application, it is possible for the app to assign a new or different context or window to an RMpipe.
Frame-based rendering is a function of a scene graph containing objects to draw, and a rendering environment, in our case, an RMpipe. Similarly, event management through the RMaux event loop manager is a function of the RMpipe, which in turn knows about windows.
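To make that concrete, here is a hedged sketch of how the pieces usually fit together in the demonstration programs. The calls rmPipeNew, rmPipeSetWindow, rmPipeMakeCurrent, rmFrame and rmauxEventLoop are recalled from those demos; exact signatures, enum values and header names are assumptions to be checked against the Technical Reference Manual.

```c
#include <X11/Xlib.h>   /* for the Window type (X11 example) */
#include <rm/rm.h>      /* assumed header names */
#include <rm/rmaux.h>

void startRendering(Window win, int width, int height)
{
    /* An RMpipe aggregates the window, the OpenGL context and the display. */
    RMpipe *pipe = rmPipeNew(RM_PIPE_GLX);        /* enum value assumed */

    rmPipeSetWindow(pipe, win, width, height);    /* bind the X11 window        */
    rmPipeMakeCurrent(pipe);                      /* create and bind a context  */

    /* Frame-based rendering: a scene graph plus an RMpipe. */
    rmFrame(pipe, rmRootNode());

    /* Alternatively, hand control to the RMaux event loop manager, which
     * knows about the pipe's window and dispatches events until quit. */
    rmauxEventLoop(pipe, rmRootNode(), NULL);
}
```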
The short answer is that there are no default values - the framebuffer clear operation in OpenRM is a "scene parameter" that is attached to an RMnode. During a render-time traversal, the presence of one of these scene parameters is detected, then the corresponding action is taken. Therefore, these scene parameters must be used with care.
The reason that OpenRM does not provide a default "framebuffer clear" operation attached to the rmRootNode() requires an understanding of the OpenRM multipass rendering framework, RMnode traversal masks, and the types of framebuffer clear operations available to the application. See the following three FAQs for more information.
There are four types of "framebuffer clear" scene parameters: a single color value used to clear the color planes, a single floating-point value used to initialize the depth buffer, a background image tile that will be tiled into the color buffer (within the currently active viewport, also an RMnode scene parameter), and a background depth image tile that is tiled into the depth buffer.
Color values and color tiles are mutually exclusive: you can use one or the other, but not both. Similarly, depth values and depth image tiles are mutually exclusive.
Multipass rendering is the process of making multiple passes through the scene graph, using different parameters for rendering and/or object "selection" during each pass. One pass might render only opaque objects, while a later pass renders transparent objects. When you use the new multistage rendering capabilities (starting with v1.4.0-alpha2), you encounter the notion of "view stage" and "render stage" traversals.
The OpenRM multipass rendering model supports up to three rendering passes in possibly two scene graph traversals. In order, these are: 3D opaque, 3D transparent, and then all 2D objects. There is a view stage traversal, and a render stage "traversal." (The scene graph isn't really traversed at render time, as the view stage generates a compact list of state changes and things to be drawn during render. However, it is possible for an application to request a node callback that is invoked during the render, rather than view, traversal.)
Since you probably don't want to draw the same set of objects in all three passes, there are "traversal masks" that are part of the scene graph nodes. Whether or not a node (and its children) is processed during a given pass is a function of these traversal masks and the current rendering pass. The RMnode's traversal masks (3D/2D, opaque/transparent) are specified at the time the RMnode is created, but may be modified later if desired. In addition, there is support for traversal masks for the left and right stereo channels, so you can have some objects rendered only in the left- or right-eye channel.
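A hedged sketch of pass-specific nodes, assuming rmNodeNew takes the dimensionality and opacity masks at creation time (argument order, mask constants, and the commented setter name are assumptions to verify against rm.h):

```c
#include <rm/rm.h>

void buildPassSpecificNodes(void)
{
    /* One node drawn only in the 3D opaque pass, another only in the
     * 3D transparent pass. */
    RMnode *opaqueStuff = rmNodeNew("opaque", RM_RENDERPASS_3D, RM_RENDERPASS_OPAQUE);
    RMnode *seeThrough  = rmNodeNew("glass",  RM_RENDERPASS_3D, RM_RENDERPASS_TRANSPARENT);

    rmNodeAddChild(rmRootNode(), opaqueStuff);
    rmNodeAddChild(rmRootNode(), seeThrough);

    /* The masks may also be changed after creation; the setter name below
     * is hypothetical -- check the Technical Reference Manual. */
    /* rmNodeSetTraversalMaskOpacity(seeThrough, RM_RENDERPASS_OPAQUE); */
}
```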
An RMpipe can be configured so that any of the three rendering passes is enabled or disabled. The order of the passes is immutable. In the future, we'll provide some additional flexibility along these lines.
The use of framebuffer clear operations requires careful consideration and awareness of multipass rendering issues.
Multistage rendering is essentially a divide-and-conquer strategy for decomposing rendering into a number of smaller subtasks that can be arranged in "assembly-line" fashion. When the multistage model is multithreaded, each of the stations in the assembly line runs concurrently. A multistage, multithreaded architecture can significantly increase rendering performance in some, but not all circumstances.
OpenRM uses a two-stage task decomposition in rendering. The first stage is where all view dependent operations occur. The second stage is where things are actually drawn. When a complex scene is well organized (by the application or modeler), those objects that lie outside the view frustum (the region of space where you can actually see objects) will not be drawn at all, thereby reducing the load on the graphics pipe. Other view stage operations include accumulation of transformations, computation of matrix inverses and evaluation of certain functions that select between several possible models (so called "level-of-detail" nodes). The view stage generates a streamlined list of commands that are quickly and efficiently executed by the rendering stage. The overall goal is to keep the graphics pipeline full at all times.
One drawback with multistage rendering is that the multistage task decomposition introduces a fixed latency into the rendering pipeline. In other words, if you have a two-stage pipeline (which OpenRM has: view and render), it takes two calls to the renderer before anything appears on the screen. This is because the first call executes the view stage on the first frame. The render stage has nothing to do on the first frame, so it sits idle and produces no output. The second time you call the frame renderer, the results from the view stage, computed in the previous frame, are processed by render, while the view stage executes again, producing output for render that will be consumed on the next frame.
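A small sketch of the consequence, assuming rmFrame is the per-frame rendering entry point as in the demonstration programs:

```c
#include <rm/rm.h>

/* Illustrates the fixed two-frame latency of the view/render pipeline. */
void firstTwoFrames(RMpipe *pipe)
{
    rmFrame(pipe, rmRootNode()); /* frame 1: view stage runs; render stage has nothing to do  */
    rmFrame(pipe, rmRootNode()); /* frame 2: render consumes frame 1's view output -- first image */
}
```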
When each stage of the multistage rendering pipeline runs concurrently, it is possible to improve overall rendering throughput. OpenRM uses POSIX threads to implement parallel processing on all platforms (Unix, Linux and Win32). We provide four processing modes, each of which has advantages and disadvantages (see the following table, and the code sketch after it). The processing mode is assigned at the RMpipe level using the OpenRM function rmPipeSetProcessingMode.
Processing Mode | Comments |
---|---|
RM_PIPE_MULTISTAGE | View and render traversals are executed sequentially in one frame rendering call. This mode is essentially serial and multistage. Both view and render execute in the same processing thread as the caller. This mode of processing is guaranteed to be thread safe. |
RM_PIPE_MULTISTAGE_PARALLEL | View and render traversals are each placed into detached pthreads that execute concurrently. This mode is OpenRM's fully parallelized and multistage processing. This mode of processing is guaranteed to be thread safe. (However, this processing mode does not work with all OpenGL implementations. Refer to the current RELEASENOTES for more information) |
RM_PIPE_MULTISTAGE_VIEW_PARALLEL | The view traversal is placed into a detached pthread, while render remains in the same thread as the caller. View and render do execute concurrently. This mode of processing is guaranteed to be thread safe. This processing mode works reliably on all OpenGL implementations we have tested. This mode was created to support multistage and multithreaded processing in "host environments", like CAVELib, that require that a process outside of the scene graph own and manage the OpenGL rendering context. |
RM_PIPE_SERIAL | View and render execute in one pass through the scene graph. This is the pre-1.4.0-alpha-* code. This code may not be thread safe in all circumstances. |
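For example, selecting the fully parallel mode, with the always-safe serial multistage mode as a fallback, might look like the sketch below. The function name and mode constants come from the table above; the exact signature is an assumption.

```c
#include <rm/rm.h>

void configurePipe(RMpipe *pipe)
{
    /* Assumed signature: rmPipeSetProcessingMode(RMpipe *, RMenum). */
    rmPipeSetProcessingMode(pipe, RM_PIPE_MULTISTAGE_PARALLEL);

    /* If the OpenGL implementation has trouble with the fully parallel
     * mode (see RELEASENOTES), fall back to the serial multistage mode: */
    /* rmPipeSetProcessingMode(pipe, RM_PIPE_MULTISTAGE); */
}
```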
The most common error we've seen is attaching a framebuffer clear on an RMnode that is processed in more than one pass during multipass rendering. If you attach a background clear color scene parameter to the rmRootNode(), you'll get an image that's empty, but painted the background color. Assuming the default three rendering passes are used, the framebuffer will be cleared a total of three times in one frame!
As an OpenRM developer, you have two options. The first is to place the framebuffer clear at an RMnode that is processed exactly once during a multipass rendering. You will manipulate the traversal masks in an RMnode to achieve this. The other option is to disable some of the multiple rendering passes at the RMpipe level. The former option is more precise and robust, while the latter option may accelerate rendering, depending upon the scene graph, as fewer trips through the scene graph are performed.
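A sketch of the first option: give the clear its own node whose traversal masks admit it into exactly one pass, so the clear fires once per frame. rmNodeSetSceneBackgroundColor and the RMcolor4D layout are recalled from the demonstration programs; verify both against rm.h.

```c
#include <rm/rm.h>

void addBackgroundClear(void)
{
    RMcolor4D bg = {0.2, 0.2, 0.3, 1.0};   /* field layout assumed: r, g, b, a */

    /* A node that is visited only during the 3D opaque pass. */
    RMnode *clearNode = rmNodeNew("clear", RM_RENDERPASS_3D, RM_RENDERPASS_OPAQUE);

    rmNodeSetSceneBackgroundColor(clearNode, &bg);

    /* Attach it so the opaque pass encounters it, and the clear runs once. */
    rmNodeAddChild(rmRootNode(), clearNode);
}
```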
As of v1.4.0-alpha-2 (March 31, 2001), OpenRM provides a multistage and multithreaded rendering engine. The developer can specify one of several processing modes, ranging from serial to fully parallelized.
Starting with version 1.5.1 (19 January 2004), OpenRM now supports constant-rate rendering. To activate constant-rate rendering, use the routine rmPipeSetFrameRate to set the target frame rate.
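For example (the routine name is from the text above; the argument type, a simple frames-per-second value, is an assumption):

```c
#include <rm/rm.h>

void capFrameRate(RMpipe *pipe)
{
    /* Cap rendering at no more than 30 frames per second. */
    rmPipeSetFrameRate(pipe, 30);
}
```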
The constant-rate rendering feature is intended to limit the rate at which frames are rendered to some maximum number of frames per second. This feature does nothing to reduce model complexity to accelerate rendering rates for complex scenes. That activity is best performed in the application. Future work will provide the ability to obtain a measurement of graphics load so that your application can detect when the graphics load is too high, and react accordingly.
Bugs: We have observed problems with constant-rate rendering performance on some, but not all platforms. Platforms known to work without any problem: Fedora Core 3 and 4. Platforms with known issues are: SuSE 9.2 Pro, SuSE 9.3 Pro, and Windows.
OpenRM includes support to read and write JPEG and PPM raster images. There is no support at this time to load geometric models, such as MultiGen's OpenFlight format.
At this time, OpenRM's demonstration programs vis2d and vis3d can read a home-brew file format (called "dio") for representing structured grids. The same library can read and write AVS-format images, for use as textures or raster dumps of the framebuffer. There are no IGES, MCAD, VRML or other file format loaders at this time. Would you like to write and contribute some? Would your organization like to fund their development? Please contact the OpenRM project admin at wbethel at users dot sourceforge dot net for details.
This question comes up mostly from those with Performer experience. It is straightforward to implement a detached thread that loads models, then synchronizes with the main app thread to page in the model. It would be useful (for ease of use) for the scene graph system to provide such a framework out of the box, but it's not a huge technical issue. Future versions of OpenRM may include a code skeleton that could be used to implement an asynchronous model loader thread.
What is more important, IMO, is that the SG system is thread safe, which OpenRM is. That means that two separate threads can asynchronously build their own separate SGs that get merged together at a later time.
Effective with OpenRM 1.5.0, the OpenRM Programming Guide may be purchased online for a nominal charge at the R3vis website. Like the model used by sendmail.com, the previous version of the manual may be downloaded free of charge. Those who make a contribution to the community demonstration portion of the OpenRM Gallery will be given a copy of the manual in exchange.
In addition, there's always the source code... Most of the OpenRM routines have been documented in "man page" style format, and this information is being made available online (until such a time as SourceForge.net asks us to remove it). The same documentation can be auto-generated from the OpenRM source tarball. This online collection comprises the RM Scene Graph/OpenRM Scene Graph Technical Reference Manual. Also, the demonstration programs exercise nearly every possible OpenRM parameter, so they are good sources of coding examples.
As a preface, let's say that this topic borders on being a "Holy War." That said, we have made a conscious design choice to implement in C rather than C++ primarily to have a more portable code base. We wish to avoid problems that stem from different name-mangling strategies in different compilers, problems with different implementations of STL, and so forth. We simply feel that simpler is better. In addition, the C implementation results in a much more compact and efficient scene graph API, which benefits you - the developer.
There is absolutely no problem using OpenRM from C++ programs - many have done so successfully.
There has been discussion amongst the OpenRM developers to migrate more towards a C++ model, but we have taken no steps in that direction.
OpenRM supports both immediate and retained mode. By default, all RMprimitives (the equivalent of a Performer pfGeoSet) are reduced/compiled into display lists. That behavior can be disabled on a per-RMprimitive basis, if desired. In general, OpenRM is aggressive about using retained mode whenever possible.
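A hedged sketch of the per-primitive opt-out. rmPrimitiveNew and rmNodeAddPrimitive are recalled from the demonstration programs; the display-list setter shown in the comment is hypothetical, so look up the real per-primitive control in the Technical Reference Manual.

```c
#include <rm/rm.h>

void addImmediateModePrimitive(RMnode *node)
{
    /* Primitive-type enum assumed; any OpenRM primitive type works here. */
    RMprimitive *p = rmPrimitiveNew(RM_POLYS);

    /* Hypothetical call: opt this primitive out of display-list compilation
     * so it is issued in immediate mode each frame.                        */
    /* rmPrimitiveSetDisplayListEnable(p, RM_FALSE); */

    rmNodeAddPrimitive(node, p);
}
```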
The short answer is "no, pbuffers are not used for offscreen rendering." Pbuffers do make an appearance as part of the GLX 1.3 specification and, according to rumors we've heard, are accessible on Win32 via a rather circuitous route. Unfortunately, there has not yet been universal adoption of the GLX 1.3 specification. E.g., Mesa implements the GLX 1.3 API, but the pbuffer routines are only stubs.
On X11, we use offscreen pixmaps for h/w accelerated rendering, and on Win32 we use the "Device Independent Bitmap" (DIB). Presumably, the pbuffer support reached through wglGetProcAddress() on Win32 is a veneer layer over the DIB; if so, there is little to be gained by going that route on Win32.
As support for pbuffers becomes more universal, we will migrate towards consistent use of pbuffers for all offscreen rendering. When there is sufficient demand, we will implement platform-specific optimizations to access pbuffers.
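For orientation only, an offscreen setup might look roughly like the sketch below; the offscreen enum value is a hypothetical name, and the pipe would be backed by a GLX pixmap on X11 and a DIB on Win32, exactly as described above.

```c
#include <rm/rm.h>

void renderOffscreen(void)
{
    /* Hypothetical enum: request an offscreen (pixmap- or DIB-backed) pipe. */
    RMpipe *pipe = rmPipeNew(RM_PIPE_GLX_OFFSCREEN);

    rmPipeMakeCurrent(pipe);           /* bind the offscreen drawable/context */
    rmFrame(pipe, rmRootNode());       /* render one frame offscreen          */

    /* ...read the pixels back as an image for writing to disk... */
}
```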
In order to make your objects transparent, a number of conditions must be satisfied. All are straightforward, but require attention to detail when you construct your scene graph.
Also, check the demonstration programs. There are several that demonstrate the use of transparency.
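As a rough sketch of the usual ingredients, under the assumption that the node must land in the transparent rendering pass via its traversal mask and carry a color with alpha below 1.0 (rmNodeSetUnlitColor is recalled from the demonstration programs; the complete set of conditions should be taken from the transparency demos):

```c
#include <rm/rm.h>

void makeSeeThroughNode(void)
{
    RMcolor4D glassy = {0.4, 0.6, 0.9, 0.35};   /* alpha < 1.0; field layout assumed */

    /* Put the node into the 3D transparent rendering pass. */
    RMnode *n = rmNodeNew("glass", RM_RENDERPASS_3D, RM_RENDERPASS_TRANSPARENT);

    rmNodeSetUnlitColor(n, &glassy);            /* call name recalled from the demos */
    rmNodeAddChild(rmRootNode(), n);
}
```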
#define RM_JPEG 1
to:
#define RM_JPEG 0
We believe the following list to be technically accurate and even-handed in presentation, but do believe the list is incomplete. This list was written by an OpenRM developer who has experience writing commercial Performer applications.
Both OpenRM and Performer have multistage rendering (cull, draw) capabilities.
Performer has, but OpenRM doesn't have:
OpenRM has, but Performer doesn't have:
Caveat: the following comparison is based upon a survey of Inventor literature and intimate knowledge of OpenRM. We've not developed any commercial applications using Inventor (or Open Inventor). Comments and clarifications are welcome.
This document describes the component manager, the context cache, and the synchronization mechanisms that form the basis of the thread-safe OpenRM Scene Graph implementation. Also, check the R3vis OpenRM/RM Scene Graph technical publications page for additional information.
A couple of years ago, support for multithreaded-apps and nVidia hardware was not so great. More recently, nVidia has done a good job with their drivers, and we know of no issues with any of the multithreaded-RMdemo applications using current nVidia drivers (circa August 2005).
If all your software is up to date, you may want to have a look at your hardware. Not all AGP chipsets are created equally. We have had excellent results using AGP chipsets on Intel motherboards (i815, i840, i7505, etc.) and varying results with others (VIA for x86 and Athlon, 761, 762 for Athlon).
See the previous section. Our development environment consists of a stock SuSE 9.3 Professional distro. We use a h/w accelerated setup that includes nVidia cards and the 1.0-7667 drivers, along with s/w-only systems that use Mesa 6.2.1.
This information is copied from VRCO's website:
The CAVELib(tm) is an Application Programmers Interface (API) that provides general support for building virtual environments for Spatially Immersive Displays and head-mounted displays including desk-type devices, cubic displays, multi-piped curved displays, and some dome styled displays. The CAVELib is not an application, it's a building block used to create applications for a variety of virtual environments.
The CAVELib configures the display device, synchronizes processes, draws stereoscopic views, creates a viewer-centered perspective and provides basic networking between remote Virtual Environments. The CAVELib allows a single program to be available on a wide variety of virtual display devices without rewriting or recompiling. The CAVELib uses a resource configuration file that can be modified to change display and input devices, making the programs written on the CAVELib portable to a wide-variety of display devices.
The CAVELib is an API of functions that can be used by programmers to create robust programs for virtual display devices or desktops. The CAVELib is not the product of choice for the non-programmers and end-users simply wanting to interact with a virtual environment. For those customers there are a variety of applications available including the VRCO's VRScape[tm] model viewer.
Yes. R3vis and VRCO have both created a number of applications that use both OpenRM and CAVELib. These applications use VR input devices, and render to stereo-capable tiled surface displays. See the CAVE demonstration section (below) for more details about demonstration versions of these programs.
While a complete description of the technical issues that have bearing upon OpenRM and CAVELib applications is beyond the scope of this FAQ, a summary of the most important issues follows. For more details, please refer to this OpenRM and CAVELib white paper created by R3vis Corp.
If we assume a standalone OpenRM application, such as one of the OpenRM demonstration programs, the following list of changes are required to enable compatibility with CAVELib.
At this time (Sept 2000), this FAQ and the example code are your best sources of information. Visit the VRCO website to obtain technical information about the CAVE library, and study the example programs available from the OpenRM download page.
We have written a paper that describes using the multithreaded OpenRM Scene Graph with two environments that manage multi-display environments, one of which is CAVELib. More information.
Yes. Visit the OpenRM download page to obtain the OpenRM+CAVELib demonstration programs. These examples use CAVELib to gather VR device information, and OpenRM for rendering.
The answer to this question is somewhat complex, involving both business and technical issues. From a business standpoint, we feel that our customers would be better served by having an Open Source scene graph product than one that is closed and proprietary. From a technical standpoint, we feel that the OpenRM base will experience more substantial technical growth being positioned as an Open Source product. Ultimately, our focus is on creating new technology and applying that technology in new, exciting and useful ways, rather than writing software that sells our hardware (we don't have any).
In December of 1999, R3vis announced the intention to launch OpenRM, and that it would be licensed under the MPL. Instead, we are launching it under the LGPL, with the accompanying demonstration programs under the GPL. The primary reason for the change is feedback from the user community.
Unfortunately, the MPL, while an excellent license, shows substantial Netscape-centricisms that rendered its use for this project inappropriate. We created a lightly-modified version of the MPL, effectively replacing "Netscape" with "R3vis", and submitted this modified MPL to OSI for approval. The position of the community, including OSI, is that "fewer Open Source licenses are better." Concerns were raised that use of a derivative MPL, no matter how minor the differences, could cast a shadow of doubt over the future of the OpenRM Open Source project.
To alleviate those concerns while maintaining our position of promoting the use of OpenRM in all applications, not just Open Source (e.g., GPL) applications, we chose the LGPL.
OpenRM is a forked version of RM Scene Graph. We are assigning a version number of 1.2 to the initial OpenRM release, so that RM Scene Graph and OpenRM have the same version numbers, at least at the outset.
RM - stands for "Render Monster." R3vis had a product called "Render Monster" way before a well-known graphics workstation vendor came out with a product named "Reality Monster." We chose to use just "RM" in order to avoid legal entanglements.
This page last modified -- Sunday, 07-Aug-2005 17:02:37 PDT