Carna  Version 3.0.1
Carna Documentation

Carna provides classes for simple and fast visualization of CT data. It is named after the Roman goddess of bodily organs (yes, they really had one for organs). It is based on OpenGL 3.3 and Eigen 3.



  1. The Frame Renderer: Built-in Rendering Stages and Geometry Types
  2. The Scene Graph: Spatial Class Specializations and Positioning CT Data
  3. Geometry Features: The Lifecycle of Geometry Features and Materials
[typical.png — rendering result from the example code below]

Quick Start

  1. Implement the abstract base::GLContext class. If you're using Qt, you can simply instantiate the base::QGLContextAdapter template.
  2. Instantiate and configure a base::FrameRenderer, e.g. like this:
    const static unsigned int GEOMETRY_TYPE_VOLUMETRIC = 0;
    const static unsigned int GEOMETRY_TYPE_PLANE = 1;
    const static unsigned int GEOMETRY_TYPE_OPAQUE = 2;
    using namespace Carna;
    base::FrameRenderer fr( glContext, 800, 600, true );
    helpers::FrameRendererHelper< > frHelper( fr );  // orders the stages appropriately
    frHelper << new presets::CuttingPlanesStage( GEOMETRY_TYPE_VOLUMETRIC, GEOMETRY_TYPE_PLANE );
    frHelper << new presets::OpaqueRenderingStage( GEOMETRY_TYPE_OPAQUE );
    frHelper << new presets::DRRStage( GEOMETRY_TYPE_VOLUMETRIC );
    frHelper.commit();
    fr.findStage< presets::CuttingPlanesStage >()->setWindowingWidth( 2000 );
    fr.findStage< presets::CuttingPlanesStage >()->setRenderingInverse( true );
    fr.findStage< presets::DRRStage >()->setRenderingInverse( true );
    fr.setBackgroundColor( base::Color::WHITE_NO_ALPHA );
    The values of the GEOMETRY_TYPE_ variables can be chosen arbitrarily, but they must be distinct.
  3. Assuming you have somehow loaded data and know the voxel spacings, build the scene graph, e.g. like this:
    base::Node root;
    /* Configure camera.
    */
    base::Camera* const cam = new base::Camera();
    cam->localTransform = base::math::rotation4f( 0, 1, 0, base::math::deg2rad( 20 ) ) * base::math::translation4f( 0, 0, 350 );
    cam->setProjection( base::math::frustum4f( base::math::deg2rad( 45 ), 1, 10, 2000 ) );
    root.attachChild( cam );
    /* Configure geometry node for volume data.
    */
    typedef helpers::VolumeGridHelper< base::HUVolumeUInt16 > UInt16HUGridHelper;
    UInt16HUGridHelper gridHelper( data.size );
    gridHelper.loadData( data );
    root.attachChild( gridHelper.createNode( GEOMETRY_TYPE_VOLUMETRIC, UInt16HUGridHelper::Spacing( spacings ) ) );
    /* Configure cutting planes.
    */
    base::Geometry* const plane1 = new base::Geometry( GEOMETRY_TYPE_PLANE );
    plane1->localTransform = base::math::plane4f( base::math::Vector3f( 1, 1, 1 ).normalized(), 0 );
    root.attachChild( plane1 );
    /* Configure opaque geometries.
    */
    base::ManagedMeshBase& boxMesh = base::MeshFactory< base::VertexBase >::createBox( 40, 40, 40 );
    base::Material& boxMaterial = base::Material::create( "unshaded" );
    boxMaterial.setParameter( "color", base::Color::GREEN );
    base::Geometry* const boxGeometry = new base::Geometry( GEOMETRY_TYPE_OPAQUE );
    boxGeometry->putFeature( presets::OpaqueRenderingStage::ROLE_DEFAULT_MATERIAL, boxMaterial );
    boxGeometry->putFeature( presets::OpaqueRenderingStage::ROLE_DEFAULT_MESH, boxMesh );
    boxGeometry->localTransform = base::math::translation4f( 0, -15, 0 );
    root.attachChild( boxGeometry );
    /* Release geometry features.
    */
    boxMesh.release();
    boxMaterial.release();
    gridHelper.releaseGeometryFeatures();
  4. Issue the base::FrameRenderer::render method:
    fr.render( *cam );
    This code produces the rendering above.

Now let's take a closer look at what the code presented above actually does.

The Frame Renderer

The FrameRenderer consists of multiple components, each of which defines a particular aspect of the rendering logic. These components are called rendering stages because their rendering logic is executed sequentially. The rendering result depends heavily on the order of the stages. There is no generally "correct" order; it always depends on what one actually expects. The FrameRendererHelper class assumes a default order that will lead to the desired results in presumably almost all cases. Of course, one could also leave out this helper and add the stages to the renderer manually, in any desired order.
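If the helper's default order is not what you need, stages can be appended manually in an explicit order. A minimal sketch, assuming the FrameRenderer exposes an appendStage method that takes ownership of the stage:

```cpp
/* Append stages manually instead of using FrameRendererHelper.
 * The stages execute in exactly the order they are appended,
 * so here the cutting planes are rendered first and the DRR last. */
fr.appendStage( new presets::CuttingPlanesStage( GEOMETRY_TYPE_VOLUMETRIC, GEOMETRY_TYPE_PLANE ) );
fr.appendStage( new presets::OpaqueRenderingStage( GEOMETRY_TYPE_OPAQUE ) );
fr.appendStage( new presets::DRRStage( GEOMETRY_TYPE_VOLUMETRIC ) );
```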

See also
The rendering process is explained here in detail.

Built-in Rendering Stages

In the example code above, several typical rendering stages are used:

At the moment the following other stages are available out-of-the-box:

Geometry Types

As one may guess from this list, each scene might contain multiple types of renderable objects. At the very least, one can distinguish between polygonal and volumetric objects. Planes are certainly a third type: they are not polygonal, because they are infinitely extended, nor are they volumetric. This is the very breakdown used in the example, but it is up to the user to choose a more detailed classification if required. Note that each rendering stage must be told which geometry type it should render. By using two CuttingPlanesStage instances with different geometry types, for example, one could render multiple cutting planes with different windowing settings.
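The dual-windowing setup mentioned above might be sketched as follows; GEOMETRY_TYPE_PLANE2 is a hypothetical additional geometry type, not part of the quick-start example:

```cpp
/* A second, distinct geometry type lets a second CuttingPlanesStage
 * render its own set of planes with its own windowing settings. */
const static unsigned int GEOMETRY_TYPE_PLANE2 = 3;  // hypothetical, must be distinct

presets::CuttingPlanesStage* const planes1 = new presets::CuttingPlanesStage( GEOMETRY_TYPE_VOLUMETRIC, GEOMETRY_TYPE_PLANE );
planes1->setWindowingWidth( 2000 );  // wide window

presets::CuttingPlanesStage* const planes2 = new presets::CuttingPlanesStage( GEOMETRY_TYPE_VOLUMETRIC, GEOMETRY_TYPE_PLANE2 );
planes2->setWindowingWidth( 400 );   // narrow window for the second set of planes

frHelper << planes1;
frHelper << planes2;
```

Plane geometries are then created with either GEOMETRY_TYPE_PLANE or GEOMETRY_TYPE_PLANE2, depending on which windowing they should be rendered with.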

The Scene Graph

Carna represents spatial entities with instances of the Spatial class. Such entities can be renderable objects as well as imaginary points in space. The location of each spatial entity is determined relative to another one, called its parent. This parent-child relationship induces a tree structure, commonly referred to as a scene graph. Such a scene graph represents a scene. This has two implications: first, each scene contains exactly one node that has no parent, namely the tree's root; second, it is sufficient to specify an arbitrary node in order to reach any other Spatial of the scene.

Spatial Class Specializations

The specific type of a Spatial determines whether it is an inner node or a leaf of the scene graph. If it is allowed to have children, the spatial entity is realized by an instance of the base::Node class, even if it has no children in a particular situation. In contrast, visible scene elements, i.e. those that can be rendered, must always be leaves. They are usually realized by instances of the base::Geometry class. Another leaf type is the Camera, which not only has a location within the scene but also specifies how the 3D space is projected into 2D.

It should be clear from the above why the root of a scene graph is always a base::Node instance. The coordinate system of the root is often called world space. You can read more on the different coordinate systems and how they are related to each other here.
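Because each localTransform is expressed relative to the parent, transforming an inner node moves its whole subtree along with it. A sketch using only the classes from the quick start (plane1 is the cutting plane created there):

```cpp
/* Interpose an intermediate node: rotating the pivot now also
 * rotates everything attached below it, since child transforms
 * are relative to their parent. */
base::Node* const pivot = new base::Node();
pivot->localTransform = base::math::rotation4f( 0, 1, 0, base::math::deg2rad( 90 ) );
root.attachChild( pivot );
pivot->attachChild( plane1 );  // plane1 is re-parented and rotates with the pivot
```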

See also
The following classes simplify the Camera handling: presets::PerspectiveControl, presets::CameraShowcaseControl, presets::CameraNavigationControl.
The base::SpatialMovement class makes it easy to implement drag-and-drop-like behaviour for Geometry objects.

Positioning CT Data

The VolumeGridHelper class takes care of two things. First, it partitions the volumetric data into multiple smaller, box-shaped volumes. This partitioning is done according to an upper limit of each volume's memory size. It reduces the probability of out-of-memory exceptions due to memory fragmentation. In the example no specific limit is set, thus the default is used. Second, the VolumeGridHelper class creates a scene Node that represents the partitioned data within the scene. This Node has one Geometry child per volume partition.
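If the default limit is too large for the target hardware, a smaller one can be supplied when the helper is constructed. A sketch under the assumption that the maximum partition size in bytes is the helper's second constructor argument (the exact signature may differ between 3.x versions):

```cpp
/* Partition the data into volumes of at most ~32 MiB each,
 * reducing the risk of out-of-memory errors on fragmented GPUs. */
UInt16HUGridHelper gridHelper( data.size, 32 * 1024 * 1024 );
gridHelper.loadData( data );
```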

Geometry Features

Each Geometry node is rendered by the rendering stage with a matching geometry type. Usually that rendering stage will query particular features from the Geometry object: features are like components that make up the Geometry object in its entirety, but the Geometry object merely aggregates them, i.e. it does not take ownership of them. This allows the same feature to be reused many times across the scene, or even across many scenes. Rendering stages identify features through the roles they take when attached to a Geometry object.

The geometry features API is documented here.
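Because features are aggregated rather than owned, the same material and mesh can be attached to any number of Geometry objects. Continuing the quick-start example (this second box would have to be created before boxMesh and boxMaterial are released):

```cpp
/* Reuse boxMaterial and boxMesh for a second box at another location.
 * The same roles are used, so the OpaqueRenderingStage finds them. */
base::Geometry* const boxGeometry2 = new base::Geometry( GEOMETRY_TYPE_OPAQUE );
boxGeometry2->putFeature( presets::OpaqueRenderingStage::ROLE_DEFAULT_MATERIAL, boxMaterial );
boxGeometry2->putFeature( presets::OpaqueRenderingStage::ROLE_DEFAULT_MESH, boxMesh );
boxGeometry2->localTransform = base::math::translation4f( 0, +15, 0 );
root.attachChild( boxGeometry2 );
```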

The Lifecycle of Geometry Features

Because of their shared-use nature, the lifecycle of geometry features is worth a closer look. They have neither a public constructor nor a public destructor.

The idea is that the user invokes release on a feature as soon as it is certain that the feature will not be attached to any further Geometry objects. If a feature is never released, it is leaked, and an error message is written to the log output.
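The pattern from the quick start thus reads: create the feature, attach it to all geometries that use it, then release it. A sketch, where geometry1 and geometry2 stand for any two Geometry objects of a matching geometry type:

```cpp
/* Create (no public constructor — the factory returns a managed reference),
 * attach to every interested Geometry, then release exactly once. */
base::Material& mat = base::Material::create( "unshaded" );
geometry1->putFeature( presets::OpaqueRenderingStage::ROLE_DEFAULT_MATERIAL, mat );
geometry2->putFeature( presets::OpaqueRenderingStage::ROLE_DEFAULT_MATERIAL, mat );
mat.release();  // no further attachments planned; the geometries keep the feature alive
```

Releasing before the last putFeature would be an error only if no Geometry held the feature at that moment; once attached, the feature lives as long as any Geometry references it.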

Materials

Materials determine how polygonal geometries are rendered. The core part of each material is a shader. Besides that, a material can also have a set of parameters that are uploaded to the shader when the material is applied, and it can enforce a particular OpenGL render state.

There are a few material shaders available out-of-the-box:

The creation of custom materials is explained here.