Pinned Update #1

The Darc Library (C++) is now updated as of March 2012. Click ▷here to browse the entire solution.

Friday, September 23, 2011

Multi Jar

Here is how to implement a Java application/applet that works both as a standalone version (without a browser) and web deployed (using an HTML launcher). This demo opens the main panel in a new window so that multiple launcher buttons can be placed on the same website.

Create Java Application
First of all, you need to create a standard Java application. The class (e.g., mypackage.MyApp) must extend JApplet and implement ActionListener. This will be the entry point for the applet version.

Create Launcher Button
Create a JButton in the init() method of JApplet and add MyApp as its action listener.

Create User Interface
Clicking the button will open the user interface (UI). Usually this can be a simple JFrame object (e.g., mypackage.MyGUI). The class provides its own main() method, which will be the entry point for the standalone version. Note that the main() method is irrelevant for the applet version; it is only called if the jar file is launched directly, without a web launcher.

Create Web Launcher
If started from inside a browser, the jar requires a web launcher. The applet tag will place the launcher button in the HTML document. This will instantiate the main panel of the UI. Note that the applet dimension should match the size of the launcher button as defined in the applet code.

:: CODE BLOCK ::
<applet code="mypackage.MyApp.class"
   archive="dist/MyApp.jar" 
   width="200" height="30" />
Web Launcher

A demonstration project (NetBeans IDE 7.1.1) can be downloaded ▷here. It contains the source code, the HTML launcher and the jar file.

Y-101

This is the result of one of my last student projects: coming up with a concept for an innovative hockey stick. My part, of course, was the product visualization.

Darc Library

The Darc Library is a slightly modified wrapper of the former Protox project. It now contains Protox as well as a bunch of other modules filled with more or less useful stuff that has accumulated over the years. It currently consists of the following modules.

Darc
Global definitions, typedefs, enumerations and helper functions.

Geometry
Basic geometry types like vectors, matrices, triangles, segments, planes etc.

Math
Strictly mathematical stuff, usually generalized classes for the Geometry module.

Protox
Windows and DirectX stuff.

The whole thing can be explored ▷here, nicely formatted with doxygen.

Monday, July 11, 2011

Prime Factorization

This script uses a brute force approach.

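The embedded script itself is not shown as source here, so here is a minimal brute force sketch using trial division (my own illustration, not necessarily identical to the original script):

:: CODE BLOCK ::
#include <cstdio>
#include <vector>

//Brute force factorization by trial division
std::vector<long> factorize(long n)
{
    std::vector<long> factors;
    for(long d = 2; d*d <= n; ++d){
        while(n % d == 0){
            factors.push_back(d);
            n /= d;
        }
    }
    if(n > 1) factors.push_back(n);   //remaining prime factor
    return factors;
}

int main()
{
    for(long f : factorize(360))
        printf("%ld ", f);            //prints: 2 2 2 3 3 5
    return 0;
}
Brute force factorization (sketch)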

Thursday, July 7, 2011

Matrix to Euler Angles

Every orientation in 3D space can be described using the product \(T\) of three rotation matrices, usually around the main axes \(R_X\), \(R_Y\) and \(R_Z\). Given such a matrix \(T\), it is often necessary to decompose it into three separate angles - the Euler Angles. These can be used as input arguments for a sequence of basic rotation operations, so that the following equation holds:

\begin{align} T = R_i(\alpha_0) R_j(\alpha_1) R_k(\alpha_2) \end{align}

The choice of \(i, j, k\) determines the rotation convention. One common possibility would be Yaw-Pitch-Roll (XYZ, depending on the local coordinate system) but the following implementation works for arbitrary conventions.

Mathematics

The decomposition of \(T\) depends highly on the definition (internal implementation) of \(R_i\) so make sure you know what your matrix class is doing. The standard definition of rotation matrices around X, Y and Z looks like this:

\begin{align} R_X=\begin{pmatrix} 1 & 0 & 0 \\ 0 & A & -B \\ 0 & B & A \end{pmatrix} \end{align} \begin{align} R_Y=\begin{pmatrix} C & 0 & D \\ 0 & 1 & 0 \\ -D & 0 & C \end{pmatrix} \end{align} \begin{align} R_Z=\begin{pmatrix} E & -F & 0 \\ F & E & 0 \\ 0 & 0 & 1 \end{pmatrix} \end{align}

Here \(A, C, E\) denote the cosine and \(B, D, F\) the sine of the unknown Euler Angles. Let's take a look at \(T\) for the convention YXY.

\begin{align} T=R_{YXY}= \begin{pmatrix} CC-DDA & DB & CD+DAC \\ DB & A & -CB \\ -CD-DAC & CB & -DD+CCA \end{pmatrix} \end{align}

Apparently the element at \(T(1,1)\) (indexing starting with 0) is the cosine of the Euler Angle \(\alpha_1\) of the middle rotation matrix \(R_X\). This is true for all of the 12 possible conventions; however, the location of this pivot element is not always the same. Its coordinates are determined by the first and third rotation component, using the simple conversion \(X \rightarrow 0\), \(Y \rightarrow 1\) and \(Z \rightarrow 2\). The next step is to compute \(\alpha_0\) and \(\alpha_2\) from the elements in the row and column of the pivot: knowing \(A\), one can easily compute \(\alpha_1\) and therefore \(B\); the horizontal neighbors of the pivot element then yield \(\alpha_2\) and its vertical neighbors \(\alpha_0\). Since the applied convention uses the same axis for the first and third rotation, there are no \(E, F\) components in this matrix - the \(C, D\) entries in the pivot's row belong to \(\alpha_2\), while those in its column belong to \(\alpha_0\).

Gimbal Lock

Now what happens if \(B\) is close to zero in the above case? This means a so-called Gimbal Lock has occurred, because the divisions in the row and column of the pivot element are undefined. In this case the orientation of \(T\) is such that one angle becomes ambiguous, which means it can be set to any value, e.g., \(\alpha_0 = 0\). Since one Euler Angle is now zero, the corresponding rotation matrix yields \(R_i=I\) and the new reduced transformation matrix looks like this:

\begin{align} T=R_{XY}= \begin{pmatrix} C & 0 & D \\ DB & A & -CB \\ -DA & B & CA \end{pmatrix} \end{align}

Interpreting the axes as coordinates again, a new pivot element is found at \(T(0,1)\), which is always zero. Once again its horizontal neighbors can be used to determine the remaining Euler Angle \(\alpha_2\). Keep in mind that the order of cosine and sine elements may vary depending on the convention applied. As previously mentioned, there are 12 possibilities.

:: TABLE BLOCK ::
\(R_X\) pivot | \(R_Y\) pivot | \(R_Z\) pivot
Y-X-Y | X-Y-X | X-Z-X
Y-X-Z | X-Y-Z | X-Z-Y
Z-X-Y | Z-Y-X | Y-Z-X
Z-X-Z | Z-Y-Z | Y-Z-Y
Listing: A - "Conventions"

If you want to construct an Euler Angle decomposition for a special convention, just follow these instructions to extract the necessary equations. I recommend a symbolic calculation, e.g., in MATLAB, to make this task easier.
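
To make the scheme above concrete, here is a minimal sketch for the Y-X-Y convention (my own illustration; the row-major matrix layout, the chosen angle ranges and the function name are assumptions, not taken from any particular library):

:: CODE BLOCK ::
#include <cmath>

//Extracts the Euler Angles a0, a1, a2 for T = R_Y(a0) R_X(a1) R_Y(a2)
//from a row-major 3x3 rotation matrix, following the pivot scheme above
void eulerYXY(const double T[3][3], double& a0, double& a1, double& a2)
{
    const double eps = 1e-9;

    //Pivot element T(1,1) holds the cosine A of the middle angle
    a1 = std::acos(T[1][1]);              //a1 in [0, pi], so B = sin(a1) >= 0
    double B = std::sin(a1);

    if(B > eps){
        //Regular case: vertical neighbors of the pivot give a0,
        //horizontal neighbors give a2
        a0 = std::atan2(T[0][1],  T[2][1]);
        a2 = std::atan2(T[1][0], -T[1][2]);
    }
    else{
        //Gimbal Lock: a0 and a2 are coupled, so fix a0 = 0 and read the
        //remaining angle from the first row of the reduced matrix
        a0 = 0.0;
        a2 = std::atan2(T[0][2], T[0][0]);
    }
}
Listing: B - "YXY Decomposition (sketch)"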

Wednesday, July 6, 2011

City Gen

City Gen was a group project from 2008 based on OpenGL, using advanced techniques, mainly shaders (Phong shading was implemented) and L-systems. It demonstrates the procedural generation of a virtual city with regard to several environmental parameters.

The scene consists of an initially planar mesh which can be modeled using the elevation tool by simply brushing over the target area. Brush diameter and pressure can be adapted. Figure A shows the subsequent steps, in particular the application of the population map, the automatic generation of streets and finally the extraction of the blue blocks representing buildings and other structures.

Examples 1 to 3 show possible output after the street generation. The street generator uses a two-phase L-system. In a preprocessing step, up to 3 population centers are calculated. These centers are then connected using a "highway"-style L-system. Finally, each street node spawns new street segments based on the population density, terrain height and conflicting street segments. New street segments vary in length and angle and are placed so that they fit best into the existing street network, for example by avoiding dead ends.

A more detailed description may follow in the future. For now, take a look at the source which will be made accessible if requested.

Tuesday, July 5, 2011

Binary Filter

Have you ever wondered what the sequence of natural numbers looks like if you filter out all the numbers that don't have a given number of ones or zeros in their binary representation? Well, I did a while ago, and that's why I came up with this little piece of code, the Binary Filter. Although it may not seem like it, the solution for this particular problem was somewhat tricky to obtain, hence I decided to share it with the world.

The key idea behind my solution is the observation that the problem "Give me all numbers with \(k\) ones in their binary representation" is very similar to calculating the binomial coefficient. Consider the following table containing all 5-digit numbers with 3 ones.

:: TABLE BLOCK ::
\(i\) | \(N_{10}\) | \(N_2\)
0 | 28 | 11100
1 | 26 | 11010
2 | 25 | 11001
3 | 22 | 10110
4 | 21 | 10101
5 | 19 | 10011
6 | 14 | 01110
7 | 13 | 01101
8 | 11 | 01011
9 | 7 | 00111
Listing: A - "Binary Filter in action"

Naturally, the number of items equals the binomial coefficient 5 over 3.
In order to calculate the correct number at an index \(i \in [0,9]\), we need to know how many entries start with a one (at the top) and how many start with a zero (at the bottom), judging by the first column. The count of leading ones happens to be the quotient \(\frac{k}{n}\) times the binomial coefficient, and this is true for any combination of \(n\) and \(k\). Finally, after knowing whether we're in the upper or lower half of the table, we can dismiss the first column and analyze the remaining \(n-1\) columns.
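
For the table above this works out as

\begin{align} \frac{k}{n}\binom{n}{k} = \frac{3}{5}\binom{5}{3} = \frac{3}{5}\cdot 10 = 6 = \binom{4}{2}, \end{align}

so exactly the first six rows of Listing A start with a one.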

The Code

As usual, here are the necessary code snippets for your own project. You're going to need an implementation of the binomial coefficient. Instead of using the standard version, you might want to use this optimized version, which avoids explicitly calling the factorial function.

:: CODE BLOCK ::
int binomial(int _n, int _k)
{
    //Use the smaller of _k and _n-_k to shorten the loop
    if(_n-_k < _k) _k = _n-_k;
    int c = 1;

    for(int i=0; i<_k; ++i){
        c = c*(_n-i);   //the intermediate product is always divisible
        c = c/(i+1);    //by (i+1), so integer division is exact
    }

    return c;
}
Listing: B - "Binomial Coefficient"

And finally the actual binary filter function in a recursive implementation. The function will return the \(i\)-th \(n\)-digit number with \(k\) ones in its binary representation. The order of values is descending.

:: CODE BLOCK ::
int bfilter(int _n, int _k, int _i)
{
    if(_n <= 0) return 0;

    //Number of values starting with "1"
    int n_x = (_k*binomial(_n, _k))/_n;

    //_i is in "1" block
    if(_i < n_x){
        return (1 << (_n-1)) + bfilter(_n-1, _k-1, _i);
    }
    //_i is in "0" block
    else{
        return bfilter(_n-1, _k, _i-n_x);
    }
}
Listing: C - "Binary Filter"
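
A quick check against Listing A (my own test snippet):

:: CODE BLOCK ::
#include <cstdio>

int main()
{
    //Reproduce Listing A: all 5 digit numbers with 3 ones
    for(int i = 0; i < binomial(5, 3); ++i)
        printf("%d: %d\n", i, bfilter(5, 3, i));   //28, 26, 25, ..., 7
    return 0;
}
Listing: D - "Binary Filter Test (sketch)"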

Check out the ▷docs for further information.

Monday, March 28, 2011

ParaView Plugins

Today, I'm going to show you how to compile your own plugins for ParaView 3.8.0 on Windows using CMake 2.8.4, QT 4.6.3 and Visual Studio 2008. Other combinations may work as well, but I cannot guarantee that, so I suggest you stick to the ones I used.

QT Installer
Although this is not required for simple plugins, I highly recommend using QT because it is much easier to design an appealing user interface in the editor than to write it by hand in XML files. Go ▷here, then download and install the qt-win-opensource-4.6.3-vs2008.exe file.

CMake
You're going to need CMake in order to generate the VS project. I used Version 2.8.4 but anything later than that should be fine. Go ▷here and download the Win32 installer.

ParaView
Of course you're going to need the actual software. I used Version 3.8.0 so either get that version to play safe or use a newer one. In either case make sure the versions of your ParaView installer and the Development installation match. Click ▷here to download ParaView.

ParaView Development
In addition to ParaView you're going to need the ParaView ▷development binaries in order to compile your plugin. Remember this has to be the same version as your ParaView installation. Also your plugin will only work for this specific version of ParaView.

CMake

Before you can generate your Visual Studio project you will have to prepare a bunch of files in order to tell the plugin how it's supposed to interact with ParaView. Put the files in the same folder. The most important one is CMakeLists.txt which can look like this.

:: CMAKE BLOCK ::
cmake_minimum_required(VERSION 2.6)

PROJECT(MyTestFilter)

FIND_PACKAGE(ParaView REQUIRED)
INCLUDE(${PARAVIEW_USE_FILE})

ADD_PARAVIEW_PLUGIN(
    MyTestFilterSMPlugin
    "1.0"
    SERVER_MANAGER_XML MyTestFilter.xml
    SERVER_MANAGER_SOURCES vtkMyTestFilter.cxx
    GUI_RESOURCE_FILES MyTestFilterGUI.xml
)
CMakeLists.txt

This tells CMake that we're about to create a plugin with the name MyTestFilterSMPlugin, Version 1.0, using the respective Server Manager (Configuration), source (VTK code) and GUI files. The filenames are case sensitive here. Now let's take a look at the other 3 files.

Server Manager Configuration

:: XML BLOCK ::
<ServerManagerConfiguration>
  
  <ProxyGroup name="filters">
    <SourceProxy name="MyTestFilter"
                    class="vtkMyTestFilter">

      <InputProperty
        name="Input"
        command="SetInputConnection">

        <ProxyGroupDomain name="groups">
          <Group name="sources"/>
          <Group name="filters"/>
        </ProxyGroupDomain>

        <DataTypeDomain name="input_type">
          <DataType value="vtkPolyData"/>
        </DataTypeDomain>
      </InputProperty>

    </SourceProxy>
  </ProxyGroup>
  
</ServerManagerConfiguration>
MyTestFilter.xml

This example shows the server manager configuration for a filter plugin that takes ▷vtkPolyData as input. Depending on the implementation of your plugin these attributes can change. For example, you can specify multiple outputs or additional input properties like strings, vectors or combo boxes, even without using QT.

GUI Resource File

Finally you need to tell the plugin where it's supposed to be located once loaded into ParaView. Using the resource file you can specify the name and the category of the plugin. The following example will put the filter in the Extensions group, however it will be grayed out unless you select a valid ▷vtkPolyData object as input.

:: XML BLOCK ::
<ParaViewFilters>
   <Category name="Extensions" menu_label="&Extensions">
      <Filter name="MyTestFilter" />
   </Category>
</ParaViewFilters>
MyTestFilterGUI.xml

So much for the setup files. Of course CMake is going to ask for a code file to generate the project from so let's take a quick look at that.

VTK Code

So this is where the real fun starts. Every ParaView plugin must be implemented as an extended ▷vtkAlgorithm since it must be usable within the VTK pipeline. The VTK class (and only this class!) should have the prefix vtk. I don't know whether this is mandatory, but for some reason this convention avoids many nasty bugs.

This is the header file for a simple filter plugin that processes ▷vtkPolyData. Note that the input type must match the extended class. You might want to take a look at the VTK ▷documentation at this point.

:: CODE BLOCK ::
#ifndef _vtkMyTestFilter_h
#define _vtkMyTestFilter_h

#include "vtkPolyDataAlgorithm.h"

class VTK_EXPORT vtkMyTestFilter :
          public vtkPolyDataAlgorithm
{
public:
  static vtkMyTestFilter *New();
  vtkTypeRevisionMacro(vtkMyTestFilter,vtkPolyDataAlgorithm);
  void PrintSelf(ostream& os, vtkIndent indent);

  int RequestInformation([...]);
  int RequestData([...]);
  int FillInputPortInformation([...]);
  int FillOutputPortInformation([...]);

protected:
  vtkMyTestFilter();
  ~vtkMyTestFilter();

private:
  vtkMyTestFilter(const vtkMyTestFilter&);
  void operator = (const vtkMyTestFilter&);
};

#endif
vtkMyTestFilter.h

Despite the confusing details the general setup is rather simple. The plugin sets the number of input and output ports and the respective data types. This is important because other plugins will only work (connect to the pipeline) if the data types match.
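
As a rough sketch, the port setup for this example could look as follows (my own assumption, using the standard VTK signatures; vtkPolyDataAlgorithm already provides sensible defaults, so overriding these methods is only necessary for other data types):

:: CODE BLOCK ::
#include "vtkInformation.h"
#include "vtkDataObject.h"

int vtkMyTestFilter::FillInputPortInformation(int, vtkInformation* info)
{
  //Accept exactly one vtkPolyData object on the single input port
  info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkPolyData");
  return 1;
}

int vtkMyTestFilter::FillOutputPortInformation(int, vtkInformation* info)
{
  //Produce vtkPolyData on the single output port
  info->Set(vtkDataObject::DATA_TYPE_NAME(), "vtkPolyData");
  return 1;
}
Port setup (sketch)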

The next function is the core of the algorithm. This is where all the data processing is to be implemented. In case of our simple test plugin the most basic implementation would look something like this. The filter takes the input and passes it on to the output without doing anything.

:: CODE BLOCK ::
int vtkMyTestFilter::RequestData([...])
{
    vtkPolyData *input = vtkPolyData::GetData([...]);
    vtkPolyData *output = vtkPolyData::GetData([...]);

    output->CopyStructure(input);

    return 1;
}
int RequestData([...]);

That's it. Put all the files in the same folder, fire up CMake, select the ParaView development directory and QT, and you're done. When you start the Visual Studio project, set it to "Release" mode and compile the DLL. In case of linker errors, check the settings for the correct library paths (especially QT) and remove the Python stuff. That did the trick in my case.

Saturday, February 26, 2011

1908


After having successfully created a high-energy prototype, the project enters the second stage. Since the ▷Virus is supposed to operate autonomously, further experiments are necessary to increase communication abilities. The prototypes are cultivated in order to react to external stimuli in a deterministic way. The procedure spawns several subcategories of hybrid cells which show programmable behavior if assembled in certain configurations. Using this strategy, small networks of hybrid cells are designed to perform primitive tasks. The project enters the final stage. Dozens of stable networks-- each one with billions of cells-- are stored in ▷Containers for further conditioning. To satisfy the increasing energy throughput, the scientists include radioactive material from nearby ▷Mines into the molecular structure.

On the 30th of June-- with the world's most powerful weapon within the scientists' grasp-- one of the final experiments with a high-energy subtype causes a far-reaching disaster. During an attempt to test the performance under maximum stress, several networks destabilize, leading to a chain reaction in which the damaged container releases all of its stored energy. A massive explosion tears the lab to shreds, incinerating any organism within range. Seconds after the first explosion, the shock wave reaches the storage depot, causing hundreds of other containers to share a similar fate. Unable to withstand the final blast, the reinforced ceiling shatters and releases tons of contaminated material into the atmosphere, devastating the surrounding area.

In the chaos of fire and radiation, with clouds of dying hybrid cells roaming free, some of them finally reach their ultimate stage. The virus slowly awakens and commences the task it has been designed for: survive and destroy. Shortly after the first manifestation of the virus, all survivors of the explosion are fully infected. Due to the hull breach, major parts of the area are also contaminated before the authorities even realize what has happened. In order to keep bystanders away and to rule out a military strike, the government isolates the perimeter to conduct a detailed analysis of the incident. This includes the establishment of an emergency security zone, ▷SC-0, for scientific research around the location that has been identified as ground zero. Not far from the science camps lies the military equivalent, known as ▷M-0, a continuous ring of military camps, watchtowers and electric fences with the only purpose of keeping unwanted visitors outside.

Friday, February 25, 2011

1907

Two years after the initialization of the project, the underground ▷Facility is complete and fully operational, producing promising results. Hidden in the wastelands of Siberia, the group of scientists, led by military virologist ▷Dr. William Griffith, works ambitiously on its task. Referred to by the code name ▷R.E.E.V.A. (Radio Emission via Energized viral Aggregation), the group has combined human cells with various bacterial organisms and viruses in order to create a powerful hybrid cell type. These new cells are able to absorb and emit electromagnetic radiation at will. Based on this unique feature, the scientists plan to arrange huge clusters of such hybrid cells to enable intercellular communication, similar to a neural network.

In order to make the cells resistant against electromagnetic interference, it is necessary to choose a radiation type of very high energy that would overcome all types of natural disturbances. After months of failure and misfortune, the scientists finally succeed in cultivating a ▷Prototype that is able to operate on the required energy level-- a combination of several extremophile bacteria based on human cell substrate.

Next

▶Year 1908
Read the next part of the background story.

LAPACK

LAPACK is a Fortran-based linear algebra library which can be used for solving systems of linear equations, eigenvalue problems and many other things. Although there are several wrappers for all kinds of programming languages, one should consider writing a custom-designed wrapper. First of all, it's really easy to implement, and secondly, you know what your wrapper is doing. Apart from that, I don't think anyone ever uses the full LAPACK functionality.

On this page I will show you how to include LAPACK in your project using C++ and Visual Studio. Before we get started, make sure to obtain the following items.

LAPACK
In order to use the library you must either get the source code, or the precompiled binaries for your system. To save some time just get the DLL and LIB. If you're programming on a Win32 platform, click ▷here.

BLAS
Since LAPACK makes use of BLAS internally, you will also need the BLAS binaries. The file you just downloaded contains both.

Documentation
LAPACK (and BLAS for that matter) functionality is included in a project by declaring the corresponding Fortran function header. In order to do that, you need to know what the function you're looking for looks like in LAPACK. Assuming your program provides single precision data, consider the following ▷overview. The procedure is simple: select a function, follow the link and try to understand the description.

Example
Consider the following function. As you can extract from the corresponding ▷description, the SGESV routine computes the solution to \(Ax=b\) for an \(n\) by \(n\) matrix \(A\), using an LU decomposition. All you need to do now is declare the Fortran header in a C/C++ conform manner, embedded in an extern "C" block. Don't forget to link the downloaded libraries and to put the DLLs in the right directory.

:: CODE BLOCK ::
extern "C" {
    //SUBROUTINE SGESV(N, NRHS, A, LDA, IPIV, B, LDB, INFO)

    void sgesv_(
         int* n,
         int* nrhs,
         float* a,
         int* lda,
         int* ipiv,
         float* b,
         int* ldb,
         int* info);
}
Wrapper for SGESV([...])

That's it! The function sgesv_([...]) will now call the respective LAPACK function. The arguments may look a little cryptic at first, but with the documentation at hand it shouldn't be too much of a problem. Despite being fully functional, the current implementation is clearly not very user friendly. At the expense of a slightly higher function call overhead, you should consider writing a wrapper function for easier and more intuitive access.

:: CODE BLOCK ::
/** On exit A will contain the factors L and U,
    b will contain x.
*/
DC_VALUE MA_API ma_sgesv(GE_Matrix& _A, GE_Matrix& _b){
   int t_info;
   int* t_p = new int[_A.m_cols];   // pivot indices (IPIV)

   sgesv_(
       &_A.m_cols,      // N    - order of the square matrix A
       &_b.m_cols,      // NRHS - number of right hand sides (columns of b)
       _A.m_data,       // A    - coefficients, overwritten with L and U
       &_A.m_rows,      // LDA  - leading dimension of A
       t_p,             // IPIV - pivot indices
       _b.m_data,       // B    - right hand side, overwritten with x
       &_b.m_rows,      // LDB  - leading dimension of b
       &t_info);        // INFO - 0 on success

   delete[] t_p;
   if(t_info == 0) return DC_OK;
   return DC_ERROR;
}
Wrapper for SGESV([...])

The ▷Darc Library uses the above wrapper structure to access LAPACK functions.
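
For completeness, here is a tiny direct call of the declared routine with a 2 by 2 system (the values are chosen purely for illustration; remember that LAPACK expects column-major storage):

:: CODE BLOCK ::
#include <cstdio>

//Assumes the extern "C" declaration of sgesv_ shown above
int main()
{
    //Solve Ax = b with A = [[2, 1], [1, 3]] and b = (3, 5)
    int   n = 2, nrhs = 1, lda = 2, ldb = 2, info;
    int   ipiv[2];
    float A[4] = { 2.0f, 1.0f,     //first column of A
                   1.0f, 3.0f };   //second column of A
    float b[2] = { 3.0f, 5.0f };

    sgesv_(&n, &nrhs, A, &lda, ipiv, b, &ldb, &info);

    if(info == 0)
        printf("x = (%f, %f)\n", b[0], b[1]);   //expected (0.8, 1.4)
    return 0;
}
Calling SGESV([...]) directly (sketch)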

Thursday, February 24, 2011

1905

The world is ruled by the allied ▷Empire, a status that had been achieved by destroying countless smaller nations, causing resistance inside and outside the Empire's territory. The growing number of guerrilla enemies requires new tactics and weapons in order to sustain predominance. With advanced technology and a blind desire for ultimate power, the Alliance cannot resist stepping onto a dangerous path. Labeled as a secret research program to increase national security, the Empire puts an underground research ▷Lab into operation. Its purpose is to create an intelligent biological weapon: a ▷Virus that is able to identify and destroy any enemy, regardless of his hideout or equipment. Soon the world would tremble facing the efficiency of their latest masterpiece.


Next

▶Year 1907
Read the next part of the background story.

Wednesday, February 23, 2011

R.E.E.V.A.

Within these posts you can read more about the concept, story and characters, and access a lot of other material concerning the R.E.E.V.A. project. If everything goes as planned, the whole idea will be turned into a computer game within the next 20 years.

R.E.E.V.A. is intended to be an Open World game, although the world is rather closed, as you will find out. The genre is best described as a 3rd-person (mostly) survival-action role-playing game. All game mechanics and technical details are as realistic and scientifically plausible as possible, without limiting the fictional character.

The player takes on the role of a young biologist who has been tricked into working for a corrupt and militant organization that is desperately fighting a mysterious disease. Withholding the terrifying truth about the background of the outbreak, the organization asks the player to help fight the mere symptoms of the problem. Out of curiosity he agrees to join the group just as hell breaks loose. A not entirely coincidental incident...


Next

▶Story
Read the first part of the background story.

▶Art
Gives you a list of all R.E.E.V.A. related art.

Friday, February 18, 2011

CG Particle System

The CG Particle application was a project I worked on in my second semester, based on GLUT and OpenGL. It's a simple particle system using basic effects and techniques like alpha blending, scene picking, materials and lighting, as well as a primitive collision detection that causes the particles to interact with the ground.
There are four types of particle generators. The first two, "Fog" and "Torch", are automatic, which means that particles are produced at a constant rate. The other two require user interaction: pressing the respective key generates one particle or a predefined sequence of particles which disappear after a while.

You can download the runnable ▷version (for Windows) or the Microsoft Visual Studio ▷solution.

Thursday, February 17, 2011

B-Splines

The interpolation between a set of points is a common task in computer graphics. The ▷Darc Library, for example, uses interpolation to compute smooth camera trajectories. One very elegant way to do this is to use B-Splines. The following section offers a short mathematical introduction and some code snippets to get started.

Mathematics

Without going into too much detail, let's agree on the following definitions. Every B-Spline \(c(t)\) requires an \(m\)-sized knot vector:

\begin{align} [t_0 \leq ... \leq t_{m-1}] \end{align}

However the B-Spline is only defined on a shorter subset of this \(m\)-sized vector. That subset contains the 'internal knots'. For a B-Spline with degree \(n\) and \(p\) control points they are:

\begin{align} [t_n \leq ... \leq t_p] \end{align}

This equals the B-Spline domain, which means \(t\) must lie within the interval spanned by \(t_n\) and \(t_p\); otherwise the function is undefined. The relation between the number of knots \(m\), control points \(p\) and the degree \(n\) is:

\begin{align} p + n + 1 = m \end{align}

Usually \(c(t)\) is a smooth function if set up properly. However, sometimes it is desirable to have sharp corners. This can be implemented by assigning identical values to adjacent knots. Each duplicated knot reduces the smoothness \(C^k\) by one. That means if \(n\) adjacent knots share the same value \(x\), the spline gets a corner at \(t=x\). Duplicating the first and last \(n+1\) knots will cause endpoint interpolation. Note that the first and last knot are mathematically irrelevant as far as the B-Spline domain is concerned; however, most implementations need those extra knots to evaluate base function values more easily without any case differentiation. For the B-Spline domain this is equivalent to duplicating \(n\) knots on the shortened vector:

\begin{align} [t_1 \leq ... \leq t_{m-2}] \end{align}

Consequently \(t_0\) and \(t_{m-1}\) remain available both in the theoretical definition and in the actual implementation.
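
As a concrete example (numbers chosen purely for illustration): a B-Spline of degree \(n=3\) with \(p=5\) control points needs \(m=9\) knots. Duplicating the first and last \(n+1=4\) knots, e.g.,

\begin{align} [0, 0, 0, 0, \tfrac{1}{2}, 1, 1, 1, 1] \end{align}

yields endpoint interpolation with the domain \([t_3, t_5] = [0, 1]\).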

Base Function

To evaluate your B-Spline function \(c(t)\) at \(t\) you need to multiply each control point with a base function \(b_{i,n}\) and sum it all up. In particular the evaluation looks like this:

\begin{align} c(t) = \sum_{i=0}^{p-1} p_i \cdot b_{i,n}(t) \end{align}

As you can see in the applet, each interval (within the internal knots) is spanned by \(n+1\) base functions. Thus you can accelerate the above equation by selecting the contributing base functions in a pre-processing step. The base functions are defined recursively as follows:

\begin{align} b_{i,n}(t) = \frac{t-t_i}{t_{i+n}-t_i}b_{i,n-1}(t) + \frac{t_{i+n+1}-t}{t_{i+n+1}-t_{i+1}}b_{i+1,n-1}(t) \end{align} \begin{align} b_{i,0}(t)=\begin{cases} 1 & t\in[t_i, t_{i+1}]\\ 0 & else \\ \end{cases} \end{align}

When \(n\) reaches zero you simply return whether the value \(t\) lies within the interval spanned by the base function at \(i\). Here \(n\) is the initial degree of \(c(t)\). You might want to take a look at the applet to get a better understanding.
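
Here is a minimal sketch of the recursive base function and the evaluation of \(c(t)\), assuming std::vector containers for the knots and (one-dimensional) control points; the names are illustrative only:

:: CODE BLOCK ::
#include <vector>

//Recursive base function b_{i,n}(t) over the given knot vector
double baseFunction(int i, int n, double t, const std::vector<double>& knots)
{
    if(n == 0)  //half-open interval [t_i, t_{i+1})
        return (t >= knots[i] && t < knots[i+1]) ? 1.0 : 0.0;

    double left = 0.0, right = 0.0;
    double d1 = knots[i+n]   - knots[i];
    double d2 = knots[i+n+1] - knots[i+1];

    //Skip terms with zero denominator (duplicated knots)
    if(d1 > 0.0) left  = (t - knots[i])     / d1 * baseFunction(i,   n-1, t, knots);
    if(d2 > 0.0) right = (knots[i+n+1] - t) / d2 * baseFunction(i+1, n-1, t, knots);

    return left + right;
}

//c(t), valid for t in [knots[n], knots[p]] with p = points.size()
double evaluate(double t, int n,
                const std::vector<double>& points,
                const std::vector<double>& knots)
{
    double c = 0.0;
    for(int i = 0; i < (int)points.size(); ++i)
        c += points[i] * baseFunction(i, n, t, knots);
    return c;
}
Base function and evaluation (sketch)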

Demo

This interactive Java applet demonstrates non-uniform B-Splines of arbitrary degree. The corresponding knot vector is generated automatically. You can also download this applet as a ▷jar file for your machine. The file can be executed as a standard Java application and doesn't need to be started from a browser.



Ridge Combo

The implementation of the Ridge Combo plugin was the core task of my BSc thesis titled "Development of a Ridge based Distance Measure on Scalar Fields". Its purpose was to load, process and visualize 3D scalar data by generating ridge surfaces and then applying the developed distance measure. The following paragraphs offer a short introduction to the topic along with selected algorithmic details. Additionally you can download the complete ▷thesis (currently available in German only) and/or the actual ▷plugin for your ParaView installation (requires version 3.4.1). Follow the links to get more information.

The Theory

Here is the theoretical background for ridge surfaces in a nutshell. Let's assume the given 3-dimensional scalar field \(g\) is a sufficiently smooth (twice differentiable) function. That means we can take a look at the second derivatives (Hessian \(H\)) to determine the curvatures in each direction at a specific point. A point \(x\) belongs to the ridge surface \(R\) if the directional derivative of \(g\) in the direction of the eigenvector \(v_1\) belonging to the smallest eigenvalue of \(H\) equals zero. Additionally, this eigenvalue \(\lambda_1\) has to be negative to assure maximum concavity.

\(v_{1}\cdot\nabla g(x)=0\) (1)
\(\lambda_{1}<0\) (2)

The ridge surface is equivalent to the zero-level isocontour defined by (1) and can be generated by simply applying a marching cubes algorithm to the field, followed by filtering vertices according to (2). In order to obtain better results in the actual implementation, (1) should be replaced by the following equation which produces the same field but ensures the continuity of eigenvectors.

\(\det(H^{0}\nabla g\,|\,H^{1}\nabla g\,|\,H^{2}\nabla g)=0\)(3)

Metaballs

The following normal distribution function might help you generate random scalar fields. Here \((m_X,m_Y,m_Z)\) are the center coordinates of a smooth Metaball (one without a singularity at its center). Just place an arbitrary number of Metaballs in your simulation cube and assign to each voxel the sum of all Metaball distance values.

\(m(X,Y,Z)=\frac{1}{(\sqrt{2\pi}\sigma)^{3}} e^{-\frac{(X-m_{X})^{2}+(Y-m_{Y})^{2}+(Z-m_{Z})^{2}}{2\sigma^{2}}}\)(4)
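
A rough sketch of how such a field could be filled (the grid resolution, \(\sigma\) and the container layout are my own choices for illustration, not part of the plugin):

:: CODE BLOCK ::
#include <cmath>
#include <vector>

struct Metaball { double x, y, z; };

//Fills a dim x dim x dim voxel grid with the sum of all Metaball
//contributions according to equation (4)
std::vector<double> buildField(const std::vector<Metaball>& balls,
                               int dim, double sigma)
{
    const double pi = 3.14159265358979323846;
    const double norm = 1.0 / std::pow(std::sqrt(2.0*pi)*sigma, 3.0);

    std::vector<double> field(dim*dim*dim, 0.0);
    for(int z = 0; z < dim; ++z)
      for(int y = 0; y < dim; ++y)
        for(int x = 0; x < dim; ++x){
            double sum = 0.0;
            for(const Metaball& m : balls){
                double dx = x - m.x, dy = y - m.y, dz = z - m.z;
                sum += norm * std::exp(-(dx*dx + dy*dy + dz*dz)
                                        / (2.0*sigma*sigma));
            }
            field[(z*dim + y)*dim + x] = sum;
        }
    return field;
}
Metaball field (sketch)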

These figures show several Metaball-based scalar fields represented by a set of evenly spaced isocontours. Note that this representation has nothing to do with ridge surfaces yet.


Basic Functions

The following distance functions are needed to define the ridge metric. The first one is the quadratic distance between two points.

\(d_{E}^{2}(p_{1},p_{2})=(p_{1}-p_{2})^{2}\)(5)

Similarly the quadratic difference between two weights (\(\lambda_1\)).

\(d_{L}^{2}(l_{1},l_{2})=(l_{1}-l_{2})^{2}\)(6)

And finally the relevance function of two weights: if the arithmetic mean of two weights is close to zero, the corresponding points should not influence the ridge metric because they are considered irrelevant.

\(m_{L}(l_{1},l_{2})=-\frac{1}{2}(l_{1}+l_{2})\)(7)

Error Function

To measure the error between a point (with its weight) and a target ridge surface, the following expression is minimized over the target surface. The idea is similar to the Hausdorff distance, with the distance now being a composition of (5), (6) and (7) (in the ridge metric sense).

\(d_{e}(p_{1},l_{1},R_{2})=\min_{(p_{2},l_{2})\in R_{2}}\left[(d_{E}^{2}(p_{1},p_{2})+d_{L}^{2}(l_{1},l_{2}))\cdot m_{L}(l_{1},l_{2})\right]\)(8)

One Sided Ridge Metric

Equation (8) can now be used to calculate the one sided distance between two ridge surfaces. This is done by applying it to all points of \(R_1\).

\(d_{r}(R_{1}, R_{2}) = \frac{1}{M_{1}}\sum\limits_{(p_{1},l_{1})\in R_{1}} d_{e}(p_{1}, l_{1}, R_{2})\)(9)

Finally the result needs to be divided by the sum of the relevances of all minimizing point pairs between the two surfaces. In other words, whenever (8) finds a point pair that minimizes the distance, its relevance is added to \(M_1\).

\(M_{1} = \sum\limits_{l_{1}\in R_{1}} m_{L}(l_{1}, l_{2}), \hphantom{x} l_{2} \in R_{2}, \hphantom{x} d_{e}(p_{1}, l_{1}, R_{2}) \rightarrow min\)(10)
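
As an illustration, here is a brute force sketch of the one sided distance (8)-(10), assuming each ridge surface is given as a flat list of points with their weights \(\lambda_1\) (the names and the quadratic pairing loop are my own, not taken from the plugin):

:: CODE BLOCK ::
#include <limits>
#include <vector>

struct RidgePoint { double x, y, z; double l; };   //position and weight

double oneSidedRidgeDistance(const std::vector<RidgePoint>& R1,
                             const std::vector<RidgePoint>& R2)
{
    double sum = 0.0;   //accumulated d_e over all points of R1, eq. (9)
    double M1  = 0.0;   //accumulated relevance of the minimizing pairs, eq. (10)

    for(const RidgePoint& p1 : R1){
        double best    = std::numeric_limits<double>::max();
        double bestRel = 0.0;

        for(const RidgePoint& p2 : R2){
            double dx = p1.x - p2.x, dy = p1.y - p2.y, dz = p1.z - p2.z;
            double dE2 = dx*dx + dy*dy + dz*dz;        //eq. (5)
            double dL2 = (p1.l - p2.l)*(p1.l - p2.l);  //eq. (6)
            double mL  = -0.5*(p1.l + p2.l);           //eq. (7)
            double d   = (dE2 + dL2)*mL;               //term inside the min of eq. (8)

            if(d < best){ best = d; bestRel = mL; }
        }
        sum += best;
        M1  += bestRel;
    }
    return (M1 != 0.0) ? sum/M1 : 0.0;                 //eq. (9)
}
One sided ridge distance (sketch)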

Final Ridge Metric

Usually the value of the one sided distance does not stay the same if you swap the surfaces. In order to achieve symmetry, which is necessary for a metric space, equation (9) must be computed twice.

\(d_{R}(R_{1}, R_{2}) =\)
\(\frac{1}{M_{1} + M_{2}}\left(\sum\limits_{(p_{1},l_{1})\in R_{1}} d_{e}(p_{1}, l_{1}, R_{2}) + \sum\limits_{(p_{2},l_{2})\in R_{2}} d_{e}(p_{2}, l_{2}, R_{1})\right)\)(11)

The next section shows some results computed with ParaView using the Ridge Combo plugin.

Ridge Surfaces

To get an idea of the behavior of ridge surfaces please take a look at the following examples first. You can see how the surfaces connect the maxima generated by the previously mentioned Metaball based scalar field.

These are the same surfaces with the result of the ridge metric. Each line corresponds to a minimizing point pair in (8).

This approach is very insensitive to low-weighted background noise. While marching cubes may still produce surface parts at such points, they make very little difference to the actual metric, since each point pair is weighted by its relevance in (8) and the sum is normalized by the accumulated relevance in (9). Also note that (8) needs a preprocessing step in which all weights are rescaled to a range that matches the extent in Euclidean space. The plugin does not provide a procedure to estimate a good scaling, so this factor has to be set manually.

The Art Gallery

Did ya guys know that it all started with the idea of having a place to dump all my drawings? Here's a ▷shortcut to the artsy stuff. Have fun!