Thursday, July 26, 2012

Qt Interface for Modifying Parameters in Real-Time

TL;DR: Made some cool enhancements to the open-source RTPS library; check out the video.

Improved rendering and Qt interface for interactivity from Andrew Young on Vimeo.


I bring some exciting news about my latest developments in the Real-time Particle System (RTPS) library. I created an interface which allows users to modify the system and graphics parameters in real time. I have also improved the screen-space rendering implementation. All the rendering code and shaders are now OpenGL 3.3 compliant.

I have decided to finish graduate school with an M.S. rather than pursuing a PhD. In order to finish my M.S. thesis, I wanted to add an interface to the RTPS library which allows users to modify the parameters in real time. I chose the Qt library to accomplish the task. Qt works across many platforms, which makes it an excellent choice, and it has a large community and extensive documentation. Also, Qt's license allows for inclusion in open-source projects.

Beginning development in Qt wasn't too difficult. Qt has several example applications which use OpenGL widgets, and I used these examples as a starting point. However, my venture into Qt was not without its troubles. One problem I soon ran into was attempting to use GLEW with Qt. I found several posts about the problem, and most users suggested the solution was to use QGLWidget instead of QOpenGL. That was indeed part of the problem. The other problem was an issue with Qt 4.8. Apparently, when initializing the QGLWidget, but before calling glewInit(), you must call makeCurrent(). My understanding of the problem is limited. However, I believe it stems from Qt 4.8's introduction of multi-threaded OpenGL. Without calling makeCurrent(), the GL context never becomes active.
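For the curious, the relevant part of my initialization ended up looking roughly like this. This is a minimal sketch, assuming a QGLWidget subclass and GLEW; the class name is illustrative, not the actual RTPS code.

//--------- hypothetical initializeGL ----------
#include <GL/glew.h>   //GLEW must be included before any GL headers Qt pulls in
#include <QGLWidget>

class SimulationGLWidget : public QGLWidget
{
protected:
    void initializeGL()
    {
        //With Qt 4.8 the widget's context is not necessarily current here,
        //so make it current before touching any GL entry points.
        makeCurrent();
        if (glewInit() != GLEW_OK)
            qFatal("glewInit() failed");
        //... create shaders, VBOs, and the rest of the GL state here.
    }
};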

Many of the parameters in the RTPS library are floating point numbers, so I wanted a slider which would return floating point values. To my surprise, Qt has no native support for a float slider bar. All the solutions I found on the internet suggested simply dividing the result by some scaling factor. The solution made sense. Of course, the approach is rather inconvenient because I can't attach multiple slider signals to a single slot unless the scaling factor is the same for each slider. I decided the best approach was to inherit from the QSlider class to create my own FloatSlider class. Overriding the Qt class was far easier than expected.

//--------- floatslider.h ----------
#ifndef FLOATSLIDER_H
#define FLOATSLIDER_H
#include <QSlider>

class FloatSlider : public QSlider
{
    Q_OBJECT
public:
    FloatSlider(Qt::Orientation orientation, QWidget* parent = 0);
    void setScale(float scale);
public slots:
    void sliderChange(SliderChange change);
    void setValue(float value);
signals:
    void valueChanged(float value);
private:
    float scale;
};
#endif


//---------- floatslider.cpp ------------
#include "floatslider.h"

FloatSlider::FloatSlider(Qt::Orientation orientation, QWidget* parent)
    : QSlider(orientation, parent)
{
    scale = 1.0f;
}

//Set the value in float units; it is converted to the underlying integer range.
void FloatSlider::setValue(float value)
{
    QSlider::setValue(value/scale);
}

void FloatSlider::setScale(float scale)
{
    this->scale = scale;
}

//Emit the scaled float value whenever the underlying integer value changes.
void FloatSlider::sliderChange(SliderChange change)
{
    if(change == QAbstractSlider::SliderValueChange)
    {
        emit valueChanged(value()*scale);
    }
    //Call the base implementation for completeness. Also, one could send
    //integer and float signals if desired.
    QSlider::sliderChange(change);
}



The class above emits a float signal upon sliderChange. You can set a custom scale for the float slider, and the underlying integer value will be multiplied by that scaling factor before it is emitted. For example, the following code will return a number between 0.01 and 1.00 in increments of 0.01.

xSPHSlider = new FloatSlider(orientation,this);
xSPHSlider->setObjectName("xsph_factor");
xSPHSlider->setTickPosition(QSlider::TicksBelow);
xSPHSlider->setTickInterval(10);
xSPHSlider->setSingleStep(1);
xSPHSlider->setRange(1,100);
xSPHSlider->setValue(15);
xSPHSlider->setScale(0.01);

connect(xSPHSlider,SIGNAL(valueChanged(float)),this,SLOT(triggerValue(float)));
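On the receiving side, several FloatSliders can share one slot by keying off the sender's objectName. The slot below is a hypothetical sketch; ParameterWidget, triggerValue, and parameterChanged are illustrative names and not part of the RTPS library.

//--------- hypothetical receiving slot ----------
#include <QWidget>

class ParameterWidget : public QWidget
{
    Q_OBJECT
public slots:
    void triggerValue(float value)
    {
        //sender()->objectName() identifies which slider fired ("xsph_factor" above),
        //so many FloatSliders can be connected to this one slot.
        emit parameterChanged(sender()->objectName(), value);
    }
signals:
    void parameterChanged(const QString& name, float value);
};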


Sunday, March 18, 2012

Improved rendering support for rigid-body fluid interaction

Hello everyone,
I have added a lot of new features to the fluid/rigid-body simulator. Here is a bulleted list:

  • Added mesh importing support via the AssImp library.
  • Fixed lots of bugs in mesh voxelization. Now you should be able to voxelize any arbitrary closed mesh.
  • Added parameter files so you can change many system parameters without the need to recompile.
  • Improved rendering of rigid-bodies.
    • I implemented some basic instanced rendering. I plan to reintroduce the flocking simulations implemented by Myrna Merced. Then I can take advantage of instanced mesh rendering to render birds, fish, etc.
  • I have added configurable point source gravity for interesting effects.
I have completed many of the desired features for the stand-alone library. I now plan to dedicate much of my time to integrating the library into the Bullet physics engine. Hopefully I can get most of the work integrated within a month. Then I will work on upgrading Blender's Bullet interface to include all these new features. In parallel to those two tasks, I would like to also improve the fluid surface extraction.

Monday, January 16, 2012

Improved interaction between rigidbodies and fluid

I fixed a lot of bugs in my simulation framework. Here is a video showing my progress.



Bugs:

  • I wasn't properly scaling the positions for the rigidbody simulation to match the fluid simulation.
  • Some of my quaternion calculations were not correct when updating the rotation of the rigid body (a sketch of a corrected update appears after this list).
  • The coefficients for the interaction forces weren't calculated correctly. I found a detailed formulation for the dampening coefficient in [1].
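For reference, the kind of quaternion update I had to get right looks roughly like the sketch below. This is an illustrative, self-contained C++ version, not the library's actual OpenCL code: it integrates the orientation quaternion from the angular velocity over a timestep using dq/dt = ½ ω q and then renormalizes.

//--------- hypothetical quaternion orientation update ----------
#include <cmath>

struct Quat { float w, x, y, z; };

//Hamilton product a*b.
static Quat mul(const Quat& a, const Quat& b)
{
    Quat r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}

//Integrate orientation q by angular velocity (wx,wy,wz) over timestep dt.
Quat integrateOrientation(Quat q, float wx, float wy, float wz, float dt)
{
    Quat omega = { 0.0f, wx, wy, wz };   //pure quaternion built from the angular velocity
    Quat dq = mul(omega, q);             //dq/dt = 0.5 * omega * q
    q.w += 0.5f * dt * dq.w;
    q.x += 0.5f * dt * dq.x;
    q.y += 0.5f * dt * dq.y;
    q.z += 0.5f * dt * dq.z;
    float len = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    q.w /= len; q.x /= len; q.y /= len; q.z /= len;   //renormalize to a unit quaternion
    return q;
}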
Features:

  • I refactored the rendering code so you can switch rendering techniques on the fly.
  • The test program has run-time configurable mass and dimensions for rigid bodies being added.
  • Velocity visualization can be turned on/off during run-time.
My plans now are to push my changes into the Blender version that Ian already created. Once I have done that, I will create some examples and work on writing up my Master's thesis. After that, I will return to working on integrating the library into Bullet. Hopefully, I will get a chance to work on some visualization as well.

[1] Cleary, P. W., & Prakash, M. (2004). Discrete-element modelling and smoothed particle hydrodynamics: potential in the environmental sciences. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 362(1822), 2003-2030. The Royal Society. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/15306427

Saturday, January 14, 2012

Geometry Shaders for visualizing velocity.

A couple months ago, I posted a blog entry with my updated rigidbody/fluid interaction. I mentioned I would discuss how I draw the velocity vectors efficiently. In the simulation, all the information for each particle is held on the GPU. Therefore, to render lines with the standard approach, I would need to transfer two arrays from the GPU to the host: the positions and the vectors. Then I would have to loop through them to create a third array holding each position plus its vector, and finally draw a line for each pair of vertices.

Here is the video.

Geometry shaders (GS) give us the ability to take in one primitive type (points, lines, triangles, ...) and output a different primitive type. That makes GSs perfect for our simulation. We already have the position and the vector; we just need to calculate the second vertex from this information and then draw a line between the two points. Therefore, we can define the vertices of the line for each vector as follows:

$v_{i,start}=pos_{i}$
$v_{i,end}=pos_{i}+vector_{i}$

I then define the color for the vector as:

$col_{i}=\frac{|vector_{i}|}{||vector_{i}||}$

I chose this color representation primarily for its simplicity. Essentially, it encodes direction into the color of each vector: red represents the x direction, green represents the y direction, and blue represents the z direction. The absolute value is necessary because negative colors don't exist (OpenGL simply clamps any negative color component to zero).

Now onto the code.

C code: 

        //Tell OpenGL the vector VBO is the color array; the vector reaches the
        //shaders through gl_Color.
        glBindBuffer(GL_ARRAY_BUFFER, vecVBO);
        glColorPointer(4, GL_FLOAT, 0, 0);

        //The position VBO is the vertex array.
        glBindBuffer(GL_ARRAY_BUFFER, posVBO);
        glVertexPointer(4, GL_FLOAT, 0, 0);

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);

        //Draw one point per particle; the geometry shader expands each point into a line.
        glUseProgram(m_shaderLibrary.shaders["vectorShader"].getProgram());
        glUniform1f(glGetUniformLocation(m_shaderLibrary.shaders["vectorShader"].getProgram(), "scale"), scale);
        glDrawArrays(GL_POINTS, 0, num);
        glUseProgram(0);

        glDisableClientState(GL_COLOR_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);

The variable m_shaderLibrary.shaders["vectorShader"].getProgram() is specific to my library. If you want to use this in your own code, you should pass in the program (a GLuint), which is the shader program compiled and linked from the vector shaders provided below.
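If you need to build that program yourself, a minimal sketch is shown below. This is not the library's ShaderLibrary code; it assumes a context where GL_GEOMETRY_SHADER is available (OpenGL 3.2+) and that vertSrc, geomSrc, and fragSrc hold the three shader sources that follow.

//--------- hypothetical shader program setup ----------
#include <cstdio>
#include <GL/glew.h>

GLuint compileShader(GLenum type, const char* src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, 0);
    glCompileShader(shader);
    //Report compile errors instead of failing silently.
    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok)
    {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), 0, log);
        fprintf(stderr, "shader compile error: %s\n", log);
    }
    return shader;
}

GLuint buildVectorProgram(const char* vertSrc, const char* geomSrc, const char* fragSrc)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, compileShader(GL_VERTEX_SHADER,   vertSrc));
    glAttachShader(program, compileShader(GL_GEOMETRY_SHADER, geomSrc));
    glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, fragSrc));
    glLinkProgram(program);
    return program;
}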
Vertex Shader: 

#version 120
varying vec4 vector;
varying out vec4 color;

void main() 
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    //We must project the vector as well.
    vector = gl_ModelViewProjectionMatrix * gl_Color;
    //normalize the vector and take the absolute value.
    color = vec4(abs(normalize(gl_Color.rgb)),1.0);
}


Geometry Shader: 

#version 150
//#geometry shader

layout(points) in;
layout(line_strip, max_vertices=2) out;
in vec4 vector[1];
in vec4 color[1];
uniform float scale;

void main() 
{
    //First we emit the position vertex with the color calculated
    //in the vertex shader.
    vec4 p = gl_in[0].gl_Position;
    gl_Position = p; 
    gl_FrontColor = color[0];
    EmitVertex();

    //Second we emit the vertex which is the xyz position plus a
    //scaled version of the vector we want to visualize. Also, the
    //color is the same as calculated in the vertex shader.
    gl_Position = vec4(p.rgb+(scale*vector[0].rgb),p.a);
    gl_FrontColor = color[0];
    EmitVertex();
    EndPrimitive();
}

Fragment Shader: 

//simple pass through fragment shader.
void main(void) {
    gl_FragColor = gl_Color;
}

Sunday, November 27, 2011

Systems now have interaction

Before I could have interacting rigid body/fluid systems, I first had to create a rigid body class where the rigid bodies are composed of particles. To have a rigid body system, I needed to discretize the body into a set of particles. To do this I utilized a method from [2], which uses OpenGL to create a 3D texture representing voxels inside and outside the rigid body. From the resulting 3D texture, I can then generate a particle field. For each particle contained in the rigid body, we need its position, velocity, and force. Also, for each rigid body we should track the total force and torque about the center of mass, the velocity of the center of mass, the angular velocity, and the position of the center of mass.
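As a rough illustration of that second step, suppose the voxelization has produced a flag array marking which cells of a regular grid lie inside the mesh; seeding one particle per interior voxel then looks something like the sketch below. The names and layout are illustrative, not the library's actual code.

//--------- hypothetical particle seeding from a voxel grid ----------
#include <vector>

struct float3 { float x, y, z; };

//inside[i + nx*(j + ny*k)] is nonzero when voxel (i,j,k) lies inside the mesh.
std::vector<float3> seedParticles(const std::vector<unsigned char>& inside,
                                  int nx, int ny, int nz,
                                  float3 gridOrigin, float voxelSize)
{
    std::vector<float3> particles;
    for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i)
            {
                if (!inside[i + nx*(j + ny*k)]) continue;
                //Place one particle at the center of each interior voxel.
                float3 p = { gridOrigin.x + (i + 0.5f) * voxelSize,
                             gridOrigin.y + (j + 0.5f) * voxelSize,
                             gridOrigin.z + (k + 0.5f) * voxelSize };
                particles.push_back(p);
            }
    return particles;
}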

To calculate the total linear force on a single rigid body, simply sum all the forces of each particle that belongs to that rigid body. [1]

$F_{total} = \displaystyle \sum_{i=0}^N f_i$

And the total torque can be calculated by summing, for each particle, the cross product of the particle's position relative to the center of mass with its linear force. [1]

$F_{torque} = \displaystyle \sum_{i=0}^N ((pos_i-pos_{com}) \times f_i)$
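As a rough sketch, summing these two equations for one rigid body amounts to the following. The names and data layout are illustrative, not the library's actual buffers; the real computation runs on the GPU.

//--------- hypothetical naive force/torque accumulation ----------
#include <vector>

struct float3 { float x, y, z; };

static float3 add(float3 a, float3 b)   { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
static float3 sub(float3 a, float3 b)   { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static float3 cross(float3 a, float3 b) { return { a.y*b.z - a.z*b.y,
                                                   a.z*b.x - a.x*b.z,
                                                   a.x*b.y - a.y*b.x }; }

//pos/force hold the particles of one rigid body; com is its center of mass.
void sumBodyForces(const std::vector<float3>& pos, const std::vector<float3>& force,
                   float3 com, float3& totalForce, float3& totalTorque)
{
    totalForce  = { 0.0f, 0.0f, 0.0f };
    totalTorque = { 0.0f, 0.0f, 0.0f };
    for (size_t i = 0; i < pos.size(); ++i)
    {
        totalForce  = add(totalForce, force[i]);
        //Torque contribution: (pos_i - pos_com) x f_i
        totalTorque = add(totalTorque, cross(sub(pos[i], com), force[i]));
    }
}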

Currently I don't have any constraints implemented. I also still need to implement a segmented scan to improve the calculation of force totals on each rigid body; right now I naively loop and sum over the particles of each rigid body (essentially the sketch above), which is inefficient. I also need to create a renderer which will draw the rigid body meshes at the appropriate locations.

Here is a video of my progress so far.


Rigid-body and Fluid interaction with velocity vectors from Andrew Young on Vimeo.

In the video above, I render the velocity vectors of each particle. This helps to visualize the flow of the fluid. I plan to create a separate blog to discuss my implementation of the rendering code for the velocity vectors.

The interaction between fluid and rigid-body is modeled by a spring/dampening system. The calculations are based on the following two equations as described in [1].

$f_s = -k(d-\left|r_{ij}\right|)\frac{r_{ij}}{\left|r_{ij}\right|}$

$f_d = \eta(v_{j}-v_{i})$

Where k is the spring constant, $\eta$ is the dampening coefficient, $r_{ij}$ is the vector from particle i to particle j, d is the particle diameter (the separation at which the spring force vanishes), and $v_i$ and $v_j$ are the particle velocities. I need to experiment with these coefficients to get more accurate results.
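A small sketch of how these two forces combine for a single particle pair, using the symbols above, is shown below. This is illustrative C++, not the library's OpenCL kernel.

//--------- hypothetical spring/dampening interaction force ----------
#include <cmath>

struct float3 { float x, y, z; };

//Spring plus dampening force exerted on particle i by particle j, following
//f_s and f_d above. d is the particle diameter, k the spring constant, and
//eta the dampening coefficient.
float3 interactionForce(float3 posI, float3 posJ, float3 velI, float3 velJ,
                        float d, float k, float eta)
{
    float3 rij = { posJ.x - posI.x, posJ.y - posI.y, posJ.z - posI.z };
    float dist = std::sqrt(rij.x*rij.x + rij.y*rij.y + rij.z*rij.z);
    float3 f = { 0.0f, 0.0f, 0.0f };
    if (dist >= d || dist <= 0.0f) return f;   //only overlapping particles interact

    //f_s = -k (d - |r_ij|) r_ij / |r_ij|
    float springMag = -k * (d - dist) / dist;
    //f_d = eta (v_j - v_i)
    f.x = springMag * rij.x + eta * (velJ.x - velI.x);
    f.y = springMag * rij.y + eta * (velJ.y - velI.y);
    f.z = springMag * rij.z + eta * (velJ.z - velI.z);
    return f;
}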

I also devoted a lot of time to refactoring the code around the System class. The System class was abstract and had very little functionality. When creating the rigid body class, I noticed I was just copying and pasting a lot of code, which was a perfect reason to push things up to the parent class.

I moved a lot of generic functionality, as well as several buffers (position, velocity, force) common to all systems, up to the System class. I then added some rudimentary code to allow for interaction between systems. Currently this only works between SPH and rigid body systems. Eventually, I hope to have a better interaction framework where you can define new rules of interaction quickly and easily. Then I could add interaction between flocks and SPH, which could provide some very interesting simulations.

Todo:

  1. Verify spacing of the rigid-body system is correct.
  2. Experiment with coefficients for interaction.
  3. Explore the alternative formulations for system interaction.
  4. Continue refactoring the code.
  5. Create a class to better handle particle shapes.



References

[1] Harada, T., Tanaka, M., Koshizuka, S., & Kawaguchi, Y. (2007). Real-time Coupling of Fluids and Rigid Bodies. Proc of the APCOM, 1-13. Retrieved from http://individuals.iii.u-tokyo.ac.jp/~yoichiro/report/report-pdf/harada/international/2007apcom.pdf

[2] Fang, S., & Chen, H. (2000). Hardware accelerated voxelization. Computers & Graphics, 24(3), 433-442. Elsevier. Retrieved from http://www.sciencedirect.com/science/article/pii/S0097849300000388

Tuesday, October 25, 2011

Fluid and Rigid Body Interaction

I have finally settled on a topic for my master's thesis. I will implement some techniques discussed in GPU Gems 3 and "Two-way rigid-fluid interaction Heterogeneous Architectures" into Ian's EnjaParticles library to allow for rigid-fluid interaction. I will now be the primary maintainer of the library, and thus I have forked it to my GitHub account.

The original goal of Ian's project was to integrate the fluid library into Blender's Game Engine (BGE). He has a working build of Blender with the fluids integrated into the BGE. The fluid simulation works quite nicely. However, it currently can't handle two-way coupling between rigid body and fluid systems.

My project will involve upgrading the EnjaParticles library to include the coupling physics. Also, I plan to integrate this library into the Bullet physics library. If I integrate it into Bullet, I can then more naturally integrate the library into the BGE. The BGE already uses Bullet for rigidbody and softbody simulations. With the addition of our library to Bullet, the BGE could then have rigidbody, softbody, and fluid simulations.

Goals for the project:

  • Create OpenCL and C++ infrastructure for rigidbodies in Enjaparticles
    • Voxelize an arbitrary [closed] triangular mesh. Already Completed
    • Create data structures to hold particle position, velocity, force, torque... and a related structure which holds these values for the center of mass of each rigid body.
    • Create OpenCL kernels to perform Segmented scan. This is necessary to sum the total forces for each rigid body from its particles.
  • Restructure and optimize the EnjaParticle library.
    • Do not force OpenCL/GL interoperability, so the library could run on CPUs in addition to GPUs.
      • This will also allow for offline simulations that could be serialized.
    • Ensure all kernels are optimized.
    • Implement Morton ordering. This can speed up the SPH simulations substantially.
    • Implement Radix Sorting.
    • General refactoring
      • Some of the methods/attributes of each system class can be abstracted.
      • Enforce encapsulation and utilize inheritance.
      • Add Extensive Documentation.
      • Consider making the renderer part of the main file, or at least separate it from the simulation, thus giving the caller flexibility in how they wish to render the VBO.
  • Integrate the Upgraded library into Bullet.
  • Upgrade BGE with the latest Bullet library.
  • Other Features that are wishes and not requirements.
    • Improve Screenspace rendering.
      • reflection
      • refraction
      • curvature flow
    • Implement Histopyramid marching cubes into the library for fast surface reconstruction.

Saturday, April 2, 2011

Screen Space Fluid rendering Phase 1

As I mentioned in my previous post, I have currently coded three of the necessary phases for rendering a set of points as a fluid surface. The technique used is referred to as "Screen Space Fluid Rendering." The three phases are "Render points as Spheres", "Smooth the depth buffer", and finally "Calculate normals from the Smoothed Depth Buffer". In this blog, I will elaborate more on the screen space fluid rendering technique and discuss the first step.

Introduction to Screen Space Fluid Rendering(SSFR)

Before going into the details of each phase, I would like to introduce a little theory behind SSFR. The primary goal of SSFR is to reconstruct the fluid surface in the viewer's (camera's) space. Concerning ourselves with only the viewer's perspective offers a potential speed-up over methods like marching cubes, which attempt to reconstruct the fluid's entire surface. SSFR is not without limitations, but for generating real-time fluid animations it is among the fastest and highest-quality techniques currently available. An improvement to the smoothing phase was later developed, titled SSFR with Curvature Flow.

Figure 1: The viewer typically can only see a subset of the particles. Also, they can almost never see the opposite surface. These factors motivate the need for creating the surface in the user's perspective only.
Figure 2: Points outside the viewer's perspective are clipped. This is another way to save time.
Figure 3: The green line represents the surface we hope to obtain by the end phase.
Creating Spheres from points

The first phase is to create spheres from a collection of points. If we chose to actually render a sphere mesh at each point, the algorithm would become very expensive as the number of particles grows. Instead, we employ a GLSL shader similar to the one found in Nvidia's oclParticle demo. We only need a shader because after this phase we no longer use vertex information; all subsequent phases take the output from previous phases and process the image.
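For context, the client-side draw call that feeds this shader might look roughly like the following sketch, assuming a compatibility-profile context; sphereProgram, posVBO, num, pointRadius, pointScale, nearVal, and farVal are placeholder names, not the library's actual variables.

//--------- hypothetical point-sprite draw ----------
//Let the vertex shader set gl_PointSize, and have the rasterizer
//generate per-sprite texture coordinates.
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
glEnable(GL_POINT_SPRITE);
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);

glBindBuffer(GL_ARRAY_BUFFER, posVBO);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);

glUseProgram(sphereProgram);
glUniform1f(glGetUniformLocation(sphereProgram, "pointRadius"), pointRadius);
glUniform1f(glGetUniformLocation(sphereProgram, "pointScale"), pointScale);
glUniform1f(glGetUniformLocation(sphereProgram, "near"), nearVal);
glUniform1f(glGetUniformLocation(sphereProgram, "far"), farVal);
glDrawArrays(GL_POINTS, 0, num);
glUseProgram(0);

glDisableClientState(GL_VERTEX_ARRAY);
glDisable(GL_POINT_SPRITE);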

Vertex Shader: 

uniform float pointRadius;
uniform float pointScale;   // scale to calculate size in pixels

varying vec3 posEye;        // position of center in eye space

void main()
{

    posEye = vec3(gl_ModelViewMatrix * vec4(gl_Vertex.xyz, 1.0));
    float dist = length(posEye);
    gl_PointSize = pointRadius * (pointScale / dist);

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * vec4(gl_Vertex.xyz, 1.0);

    gl_FrontColor = gl_Color;
}


Fragment Shader:

uniform float pointRadius;  // point size in world space
uniform float near;
uniform float far;
varying vec3 posEye;        // position of center in eye space

void main()
{
    // calculate normal from texture coordinates
    vec3 n;
    n.xy = gl_TexCoord[0].st*vec2(2.0, -2.0) + vec2(-1.0, 1.0);
    //This is a more compatible version which works on ATI and Nvidia hardware
    //However, This does not work on Apple computers. :/
    //n.xy = gl_PointCoord.st*vec2(2.0, -2.0) + vec2(-1.0, 1.0);

    float mag = dot(n.xy, n.xy);
    if (mag > 1.0) discard;   // kill pixels outside circle
    n.z = sqrt(1.0-mag);

    // point on surface of sphere in eye space
    vec4 spherePosEye =vec4(posEye+n*pointRadius,1.0);

    vec4 clipSpacePos = gl_ProjectionMatrix*spherePosEye;
    float normDepth = clipSpacePos.z/clipSpacePos.w;

    // Transform into window coordinates
    gl_FragDepth = (((far-near)/2.)*normDepth)+((far+near)/2.);
    gl_FragData[0] = gl_Color;
}


NOTE: The fragment shader above has some compatibility issues with ATI cards. Apparently, the appropriate way to handle a point sprite's texture coordinates is through gl_PointCoord. However, this is not compatible with Apple's OpenGL implementation.
Figure 4: Turn the points from Figure 2 into point sprites. Point sprites are useful because they always face the viewer.


Figure 5: Point sprites which are "below" the surface do not need to be rendered.
The main goal of this shader is to modify the depth values of the rasterized image. To do this, we must determine the z value of the sphere's surface from a 2D texture coordinate. First, take a look at the equation of a unit sphere.

$
x^2+y^2+z^2 = 1
$

Notice that we are outside the sphere if the following condition occurs:

$
x^2 + y^2 > 1
$


...
    // calculate normal from texture coordinates
    vec3 n;
    n.xy = gl_TexCoord[0].st*vec2(2.0, -2.0) + vec2(-1.0, 1.0);
    float mag = dot(n.xy, n.xy);
    if (mag > 1.0) discard;   // kill pixels outside circle
...

If we are in the sphere then the following calculation will give us the z value of the point on the sphere.

$
z = \sqrt{1-x^2-y^2}
$


...
    n.z = sqrt(1.0-mag);
...


The z value from this calculation lies in the interval [0,1]. The next step is to reconstruct the point on the sphere's surface in eye space, project it into clip space with the projection matrix, and then manually transform the resulting normalized depth into window coordinates.
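Written out, with near and far taken to be the depth-range bounds (I assume the defaults of 0 and 1 here), the normalized device depth $z_{ndc}$ maps to the window depth as

$
z_{window} = \frac{far-near}{2}\,z_{ndc}+\frac{far+near}{2}
$

which is exactly what the gl_FragDepth line below computes.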



...
    // point on surface of sphere in camera space
    vec4 spherePosEye =vec4(posEye+n*pointRadius,1.0);

    vec4 clipSpacePos = gl_ProjectionMatrix*spherePosEye;
    float normDepth = clipSpacePos.z/clipSpacePos.w;

    // Transform into window coordinates
    gl_FragDepth = (((far-near)/2.)*normDepth)+((far+near)/2.);
...


Figure 6: Morph the point sprites into hemispheres.


In my next post, I plan to explain how to modify the depth values in a way to make these bumpy spheres look more like a continuous surface.

All shader and C++ code are available from enjalot's GitHub repository. The rendering code can be found in rtps/rtpslib/Render/.