Particle Simulations in OpenCL, by Andrew Young<br />
<br />
<b>Qt Interface for Modifying Parameters in Real-Time</b> (2012-07-26)<br />
TL;DR: I made some cool enhancements to the <a href="https://github.com/ayoung200/EnjaParticles">open source RTPS library</a>; check out the video.<br />
<iframe src="http://player.vimeo.com/video/46469992" width="500" height="281" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe> <p><a href="http://vimeo.com/46469992">Improved rendering and Qt interface for interactivity</a> from <a href="http://vimeo.com/user5996991">Andrew Young</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
<br />
I bring some exciting news about my latest developments in the Real-time Particle System (RTPS) library. I created an interface which allows users to modify the system and graphics parameters in real time. I have also improved the screen-space rendering implementation. All the rendering code and shaders are now OpenGL 3.3 compliant.<br />
<br />
I have decided to finish graduate school with an M.S. rather than pursuing a PhD. In order to finish my M.S. thesis, I wanted to add an interface to the RTPS library which allows users to modify the parameters in real-time. I chose the Qt library to accomplish the task. Qt works across many platforms, which makes it an excellent choice; it also has a large community and extensive documentation. Finally, its license allows for inclusion in open-source projects.<br />
<br />
Beginning development in Qt wasn't too difficult. Qt ships several example applications which use OpenGL widgets, and I used these as a starting point. However, my venture into Qt was not without its troubles. One problem I soon ran into was attempting to use GLEW with Qt. I found several posts about the problem, and most users suggested using QGLWidget instead of QOpenGL; that was indeed part of the problem. The other part was an issue with Qt 4.8: after initializing the QGLWidget, but before calling glewInit(), you must call makeCurrent(). My understanding of the problem is limited, but I believe it stems from Qt 4.8's introduction of multi-threaded OpenGL. Without calling makeCurrent(), the GL context never becomes active.<br />
<br />
Many of the parameters in the RTPS library are floating point numbers, so I wanted a slider which would return floats. To my surprise, Qt has no native support for float slider bars. All the solutions I found online suggested simply dividing the result by some scaling factor. The approach makes sense, but it is inconvenient: I can't attach multiple slider signals to a single slot unless the scaling factor is the same for each. I decided the best approach was to inherit from QSlider to create my own FloatSlider class. Overriding the Qt class was far easier than expected.
<br />
<pre class="prettyprint linenums:1"><span style="font-size: medium;">
//--------- floatslider.h ----------
#ifndef FLOATSLIDER_H
#define FLOATSLIDER_H
#include &lt;QSlider&gt;

class FloatSlider : public QSlider
{
    Q_OBJECT
public:
    FloatSlider(Qt::Orientation orientation, QWidget* parent = 0);
    void setScale(float scale);
public slots:
    void sliderChange(SliderChange change);
    void setValue(float value);
signals:
    void valueChanged(float value);
private:
    float scale;
};
#endif

//---------- floatslider.cpp ------------
#include "floatslider.h"

FloatSlider::FloatSlider(Qt::Orientation orientation, QWidget* parent)
    : QSlider(orientation, parent)
{
    scale = 1.0f;
}

void FloatSlider::setValue(float value)
{
    QSlider::setValue(value/scale);
}

void FloatSlider::setScale(float scale)
{
    this->scale = scale;
}

void FloatSlider::sliderChange(SliderChange change)
{
    if(change == QAbstractSlider::SliderValueChange)
    {
        emit valueChanged(value()*scale);
    }
    //Call the base implementation for completeness. One could also emit
    //both integer and float signals if desired.
    QSlider::sliderChange(change);
}
</span></pre>
<br />
The class above emits a float signal on sliderChange: the slider's integer value is multiplied by the scaling factor you set. For example, the following code produces values between 0.01 and 1.00 in increments of 0.01.
<br />
<pre class="prettyprint linenums:1"><span style="font-size: medium;">
xSPHSlider = new FloatSlider(orientation,this);
xSPHSlider->setObjectName("xsph_factor");
xSPHSlider->setTickPosition(QSlider::TicksBelow);
xSPHSlider->setTickInterval(10);
xSPHSlider->setSingleStep(1);
xSPHSlider->setRange(1,100);
xSPHSlider->setValue(15);
xSPHSlider->setScale(0.01);
connect(xSPHSlider,SIGNAL(valueChanged(float)),this,SLOT(triggerValue(float)));
</span></pre>
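Outside of Qt, the underlying mapping is just a multiply and its inverse. Here is a minimal standalone sketch of the scaling logic (the function names are illustrative, not part of the library):

```cpp
#include <cassert>

// Forward mapping, as FloatSlider::sliderChange() does:
// emitted float = integer slider position * scale.
float sliderToFloat(int sliderValue, float scale)
{
    return sliderValue * scale;
}

// Inverse mapping, as FloatSlider::setValue() does:
// divide the float by the scale to recover the integer position.
int floatToSlider(float value, float scale)
{
    return static_cast<int>(value / scale);
}
```

With setRange(1,100) and a scale of 0.01, slider positions 1..100 map to 0.01..1.00, and setValue(0.15f) places the handle at tick 15.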
<br />
<b>Improved rendering support for rigid-body fluid interaction</b> (2012-03-18)<br />
Hello everyone,<br />
I have added a lot of new features to the fluid/rigid-body simulator. Here is a bulleted list:<br />
<br />
<ul><li>Added mesh importing support via <a href="http://assimp.sourceforge.net/">AssImp</a> Library.</li>
<li>Fixed lots of bugs in mesh voxelization. Now you should be able to voxelize any arbitrary closed mesh.</li>
<li>Added parameter files so you can change many system parameters without the need to recompile.</li>
<li>Improved rendering of rigid-bodies.</li>
<ul><li>I implemented some basic instanced rendering. I plan to reintroduce the flocking simulations implemented by Myrna Merced; then I can take advantage of instanced mesh rendering to render birds, fish, etc.</li>
</ul><li>I have added configurable point source gravity for interesting effects.</li>
</ul>I have completed many of the desired features for the stand-alone library. I now plan to dedicate much of my time to integrating the library into the Bullet physics engine. Hopefully I can get most of the work integrated within a month. Then I will work on upgrading Blender's Bullet interface to include all these new features. In parallel with those two tasks, I would also like to improve the fluid surface extraction.<br />
<iframe allowfullscreen="" frameborder="0" height="300" mozallowfullscreen="" src="http://player.vimeo.com/video/38756325?title=0&byline=0&portrait=0" webkitallowfullscreen="" width="400"></iframe><br />
<br />
<b>Improved interaction between rigid bodies and fluid</b> (2012-01-16)<br />
I fixed a lot of bugs in my simulation framework. Here is a video showing my progress.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="300" mozallowfullscreen="" src="http://player.vimeo.com/video/35122154?title=0&byline=0&portrait=0" webkitallowfullscreen="" width="400"></iframe><br />
<br />
<b>Bugs:</b><br />
<br />
<ul><li>I wasn't properly scaling the positions for the rigidbody simulation to match the fluid simulation.</li>
<li>Some of my quaternion calculations were not correct when updating the rotation of the rigid body.</li>
<li>The coefficients for the interaction forces weren't calculated correctly. I found a detailed formulation for the damping coefficient in [1].</li>
</ul><b>Features:</b><br />
<br />
<ul><li>I refactored the rendering code so you can switch rendering techniques on the fly.</li>
<li>The test program has run-time configurable mass and dimensions for newly added rigid bodies.</li>
<li>Velocity visualization can be turned on/off during run-time.</li>
</ul><div>My plans now are to push my changes into the Blender version that Ian already created. Once I have done that, I will create some examples and work on writing up my Master's thesis. From there, I will return to integrating the library into Bullet. Hopefully, I will get a chance to work on some visualization as well.</div><div><br />
[1] Cleary, P. W., & Prakash, M. (2004). Discrete-element modelling and smoothed particle hydrodynamics: potential in the environmental sciences. <i>Philosophical Transactions of the Royal Society - Series A: Mathematical, Physical and Engineering Sciences</i>, <i>362</i>(1822), 2003-2030. The Royal Society. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/15306427</div>
<br />
<b>Geometry Shaders for visualizing velocity</b> (2012-01-14)<br />
<div>A couple months ago, I posted about my updated rigid-body/fluid interaction and mentioned I would discuss how I draw the velocity vectors efficiently. In the simulation, all the information for each particle is held on the GPU. To render lines the standard way, I would need to transfer two arrays from the GPU to the host: the positions and the velocity vectors. Then I would have to loop through them to build a third array holding each position plus its vector, and finally draw a line for each pair of vertices.<br />
<br />
Here is the video. <iframe src="http://player.vimeo.com/video/35024052?title=0&byline=0&portrait=0" width="400" height="300" frameborder="0" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe><br />
<br />
Geometry shaders (GS) give us the ability to take in one primitive type (points, lines, triangles, ...) and output a different primitive type. That makes them perfect for our simulation: we already have the position and the vector, so we just need to calculate the second vertex from this information and draw a line between the two points. We can therefore define the vertices of the line for each vector as follows:<br />
<br />
$v_{i,start}=pos_{i}$<br />
$v_{i,end}=pos_{i}+vector_{i}$<br />
<br />
I then define the color for the vector as:<br />
<br />
$col_{i}=\left|\frac{vector_{i}}{\left\|vector_{i}\right\|}\right|$<br />
<br />
I chose this color representation primarily for its simplicity. Essentially it encodes direction into the color of each vector: red represents the x direction, green the y direction, and blue the z direction. The absolute value is necessary because negative colors don't exist (OpenGL clamps negative color components to zero).<br />
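The encoding can be sketched on the CPU as well; this is a hand-rolled version of the shader's abs(normalize(v)), not code from the library:

```cpp
#include <array>
#include <cmath>

// Encode a direction as an RGB color: normalize the vector, then take
// the componentwise absolute value so no channel is negative.
std::array<float, 3> directionToColor(float x, float y, float z)
{
    float len = std::sqrt(x * x + y * y + z * z);
    return { std::fabs(x / len), std::fabs(y / len), std::fabs(z / len) };
}
```

Note that a velocity along -x maps to pure red, just like +x: the encoding shows direction only up to sign.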
<br />
Now onto the code.<br />
<br />
<span style="font-size: large;"><span style="font-size: small;"><i><b>C code:</b></i> </span> </span><br />
<span style="font-size: medium;"><br />
<pre class="prettyprint linenums:1">//Tell opengl the vector vbo is the color vector.
glBindBuffer(GL_ARRAY_BUFFER, vecVBO);
glColorPointer(4, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, posVBO);
glVertexPointer(4, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glUseProgram(m_shaderLibrary.shaders["vectorShader"].getProgram());
glUniform1f(glGetUniformLocation(m_shaderLibrary.shaders["vectorShader"].getProgram(), "scale"),scale);
glDrawArrays(GL_POINTS, 0, num);
glUseProgram(0);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
</pre></span><br />
The variable m_shaderLibrary.shaders["vectorShader"].getProgram() is specific to my library. If you want to use this in your own code, pass in the program (a GLuint), i.e. the shader program compiled from the vector shaders provided below.<br />
<span style="font-size: large;"><span style="font-size: small;"><i><b>Vertex Shader:</b></i> </span> </span><br />
<span style="font-size: medium;"><br />
<pre class="prettyprint linenums:1">#version 120
varying vec4 vector;
varying out vec4 color;
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
//We must project the vector as well.
vector = gl_ModelViewProjectionMatrix * gl_Color;
//normalize the vector and take the absolute value.
color = vec4(abs(normalize(gl_Color.rgb)),1.0);
}
</pre></span><br />
<br />
<span style="font-size: large;"><span style="font-size: small;"><i><b>Geometry Shader:</b></i> </span> </span><br />
<span style="font-size: medium;"><br />
<pre class="prettyprint linenums:1">#version 150
//#geometry shader
layout(points) in;
layout(line_strip, max_vertices=2) out;
in vec4 vector[1];
in vec4 color[1];
uniform float scale;
void main()
{
//First we emit the position vertex with the color calculated
//in the vertex shader.
vec4 p = gl_in[0].gl_Position;
gl_Position = p;
gl_FrontColor = color[0];
EmitVertex();
//Second we emit the vertex which is the xyz position plus a
//scaled version of the vector we want to visualize. Also, the
//color is the same as calculated in the vertex shader.
gl_Position = vec4(p.xyz+(scale*vector[0].xyz),p.w);
gl_FrontColor = color[0];
EmitVertex();
EndPrimitive();
}
</pre></span><br />
<span style="font-size: large;"><span style="font-size: small;"><i><b>Fragment Shader:</b></i> </span> </span><br />
<span style="font-size: medium;"><br />
<pre class="prettyprint linenums:1">//simple pass through fragment shader.
void main(void) {
gl_FragColor = gl_Color;
}
</pre></span><br />
</div>
<br />
<b>Systems now have interaction</b> (2011-11-27)<br />
Before I could have interacting rigid-body/fluid systems, I first had to create a rigid-body class where the bodies are composed of particles. To build such a system, I needed to discretize each body into a set of particles. To do this I used a method from [2], which uses OpenGL to create a 3D texture marking voxels inside and outside the rigid body. From the resulting 3D texture, I can then generate a particle field. For each particle contained in the rigid body, we need its position, velocity, and force. For each rigid body, we also track the net force and torque about the center of mass, the center-of-mass position and velocity, and the angular velocity.<br />
<br />
To calculate the total linear force on a single rigid body, we simply sum the forces of all particles belonging to that body [1].<br />
<br />
$F_{total} = \displaystyle \sum_{i=0}^N f_i$<br />
<br />
The total torque can be calculated by summing, for each particle, the cross product of the particle's offset from the center of mass with its force [1].<br />
<br />
$F_{torque} = \displaystyle \sum_{i=0}^N ((pos_i-pos_{com}) \times f_i)$<br />
<br />
Currently I don't have any constraints implemented. I also still need to implement a segmented scan to improve the calculation of the force totals; currently I naively loop and sum for each rigid body, which is inefficient. I also need to create a renderer which will render rigid-body meshes at the appropriate locations.<br />
<br />
Here is a video of my progress so far.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="300" mozallowfullscreen="" src="http://player.vimeo.com/video/32679396?title=0&byline=0&portrait=0" webkitallowfullscreen="" width="400"></iframe><br />
<a href="http://vimeo.com/32679396">Rigid-body and Fluid interaction with velocity vectors</a> from <a href="http://vimeo.com/user5996991">Andrew Young</a> on <a href="http://vimeo.com/">Vimeo</a>.<br />
<br />
In the video above, I render the velocity vectors of each particle. This helps to visualize the flow of the fluid. I plan to write a separate blog post discussing my implementation of the rendering code for the velocity vectors.<br />
<br />
The interaction between fluid and rigid body is modeled by a spring/damper system. The calculations are based on the following two equations, as described in [1]. <br />
<br />
$f_s = -k(d-\left|r_{ij}\right|)\frac{r_{ij}}{\left|r_{ij}\right|}$<br />
<br />
$f_d = \eta(v_{j}-v_{i})$<br />
<br />
where $k$ is the spring constant and $\eta$ is the damping coefficient. I need to experiment with these coefficients to get more accurate results.<br />
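A minimal CPU sketch of these two equations for one particle pair, again with a hypothetical Vec3 type (the library evaluates this per particle in OpenCL):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Spring force f_s = -k (d - |r|) * r/|r|, where r is the vector between
// the two particles and d the rest distance (the particle diameter),
// plus damping force f_d = eta * (v_j - v_i).
Vec3 springDamperForce(Vec3 r, Vec3 vi, Vec3 vj, float k, float eta, float d)
{
    float len = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
    float s = -k * (d - len) / len;  // spring magnitude folded with 1/|r|
    return { s * r.x + eta * (vj.x - vi.x),
             s * r.y + eta * (vj.y - vi.y),
             s * r.z + eta * (vj.z - vi.z) };
}
```

When the particles overlap (|r| < d) the spring term pushes them apart, and the damping term resists their relative motion.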
<br />
I also spent a lot of time refactoring the code around the System class. The System class was abstract and had very little functionality. When creating the rigid-body class, I noticed I was copying and pasting a lot of code, which was a perfect reason to push things up to the parent class.<br />
<br />
I moved a lot of generic functionality, as well as several buffers (position, velocity, force) common to all systems, up to the System class. I then added some rudimentary code to allow for interaction between systems. Currently this only works between SPH and rigid-body systems. Eventually, I hope to have a better interaction framework where you can define new rules of interaction quickly and easily. Then I could add interaction between flocks and SPH, which could produce some very interesting simulations.<br />
<br />
Todo:<br />
<br />
<ol><li>Verify spacing of the rigid-body system is correct.</li>
<li>Experiment with coefficients for interaction.</li>
<li>Explore the alternative formulations for system interaction.</li>
<li>Continue refactoring the code.</li>
<li>Create a class to better handle particle shapes.</li>
</ol><br />
<br />
<br />
References<br />
<br />
[1] Harada, T., Tanaka, M., Koshizuka, S., & Kawaguchi, Y. (2007). Real-time Coupling of Fluids and Rigid Bodies. Proc of the APCOM, 1-13. Retrieved from <a href="http://individuals.iii.u-tokyo.ac.jp/~yoichiro/report/report-pdf/harada/international/2007apcom.pdf">http://individuals.iii.u-tokyo.ac.jp/~yoichiro/report/report-pdf/harada/international/2007apcom.pdf</a><br />
<br />
[2] Fang, S., & Chen, H. (2000). Hardware accelerated voxelization. Computers & Graphics, 24(3), 433-442. Elsevier. Retrieved from <a href="http://www.sciencedirect.com/science/article/pii/S0097849300000388">http://www.sciencedirect.com/science/article/pii/S0097849300000388</a><br />
<br />
<b>Fluid and Rigid Body Interaction</b> (2011-10-25)<br />
I have finally settled on a topic for my master's thesis. I will implement some techniques discussed in <a href="http://http.developer.nvidia.com/GPUGems3/gpugems3_ch29.html">GPU Gems 3</a> and <a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5772283&tag=1">"Two-way rigid-fluid interaction Heterogeneous Architectures"</a> in <a href="https://github.com/enjalot/EnjaParticles">Ian's EnjaParticles</a> library to allow for rigid-fluid interaction. I will now be the primary maintainer of the library, so I have forked it to <a href="https://github.com/ayoung200/EnjaParticles">my github account</a>.<br />
<br />
The original goal of Ian's project was to integrate the fluid library into <a href="http://www.blender.org/">Blender's Game Engine(BGE)</a>. He has a working build of blender with the <a href="https://github.com/enjalot/BGERTPS">fluids integrated into the BGE</a>. The fluid simulation works quite nicely. However, it currently can't handle two-way coupling between rigidbody-fluid systems.<br />
<br />
My project will involve upgrading the EnjaParticles library to include the coupling physics. I also plan to integrate the library into the <a href="http://bulletphysics.org/wordpress/">Bullet Physics Library</a>. If I integrate it into Bullet, I can then more naturally integrate it into the BGE, which already uses Bullet for rigid-body and soft-body simulations. With the addition of our library to Bullet, the BGE could then have rigid-body, soft-body, and fluid simulations.<br />
<br />
Goals for the project:<br />
<br />
<ul><li>Create OpenCL and C++ infrastructure for rigid bodies in EnjaParticles.</li>
<ul><li>Voxelize an arbitrary [closed] triangular mesh. (Already completed.)</li>
<li>Create data structures to hold particle positions, velocities, forces, torques... and a related structure which holds these values for the center of mass of each rigid body.</li>
<li>Create OpenCL kernels to perform a segmented scan. This is necessary to sum the total force on each rigid body from its particles.</li>
</ul><li>Restructure and optimize the EnjaParticles library.</li>
<ul><li>Do not force OpenCL/GL interoperability, so simulations can run on CPUs in addition to GPUs.</li>
<ul><li>This will also allow for offline simulations that could be serialized.</li>
</ul><li>Ensure all kernels are optimized.</li>
<li>Implement Morton ordering. This can speed up the SPH simulations substantially.</li>
<li>Implement Radix Sorting.</li>
<li>General refactoring</li>
<ul><li>Some of the methods/attributes of each system class can be abstracted.</li>
<li>Enforce encapsulation and utilize inheritance.</li>
<li>Add Extensive Documentation.</li>
<li>Consider making the renderer part of the main file, or at least separate it from the simulation, giving the caller flexibility in how to render the VBO.</li>
</ul></ul><li>Integrate the Upgraded library into Bullet.</li>
<li>Upgrade BGE with the latest Bullet library.</li>
<li>Other Features that are wishes and not requirements.</li>
<ul><li>Improve Screenspace rendering.</li>
<ul><li>reflection</li>
<li>refraction</li>
<li>curvature flow</li>
</ul><li>Implement Histopyramid marching cubes into the library for fast surface reconstruction.</li>
</ul></ul><br />
<b>Screen Space Fluid Rendering, Phase 1</b> (2011-04-02)<br />
As I mentioned in my previous post, I have coded three of the phases necessary for rendering a set of points as a fluid surface. The technique is referred to as "screen space fluid rendering". The three phases are "render points as spheres", "smooth the depth buffer", and "calculate normals from the smoothed depth buffer". In this post, I will elaborate on the technique and discuss the first phase.<br />
<br />
<span style="font-size: large;">Introduction to Screen Space Fluid Rendering(SSFR)</span><br />
<br />
Before going into the details of each phase, I would like to introduce a little theory behind SSFR. The primary goal of SSFR is to reconstruct the fluid surface in the viewer's (camera's) space. Concerning ourselves with only the viewer's perspective offers a potential speedup over methods like marching cubes, which attempt to reconstruct the fluid's entire surface. SSFR is not without limitations, but for generating real-time fluid animations it is among the fastest and highest-quality techniques currently available. An improvement to the smoothing phase was later developed under the title <a href="http://industrialarithmetic.blogspot.com/2009/01/our-paper-screen-space-fluid-rendering.html">SSFR with Curvature flow</a>.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXH4OIKV-N84CluaN_bW_iStl8cx65CZKyVp-4fYcO90ziaoZLhRlCPOz4Dmhw_mqNLMypNR0_a0qBdw_Jh2CjB99XPBXbtrHghKkcsYFW6QNmTL79VhF1iB_E_42wlugBDapA-E9dYuNn/s1600/01_Particles.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXH4OIKV-N84CluaN_bW_iStl8cx65CZKyVp-4fYcO90ziaoZLhRlCPOz4Dmhw_mqNLMypNR0_a0qBdw_Jh2CjB99XPBXbtrHghKkcsYFW6QNmTL79VhF1iB_E_42wlugBDapA-E9dYuNn/s320/01_Particles.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 1: The viewer typically can only see a subset of the particles. Also, they can almost never see the opposite surface. These factors motivate the need for creating the surface in user's perspective only.</td></tr>
</tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-HZsdzFJbKfbTkOR7i689SJxdINxgYAm9ZNTPpa8VC9TFQ5VMl1gfJ5Ti-WFu6shf7ccgmRkf66N4kIvAtwsOClPBprdqYuEoN1z96h7f_er5CdXcTrs4HelHEtMMdrS-66l7dud6g-qx/s1600/02_clipping.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-HZsdzFJbKfbTkOR7i689SJxdINxgYAm9ZNTPpa8VC9TFQ5VMl1gfJ5Ti-WFu6shf7ccgmRkf66N4kIvAtwsOClPBprdqYuEoN1z96h7f_er5CdXcTrs4HelHEtMMdrS-66l7dud6g-qx/s320/02_clipping.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 2: Points outside the viewer's perspective are clipped. This is another way to save time.</td><td class="tr-caption" style="text-align: center;"></td></tr>
</tbody></table><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOsX9EIKJ47WqjuKR0Wr4jmOtcrnFA8rYYRzBgkJEzXq9sAFkVCb1ZQB4se33434aGLstHo0EOyS1Uw7WLbfhUlJQktwZqjd3XBU_P9o-W_sDH3H6s8QzK-Syj6fXLtVXRSOicrCt1qtvM/s1600/03_surface.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOsX9EIKJ47WqjuKR0Wr4jmOtcrnFA8rYYRzBgkJEzXq9sAFkVCb1ZQB4se33434aGLstHo0EOyS1Uw7WLbfhUlJQktwZqjd3XBU_P9o-W_sDH3H6s8QzK-Syj6fXLtVXRSOicrCt1qtvM/s320/03_surface.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 3: The green line represents the surface we hope to obtain by the end phase.</td></tr>
</tbody></table><span style="font-size: large;">Creating Spheres from points</span><br />
<br />
<span style="font-size: large;"><span style="font-size: small;">The first phase is to create spheres from a collection of points. If we actually rendered a sphere mesh at each point, the algorithm would become very expensive as the number of particles grows. Instead we employ a GLSL shader similar to the one found in <a href="http://developer.download.nvidia.com/compute/opencl/sdk/website/samples.html#oclParticles">Nvidia's oclParticle demo</a>. We only need a shader because after this phase we no longer use vertex information; all subsequent phases take the output image from the previous phase and process it.</span></span><br />
<br />
<span style="font-size: large;"><span style="font-size: small;"><i><b>Vertex Shader:</b></i> </span> </span><br />
<span style="font-size: medium;"><br />
<pre class="prettyprint linenums:1">uniform float pointRadius;
uniform float pointScale; // scale to calculate size in pixels
varying vec3 posEye; // position of center in eye space
void main()
{
posEye = vec3(gl_ModelViewMatrix * vec4(gl_Vertex.xyz, 1.0));
float dist = length(posEye);
gl_PointSize = pointRadius * (pointScale / dist);
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_Position = gl_ModelViewProjectionMatrix * vec4(gl_Vertex.xyz, 1.0);
gl_FrontColor = gl_Color;
}
</pre></span><br />
<br />
<span style="font-size: large;"><span style="font-size: small;"><i><b>Fragment Shader:</b></i> </span></span><br />
<span style="font-size: medium;"><br />
<pre class="prettyprint linenums">uniform float pointRadius; // point size in world space
uniform float near;
uniform float far;
varying vec3 posEye; // position of center in eye space
void main()
{
// calculate normal from texture coordinates
vec3 n;
n.xy = gl_TexCoord[0].st*vec2(2.0, -2.0) + vec2(-1.0, 1.0);
//This is a more compatible version which works on ATI and Nvidia hardware
//However, This does not work on Apple computers. :/
//n.xy = gl_PointCoord.st*vec2(2.0, -2.0) + vec2(-1.0, 1.0);
float mag = dot(n.xy, n.xy);
if (mag > 1.0) discard; // kill pixels outside circle
n.z = sqrt(1.0-mag);
// point on surface of sphere in eye space
vec4 spherePosEye =vec4(posEye+n*pointRadius,1.0);
vec4 clipSpacePos = gl_ProjectionMatrix*spherePosEye;
float normDepth = clipSpacePos.z/clipSpacePos.w;
// Transform into window coordinates
gl_FragDepth = (((far-near)/2.)*normDepth)+((far+near)/2.);
gl_FragData[0] = gl_Color;
}
</pre></span><br />
<br />
<b><span style="font-size: small;">NOTE:</span></b><span style="font-size: small;"> The fragment shader above has some compatibility issues with ATI cards. Apparently the appropriate way to handle a point sprite's texture coordinates is through gl_PointCoord; however, this is not compatible with Apple's OpenGL implementation.</span><br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyd4X0IPhgguJdjoxAlcHJmtR71Xe-IemLBybvQyrL4Aw-DyQQxXPvsx8TDPbWVP2FLg7B8_nOnkBfvi1qkb8HXwd7XB6bEyV7m-OWfGFNRn8jBOHGGoUAnELsOAg8Ow-YuZsSBNYtrn4s/s1600/04_pointsprites.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyd4X0IPhgguJdjoxAlcHJmtR71Xe-IemLBybvQyrL4Aw-DyQQxXPvsx8TDPbWVP2FLg7B8_nOnkBfvi1qkb8HXwd7XB6bEyV7m-OWfGFNRn8jBOHGGoUAnELsOAg8Ow-YuZsSBNYtrn4s/s320/04_pointsprites.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 4: Turn the points from the Figure 2 into point sprites. Point sprites are useful because they always face the viewer.</td></tr>
</tbody></table><br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJKFZyCtLrP7IQZBUDTD1mc7CDxvaemtBFZTX5P4Dbt64XpHEezFJbMOUnoMySBne_b9d1xMzxE_Pr78agDgOOsKycA5iEbdwJfr-YqlsRDsW9Ve8Tjdn7eTnHRMa3UrmzJYjccwemyRnk/s1600/05_culled2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJKFZyCtLrP7IQZBUDTD1mc7CDxvaemtBFZTX5P4Dbt64XpHEezFJbMOUnoMySBne_b9d1xMzxE_Pr78agDgOOsKycA5iEbdwJfr-YqlsRDsW9Ve8Tjdn7eTnHRMa3UrmzJYjccwemyRnk/s320/05_culled2.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 5: Point sprites which are "below" the surface do not need to be rendered.</td></tr>
</tbody></table>The main goal of this shader is to modify the depth values of the rasterized image. To do this, we must determine the z value from a 2D texture coordinate. First, take a look at the equation of a unit sphere.<br />
<br />
$<br />
x^2+y^2+z^2 = 1<br />
$<br />
<br />
Notice that we are outside the sphere when the following condition holds:<br />
<br />
$<br />
x^2 + y^2 > 1<br />
$<br />
<br />
<span style="font-size: medium;"><br />
<pre class="prettyprint linenums">...
// calculate normal from texture coordinates
vec3 n;
n.xy = gl_TexCoord[0].st*vec2(2.0, -2.0) + vec2(-1.0, 1.0);
float mag = dot(n.xy, n.xy);
if (mag > 1.0) discard; // kill pixels outside circle
...
</pre></span><br />
If we are inside the sphere, the following calculation gives us the z value of the point on the sphere's surface.<br />
<br />
$<br />
z = \sqrt{1-x^2-y^2}<br />
$<br />
<br />
<span style="font-size: medium;"><br />
<pre class="prettyprint linenums">...
n.z = sqrt(1.0-mag);
...
</pre></span><br />
<br />
The z value from this calculation lies in the interval [0,1]. The next step is to offset the fragment's eye-space position along the computed normal to get the point on the sphere's surface, project that into clip space, and then manually transform the resulting normalized depth into window coordinates.<br />
<br />
<br />
<span style="font-size: medium;"><br />
<pre class="prettyprint linenums">...
// point on surface of sphere in camera space
vec4 spherePosEye = vec4(posEye + n*pointRadius, 1.0);
vec4 clipSpacePos = gl_ProjectionMatrix*spherePosEye;
// perspective divide gives normalized device depth in [-1, 1]
float normDepth = clipSpacePos.z/clipSpacePos.w;
// transform into window coordinates
gl_FragDepth = (((far-near)/2.)*normDepth)+((far+near)/2.);
...
</pre></span><br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi60fUV2hvgvvLgVpoXLySUvhcuHLJV_71MOiJ6wl4IqXFLpVGp0hOHjSHyXTfZrZo6jY_DrWJRxEBkMGZFEGM2ym2-oXoGyntf-4Z6H_7e1RLrAm_LedLckCD_UxZafBRV_XD0Qx-qiJPs/s1600/06_curved.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi60fUV2hvgvvLgVpoXLySUvhcuHLJV_71MOiJ6wl4IqXFLpVGp0hOHjSHyXTfZrZo6jY_DrWJRxEBkMGZFEGM2ym2-oXoGyntf-4Z6H_7e1RLrAm_LedLckCD_UxZafBRV_XD0Qx-qiJPs/s320/06_curved.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 6: Morph the point sprites into hemispheres.</td></tr>
</tbody></table><br />
<br />
In my next post, I plan to explain how to modify the depth values in a way that makes these bumpy spheres look more like a continuous surface.<br />
<br />
All shader and C++ code is available from <a href="https://github.com/enjalot/EnjaParticles">enjalot's github repository</a>. Rendering code can be found in rtps/rtpslib/Render/.<br />
Andrew Young<br />
<br />
<b>Particles to fluid</b> (2011-02-10)<br />
I have been working with <a href="http://enja.org/">Ian Johnson</a> in the Department of Scientific Computing at FSU on implementing fluid rendering into his RTPS library. So far, I have had success in rendering the fluid surface using only a few of the steps from the <a href="http://www.geeks3d.com/20100809/siggraph-2010-screen-space-fluid-rendering-for-games/">presentation</a> given at GDC '10. The technique is referred to as Screen Space Fluid Rendering.<br />
<br />
<iframe frameborder="0" height="225" src="http://player.vimeo.com/video/19794084" width="400"></iframe><br />
<a href="http://vimeo.com/19794084">[RTPS] Fluid Surface and Collisions in the works</a> from <a href="http://vimeo.com/user4640702">Ian Johnson</a> on <a href="http://vimeo.com/">Vimeo</a>.<br />
<br />
The three phases I have implemented so far are "Render points as spheres", "Gaussian Blur on Depth", and "Calculate Normals from Depth". The end result of adding this rendering technique to <a href="http://enja.org/">Ian's</a> RTPS library, which actually simulates the fluid, is demoed in the video above. Next week, I will be posting code snippets along with explanations of how these three steps are combined to create this "surface"-like effect.<br />
Andrew Young<br />
<br />
<b>Hello World</b> (2011-02-10)<br />
Hello everyone,<br />
My name is Andrew Young. I am creating this blog to detail the progress of my research in the Department of Scientific Computing (DSC) at FSU. I may also add posts about other interesting adventures I have throughout my time in graduate school.<br />
<br />
My next post will be about my struggles as a beginner in OpenGL programming. I rushed head first into the project and quickly became buried in the complexities of graphics programming.<br />
<br />
Thanks,<br />
Andrew Young