Threejs isolated reflection shader using cubeCamera not working - reflection

jsFiddle https://jsfiddle.net/t2pv9ku0/8/ demonstrates my problem.
I'm trying to re-create the reflection effect in ThreeJS using a RawShaderMaterial. There are several examples of this effect, which is pretty standard.
A cubeCamera is created and passed into the shader to provide the samplerCube:
material.uniforms.reflectionSampler.value = cubeCamera.renderTarget;
A reflection vector is calculated in the vertex shader:
varying vec3 vReflect;
...
vec3 transformDirection( in vec3 dir, in mat4 matrix ) {
return normalize( ( matrix * vec4( dir, 0.0 ) ).xyz );
}
...
vec3 worldNormal = transformDirection( objectNormal, modelMatrix );
vec3 cameraToVertex = normalize( worldPosition.xyz - cameraPosition );
vReflect = reflect( cameraToVertex, worldNormal );
It's used in the fragment shader to read from the cubeCamera's render:
gl_FragColor = vec4( vNormal.y ) + textureCube( reflectionSampler, vReflect );
However, as you can see in the jsFiddle, the effect does not appear to be working correctly. It appears to be reading the sample position incorrectly from the samplerCube. I've tried fiddling with everything.

Your cube camera is not part of the scene, so you must call
cubeCamera.updateMatrixWorld();
updated fiddle: https://jsfiddle.net/t2pv9ku0/9/
Alternatively, you can add the cubeCamera to the scene.
three.js r.71
Edit by Andy: Here is a fiddle with reflection code that mimics three.js's built-in reflection: https://jsfiddle.net/t2pv9ku0/11/
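For reference, here is a minimal render-loop sketch of what the fix implies (variable names like cubeCamera, reflectiveMesh, renderer and scene are assumptions, not code copied from the fiddles; updateCubeMap is the r71-era API):
function render() {
// The cube camera is not part of the scene graph, so bring its world matrix up to date manually.
cubeCamera.updateMatrixWorld();
// Hide the reflective mesh so it doesn't show up in its own reflection, then render the six cube faces.
reflectiveMesh.visible = false;
cubeCamera.updateCubeMap( renderer, scene );
reflectiveMesh.visible = true;
renderer.render( scene, camera );
requestAnimationFrame( render );
}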

Related

Is there a way to batch render textures in Qt?

I have been trying to batch render two different pictures. I have two different QOpenGLTexture objects that I want to draw in a single draw call with batch rendering, but I am struggling. Both texture objects have IDs, but only the last texture object's image is drawn. I believe my problem is with setting up the uniform or the frag shader.
//..............Setting up uniform...............//
const GLuint vals[] = {m_texture1->textureId(), m_texture2->textureId()};
m_program->setUniformValueArray("u_TextureID", vals, 2);
//..............frag Shader.....................//
#version 330 core
out vec4 color;
in vec2 v_textCoord; // Texture coordinate
in float v_index; // (0, 1) Vertex for which image to draw.
// 0 would draw the image of the first texture object
uniform sampler2D u_Texture[2];
void main()
{
int index = int(v_index);
color = texture(u_Texture[index], v_textCoord);
};
I've tried experimenting with the index value in the frag shader, but it only draws the last texture image or renders black. I tried implementing it the way you would with raw OpenGL, but have had no luck.

Convert fragment shader gradient result from black to transparent in GL ES

I'm trying to generate realistic stars for an open source game I'm working on. I'm generating the stars using principles covered here. I'm using the three.js library in a Chromium engine (NW.js). The problem I've found is that the star glow fades into black instead of into transparency.
Whilst it looks nice for a single star, multiple stars have a serious problem:
My code is as follows:
Vertex shader
attribute vec3 glow;
varying vec3 vGlow;
void main() {
vGlow = glow;
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_PointSize = 100.0;
gl_Position = projectionMatrix * mvPosition;
}
Fragment shader
varying vec3 vGlow;
void main() {
float starLuminosity = 250.0;
float invRadius = 60.0;
float invGlowRadius = 2.5;
// Get position relative to center.
vec2 position = gl_PointCoord;
position.x -= 0.5;
position.y -= 0.5;
// Airy disk calculation.
float diskScale = length(position) * invRadius;
vec3 glow = vGlow / pow(diskScale, invGlowRadius);
glow *= starLuminosity;
gl_FragColor = vec4(glow, 1.0);
}
I've tried discarding pixels that are darker, but this does not solve the problem, it only hides it a tad:
if (gl_FragColor.r < 0.1 && gl_FragColor.g < 0.1 && gl_FragColor.b < 0.1) {
discard;
}
The actual effect I'm after is as follows,
but I have no idea how to achieve this.
Any advice will be appreciated.
You cannot achieve this effect in the fragment shader because you are rendering multiple meshes or primitives. You have to enable Blending before rendering the geometry:
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
Also make sure that the Depth test is disabled.
Additionally, you must derive the alpha channel from the glow rather than hard-coding it to 1.0, e.g. change:
gl_FragColor = vec4(glow, 1.0);
to:
gl_FragColor = vec4(glow, (glow.r + glow.g + glow.b) / 3.0 * 1.1 - 0.1);
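Since the question uses three.js, the same thing can be set up on the star's material; a minimal sketch (the material variable and shader names are assumed, the properties are standard three.js):
var starMaterial = new THREE.ShaderMaterial( {
vertexShader: starVertexShader,
fragmentShader: starFragmentShader,
transparent: true, // enables SRC_ALPHA / ONE_MINUS_SRC_ALPHA blending for this material
depthTest: false, // keeps the glow quads from occluding each other
depthWrite: false
} );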

OpenGL FPS Camera movement relative to lookAt target

I have a camera in OpenGL. I had no problem with it until adding an FPS controller. The basic FPS behaviour is OK: the camera moves forward, backward, left and right, and rotates towards the direction supplied by mouse input. The problems begin when the camera moves to the sides or behind the target position. In such a case the camera's local forward/backward/left/right directions aren't updated based on its current forward look, but remain the same as if it were right in front of the target. Example:
If the target object position is at (0,0,0) and the camera position is at (-50,0,0) (to the left of the target) and the camera is looking at the target, then to move it back and forth I have to use the keys for left and right movement, while the backward/forward keys move the camera sideways.
Here is the code I use to calculate camera position, rotation and LookAt matrix:
void LookAtTarget(const vec3 &eye,const vec3 &center,const vec3 &up)
{
this->_eye = eye;
this->_center = center;
this->_up = up;
this->_direction =normalize((center - eye));
_viewMatrix=lookAt( eye, center , up);
_transform.SetModel(_viewMatrix );
UpdateViewFrustum();
}
void SetPosition(const vec3 &position){
this->_eye=position;
this->_center=position + _direction;
LookAtTarget(_eye,_center,_up);
}
void SetRotation(float rz , float ry ,float rx){
_rotationMatrix=mat4(1);
vec3 direction(0.0f, 0.0f, -1.0f);
vec3 up(0.0f, 1.0f, 0.0f);
_rotationMatrix=eulerAngleYXZ(ry,rx,rz);
vec4 rotatedDir= _rotationMatrix * vec4(direction,1) ;
this->_center = this->_eye + vec3(rotatedDir);
this->_up =vec3( _rotationMatrix * vec4(up,1));
LookAtTarget(_eye, _center, up);
}
Then in the render loop I set camera's transformations:
while(true)
{
display();
fps->print(GetElapsedTime());
if(glfwGetKey(GLFW_KEY_ESC) || !glfwGetWindowParam(GLFW_OPENED)){
break;
}
calculateCameraMovement();
moveCamera();
view->GetScene()->GetCurrentCam()->SetRotation(0,-camYRot,-camXRot);
view->GetScene()->GetCurrentCam()->SetPosition(camXPos,camYPos,camZPos);
}
lookAt() method comes from GLM math lib.
I am pretty sure I have to multiply some of the vectors (eye, center, etc.) by the rotation matrix, but I am not sure which ones. I tried to multiply _viewMatrix by the _rotationMatrix but it creates a mess. The code for FPS camera position and rotation calculation is taken from here. But for the actual rendering I use the programmable pipeline.
Update:
I solved the issue by adding a separate method which doesn't calculate the camera matrix using lookAt, but rather uses the usual, basic approach:
void FpsMove(GLfloat x, GLfloat y , GLfloat z,float pitch,float yaw){
_viewMatrix =rotate(mat4(1.0f), pitch, vec3(1, 0, 0));
_viewMatrix=rotate(_viewMatrix, yaw, vec3(0, 1, 0));
_viewMatrix= translate(_viewMatrix, vec3(-x, -y, -z));
_transform.SetModel( _viewMatrix );
}
It solved the problem but I still want to know how to make it work with lookAt() methods I presented here.
You need to change the forward direction of the camera, which is presumably fixed to (0,0,-1). You can do this by rotating the directions about the y axis by camYRot (as computed in the lookat function) so that forwards is in the same direction that the camera is pointing (in the plane made by the z and x axes).
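To illustrate the idea, a small sketch (written with three.js-style vector math purely for consistency with the other examples on this page; the question's code uses GLM, and names like camYRot, moveSpeed and the input amounts are assumptions):
var yaw = THREE.MathUtils.degToRad( -camYRot );
var worldUp = new THREE.Vector3( 0, 1, 0 );
// Rotate the canonical forward/right vectors about the y axis by the current yaw...
var forward = new THREE.Vector3( 0, 0, -1 ).applyAxisAngle( worldUp, yaw );
var right = new THREE.Vector3( 1, 0, 0 ).applyAxisAngle( worldUp, yaw );
// ...and move along those instead of the fixed world axes.
cameraPosition.addScaledVector( forward, forwardInput * moveSpeed );
cameraPosition.addScaledVector( right, rightInput * moveSpeed );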

Why does my GLSL shader lighting move around the scene with the objects it's shining on?

I'm following a tutorial on OpenGL ES 2.0 and combining it with a tutorial on GLSL lighting that I found, using a handy Utah teapot from developer.apple.com.
After a lot of fiddling and experimentation I have the teapot drawn moderately correctly on the screen, spinning around all three axes with the 'toon shading' from the lighting tutorial working. There are a few glitches in the geometry due to my simply drawing the whole vertex list as triangle strips (if you look in the teapot.h file there are '-1' embedded where I'm supposed to start new triangle strips, but this is only test data and not relevant to my problem).
The bit I am really confused about is how to position a light in the scene. In my Objective-C code I have a float3 vector that contains {0,1,0} and pass that into the shader to then calculate the intensity of the light.
Why does the light appear to move in the scene too? What I mean is the light acts as though it's attached to the teapot by an invisible stick, always pointing at the same side of it no matter what direction the teapot is facing.
This is the vertex shader
attribute vec4 Position;
attribute vec4 SourceColor;
attribute vec3 Normal;
uniform mat4 Projection;
uniform mat4 Modelview;
varying vec3 normal;
void main(void) {
normal = Normal;
gl_Position = Projection * Modelview * Position;
}
'Position' is set by the Obj-C code and holds the vertices for the object; 'Normal' is the list of normals, both from a vertex array (VBO); 'Projection' and 'Modelview' are calculated like this:
(A CC3GLMatrix is from the Cocos3D library, mentioned in the GLES tutorial linked above)
CC3GLMatrix *projection = [CC3GLMatrix matrix];
float h = 4.0f * self.frame.size.height / self.frame.size.width;
[projection populateFromFrustumLeft:-2 andRight:2 andBottom:-h/2 andTop:h/2 andNear:1 andFar:100];
glUniformMatrix4fv(_projectionUniform, 1, 0, projection.glMatrix);
CC3GLMatrix *modelView = [CC3GLMatrix matrix];
[modelView populateFromTranslation:CC3VectorMake(0, 0, -7)];
[modelView scaleBy:CC3VectorMake(30, 30, 30)];
_currentRotation += displayLink.duration * 90;
[modelView rotateBy:CC3VectorMake(_currentRotation, _currentRotation, _currentRotation)];
glUniformMatrix4fv(_modelViewUniform, 1, 0, modelView.glMatrix);
And I set the light in the scene by doing
float lightDir[] = {1,0,1};
glUniform3fv(_lightDirUniform, 1, lightDir);
The fragment shader looks like this
varying lowp vec4 DestinationColor; // 1
varying highp vec3 normal;
uniform highp vec3 LightDir;
void main(void) {
highp float intensity;
highp vec4 color;
intensity = dot(LightDir,normal);
if (intensity > 0.95)
color = vec4(1.0,0.5,0.5,1.0);
else if (intensity > 0.5)
color = vec4(0.6,0.3,0.3,1.0);
else if (intensity > 0.25)
color = vec4(0.4,0.2,0.2,1.0);
else
color = vec4(0.2,0.1,0.1,1.0);
gl_FragColor = color;
}
While trying to work this out I came across code that references the (non-existent in GLES) 'gl_LightSource' and 'gl_NormalMatrix', but I don't know what to put into the equivalents I have to pass into the shaders from my code. The references to 'eye space', 'camera space', 'world space' and so on are confusing; I know I should probably be converting things between them, but I don't understand why or how (and where: in code, or in the shader?).
Do I need to modify the light source every frame? The code I have for setting it looks too simplistic. I'm not really moving the teapot around, am I? Instead I'm moving the entire scene, light and all, around?
First of all some definitions:
world space: the space your whole world is defined in. By convention it is a static space that never moves.
view space/camera space/eye space: the space your camera is defined in. It is usually a position and rotation relative to world space.
model space: the space your model is defined in. Like camera space, it is usually a position and rotation relative to world space.
light space: same as model space.
In simple examples (and I guess in yours) model space and world space are the same. In addition, OpenGL by itself doesn't have a concept of world space, which doesn't mean you cannot use one. It comes in handy when you want more than one object moving around independently in your scene.
Now, what you are doing with your object before rendering is creating a matrix that transforms the vertices of a model into view space, hence 'modelViewMatrix'.
With the light it's a little different in this case. The light calculation in your shader is done in model space, so you have to transform your light position into model space every frame.
This is done by calculating something like:
_lightDirUniform = inverseMatrix(model) * inverseMatrix(light) * lightPosition;
The light position is transformed from light space into world space and then into model space. If you don't have a world space, just leave out the model space transformation and you should be fine.
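A sketch of that per-frame transform (three.js-style math, used here only for consistency with the other examples on this page; the question's code uses Cocos3D in Objective-C, and rotX/rotY/rotZ stand in for the teapot's current rotation):
// Rebuild the rotation part of the model matrix the teapot is drawn with.
var modelMatrix = new THREE.Matrix4().makeRotationFromEuler( new THREE.Euler( rotX, rotY, rotZ ) );
var modelInverse = new THREE.Matrix4().copy( modelMatrix ).invert();
// Carry the fixed, world-space light direction into model space, where the shader does its lighting.
var lightDirWorld = new THREE.Vector3( 1, 0, 1 ).normalize();
var lightDirModel = lightDirWorld.clone().transformDirection( modelInverse );
// Upload lightDirModel to the LightDir uniform every frame; the highlight then stays on the lit side
// of the teapot instead of spinning with it.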

OpenGL ES 2.0 specifying normals for vertex shader

I am not able to get the right shading on all the faces of the cube I drew. I get a smooth transition from one face to another face, which does not show the edge properly.
On a different face (where I do get the desired edge), the shading is such that it shows the two triangles which make up that face.
I believe the problem is with the normals I am specifying. I am attaching my vertex and normal data and my vertex and fragment shader code. The vertex and normal arrays are the same.
I guess the problem is with the normals, but I have tried almost everything with them and the effect does not change.
//normal array and vertex array are the same
static const float normals[]=
{
//v0,v1,v2,v3,
//v4,v5,v6,v7
0.5,0.5,0.5, -0.5,0.5,0.5, -0.5,-0.5,0.5, 0.5,-0.5,0.5,
0.5,0.5,-0.5, -0.5,0.5,-0.5, -0.5,-0.5,-0.5, 0.5,-0.5,-0.5
};
//vertex shader
attribute vec4 color;
attribute vec4 position;
attribute vec4 normal;
uniform mat4 u_mvpMatrix;
uniform vec4 lightDirection;
uniform vec4 lightDiffuseColor;
uniform float translate;
varying vec4 frontColor;
//varying vec4 colorVarying;
void main()
{
vec4 normalizedNormal = normalize(u_mvpMatrix* normal);
vec4 normalizedLightDirection = normalize(lightDirection);
float nDotL = max(dot(normalizedNormal, normalizedLightDirection), 0.0);
frontColor = color * nDotL * lightDiffuseColor;
gl_Position = u_mvpMatrix * position;
}
//fragment shader
varying lowp vec4 frontColor;
void main()
{
gl_FragColor = frontColor;
}
Please help? Thanks in advance !!
Your problem is not related to your normal matrix, it is related to your input normals.
From what I read, you provide 8 normals, one for each vertex, meaning that you provide pre-averaged normals; there is no way your vertex shader can un-average them.
To have proper discontinuities in your lighting, you need discontinuous normals: for each quad, you have 4 vertices, each with a normal matching the quad normal. So you end up with 6x4 = 24 normals (and as many vertices).
You may want to take a look at this for a more detailed explanation:
http://www.songho.ca/opengl/gl_vertexarray.html
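To make the layout concrete, here is a sketch of the data for just the front (+z) face (hypothetical arrays, written JavaScript-style to match the other examples; the same pattern repeats for the remaining five faces, which is where the 24 vertices and normals come from):
// Four vertices of the front (+z) face...
var frontFaceVertices = [
0.5, 0.5, 0.5, -0.5, 0.5, 0.5, -0.5, -0.5, 0.5, 0.5, -0.5, 0.5
];
// ...each paired with the same face normal, so lighting is flat across the face
// and changes abruptly at the cube's edges.
var frontFaceNormals = [
0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0
];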
