I am not able to get the right shading on all the faces of the cube I drew. I get a smooth transition from one face to another, which does not show the edge properly.
On a different face (where I do get the desired edge), the shading reveals the two triangles that make up that face.
I believe the problem is with the normals I am specifying. I am attaching my vertex and normal arrays and my vertex and fragment shader code. The vertex and normal arrays are the same.
I guess the problem is with the normals, but I have tried almost everything with them and the effect does not change.
// the normal array and the vertex array are the same
static const float normals[] =
{
    // v0, v1, v2, v3,
    // v4, v5, v6, v7
     0.5,  0.5,  0.5,   -0.5,  0.5,  0.5,   -0.5, -0.5,  0.5,    0.5, -0.5,  0.5,
     0.5,  0.5, -0.5,   -0.5,  0.5, -0.5,   -0.5, -0.5, -0.5,    0.5, -0.5, -0.5
};
// vertex shader
attribute vec4 color;
attribute vec4 position;
attribute vec4 normal;

uniform mat4 u_mvpMatrix;
uniform vec4 lightDirection;
uniform vec4 lightDiffuseColor;
uniform float translate;

varying vec4 frontColor;
//varying vec4 colorVarying;

void main()
{
    vec4 normalizedNormal = normalize(u_mvpMatrix * normal);
    vec4 normalizedLightDirection = normalize(lightDirection);
    float nDotL = max(dot(normalizedNormal, normalizedLightDirection), 0.0);
    frontColor = color * nDotL * lightDiffuseColor;
    gl_Position = u_mvpMatrix * position;
}
// fragment shader
varying lowp vec4 frontColor;

void main()
{
    gl_FragColor = frontColor;
}
Please help. Thanks in advance!
Your problem is not related to your normal matrix; it is related to your input normals.
From what I read, you provide 8 normals, one for each vertex, meaning that you provide pre-averaged normals; there is no way your vertex shader can un-average them.
To have proper discontinuities in your lighting, you need discontinuous normals: for each quad, you have 4 vertices, each with a normal matching the quad's normal. So you end up with 6 × 4 = 24 normals (and as many vertices).
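As an illustrative sketch of that layout (the face ordering and winding below are made up for the example, not taken from the question), the first two of the six faces would look like this, with every vertex of a face repeating the face normal:

```cpp
#include <cassert>

// Per-face vertices and normals for two faces of a unit cube centred
// at the origin. Every vertex of a face carries that face's normal, so
// shading is flat across each face and discontinuous at the shared
// edge. A full cube needs 6 faces x 4 vertices = 24 entries.
static const float faceVertices[] = {
    // front face (+Z): v0, v1, v2, v3
     0.5f,  0.5f,  0.5f,   -0.5f,  0.5f,  0.5f,
    -0.5f, -0.5f,  0.5f,    0.5f, -0.5f,  0.5f,
    // right face (+X): v0, v3, v7, v4
     0.5f,  0.5f,  0.5f,    0.5f, -0.5f,  0.5f,
     0.5f, -0.5f, -0.5f,    0.5f,  0.5f, -0.5f,
};

static const float faceNormals[] = {
    // front face: all four vertices share the +Z normal
    0.f, 0.f, 1.f,   0.f, 0.f, 1.f,   0.f, 0.f, 1.f,   0.f, 0.f, 1.f,
    // right face: all four vertices share the +X normal
    1.f, 0.f, 0.f,   1.f, 0.f, 0.f,   1.f, 0.f, 0.f,   1.f, 0.f, 0.f,
};
```

Note that vertex v0 appears twice, once per face, each time with a different normal; that duplication is what produces the hard edge.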
You may want to take a look at this for a more detailed explanation:
http://www.songho.ca/opengl/gl_vertexarray.html
Related
I have been trying to batch render two different pictures. I have two different QOpenGLTexture objects that I want to draw in a single draw call with batch rendering, but I am struggling. Both texture objects have IDs, but only the last texture object's image is drawn. I believe my problem is with setting up the uniform or the frag shader.
//..............Setting up uniform...............//
const GLuint vals[] = {m_texture1->textureId(), m_texture2->textureId()};
m_program->setUniformValueArray("u_TextureID", vals, 2);
//..............frag shader.....................//
#version 330 core

out vec4 color;

in vec2 v_textCoord; // texture coordinate
in float v_index;    // (0, 1) per-vertex value for which image to draw;
                     // 0 would draw the image of the first texture object

uniform sampler2D u_Texture[2];

void main()
{
    int index = int(v_index);
    color = texture(u_Texture[index], v_textCoord);
}
I've tried experimenting with the index value in the frag shader, but it only draws the last texture's image or goes black. I tried implementing it the way you would with plain OpenGL, but have had no luck.
As mentioned in my previous question, I am trying to import/export glTF models in R. The R 3D graphics engine that I'm using (rgl) is really old, and the rendering within R is done using OpenGL 1.x methods: material colors like GL_DIFFUSE, GL_AMBIENT, GL_SPECULAR and GL_EMISSION, as well as GL_SHININESS. It also uses WebGL 1 for web output.
I need to translate existing code using these parameters into PBR parameters for output to glTF, and translate glTF PBR parameters into the older model when reading.
Currently I have the following:
baseColorFactor in material.pbrMetallicRoughness (and the corresponding textures) corresponds to the diffuse color.
emissiveFactor corresponds to the emission color.
However, I have no idea how to approximate the other material components in the old style.
I'm hoping this has been done before; can anyone provide formulas for the conversions, or a pointer to a source so I can work them out myself?
Unfortunately, there is no direct conversion between PBR and legacy OpenGL material models.
The following pseudo-formula might help as a starting point:
struct PbrMaterial
{
    vec4  BaseColor; //!< base color + alpha
    vec3  Emission;  //!< emission
    float Metallic;  //!< metalness factor
    float Roughness; //!< roughness factor
};

struct CommonMaterial
{
    vec4  Diffuse;   //!< diffuse RGB coefficients + alpha (GL_DIFFUSE)
    vec4  Ambient;   //!< ambient RGB coefficients (GL_AMBIENT)
    vec4  Specular;  //!< glossy RGB coefficients (GL_SPECULAR)
    vec4  Emission;  //!< material RGB emission (GL_EMISSION)
    float Shininess; //!< shininess (GL_SHININESS in 0..128 range)
};

CommonMaterial pbrToCommon (const PbrMaterial& thePbr)
{
    CommonMaterial aCommon;
    aCommon.Diffuse   = thePbr.BaseColor;
    aCommon.Ambient   = thePbr.BaseColor * 0.25;
    aCommon.Specular  = vec4 (thePbr.Metallic, thePbr.Metallic, thePbr.Metallic, 1.0);
    aCommon.Emission  = vec4 (thePbr.Emission, 1.0);
    aCommon.Shininess = 128.0 * (1.0 - thePbr.Roughness);
    return aCommon;
}
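For concreteness, here is a self-contained version of the same pseudo-formula, with plain float arrays standing in for vec3/vec4 so the mapping can be checked numerically (the struct and function names here are mine, and the 0.25 ambient factor and 128-based shininess are the same rough approximations as above, not calibrated constants):

```cpp
#include <cassert>
#include <cmath>

struct PbrIn   { float baseColor[4]; float emission[3]; float metallic; float roughness; };
struct FixedFn { float diffuse[4]; float ambient[4]; float specular[4]; float emission[4]; float shininess; };

FixedFn pbrToFixedFunction(const PbrIn& pbr) {
    FixedFn m;
    for (int i = 0; i < 4; ++i) {
        m.diffuse[i] = pbr.baseColor[i];         // GL_DIFFUSE  <- base color
        m.ambient[i] = pbr.baseColor[i] * 0.25f; // GL_AMBIENT  <- dimmed base color
    }
    m.specular[0] = m.specular[1] = m.specular[2] = pbr.metallic; // GL_SPECULAR <- metalness
    m.specular[3] = 1.0f;
    for (int i = 0; i < 3; ++i) m.emission[i] = pbr.emission[i];  // GL_EMISSION <- emissive
    m.emission[3] = 1.0f;
    m.shininess = 128.0f * (1.0f - pbr.roughness); // GL_SHININESS in 0..128
    return m;
}
```

For example, a material with metallic 0.5 and roughness 0.25 maps to mid-grey specular and shininess 96.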
As an extra note, PBR (as in glTF) normally uses linear RGB color values, while legacy OpenGL usually rendered without conversion into the non-linear sRGB color space used by most displays. If the WebGL 1.0 renderer in question doesn't perform gamma correction, you can cheat during the conversion into the Diffuse/Ambient/Specular/Emission vectors to get more closely correlated visual results (though they will still be quite inconsistent)...
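One pragmatic form of that cheat, assuming the legacy renderer applies no gamma correction at all, is to pre-convert the linear PBR color channels into sRGB before storing them in the old material slots. This is the standard per-channel linear-to-sRGB transfer function (IEC 61966-2-1):

```cpp
#include <cassert>
#include <cmath>

// Standard linear -> sRGB transfer function, applied per color channel.
// Values near zero use the linear segment; the rest use the 2.4 power
// curve with the 1.055 / 0.055 scale-and-offset.
float linearToSrgb(float linear) {
    if (linear <= 0.0031308f)
        return 12.92f * linear;
    return 1.055f * std::pow(linear, 1.0f / 2.4f) - 0.055f;
}
```

Applying this to each of R, G and B (leaving alpha alone) makes mid-tones noticeably brighter, which roughly compensates for the missing gamma step in the old pipeline.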
I'm trying to make a simple noise effect on a sphere with shaders.
I tried to use ashima's Perlin noise, but the effect wasn't what I expected, so I created my own shader based on Phong.
Here is what I get with this code in my vertex shader:
attribute int index;
uniform float time;
vec3 newPosition = position + normal * vec3(sin((time * 0.001) * float(index)) * 0.05);
gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
where index is the index of the vertex and time is the current elapsed time.
The noise effect is exactly what I expected but the sphere mesh is open...
How can I keep this effect and keep the sphere mesh closed?
Most likely your sphere contains duplicated vertices. Get rid of them and your shader will work well. Alternatively, remove your shader's dependency on "index".
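A sketch of what getting rid of duplicates (vertex welding) could look like: map identical positions to a single index, so seam vertices share one entry and therefore one "index" attribute value. This assumes positions match bit-exactly; real meshes may need an epsilon-based comparison instead.

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

struct Welded {
    std::vector<std::array<float, 3>> positions; // unique positions
    std::vector<std::uint32_t> remap;            // old index -> new index
};

// Collapse bit-identical positions into one shared vertex.
Welded weldVertices(const std::vector<std::array<float, 3>>& positions) {
    Welded out;
    std::map<std::array<float, 3>, std::uint32_t> seen;
    for (const auto& p : positions) {
        auto it = seen.find(p);
        if (it == seen.end()) {
            it = seen.emplace(p, static_cast<std::uint32_t>(out.positions.size())).first;
            out.positions.push_back(p);
        }
        out.remap.push_back(it->second);
    }
    return out;
}
```

The remap table is then used to rewrite the element/index buffer so both halves of the seam reference the same welded vertex.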
I have a little program that renders a yellow triangle twice, once on the left half of a framebuffer and once on the right half.
Dump of the texture
Now, after that I render the content of this framebuffer on the screen.
It works if I use GL_TEXTURE_RECTANGLE in the framebuffer constructor:
https://github.com/elect86/Joglus/blob/master/Joglolus/src/joglus/example1/FrameBuffer.java
The texture is bound in the function renderFullScreenQuad, line 372:
https://github.com/elect86/Joglus/blob/master/Joglolus/src/joglus/example1/GlViewer.java
And using sampler2DRect in the fragment shader:
#version 330
out vec4 outputColor;
uniform sampler2DRect texture0;
void main() {
outputColor = texture(texture0, gl_FragCoord.xy);
}
But if I change all the RECTANGLE to 2D and use sampler2D in the fragment shader, I get a totally black image at the end of display(), although the dump of the texture always shows the correct image... I would like to know why.
Texture coordinates work differently between textures of types GL_TEXTURE_RECTANGLE and GL_TEXTURE_2D:
For GL_TEXTURE_RECTANGLE, the range of texture coordinates corresponding to the entire texture image is [0.0, width] x [0.0, height]. In other words, the unit of the texture coordinates is in pixels of the texture image.
For GL_TEXTURE_2D, the range of texture coordinates is [0.0, 1.0] x [0.0, 1.0].
With this statement in your fragment shader:
outputColor = texture(texture0, gl_FragCoord.xy);
you are using coordinates in pixel units as texture coordinates. Based on the above, this will work for the RECTANGLE texture, but not for 2D.
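To make the difference concrete, here is the same lookup point expressed in both conventions (the texture size is a made-up example, not taken from the question):

```cpp
#include <cassert>

// The texel at pixel (256, 128) of a hypothetical 512x256 texture sits
// at the centre in both systems, but the coordinate values differ:
// GL_TEXTURE_RECTANGLE addresses it in pixels, GL_TEXTURE_2D in a
// normalized [0, 1] range obtained by dividing by the texture size.
const float texWidth = 512.0f, texHeight = 256.0f;
const float px = 256.0f, py = 128.0f;  // GL_TEXTURE_RECTANGLE coordinates
const float u = px / texWidth;         // GL_TEXTURE_2D coordinates
const float v = py / texHeight;
```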
Since your original input coordinates in the vertex shader appear to be in the range [0.0, 1.0], the easiest approach to fix this is to pass the untransformed coordinates from vertex shader to fragment shader, and use them as texture coordinates. The vertex shader would then look like this:
#version 330

layout (location = 0) in vec2 position;
out vec2 texCoord;
uniform mat4 modelToClipMatrix;

void main() {
    gl_Position = modelToClipMatrix * vec4(position, 0, 1);
    texCoord = position;
}
And the fragment shader:
#version 330

in vec2 texCoord;
out vec4 outputColor;
uniform sampler2D texture0;

void main() {
    outputColor = texture(texture0, texCoord);
}
I'm following a tutorial on OpenGL ES 2.0 and combining it with a tutorial on GLSL lighting that I found, using a handy Utah teapot from developer.apple.com.
After a lot of fiddling and experimentation I have the teapot drawn moderately correctly on the screen, spinning around all three axes, with the 'toon shading' from the lighting tutorial working. There are a few glitches in the geometry because I simply draw the whole vertex list as triangle strips (in the teapot.h file there are '-1' markers where I'm supposed to start new triangle strips, but this is only test data and not relevant to my problem).
The bit I am really confused about is how to position a light in the scene. In my Objective-C code I have a float3 vector containing {0,1,0}, and I pass that into the shader to calculate the intensity of the light.
Why does the light appear to move in the scene too? What I mean is the light acts as though it's attached to the teapot by an invisible stick, always pointing at the same side of it no matter what direction the teapot is facing.
This is the vertex shader
attribute vec4 Position;
attribute vec4 SourceColor;
attribute vec3 Normal;

uniform mat4 Projection;
uniform mat4 Modelview;

varying vec3 normal;

void main(void) {
    normal = Normal;
    gl_Position = Projection * Modelview * Position;
}
'Position' is set by the Obj-C code and holds the vertices of the object; 'Normal' is the list of normals, both from a vertex buffer (VBO). 'Projection' and 'Modelview' are calculated like this:
(A CC3GLMatrix is from the Cocos3D library, mentioned in the GLES tutorial linked above)
CC3GLMatrix *projection = [CC3GLMatrix matrix];
float h = 4.0f * self.frame.size.height / self.frame.size.width;
[projection populateFromFrustumLeft:-2 andRight:2 andBottom:-h/2 andTop:h/2 andNear:1 andFar:100];
glUniformMatrix4fv(_projectionUniform, 1, 0, projection.glMatrix);
CC3GLMatrix *modelView = [CC3GLMatrix matrix];
[modelView populateFromTranslation:CC3VectorMake(0, 0, -7)];
[modelView scaleBy:CC3VectorMake(30, 30, 30)];
_currentRotation += displayLink.duration * 90;
[modelView rotateBy:CC3VectorMake(_currentRotation, _currentRotation, _currentRotation)];
glUniformMatrix4fv(_modelViewUniform, 1, 0, modelView.glMatrix);
And I set the light in the scene by doing
float lightDir[] = {1,0,1};
glUniform3fv(_lightDirUniform, 1, lightDir);
The fragment shader looks like this:
varying lowp vec4 DestinationColor;
varying highp vec3 normal;

uniform highp vec3 LightDir;

void main(void) {
    highp float intensity;
    highp vec4 color;

    intensity = dot(LightDir, normal);
    if (intensity > 0.95)
        color = vec4(1.0, 0.5, 0.5, 1.0);
    else if (intensity > 0.5)
        color = vec4(0.6, 0.3, 0.3, 1.0);
    else if (intensity > 0.25)
        color = vec4(0.4, 0.2, 0.2, 1.0);
    else
        color = vec4(0.2, 0.1, 0.1, 1.0);

    gl_FragColor = color;
}
While trying to work this out I came across code that references the (non-existent in GLES) 'gl_LightSource' and 'gl_NormalMatrix', but I don't know what to pass into the shaders from my code as equivalents. The references to 'eye space', 'camera space', 'world space' and so on are confusing; I know I should probably be converting things between them, but I don't understand why or how (and where: in code, or in the shader?).
Do I need to modify the light source every frame? The code I have for setting it looks too simplistic. I'm not really moving the teapot around, am I; instead I'm moving the entire scene, light and all, around?
First of all, some definitions:
world space: the space your whole world is defined in. By convention it is a static space that never moves.
view space/camera space/eye space: the space your camera is defined in. It is usually a position and rotation relative to world space.
model space: the space your model is defined in. Like camera space, it is usually a position and rotation relative to world space.
light space: the same as model space, but for the light.
In simple examples (and I guess in yours) model space and world space are the same. In addition, OpenGL by itself doesn't have a concept of world space, which doesn't mean you cannot use one. It comes in handy when you want more than one object moving around independently in your scene.
Now, what you are doing with your object before rendering is creating a matrix that transforms the vertices of the model into view space, hence 'modelViewMatrix'.
With the light it's a little different in this case. The light calculation in your shader is done in model space, so you have to transform your light position into model space every frame.
This is done by calculating something like:
_lightDirUniform = inverseMatrix(model) * lightMatrix * lightPosition;
The light position is transformed from light space into world space (by the light's matrix) and then into model space (by the inverse model matrix). If you don't have a separate world space, just leave out the model-space transformation and you should be fine.
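As a hypothetical sketch of that per-frame transform, for the common case where the model matrix is a pure rotation (so its inverse is just its transpose):

```cpp
#include <cassert>
#include <cmath>

// For a rotation-only model matrix, inverse == transpose, so a
// world-space light direction can be brought into the model's space by
// multiplying with the transposed rotation. Doing this every frame is
// what keeps the light fixed in the world while the model spins.
// (3x3 row-major matrices for brevity.)
void rotationY(float angleRad, float m[9]) {
    float c = std::cos(angleRad), s = std::sin(angleRad);
    m[0] = c;    m[1] = 0.0f; m[2] = s;
    m[3] = 0.0f; m[4] = 1.0f; m[5] = 0.0f;
    m[6] = -s;   m[7] = 0.0f; m[8] = c;
}

// out = transpose(m) * v, i.e. inverse(m) * v for a pure rotation
void mulTransposed(const float m[9], const float v[3], float out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = m[0 * 3 + i] * v[0] + m[1 * 3 + i] * v[1] + m[2 * 3 + i] * v[2];
}
```

With a full model matrix (translation, scale, rotation), you would invert the complete 4x4 matrix instead of transposing.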