Convert fragment shader gradient result from black to transparent in GL ES - math

I'm trying to generate realistic stars for an open source game I'm working on. I'm generating the stars using principles covered here. I'm using the three.js library in a Chromium engine (NW.js). The problem I've found is that the star glow fades into black instead of into transparency.
Whilst it looks nice for a single star,
multiple stars have a serious problem:
My code is as follows:
Vertex shader
attribute vec3 glow;
varying vec3 vGlow;
void main() {
vGlow = glow;
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_PointSize = 100.0;
gl_Position = projectionMatrix * mvPosition;
}
Fragment shader
varying vec3 vGlow;
void main() {
float starLuminosity = 250.0;
float invRadius = 60.0;
float invGlowRadius = 2.5;
// Get position relative to center.
vec2 position = gl_PointCoord;
position.x -= 0.5;
position.y -= 0.5;
// Airy disk calculation.
float diskScale = length(position) * invRadius;
vec3 glow = vGlow / pow(diskScale, invGlowRadius);
glow *= starLuminosity;
gl_FragColor = vec4(glow, 1.0);
}
I've tried discarding pixels that are darker, but this does not solve the problem, it only hides it a tad:
if (gl_FragColor.r < 0.1 && gl_FragColor.g < 0.1 && gl_FragColor.b < 0.1) {
discard;
}
The actual effect I'm after is as follows,
but I have no idea how to achieve this.
Any advice will be appreciated.

You cannot achieve this effect in the fragment shader alone, because you are rendering multiple separate primitives. You have to enable blending before rendering the geometry:
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
Also make sure that the Depth test is disabled.
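Conceptually, that blend function computes, per color channel, result = src * srcAlpha + dst * (1 - srcAlpha). A minimal sketch of the math in Python (the function name is illustrative, not part of any GL API):

```python
def blend(src_rgb, src_alpha, dst_rgb):
    # Per-channel gl.SRC_ALPHA / gl.ONE_MINUS_SRC_ALPHA blending:
    # result = src * srcAlpha + dst * (1 - srcAlpha)
    return [s * src_alpha + d * (1.0 - src_alpha)
            for s, d in zip(src_rgb, dst_rgb)]

# A fully transparent star fragment leaves the background untouched:
print(blend([1.0, 1.0, 1.0], 0.0, [0.1, 0.1, 0.1]))  # [0.1, 0.1, 0.1]
```

This is why the alpha channel matters: with alpha 0 the dark halo of one star no longer overwrites the star behind it.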
Additionally you must derive the alpha channel from the glow intensity instead of using a constant 1.0. E.g. change
gl_FragColor = vec4(glow, 1.0);
to:
gl_FragColor = vec4(glow, (glow.r + glow.g + glow.b) / 3.0 * 1.1 - 0.1);
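To see what that alpha expression does numerically, here is a sketch in Python (assuming the clamping to [0, 1] that the framebuffer applies to gl_FragColor):

```python
def star_alpha(r, g, b):
    # Average brightness, rescaled by * 1.1 - 0.1 so that dim fringe
    # pixels go fully transparent while the bright core stays opaque;
    # clamped to [0, 1] as a fixed-point framebuffer would do.
    return max(0.0, min(1.0, (r + g + b) / 3.0 * 1.1 - 0.1))

print(star_alpha(0.0, 0.0, 0.0))  # 0.0 -> black halo is invisible
print(star_alpha(1.0, 1.0, 1.0))  # 1.0 -> star core is opaque
```

The - 0.1 offset also removes the need for the discard hack, since near-black fragments simply blend to nothing.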

Related

Use glBlendFunc in QOpenGLWidget

I'm trying to use glBlendFunc in QOpenGLWidget (in paintGL), but the objects do not blend (alpha does work).
My code:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);
glBlendFunc(blenFunc, GL_ONE);
m_world.setToIdentity();
m_world.rotate((m_xRot / 16.0f), 1, 0, 0);
m_world.rotate(m_yRot / 16.0f, 0, 1, 0);
m_world.rotate(m_zRot / 16.0f, 0, 0, 1);
QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);
m_program->bind();
m_tex->bind();
fillYoffsetLightning();
const GLfloat scaleFactor = 0.05f;
m_world.scale(scaleFactor, scaleFactor, 0.0f);
m_world.translate(0.f, 0.0f, 0.0f);
const GLfloat fact = 1 / scaleFactor;
const uint8_t X = 0, Y = 1;
for(int i = 0; i < maxElem; ++i) {
const GLfloat offX = m_ELECT[i][X] * fact;
const GLfloat offY = m_ELECT[i][Y] * fact;
m_world.translate(offX, offY);
m_program->setUniformValue(m_projMatrixLoc, m_proj);
m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
QMatrix3x3 normalMatrix = m_world.normalMatrix();
m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
glDrawArrays(GL_TRIANGLE_FAN, 0, m_logo.vertexCount());
update();
m_world.translate(-offX, -offY);
}
m_program->release();
shaders are simple:
// vertex
"attribute highp vec4 color;\n"
"varying highp vec4 colorVertex;\n"
//......... main:
"colorVertex = color;\n"
// fragment
"varying highp vec4 colorVertex;\n"
//......... main:
"gl_FragColor = colorVertex;\n"
Color is:
a pentagon is drawn with a gradient from white at the center (1, 1, 1) to blue at the edges (0, 0, 0.5)
screenshoot
Why is this happening?
If you want to achieve a blending effect, then you have to disable the depth test:
glDisable(GL_DEPTH_TEST);
Note, the default depth test function is GL_LESS. If a fragment is drawn at the place of a previous fragment, then it is discarded by the depth test, because this condition is not fulfilled.
If the depth test is disabled, then the fragments are "blended" by the blending function (glBlendFunc) and equation (glBlendEquation).
I recommend using the following blending function:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
In my case (Qt 5.15.2) I found that using a color call with no alpha component (e.g. glColor3f(1, 0, 0)) causes blending to be disabled for any subsequent rendering. To my surprise I could not even recover it by re-issuing these commands:
glEnable(GL_BLEND); // wtf has no effect
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Blending simply remained disabled until the next paint begins. This did not happen with the original QGLWidget class. It only happens with QOpenGLWidget and only on Windows (Mac + Linux are fine).
The good-enough solution for me was to replace any non-alpha color calls with alpha equivalents, at least for cases where you need to use blending later in the render. Eg.
glColor3f(1,0,0); // before
glColor4f(1,0,0,1); // after
Another issue that might come up is if you use QPainter along with direct rendering, because the QPainter will trash your OpenGL state. See the mention of 'beginNativePainting' in the docs:
https://doc.qt.io/qt-5/qopenglwidget.html#painting-techniques
EDIT: I'll add this here because my comment on Rabbid's answer was deleted for some reason - the depth test does NOT need to be disabled to use blending. Rabbid might be thinking of disabling depth buffer writes which is sometimes done to allow drawing all translucent objects without having to sort them in order of furthest to nearest:
Why we disable Z-write in blending

When rotating 2D sprite towards cursor via angularVelocity, it spins at one point

Intro
I've created a spaceship sprite in my Unity project, I wanted it to rotate towards the cursor via angular velocity, because I'd like make my game to be heavily physics based.
Problem
Now my problem with rotating the sprite via angular velocity is the following:
At the -180°/180° boundary my ship spins around, because my mouse's angle is already 180° while my ship's rotation is still -180°, or the other way around.
I tried
I tried to solve it mathematically, but wasn't too successful: I could make it spin the right way, just much slower or faster, and I could fix the 180°/-180° point, but that only created two different problem points instead.
Looked for different solutions, but couldn't find a more fitting one.
Code
So I have this code for the rotation:
// Use this for initialization
void Start () {
rb = gameObject.GetComponent<Rigidbody2D>();
}
// Update is called once per frame
void Update () {
//getting mouse position in world units
mousePos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
//getting the angle of the ship -> cursor vector
angle = Mathf.Atan2(mousePos.y - transform.position.y, mousePos.x - transform.position.x) * Mathf.Rad2Deg;
//getting the angle between the ship -> cursor and the rigidbody.rotation vector
diffAngle = angle - (rb.rotation + 90);
//Increasing angular velocity scaling with the diffAngle
rb.angularVelocity = diffAngle * Time.deltaTime * PlayerShipStats.Instance.speed * 100f;
Thank you for your contribution in advance
Solution for Problem 1
Inserting this code made it work, but not for long:
if(diffAngle > 180) {
diffAngle -= 360;
} else if (diffAngle < -180) {
diffAngle += 360;
}
Problem 2 and Solution for Problem 2
The new problem is:
rigidbody.rotation can exceed its boundaries: it can be rotated by more than 360 degrees.
this code patched this bug:
if(rb.rotation + 90 >= 180) {
rb.rotation = -270;
} else if (rb.rotation + 90 <= -180) {
rb.rotation = 90;
}
The perfect code
void AimAtTarget(Vector2 target, float aimSpeed) {
//getting the angle of the this -> target vector
float targetAngle = Mathf.Atan2(target.y - transform.position.y, target.x - transform.position.x) * Mathf.Rad2Deg;
if (rb.rotation + 90 >= 180) {
rb.rotation = -270;
} else if (rb.rotation + 90 <= -180) {
rb.rotation = 90;
}
//getting the angle between the this -> target and the rigidbody.rotation vector
float diffAngle = targetAngle - (rb.rotation - 90);
if (diffAngle > 180) {
diffAngle -= 360;
} else if (diffAngle < -180) {
diffAngle += 360;
}
//Increasing angular velocity scaling with the diffAngle
rb.angularVelocity = diffAngle * Time.deltaTime * aimSpeed * 100;
}
There are two problems I see here:
Problem 1
angle is always going to be between -180 and 180, while rb.rotation is between 0 and 360. So you are comparing angles using two different notations. The first step is to get both angles returning -180 to 180 or 0 to 360. I chose to do the following which puts both angles between -180 and 180:
//getting the angle of the ship -> cursor vector
float targetAngle = Mathf.Atan2(
mousePos.y - transform.position.y,
mousePos.x - transform.position.x) * Mathf.Rad2Deg;
//get the current angle of the ship
float sourceAngle = Mathf.Atan2(
this.transform.up.y,
this.transform.up.x) * Mathf.Rad2Deg;
Problem 2
If you fix problem 1 and tried your app you would notice that the ship sometimes rotates the wrong way, although it will eventually get to its target. The problem is that diffAngle can sometimes give a result that is greater than +180 degrees (or less than -180). When this happens we actually want the ship to rotate the other direction. That code looks like this:
//getting the angle between the ship -> cursor and the rigidbody.rotation vector
float diffAngle = targetAngle - sourceAngle;
//use the smaller of the two angles to ensure we always turn the correct way
if (Mathf.Abs(diffAngle) > 180f)
{
diffAngle = sourceAngle - targetAngle;
}
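A common alternative formulation of the same fix (not the answer's exact code) is to wrap the raw difference into (-180°, 180°] in one step, which handles the seam and always yields the short-way turn; sketched here as plain Python math rather than Unity C#:

```python
def shortest_angle_delta(source_deg, target_deg):
    # Wrap the raw difference into (-180, 180] so the rotation always
    # takes the short way across the -180/180 seam.
    d = (target_deg - source_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

# Crossing the seam: from 170 deg to -170 deg is a +20 deg turn,
# not a -340 deg one.
print(shortest_angle_delta(170.0, -170.0))   # 20.0
print(shortest_angle_delta(-170.0, 170.0))   # -20.0
```

Feeding this delta into the angular velocity gives smooth rotation without any special-case branches.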
I made a simple Unity project to verify this works. I was able to rotate my ship in either direction smoothly.
One thing you may have to handle, if you don't already, is appropriately stopping the rotation of the ship when it is facing the cursor. In my test I noticed that the ship would jitter slightly when it reached its target, because it would (very) slightly overshoot the cursor's angle in one direction and then the other. The larger the value of PlayerShipStats.Instance.speed, the more pronounced this effect will likely be.

Vertex displacement on sphere break the mesh

I'm trying to make a simple noise effect on a sphere with shaders.
I tried to use ashima's Perlin noise, but the effect wasn't what I expected, so I created my own shader based on Phong.
Here is what I get with this code in my vertex shader:
attribute int index;
uniform float time;
vec3 newPosition = position + normal * vec3(sin((time * 0.001) * float(index)) * 0.05);
gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
where index is the index of the vertex and time the current elapsed time.
The noise effect is exactly what I expected but the sphere mesh is open...
How can I keep this effect and keep the sphere mesh closed?
Most likely your sphere contains duplicated vertices. Get rid of them and your shader will work well. Alternatively, remove your shader's dependency on index.
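If you go the deduplication route, the idea is to collapse coincident positions so each shared position gets a single index (and therefore a single displacement in the vertex shader). A sketch of that logic in illustrative Python (three.js also ships a helper along these lines, BufferGeometryUtils.mergeVertices):

```python
def merge_vertices(vertices, eps=1e-6):
    # Collapse coincident positions so every shared position maps to
    # exactly one index in the merged list.
    merged, index_map = [], []
    for v in vertices:
        for i, m in enumerate(merged):
            if all(abs(a - b) <= eps for a, b in zip(v, m)):
                index_map.append(i)
                break
        else:
            merged.append(v)
            index_map.append(len(merged) - 1)
    return merged, index_map

# The seam vertex (0, 0, 0) appears twice but maps to a single index:
merged, index_map = merge_vertices([(0, 0, 0), (1, 0, 0), (0, 0, 0)])
print(len(merged), index_map)  # 2 [0, 1, 0]
```

Once seam vertices share one index, the index-driven displacement is identical on both sides of the seam and the sphere stays closed.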

Why does my GLSL shader lighting move around the scene with the objects it's shining on?

I'm following a tutorial on OpenGL ES 2.0 and combining it with a tutorial on GLSL lighting that I found, using a handy Utah teapot from developer.apple.com.
After a lot of fiddling and experimentation I have the teapot drawn moderately correctly on the screen, spinning around all three axes with the 'toon shading' from the lighting tutorial working. There are a few glitches in the geometry due to me simply drawing the whole vertex list as triangle strips (if you look in the teapot.h file there are '-1' embedded where I'm supposed to start new triangle strips, but this is only test data and not relevant to my problem).
The bit I am really confused about is how to position a light in the scene. In my Objective-C code I have a float3 vector that contains {0,1,0} and pass that into the shader to then calculate the intensity of the light.
Why does the light appear to move in the scene too? What I mean is the light acts as though it's attached to the teapot by an invisible stick, always pointing at the same side of it no matter what direction the teapot is facing.
This is the vertex shader
attribute vec4 Position;
attribute vec4 SourceColor;
attribute vec3 Normal;
uniform mat4 Projection;
uniform mat4 Modelview;
varying vec3 normal;
void main(void) {
normal = Normal;
gl_Position = Projection * Modelview * Position;
}
'Position' is set by the Obj-C code and is the vertices for the object, 'Normal' is the list of normals both from a vertex array (VBO), 'Projection' and 'Modelview' are calculated like this:
(A CC3GLMatrix is from the Cocos3D library, mentioned in the GLES tutorial linked above)
CC3GLMatrix *projection = [CC3GLMatrix matrix];
float h = 4.0f * self.frame.size.height / self.frame.size.width;
[projection populateFromFrustumLeft:-2 andRight:2 andBottom:-h/2 andTop:h/2 andNear:1 andFar:100];
glUniformMatrix4fv(_projectionUniform, 1, 0, projection.glMatrix);
CC3GLMatrix *modelView = [CC3GLMatrix matrix];
[modelView populateFromTranslation:CC3VectorMake(0, 0, -7)];
[modelView scaleBy:CC3VectorMake(30, 30, 30)];
_currentRotation += displayLink.duration * 90;
[modelView rotateBy:CC3VectorMake(_currentRotation, _currentRotation, _currentRotation)];
glUniformMatrix4fv(_modelViewUniform, 1, 0, modelView.glMatrix);
And I set the light in the scene by doing
float lightDir[] = {1,0,1};
glUniform3fv(_lightDirUniform, 1, lightDir);
The fragment shader looks like this
varying lowp vec4 DestinationColor; // 1
varying highp vec3 normal;
uniform highp vec3 LightDir;
void main(void) {
highp float intensity;
highp vec4 color;
intensity = dot(LightDir,normal);
if (intensity > 0.95)
color = vec4(1.0,0.5,0.5,1.0);
else if (intensity > 0.5)
color = vec4(0.6,0.3,0.3,1.0);
else if (intensity > 0.25)
color = vec4(0.4,0.2,0.2,1.0);
else
color = vec4(0.2,0.1,0.1,1.0);
gl_FragColor = color;
}
While trying to work this out I came across code that references gl_LightSource and gl_NormalMatrix (which don't exist in GLES), but I don't know what to put into the equivalents I have to pass into the shaders from my code. The references to 'eye space', 'camera space', 'world space' and so on are confusing; I know I should probably be converting things between them, but I don't understand why or how (and where: in code, or in the shader?).
Do I need to modify the light source every frame? The code I have for setting it looks too simplistic. I'm not really moving the teapot around, am I? Instead I'm moving the entire scene, light and all, around.
First of all some definitions:
world space: the space your whole world is defined in. By convention it is a static space that never moves.
view space/camera space/eye space: the space your camera is defined in. It is usually a position and rotation relative to world space.
model space: the space your model is defined in. Like camera space, it is usually a position and rotation relative to world space.
light space: the same as model space, but for the light.
In simple examples (and I guess in yours) model space and world space are the same. In addition, OpenGL by itself doesn't have a concept of world space, which doesn't mean you cannot use one. It comes in handy when you want to have more than one object moving around independently in your scene.
Now, what you are doing with your object before rendering is creating a matrix that transforms the vertices of a model into viewspace, hence 'modelViewMatrix'.
With light in this case it's a little different. The light calculation in your shader is done in model space, so you have to transform your light position into model space every frame.
This is done by calculating something like:
_lightDirUniform = inverseMatrix(model) * inverseMatrix(light) * lightPosition;
The light position is transformed from light space into world space and then into model space. If you don't have a world space, just leave out the model-space transformation and you should be fine.
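For the common case where the model transform is a pure rotation, the inverse is just the transpose, so bringing a fixed world-space light direction into model space is one transposed matrix-vector multiply. A numeric sketch of that idea (illustrative Python, not the asker's Objective-C or the CC3GLMatrix API):

```python
import math

def rotation_y(deg):
    # 3x3 rotation about the Y axis, as row-major nested lists
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def transpose(m):
    return [list(col) for col in zip(*m)]

def mat_vec(m, v):
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

# Model is rotated 90 degrees about Y. Applying the inverse rotation
# (= transpose, for a pure rotation) to the world-space light gives the
# light's direction as seen from the model, so the lit side stays fixed
# in the world while the model spins.
model = rotation_y(90.0)
light_world = [1.0, 0.0, 0.0]
light_model = mat_vec(transpose(model), light_world)  # ~ [0, 0, 1]
```

Doing this per frame (with whatever the current model rotation is) is exactly the "transform the light into model space every frame" step described above.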

OpenGL ES 2.0 specifying normals for vertex shader

I am not able to get the right shading on all the faces of the cube I drew. I get a smooth transition from one of the face to another face which does not show the edge properly.
On a different face (where I get the desired edge), the shading reveals the two triangles which make up that face.
I believe the problem is with the normals I am specifying. I am attaching my normal array and my vertex and fragment shader code; the vertex and normal arrays are the same.
I guess the problem is with the normals but I have tried almost everything with them but the effect does not change.
//normal matrix and vertex matrix are same
static const float normals[]=
{
//v0,v1,v2,v3,
//v4,v5,v6,v7
0.5,0.5,0.5, -0.5,0.5,0.5, -0.5,-0.5,0.5, 0.5,-0.5,0.5,
0.5,0.5,-0.5, -0.5,0.5,-0.5, -0.5,-0.5,-0.5, 0.5,-0.5,-0.5
};
//vertex shader
attribute vec4 color;
attribute vec4 position;
attribute vec4 normal;
uniform mat4 u_mvpMatrix;
uniform vec4 lightDirection;
uniform vec4 lightDiffuseColor;
uniform float translate;
varying vec4 frontColor;
//varying vec4 colorVarying;
void main()
{
vec4 normalizedNormal = normalize(u_mvpMatrix* normal);
vec4 normalizedLightDirection = normalize(lightDirection);
float nDotL = max(dot(normalizedNormal, normalizedLightDirection), 0.0);
frontColor = color * nDotL * lightDiffuseColor;
gl_Position = u_mvpMatrix * position;
}
//fragment shader
varying lowp vec4 frontColor;
void main()
{
gl_FragColor = frontColor;
}
Please help. Thanks in advance!
Your problem is not related to your normal matrix, it is related to your input normals.
From what I read, you provide 8 normals, one for each vertex, meaning that you provide pre-averaged normals; there is no way your vertex shader can un-average them.
To have proper discontinuities in your lighting, you need discontinuous normals: for each quad, you have 4 vertices, each with a normal matching the quad normal. So you end up with 6 × 4 = 24 normals (and as many vertices).
You may want to take a look at this for a more detailed explanation:
http://www.songho.ca/opengl/gl_vertexarray.html
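The 24-normal layout described above can be sketched like this (illustrative Python; the face ordering is an assumption, and the 24 positions would be duplicated the same way):

```python
# One normal per cube face, repeated for each of that face's 4 corners,
# giving 24 vertex normals (and 24 matching vertex positions).
FACE_NORMALS = [
    (0.0, 0.0, 1.0),  (0.0, 0.0, -1.0),   # front, back
    (1.0, 0.0, 0.0),  (-1.0, 0.0, 0.0),   # right, left
    (0.0, 1.0, 0.0),  (0.0, -1.0, 0.0),   # top, bottom
]
normals = [n for face in FACE_NORMALS for n in [face] * 4]
print(len(normals))  # 24
```

Because each face's four vertices all carry that face's normal, adjacent faces no longer share a vertex normal, and the lighting breaks cleanly at every edge.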

Resources