Example: the vertex shader
void main() {
gl_Position = vec4(0, 0, 0, 1);
gl_PointSize = 100.0;
}
The canvas is 1x5 pixels (width 1, height 5).
The fragment shader uses gl_FragCoord.
What will the values of gl_FragCoord be for these 5 pixels?
Cheers
For each pixel, gl_FragCoord.xy will be:
0.5, 4.5
0.5, 3.5
0.5, 2.5
0.5, 1.5
0.5, 0.5
gl_FragCoord is always the coordinate of the pixel currently being drawn. Your fragment shader will be called 5 times, once for each of the 5 pixels the 100x100 point covers (it is of course clipped to the size of the canvas, which you said is only 1x5 pixels).
Which pixel is currently being drawn depends on what you asked the GPU to do when you called gl.drawArrays or gl.drawElements.
The vertex shader above has no inputs that change. It will always try to draw a 100x100 pixel "point" in the center of the current viewport to whatever you're drawing to, assuming you passed gl.POINTS to gl.drawArrays. If you passed LINES or TRIANGLES or anything else it's not likely to draw anything, since the shader always sets gl_Position to vec4(0, 0, 0, 1), which would make a line of zero length or a triangle of zero size.
In the case of POINTS, gl_Position is converted from clip space to the pixel space of the thing you're drawing to (canvas or framebuffer). This conversion happens based on whatever you set gl.viewport to.
Generally you set gl.viewport to the size of the canvas. In this case:
const x = 0;
const y = 0;
const width = 1;
const height = 5;
gl.viewport(x, y, width, height);
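For reference, the clip-space to pixel-space conversion works out to roughly this (a sketch; variable names are illustrative, and the perspective divide by w is a no-op here since w = 1):

pixelX = viewportX + (clipX * 0.5 + 0.5) * viewportWidth;  // 0 becomes 0.5
pixelY = viewportY + (clipY * 0.5 + 0.5) * viewportHeight; // 0 becomes 2.5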
The conversion from clip space to pixel space via the viewport setting comes out with a position of (0.5, 2.5). From that pixel position a square is calculated based on gl_PointSize:
gl_Position = vec4(0, 0, 0, 1);
gl.viewport(0, 0, 1, 5);
pixelPosition becomes 0.5, 2.5
x1 = 0.5 - gl_PointSize / 2;
y1 = 2.5 - gl_PointSize / 2;
x2 = 0.5 + gl_PointSize / 2;
y2 = 2.5 + gl_PointSize / 2;
Which means the "POINT" you asked to be drawn goes from
x1 = -49.5
y1 = -47.5
x2 = 50.5
y2 = 52.5
That rectangle is much larger than the 1x5 canvas, but it is clipped, which results in the 5 pixels of the canvas being rendered to. For each pixel in the canvas the fragment shader is called. gl_FragCoord is the coordinate of the pixel currently being drawn to. The first pixel in the canvas (bottom left) always has a gl_FragCoord.xy of (0.5, 0.5). The pixel directly above that always has a gl_FragCoord.xy of (0.5, 1.5).
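If you want to verify this yourself, here's a minimal sketch (not from the question; the readback encoding is just for illustration) that draws the point on a 1x5 canvas and writes gl_FragCoord.y into the red channel so gl.readPixels can confirm the values above:

const canvas = document.createElement('canvas');
canvas.width = 1;
canvas.height = 5;
const gl = canvas.getContext('webgl');

const vsSrc = `
void main() {
  gl_Position = vec4(0, 0, 0, 1);
  gl_PointSize = 100.0;
}`;
const fsSrc = `
precision mediump float;
void main() {
  // encode gl_FragCoord.y (0.5 .. 4.5) into 0..1 for the red channel
  gl_FragColor = vec4(gl_FragCoord.y / 8.0, 0, 0, 1);
}`;

function compile(type, src) {
  const s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}

const prg = gl.createProgram();
gl.attachShader(prg, compile(gl.VERTEX_SHADER, vsSrc));
gl.attachShader(prg, compile(gl.FRAGMENT_SHADER, fsSrc));
gl.linkProgram(prg);
gl.useProgram(prg);

gl.viewport(0, 0, 1, 5);
gl.drawArrays(gl.POINTS, 0, 1);

const pixels = new Uint8Array(1 * 5 * 4);
gl.readPixels(0, 0, 1, 5, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
console.log(pixels); // red is roughly 16, 48, 80, 112, 143 from the bottom row to the top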
I'm new to GLSL and learning from the tutorial here.
(It's using ShaderToy)
https://gamedevelopment.tutsplus.com/tutorials/a-beginners-guide-to-coding-graphics-shaders--cms-23313
My question is why you can map the x coordinate into the range 0 to 1 by dividing fragCoord's x coordinate by iResolution (the screen size).
It might be just a math question, but I'm confused about what exactly "iResolution.x" indicates and what kind of calculation is made here. (Is it a vector division?)
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 xy = fragCoord.xy; // We obtain our coordinates for the current pixel
    xy.x = xy.x / iResolution.x; // We divide the coordinates by the screen size
    xy.y = xy.y / iResolution.y;
    // Now x is 0 for the leftmost pixel, and 1 for the rightmost pixel
    vec4 solidRed = vec4(0.0, 0.0, 0.0, 1.0); // This is actually black right now
    if (xy.x > 0.5) {
        solidRed.r = 1.0; // Set its red component to 1.0
    }
    fragColor = solidRed;
}
The other answers are correct. fragCoord is the pixel currently being drawn, and iResolution is the size of the screen, so
xy.x = xy.x / iResolution.x; //We divide the coordinates by the screen size
xy.y = xy.y / iResolution.y;
gives normalized values where xy.x goes from 0 to 1 across the screen and xy.y goes from 0 to 1 up the screen, which is exactly what the comments say.
It's important to note, though, that iResolution and fragCoord are user-defined variables. In this case I'm guessing you're getting this GLSL from Shadertoy. Those variables are not part of WebGL or GLSL; they are defined by Shadertoy, so their values and meaning are defined by Shadertoy.
Note that if you are new to GLSL and WebGL you might want to consider some WebGL tutorials. Also see this answer about Shadertoy.
iResolution.x is the width of your screen in pixels. Dividing the pixel x location by the total width transforms the location into a fraction of the screen width. So, if your screen is 1000 pixels wide, and your current position is x=500, xy.x = xy.x / iResolution.x; will convert xy.x to 0.500.
Say I have two rectangles, each with a 'connector' that points in a certain direction. The transform (location and angle) of the link is specified relative to the centre of its parent rectangle.
In the example below, rectangle A's link is (x: 0, y: -0.5, rotation: 0) while B's is (x: 0.5, y: 0, rotation: 45).
Two rectangles can 'plug in' to each other by rotating such that their links have the same coordinates and face opposite directions.
I'm trying to figure out how to calculate the transform of rectangle B relative to rectangle A after they are linked.
In this case, rectangle A is (0, 0, 0), A's link is (0, 0.5, 0), B's link is (0, 0.5, 180) and B is (~0.3, ~-0.8, 135).
Does anyone know how to calculate B's final position in the above example?
So you have base points A0 and B0, and link points AL and BL.
First you move B0 by the difference of AL and BL, so
B0' = B0 + AL - BL
Then you have to rotate this point around AL to get the final position:
B0''.X = AL.X + (B0.X - BL.X) * Cos(D) - (B0.Y - BL.Y) * Sin(D)
B0''.Y = AL.Y + (B0.X - BL.X) * Sin(D) + (B0.Y - BL.Y) * Cos(D)
where D is the angle of rotation
D = Pi - A_rotation - B_rotation
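In code, the whole answer can be sketched like this (a JavaScript rendering of the formulas above; the function name and point format are made up for illustration):

// b0 is B's base point; al and bl are the link points, each {x, y};
// aRot and bRot are the link rotations in radians.
function linkedTransform(b0, al, bl, aRot, bRot) {
  const d = Math.PI - aRot - bRot; // D = Pi - A_rotation - B_rotation
  const dx = b0.x - bl.x;          // B0 relative to its own link point
  const dy = b0.y - bl.y;          // (B0 + AL - BL, then re-centred on AL)
  return {
    x: al.x + dx * Math.cos(d) - dy * Math.sin(d),
    y: al.y + dx * Math.sin(d) + dy * Math.cos(d),
    rotation: d,
  };
}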
I've been trying to represent a 2d array of images as an isometric grid in Processing, however I cannot seem to get their placement right.
The images do not get placed next to each other (the tiles do not touch), even though the x and y points seem to indicate they should (the cartesian view works and the isometric conversion equations seem to be correct).
Here is what I mean:
I think I may be treating my translations and rotations wrong, but after hours of googling I cannot find how.
My full code for this implementation can be seen here. It is full Processing code and overcomplicated, but a simpler version can be seen below.
color grass = color(20, 255, 20); // Grass tiles lay within wall tiles. These are usually images, but here they are colours for simplicity
color wall = color(150, 150, 150);

void setup() {
  size(600, 600);
  noLoop();
}

void draw() {
  int rectWidth = 30;
  float scale = 2; // Used to grow the shapes larger
  float gap = rectWidth * scale; // The gap between each "tile", to allow tiles to fit next to each other
  int rows = 4, cols = 4; // How many rows and columns there are in the grid
  translate(300, 200);
  for (int row = 0; row < rows; row++) {
    for (int col = 0; col < cols; col++) {
      /* x and y calculations */
      float cartesianX = col * gap; // The standard cartesian x and y points. These place the tiles next to each other on the cartesian plane
      float cartesianY = row * gap;
      float isometricX = (cartesianX - cartesianY); // The isometric x and y points. The equations calculate them from the cartesian ones
      float isometricY = (cartesianX + cartesianY) / 2;
      /* transformations and placement */
      pushMatrix(); // Pushes the transform and rotate matrix onto a stack, allowing it to be reset after each loop
      translate(isometricX, isometricY); // Translate to the point where the tile needs to be placed
      scale(scale, scale / 2); // Scale the tile, making it twice as wide as it is high
      rotate(radians(45)); // Rotate the tile into place
      // Work out what colour to set the box to
      if (row == 0 || col == 0 || row == rows - 1 || col == cols - 1) fill(wall);
      else fill(grass);
      rect(0, 0, rectWidth, rectWidth);
      popMatrix();
    }
  }
}
Let's look closer at how you're using two values:
int rectWidth = 30;
This is the size of the rectangles. Makes sense.
float gap = rectWidth * scale;
This is the distance between the left sides of adjacent rectangles. In other words, you're using this to place the rectangles. When it is greater than the size of the rectangles, you'll have space between them. And since you're multiplying rectWidth by scale (which is 2), it's going to be greater than rectWidth.
In other words, if you make your gap equal to rectWidth, you don't get any spaces:
float gap = rectWidth;
Of course, that means you can probably get rid of your gap variable entirely, but it might come in handy if you want to space the rectangles out to make their borders thicker or something.
I would like to draw a textured circle in Direct3D which looks like a real 3D sphere. For this purpose, I took a texture of a billiard ball and tried to write a pixel shader in HLSL which maps it onto a simple pre-transformed quad in such a way that it looks like a 3-dimensional sphere (apart from the lighting, of course).
This is what I've got so far:
struct PS_INPUT
{
    float2 Texture : TEXCOORD0;
};

struct PS_OUTPUT
{
    float4 Color : COLOR0;
};

sampler2D Tex0;

// main function
PS_OUTPUT ps_main( PS_INPUT In )
{
    // default color for points outside the sphere (alpha=0, i.e. invisible)
    PS_OUTPUT Out;
    Out.Color = float4(0, 0, 0, 0);

    float pi = acos(-1);

    // map texel coordinates to [-1, 1]
    float x = 2.0 * (In.Texture.x - 0.5);
    float y = 2.0 * (In.Texture.y - 0.5);
    float r = sqrt(x * x + y * y);

    // if the texel is not inside the sphere
    if (r > 1.0f)
        return Out;

    // 3D position on the front half of the sphere
    float p[3] = {x, y, sqrt(1 - x*x + y*y)};

    // calculate UV mapping
    float u = 0.5 + atan2(p[2], p[0]) / (2.0*pi);
    float v = 0.5 - asin(p[1]) / pi;

    // do some simple antialiasing
    float alpha = saturate((1-r) * 32); // scale by half quad width

    Out.Color = tex2D(Tex0, float2(u, v));
    Out.Color.a = alpha;
    return Out;
}
The texture coordinates of my quad range from 0 to 1, so I first map them to [-1, 1]. After that I followed the formula in this article to calculate the correct texture coordinates for the current point.
At first, the outcome looked ok, but I'd like to be able to rotate this illusion of a sphere arbitrarily. So I gradually increased u in the hope of rotating the sphere around the vertical axis. This is the result:
As you can see, the imprint of the ball looks unnaturally deformed when it reaches the edge. Can anyone see any reason for this? And additionally, how could I implement rotations around an arbitrary axis?
Thanks in advance!
I finally found the mistake by myself: the calculation of the z value which corresponds to the current point (x, y) on the front half of the sphere was wrong. It must of course be:
float p[3] = {x, y, sqrt(1 - x*x - y*y)};
That's all, it works as expected now. Furthermore, I figured out how to rotate the sphere: you just have to rotate the point p before calculating u and v, by multiplying it with a 3D rotation matrix like this one for example.
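In JavaScript form, the rotation step looks something like the sketch below (the HLSL version is the same multiply written with a float3x3; the function name is made up):

// Rotate the sphere-surface point p = [x, y, z] around the vertical axis
// by `angle` radians, before computing u and v from the rotated point.
function rotateAroundY(p, angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return [
    c * p[0] + s * p[2],  // x'
    p[1],                 // y is unchanged
    -s * p[0] + c * p[2], // z'
  ];
}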
The result looks like the following:
If anyone has any advice as to how I could smooth the texture a little bit, please leave a comment.
How can one calculate the camera distance from an object in 3D space (an image in this case) such that the image is displayed at its original pixel width?
Am I right in assuming that this is possible given the aspect ratio of the camera, fov, and the original width/height of the image in pixels?
(In case it is relevant, I am using THREE.js in this particular instance).
Thanks to anyone who can help or lead me in the right direction!
Thanks everyone for all the input!
After doing some digging and then working out how this all fits into the exact problem I was trying to solve with THREE.js, this was the answer I came up with in JavaScript as the target Z distance for displaying things at their original scale:
var vFOV = this.camera.fov * (Math.PI / 180); // convert VERTICAL fov to radians
var targetZ = window.innerHeight / (2 * Math.tan(vFOV / 2));
I was trying to figure out which one to mark as the answer but I kind of combined all of them into this solution.
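For context, here's how that drops into a standard THREE.PerspectiveCamera setup (a sketch; the fov and plane values are just examples):

const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 10000);
const vFOV = camera.fov * (Math.PI / 180);                         // vertical fov in radians
camera.position.z = window.innerHeight / (2 * Math.tan(vFOV / 2)); // 1:1 pixel scale at z = 0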
Trigonometrically:
A line segment of length l parallel to the view plane, at a perpendicular distance of n from the camera, will subtend an angle of arctan(l/n) at the camera. You can arrive at that result by simple trigonometry.
Hence if your field of view in the direction of the line is q, amounting to p pixels, you'll end up occupying p*arctan(l/n)/q pixels.
So, using y as the output number of pixels:
y = p*arctan(l/n)/q
y*q/p = arctan(l/n)
l/tan(y*q/p) = n
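As a small helper function (a sketch; q must be in radians for Math.tan):

// distance n at which a segment of world length l covers y pixels,
// given a field of view of q radians spanning p pixels
function distanceForPixelLength(l, y, p, q) {
  return l / Math.tan(y * q / p);
}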
Linear algebra:
In a camera with a field of view of 90 degrees and a viewport 2w pixels wide, the projection into screen space is equivalent to:
x' = w - w*x/z
When perpendicular, the length of a line on screen is the difference between two such x's, so by normal associativity and commutativity rules (the w terms cancel):
l' = w*l/z
Hence:
z = w*l/l'
If your field of view is actually q degrees rather than 90 then you can use the cotangent to scale appropriately.
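Concretely, with a full field of view of q (in radians) the projection generalizes to x' = w - (w / tan(q/2)) * x/z, so z = w*l / (l' * tan(q/2)); at q = 90 degrees this reduces to z = w*l/l' above, since tan(45°) = 1.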
In your original question you said that you're using CSS3D. I suggest that you do the following:
Set up an orthographic camera with fov = 1..179 degrees, where left = screenWidth / 2, right = screenWidth / -2, top = screenHeight / 2, bottom = screenHeight / -2. Near and far planes do not affect CSS3D rendering as far as I can tell from experience.
camera = new THREE.OrthographicCamera(left, right, top, bottom, near, far);
camera.fov = 75;
Now you need to calculate the distance between the camera and the object in such a way that, when the object is projected using the camera settings above, it has 1:1 coordinate correspondence on screen. This can be done in the following way:
var camscale = Math.tan(( camera.fov / 2 ) / 180 * Math.PI);
var camfix = screenHeight / 2 / camscale;
Place your div at position x, y, z.
Set the camera's position to 0, 0, z + camfix.
This should give you 1:1 coordinate correspondence between the rendered result and your pixel values in CSS / div styles. Remember that the origin is in the center and the object's position is the center of the object, so you need to make adjustments to get coordinates measured from the top-left corner, for example:
object.x = ( screenWidth - objectWidth ) / 2 + positionLeft
object.y = ( screenHeight - objectHeight ) / 2 + positionTop
object.z = 0
I hope this helps. I was struggling with the same thing (exact control of the CSS3D scene) but managed to figure out that the orthographic camera plus a viewport-size-adjusted distance from the object did the trick. Don't alter the camera rotation or its x and y coordinates; just fiddle with the z and you're safe.
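Putting the whole recipe together (a sketch under the assumptions above; screenWidth and screenHeight are your viewport size in pixels, and THREE's CSS3DRenderer is assumed):

const camera = new THREE.OrthographicCamera(
  screenWidth / 2, screenWidth / -2,   // left, right (flipped, as described above)
  screenHeight / 2, screenHeight / -2, // top, bottom
  1, 10000                             // near, far (no effect on CSS3D)
);
camera.fov = 75; // only used for the distance calculation below
const camscale = Math.tan((camera.fov / 2) / 180 * Math.PI);
const camfix = screenHeight / 2 / camscale;
camera.position.set(0, 0, camfix); // a div at z = 0 now maps 1:1 to screen pixels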