I'm adding lines to my 3D world like we see in 3D Studio Max. To draw lines I'm using a cylinder mesh and simply stretching/rotating it appropriately. That's all working fine, but my problem is scale. Since it's 3D geometry rendered in perspective, its size changes with distance: far away it's small to the point of being invisible, up close it's huge.
I want to make it so the size of the line geometry stays the same. I tried toying around with orthographic projection but came up with nothing. Any ideas?
Well, you could easily write a shader to get around that problem. Basically you need to push the geometry out proportionally to the w value you generate: if the cylinder has a width of r, then you can cancel out the perspective by pushing it out to (r * w). That way, when the w divide occurs, it will ALWAYS give you r.
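Something like this (untested) GLSL vertex-shader sketch shows the idea; the uniform and attribute names are made up, and it assumes you already know which screen-space direction each vertex should be pushed in:

uniform mat4  uMVP;
uniform float uRadius;   // desired half-thickness, in NDC units

in vec3 aPosition;       // vertex on the line's centre
in vec2 aOffsetDir;      // unit screen-space direction to push this vertex

void main()
{
    vec4 clipPos = uMVP * vec4(aPosition, 1.0);
    // Push out by (radius * w): after the hardware divide by w the
    // displacement is exactly uRadius, whatever the distance.
    clipPos.xy += aOffsetDir * uRadius * clipPos.w;
    gl_Position = clipPos;
}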
A cylinder, though, could be a tad excessive; you could get a similar effect by drawing a billboarded line and applying a texture to it.
I wrote a shader in DX8 many years ago to do this (mind you, this one keeps the perspective scaling). Basically I defined the vertex data as follows:
struct BillboardLineVertex
{
    D3DXVECTOR3 position;       // this end of the line (A)
    D3DXVECTOR3 otherPosition;  // the other end of the line (B)
    DWORD       colour;
    D3DXVECTOR2 UV;             // x = texture repeat along the line, y = -1 or +1 (push direction)
};
Assuming the line goes from A to B, position is A and otherPosition is B for the first two vertices. Furthermore, I encoded into the V (or y) of the UV either a -1 or a 1; this told me whether to push out from the line up or down the screen. Finally, the third vertex of the triangle had A and B the other way round: B in position and A in otherPosition (I'll leave you to figure out how to build the other triangle). Note that the U texture coordinate (or x) was settable to allow for texture repeating along the line.
I then had the following bit of shader assembly to build the lines. This had the added bonus that it took exactly 2 triangles per line, so I could pack them all into one big vertex buffer and render several hundred in a single draw call.
asm
{
    vs.1.1

    // Create line vector.
    mov r1, v0
    sub r3, r1, v4

    // Get eye-to-line vector.
    sub r6, v0, c20

    // Get tangent to the line vector, lying in the screen plane.
    mul r5, r6.yzxw, r3.zxyw
    mad r5, -r3.yzxw, r6.zxyw, r5

    // Normalise tangent.
    dp3 r4.w, r5.xyz, r5.xyz
    rsq r4.w, r4.w
    mul r5.xyz, r5.xyz, r4.w

    // Multiply by 1 or -1 (y part of UV).
    mul r5.xyz, r5.xyz, -v9.y

    // Push tangent out by "thickness".
    mul r5.xyz, r5.xyz, c16.x
    add r1.xyz, r1.xyz, r5.xyz

    // Transform position.
    m4x4 oPos, r1, c0

    // Work out UV (c16.y is assumed to contain 0.5, c16.z is assumed to contain 1).
    mov r2.xy, v9.xy
    mul r2.y, v9.y, v9.x
    add r2.xy, r2.xy, c16.z
    mul oT0.xy, r2.xy, c16.y

    // Move colour into diffuse output channel.
    mov oD0, v3
};
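For reference, here is a rough GLSL equivalent of that assembly, just an untested sketch to show what it does; the attribute and uniform names are mine, with the registers they stand in for noted in the comments:

uniform mat4  uMVP;        // c0..c3
uniform vec3  uEyePos;     // c20
uniform float uThickness;  // c16.x

in vec3 aPosition;         // v0: this end of the line
in vec3 aOtherPosition;    // v4: the other end of the line
in vec4 aColour;           // v3
in vec2 aUV;               // v9: x = repeat coordinate, y = -1 or +1

out vec4 vColour;
out vec2 vUV;

void main()
{
    vec3 lineDir  = aPosition - aOtherPosition;              // line vector
    vec3 eyeToPos = aPosition - uEyePos;                     // eye to line
    vec3 tangent  = normalize(cross(eyeToPos, lineDir));     // lies in the screen plane
    vec3 pos      = aPosition - tangent * aUV.y * uThickness; // push up or down the screen
    gl_Position   = uMVP * vec4(pos, 1.0);
    vUV     = vec2((aUV.x + 1.0) * 0.5, (aUV.x * aUV.y + 1.0) * 0.5);
    vColour = aColour;
}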
Such a setup would be easily modifiable to give you the same size regardless of distance from the camera.
How can I draw a filled elliptical sector using Bresenham's algorithm and a bitmap object with a DrawPixel method?
I have written a method for drawing an ellipse, but it uses symmetry and only passes through the first quadrant. That algorithm is not suitable for sectors. Of course, I could write 8 loops, but I don't think that's the most elegant solution to the task.
In integer math the usual parametrization uses limiting edge lines (in CW or CCW direction) instead of your angles. So if you can convert those angles to such lines (you need sin and cos for that, but just once), then you can use integer-math-based rendering for this. As I mentioned in the comment, Bresenham is not a good approach for a sector of an ellipse, as you would need to compute the internal iterator and counter state for the start point of the interpolation, and it would also give you just the circumference points instead of a filled shape.
There are many approaches out there for this; here is a simple one:
1. Convert the ellipse to a circle
   Simply rescale the smaller-radius axis.
2. Loop through the bbox of that circle
   Two simple nested for loops covering the square outscribed around the circle.
3. Check if the point is inside the circle
   Simply test x^2 + y^2 <= r^2 while the circle is centered at (0,0).
4. Check if the point lies between the edge lines
   It should be CW with respect to one edge and CCW with respect to the other. You can exploit the cross product for this (the polarity of its z coordinate tells you whether the point is CW or CCW against the tested edge line). This works only up to 180-degree slices, so you also need to add some checking of the quadrants to avoid false negatives, but those are just a few ifs on top of this.
5. If all conditions are met, convert the point back to the ellipse and render it.
Here is a small C++ example of this:
void elliptic_arc(int x0,int y0,int rx,int ry,int a0,int a1,DWORD c)
{
    // variables
    int x, y, r,
        xx,yy,rr,
        xa,ya,xb,yb,            // a0,a1 edge points with radius r
        mx,my,cx,cy,sx,sy,i,a;
    // my pixel access (you can ignore it and use your style of gfx access)
    int **Pixels=Main->pyx;     // Pixels[y][x]
    int xs=Main->xs;            // resolution
    int ys=Main->ys;
    // init variables
    r=rx; if (r<ry) r=ry; rr=r*r;   // r=max(rx,ry)
    mx=(rx<<10)/r;                  // scale from circle to ellipse (fixed point)
    my=(ry<<10)/r;
    xa=+double(r)*cos(double(a0)*M_PI/180.0);
    ya=+double(r)*sin(double(a0)*M_PI/180.0);
    xb=+double(r)*cos(double(a1)*M_PI/180.0);
    yb=+double(r)*sin(double(a1)*M_PI/180.0);
    // render
    for (y=-r,yy=y*y,cy=(y*my)>>10,sy=y0+cy;y<=+r;y++,yy=y*y,cy=(y*my)>>10,sy=y0+cy) if ((sy>=0)&&(sy<ys))
     for (x=-r,xx=x*x,cx=(x*mx)>>10,sx=x0+cx;x<=+r;x++,xx=x*x,cx=(x*mx)>>10,sx=x0+cx) if ((sx>=0)&&(sx<xs))
      if (xx+yy<=rr) // inside circle
      {
        if ((cx>=0)&&(cy>=0)) a=  0;   // actual quadrant
        if ((cx< 0)&&(cy>=0)) a= 90;
        if ((cx>=0)&&(cy< 0)) a=270;
        if ((cx< 0)&&(cy< 0)) a=180;
        if ((a   >=a0)||((cx*ya)-(cy*xa)<=0))  // x,y is above a0 in clockwise direction
         if ((a+90<=a1)||((cx*yb)-(cy*xb)>=0)) // x,y is below a1 in clockwise direction
          Pixels[sy][sx]=c;
      }
}
Beware: both angles must be in the <0,360> range. My screen has y pointing down, so if a0<a1 the direction is CW, which matches the routine. If you use a1<a0, then that range will be skipped and the rest of the ellipse will be rendered instead.
This approach uses a0,a1 as real angles !!!
To avoid divides inside the loop I used 10-bit fixed-point scales instead.
You can simply split this into 4 quadrants to avoid the 4 ifs inside the loops and improve performance.
x,y is the point in the circular scale, centered at (0,0)
cx,cy is the point in the elliptic scale, centered at (0,0)
sx,sy is the point in the elliptic scale, translated to the ellipse center position
Beware: my pixel access is Pixels[y][x], but most APIs use Pixels[x][y], so do not forget to adapt it to your API to avoid access violations or a 90-degree rotation of the result.
I have a bouncing ball which can collide with lines of arbitrary slope. The ball passes a bit through the lines, and I need to set the ball back at a "radius" distance from the line.
The ball (with variables x, y and radius) travels at speedX and speedY (obtained from the vectors directionX and directionY multiplied by a speed variable), and I can get the distance (dist) between the center and the line, so I know by how many pixels the ball passed through the line.
Say that in the example the ball passed 10 pixels (radius - dist) beyond the line; I need to move the center of the ball back 10 pixels along the opposite of the direction vector (directionX, directionY). My question is:
How can I calculate how to split those n pixels between x and y so I can subtract them from the center coordinates?
I can imagine 4 different resolutions of the situation you have, and it is unclear which one you want.
Here they are, where the black arrow is the movement of the centre of the ball between the frame before collision and the frame you are asking how to draw.
A) the situation which you have now.
pro : simple
con : ball is in a physically unacceptable position
B) you compute where the ball should be after having bounced (assuming an elastic collision)
pro : most accurate
con : you don't have a frame with the ball in contact with the surface (but do you care?).
C) The position of the ball is that of A, brought back to being tangent to the surface, with a correction that is orthogonal to said surface
pro : conserves accuracy in direction parallel to surface
con : centre of the ball not on the reflected line (i.e. we take liberties with Descartes' law)
D) The ball is still on the incoming line, but stopped when it is tangent to the surface.
pro : only the speed/timing is messed with
con : err.... out of ideas here. Still not as precise as B.
Well, disregarding all the drawings, it is much easier to consider only the centre of the ball and treat it as hitting a line that is at a distance 'radius' from the real surface (and parallel to it), so we only have to do the mechanics for a single point. Thus, from the previous image, we get the formulation in terms of the following red objects:
So what do we need to do all this?
The undisturbed trajectory starts at point S and ends at point E (the endpoint of situation A). We will call C the collision point between both lines (the red one and the trajectory); it is thus the endpoint of trajectory D.
I will assume we are always in the case of a collision, thus the point of intersection C between the undisturbed trajectory and the surface always exists.
You will also need the vector u that is perpendicular to the surface. Be sure to take a unit vector that points towards the side where the ball is. Thus if your slope has an equation ax+by+c=0, start with the vector ( a/sqrt(a*a+b*b) , b/sqrt(a*a+b*b) ) and multiply both coordinates by -1 if it points to the wrong side.
Then, to shift the line by a distance r in the direction of u, you want the equation a(x - r*u.x) + b(y - r*u.y) + c = 0, thus ax + by + c - r*(a*u.x + b*u.y) = 0.
So if r is the radius and ax+by+c=0 is your surface, the red line's equation is ax+by+c+r*sqrt(a*a+b*b)=0 (or -r if the ball is beneath the line).
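In code, that step looks roughly like this (GLSL syntax used only for compactness; any 2D vector type works the same way):

// Surface line: a*x + b*y + c = 0; p = ball centre; r = radius.
// Returns u (unit normal pointing towards the ball) and writes the constant
// term of the shifted red line a*x + b*y + cShift = 0 into cShift.
vec2 surfaceNormalTowardsBall(float a, float b, float c, vec2 p, float r, out float cShift)
{
    vec2 u = normalize(vec2(a, b));
    if (a * p.x + b * p.y + c < 0.0) u = -u;   // flip so it points to the ball's side
    cShift = c - r * (a * u.x + b * u.y);      // shift the line by r towards the ball
    return u;
}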
I will write PQ for the vector starting at point P and ending at point Q; the coordinates of that vector are (Q.x - P.x, Q.y - P.y), and a . between two vectors means a scalar (dot) product.
So you can express SE in terms of the variables you named directionX, directionY and dist.
A) Move center by SE. Yay, finished!
B) Get C. Move center by SE - 2 * (CE . u) * u: the total move, but removing twice the normal component of CE that goes beyond the surface, effectively mirroring the CE vector in the surface (see the sketch after this list).
C) Get C. Move center by SE - (CE . u) * u: the same, but remove the normal component of CE only once, effectively projecting the CE vector onto the red line.
D) Get C. Move center by SC.
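A rough sketch of cases B, C and D (again GLSL syntax for brevity; u is the unit normal from above, and a*x + b*y + cShift = 0 is the red line):

// Intersection C of the segment S->E with the red line (assumes it does cross it).
vec2 hitPoint(vec2 S, vec2 E, float a, float b, float cShift)
{
    vec2  d = E - S;
    float t = -(a * S.x + b * S.y + cShift) / (a * d.x + b * d.y);
    return S + t * d;
}

vec2 caseB(vec2 E, vec2 C, vec2 u) { return E - 2.0 * dot(E - C, u) * u; } // mirror the overshoot
vec2 caseC(vec2 E, vec2 C, vec2 u) { return E - dot(E - C, u) * u; }       // project onto the red line
vec2 caseD(vec2 C)                 { return C; }                           // stop at the contact point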
I am a student in video games, and we are working on a raytracer in C++. We are using our teachers' library.
We create procedural objects (in our case a sphere); the camera sends a ray for each pixel of the screen, and the ray sends back information on what it hit.
Some of us decided to integrate normal maps. So, at first, we sent a ray at the object, looked at the value of the normal-map texel where we hit the sphere, converted it into a vector, normalized it and returned it in place of the normal of the object. The result was pretty good, but of course it didn't take the orientation of the "face" (it's procedural, so there is no face, but it gives the idea) into account anymore, so the render was flat.
We still don't really know how to "blend" the normal of the texture (in tangent space) and the normal of the object together. Here is our code:
// TGfxVec3 is part of our teachers' library, and is a 3D vector like this:
// TGfxVec3( 12.7f, -13.4f, 52.0f )
// The sphere being at the origin and of radius 1, and tHit.m_tPosition being the
// exact position at the surface of the sphere where the ray hit, the normal of this
// point is the position hit by the ray.
TGfxVec3 tNormal = tHit.m_tPosition;
TGfxVec3 tTangent = Vec3CrossProduct( tNormal , m_tAxisZ );
TGfxVec3 tBiNormal = Vec3CrossProduct( tNormal , tTangent );
TGfxVec3 tTextureNorm = 2*(TGfxVec3( pNorm[0], pNorm[1], pNorm[2] )/255)-TGfxVec3( 1.0f, 1.0f, 1.0f );
// pNorm[0], pNorm[1], pNorm[2] are respectively the channels Red, Green,
// and Blue of the Normal Map texture.
// We put them in a 3D vector, divide them by 255 so their values go from 0 to 1,
// multiply them by 2, and then subtract a vector so their range goes from -1 to +1.
tHit.m_tNorm = TGfxVec3( tTangent.x*tTextureNorm.x + tBiNormal.x*tTextureNorm.x +
    tNormal.x*tTextureNorm.x, tTangent.y*tTextureNorm.y + tBiNormal.y*tTextureNorm.y +
    tNormal.y*tTextureNorm.y, tTangent.z*tTextureNorm.z + tBiNormal.z*tTextureNorm.z +
    tNormal.z*tTextureNorm.z ).Normalize();
// Here, after some research, I came across this: http://www.txutxi.com/?p=316 ,
// which should allow us to convert the normal map from tangent space to object space.
The results are still not good. My main concern is the tangent and the binormal: the axis taken as reference (here m_tAxisZ, the Z axis of the sphere) is not right, but I don't know what to take instead, or even whether what I am doing is a good approach at all. So I came here for help.
So, we finally did it. :D OK, I will try to be clear. For this, two images:
(1) : http://i.imgur.com/cHwrR9A.png
(2) : http://i.imgur.com/mGPH1RW.png
(My drawing skill has no equal, I know).
So, the main problem was to find the Tangent "T" and the Bi-tangent "B". We already have the Normal "N". Our circle always being at the origin with a radius of 1, a point on its surface is equal to the normal at that point (black and red vectors in the first image). So, we have to find the tangent at that point (in green). For this, we just have to rotate the vector by PI/2 rad:
With N( x, y ) :
T = ( -N.y , N.x )
However, we are in 3D, so the point will not always be at the equator. We can easily solve this problem by ignoring the Y position of our point and normalizing the vector using only the two other components. So, in the second image, we have P (we set its Y value to 0), and we normalize the new vector to get P'.
With P( x, y, z ) :
P' = ( P.x, 0, P.z).Normalize();
Then we apply the rotation above to P' to find T. Next, we get B with a cross product between N and T. Finally, we calculate the normal at that point by taking the normal map into account.
With the variable "Map" containing the three channels (RGB) of the normal map, each one remapped to the range -1 to 1, and T, N and B all being 3D vectors:
( Map.R*T + Map.G*B + Map.B*N ).Normalize();
And that's it: you have the normal at the point, taking your normal map into account. :) Hope this will be useful for others.
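Putting it all together, here is a compact sketch of the construction (GLSL syntax here; with TGfxVec3 it is the same operations):

// N: unit normal of the unit sphere at the hit point (= the hit position).
// map: the normal-map texel already remapped from [0,255] to [-1,1].
vec3 perturbedSphereNormal(vec3 N, vec3 map)
{
    vec3 P = normalize(vec3(N.x, 0.0, N.z));   // drop Y, renormalize -> P'
    vec3 T = vec3(-P.z, 0.0, P.x);             // rotate P' by PI/2 to get the tangent
    vec3 B = cross(N, T);                      // bi-tangent
    return normalize(map.x * T + map.y * B + map.z * N);
    // Note: this degenerates at the poles, where N.x and N.z are both ~0.
}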
You are mostly right and completely wrong at the same time.
Tangent-space normal mapping uses a transformation matrix to convert the tangent-space normal from the texture to another space, like object or world space, or to transform the light into tangent space, so that the lighting is computed with everything in the same space.
Bi-normal is a common misnomer; it should be called the bi-tangent.
It is sometimes possible to compute the TBN on the fly for simple geometry, for example on a height-map, as it is easy to deduce the tangent and the bi-tangent on a regular grid. But on a sphere, the cross-product trick with a fixed axis results in a singularity at the poles, where the cross product gives a zero-length vector.
Lastly, even if we ignore the pole singularity, the TBN vectors must be normalized before you apply the matrix to the tangent-space normal. You may also be missing a transpose: the inverse of a 3x3 orthonormal matrix is its transpose, and what you need is the inverse of the original TBN matrix if you go from tangent space to object space.
Because of all this, we most often store the TBN as extra information in the geometry, computed from the texture coordinates (the URL you referenced links to a description of that computation) and interpolated at runtime with the other values.
Remark: there is a rough simplification that uses the geometry normal as the TBN normal, but there is no reason in the first place that they should match.
I'm trying to create a simple shader for my lighting system, and right now I'm working on adding support for normal mapping. Without the normal map, the lighting works perfectly: I'm using the normals forwarded from the vertex shader, and they work fine. I'm also reading the normals from the normal map correctly. I've tried adding the vertex normal and the normal map's normal together, and that doesn't work; I also tried multiplying them. Here's how I'm reading the normal map:
vec4 normalHeight = texture2D(m_NormalMap, texCoord);
vec3 normals = normalize((normalHeight.xyz * vec3(2.0) - vec3(1.0)));
So I have the correct vertex normals, and the normals from the normal map. How should I combine these to get the correct normals?
It depends on how you store your normal maps. If they are in world space to begin with (this is rather rare) and your scene never changes, you can look them up the way you have them. Typically, however, they are in tangent space. Tangent space is a vector space built from the object's normal and the rate of change of the (s,t) texture coordinates, and it lets you properly transform the normals on a surface with arbitrary orientation.
Tangent space normal maps usually appear bluish to the naked eye, whereas world space normal maps are every color of the rainbow (and need to be biased and scaled because half of the colorspace is supposed to represent negative vectors) :)
If you want to understand tangent space better, complete with an implementation for deriving the basis vectors, see this link.
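For reference, the fragment-shader side then boils down to a single 3x3 multiply; a minimal sketch, assuming the vertex shader hands you the interpolated basis as vTangent / vBitangent / vNormal (those names are made up):

vec3 n = texture2D(m_NormalMap, texCoord).xyz * 2.0 - 1.0;   // [0,1] -> [-1,1]
mat3 TBN = mat3(normalize(vTangent), normalize(vBitangent), normalize(vNormal));
vec3 shadedNormal = normalize(TBN * n);                      // tangent space -> world/object space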
Does your normal map not already contain the adjusted normals? If it does, then you just need to read the texture in the fragment shader and you should have your normal, like so:
vec4 normalHeight = texture2D(m_NormalMap, texCoord);
vec3 normal = normalize(normalHeight.xyz);
If you're trying to account for negative values, then you should not be multiplying by a vector but rather by a scalar:
vec3 normal = normalize( (normalHeight.xyz * 2.0) - 1.0 );
I render animated geometry. In each frame, I want to texture-map the geometry with a screen-space texture from the previous frame (projected onto the geometry as it was in the previous frame), so the result should look as if the screen-space texture had been projected onto the geometry one frame ago and then carried along by the geometry's animation to the current frame.
Calculating the proper texture coordinates per vertex is not difficult. In GLSL that's simply:
void main(void)
{
    vPos = currentMVP * vec4(position,1);
    gl_Position = vPos;
    vec4 oldPos = previousMVP * vec4(position,1);
    vec2 UV = vec2(((oldPos.x/oldPos.w)+1)*0.5f, ((oldPos.y/oldPos.w)+1)*0.5f);
    ...
}
But getting the texture coordinates to interpolate correctly over the geometry is trickier than I thought. Normally, texture coordinates for a projection should be interpolated linearly in screen space, so to achieve this one would multiply them by vPos.w in the vertex shader and divide them again by vPos.w in the fragment shader. However, that's only correct if the texture is projected from the camera's view. In this case I need something else: an interpolation that accounts for forward perspective-correct interpolation in the previous frame and backward perspective-correct interpolation in the current frame.
This graphic illustrates three different cases:
- Case A is simple: here I could leave the normal perspective-corrected interpolation of the texture coordinates (as performed by default by the rasterizer).
- In case B, however, I would need linear interpolation of the texture coordinates to get the proper result (either by multiplying by vPos.w in the vertex shader and dividing by vPos.w in the fragment shader, or in newer GLSL versions by using the "noperspective" interpolation qualifier).
- And in case C I would need perspective-corrected interpolation, but according to the oldPos.w value. So I would have to linearize the interpolation of u'=(u/oldPos.w) and v'=(v/oldPos.w) by multiplying them by currentPos.w in the vertex shader and dividing the interpolated values by currentPos.w in the fragment shader. I would also need to interpolate w'=(1/oldPos.w) linearly in the same way and then compute the final u'' in the fragment shader by dividing the interpolated u' by the interpolated w' (and the same for v'').
So the question now is: what's the proper math to yield the correct result in every case?
Again, calculating the correct UVs for the vertices is not the problem; it's about achieving the correct interpolation over the triangles.
// Maybe relevant: in the same pass I also want to do some regular texturing of the object using non-projective, perspective-correct texturing. This means I must not alter the gl_Position.w value.
vec2 UV = vec2(((oldPos.x/oldPos.w)+1)*0.5f, ((oldPos.y/oldPos.w)+1)*0.5f);
Wrong. You need the W; you don't want to divide yet. What you want is this:
vec4 oldPos = previousMVP * vec4(position,1);
oldPos = clipToTexture * oldPos;
vec3 UV = oldPos.xyw;
The clipToTexture matrix is a 4x4 matrix that does the scale and translation needed to go from clip space to texture space. That's what your adding of 1.0 and scaling by 0.5 were doing; here it's in matrix form. Normally, you'd just left-multiply "previousMVP" by this, so it would all be a single matrix multiply.
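For the record, one possible clipToTexture (bias) matrix, written as a GLSL constant in column-major order; it scales by 0.5 and adds 0.5 * w, which is the same "+1 then *0.5" step, just applied before the divide instead of after it (the z row only matters if you also need the depth):

const mat4 clipToTexture = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0);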
In your fragment shader, you need to do projective texture lookups. I don't remember the GLSL 1.20 function, but I know the 1.30+ function:
vec4 color = textureProj(samplerName, UV.stp);
It is this function which will do the necessary division-by-W step.