Recently, I tried to make transparency work in JavaFX 3D, since some of the animations I want to play on meshes use transforms that change the mesh's alpha each keyframe. However, to my surprise, my TriangleMesh, which uses a material with transparent areas, doesn't look as expected.
Current result (depth buffer enabled): https://i.imgur.com/EIIWY1p.gif
Result with the depth buffer disabled for my MeshView: https://i.imgur.com/1C95tKy.gif (this looks much closer to the result I was expecting)
However, I don't really want to disable depth buffering, because it causes other issues.
In case it matters, this is the diffuse map I used for my TriangleMesh: https://i.imgur.com/UqCesXL.png (one pixel per triangle, since my triangles have one color per face; 128 pixels per column).
I compute the UVs for the TriangleMesh like this:
float u = (triangleIndex % width + 0.5f) / width;
float v = (triangleIndex / width + 0.5f) / (float) atlas.getHeight();
and then use these for each vertex of the triangle.
What's the proper way to render a TriangleMesh that uses a partially transparent material? (In my case only part of the image is transparent, as some triangles are opaque.)
After a bit of research, I found this answer, which potentially explains my issue: https://stackoverflow.com/a/31942840/14999427. However, I am not sure whether this is what I should do or whether there's a better option.
Minimal reproducible example (this includes the exact mesh shown in my GIFs): https://pastebin.com/ndkbZCcn (I used pastebin because the code is 42k characters and Stack Overflow's limit is 30k). Make sure to copy the raw data, as the pastebin preview strips a few lines.
Update: a "fix" I found re-orders the triangles every time the camera moves, as follows:
Take the camera's position multiplied by some scalar (5 worked for me).
Order all opaque triangles first by their centroids' distance to that point, then order all transparent triangles the same way.
I am not sure why multiplying the camera's position is necessary, but it does work; it's the best solution I've found so far.
I have not yet found a 'proper' fix for this problem, but this is what worked pretty well for me:
Create an ArrayList that will store the sorted indices.
Take the camera's position multiplied by some scalar (between 5 and 10 worked for me); call that P.
Order all opaque triangles by their centroids' distance to P and add them to the list.
Order all triangles with transparency by their centroids' distance to P and add them to the list.
Use the sorted triangle indices when creating the TriangleMesh.
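A minimal sketch of the sorting step above, with Python standing in for the Java. The face layout, the is_transparent test, and the far-to-near (painter's algorithm) ordering within each group are my assumptions; the scaled camera point P is the answer's empirical trick.

```python
import math

def sort_faces(faces, camera_pos, is_transparent, scale=5.0):
    """faces: list of triangles, each ((x,y,z), (x,y,z), (x,y,z)).
    Returns face indices: all opaque faces first, then all transparent
    faces, each group ordered far-to-near from the scaled camera point P."""
    # P: the camera position times an empirical scalar (5-10 worked per the answer)
    px, py, pz = (scale * c for c in camera_pos)

    def dist_to_P(i):
        # distance from triangle i's centroid to P
        cx = sum(v[0] for v in faces[i]) / 3.0
        cy = sum(v[1] for v in faces[i]) / 3.0
        cz = sum(v[2] for v in faces[i]) / 3.0
        return math.dist((cx, cy, cz), (px, py, pz))

    opaque = sorted((i for i in range(len(faces)) if not is_transparent(i)),
                    key=dist_to_P, reverse=True)
    transparent = sorted((i for i in range(len(faces)) if is_transparent(i)),
                         key=dist_to_P, reverse=True)
    return opaque + transparent
```

The returned index order is then used when writing the faces into the TriangleMesh, so transparent faces are always submitted after (and behind-to-front relative to) the opaque ones.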
I am creating a raycasting game from scratch using the JavaScript canvas.
Part of the challenge (for me) is to decorate walls with random images (pictures). I have already implemented drawing of walls, floor, ceiling, and sprites.
While drawing walls, I store for each x (a screen column coordinate) the distance to the wall (Z-BUFFER), the height of the wall (H-BUFFER), and the actual coordinates of the pixel in the underlying 2D grid (GRID_BUFFER).
My approach for painting the decals (pictures) on the wall is then the following (after identifying a list of decals that could theoretically be visible):
the distance to the decal's position is calculated (the position is defined as the middle of the grid edge facing the observer)
the screen coordinate decalScreenX is calculated using the transformation matrix from grid coordinates to screen coordinates. This works correctly:
let decalScreenX = Math.floor((RAYCAST.SCREEN_WIDTH / 2) * (1 + CAMERA.transformX / CAMERA.transformDepth));
then I retrieve the image data for the decal in question and get its width and height
based on the distance and the observed angle, I calculate the perceived width of the decal. This is where the real issue lies, as I can see that I don't calculate this width completely accurately.
with all this information, it is then easy to calculate the left and right screen coordinates (where to begin and where to end drawing the decal), use H-BUFFER to calculate the height factor, and use GRID_BUFFER to draw only on the grid cell belonging to this decal.
I view the width calculation in terms of the decal being rotated away from the player's direction vector by an angle, when the player's direction is not exactly opposite to the direction the decal faces (example):
If the player's direction is directly opposite to the decal's facing direction, this angle is 0° (example):
My first approach was to use the dot product of the reversed player direction and the decal's facing direction, thus getting the cosine of the angle between the vectors, and to use this as a factor to reduce the perceived width:
let CosA = PLAYER.dir.mirror().dot(decal.facingDir);
let widthScale = CosA * (CAMERA.transformDepth / decal.distance);
The problem with this solution is that when the vectors are perpendicular the factor is 0 and the decal is not drawn at all, but since the walls are drawn with perspective, this should not be the case. So I began improvising. I defined a CAMERA.minPerspective factor as seen below. The field of view (FOV) is 70°.
CAMERA.minPerspective = Math.cos(Math.radians((90 + this.FOV) / 2));
My intuition was (as I lack knowledge of perspective and geometry, alas) that for small angles the factor should remain 1, and that for angles close to 90° there should be some minimal factor so that the decal remains visible. So I came up with this "improved" code:
let CosA = PLAYER.dir.mirror().dot(decal.facingDir);
let FACTOR = Math.min(1, CosA + CAMERA.minPerspective);
FACTOR = Math.max(FACTOR, CAMERA.minPerspective);
let widthScale = FACTOR * (CAMERA.transformDepth / decal.distance);
This works considerably better, but it has some flaws. Visually, for angles of 0-50° the reduction factor is too great. This can be observed if I use decals wide enough that they should cover the complete grid surface (see the image below; left of the stairs, the wall underneath is visible: the decal should cover the complete grid face, but it doesn't, because the FACTOR is too small).
I have searched Stack Overflow and the rest of the Web for a better solution, but it seems my knowledge of geometry also prevents me from recognizing proper solutions when they are out of this context.
So, please help. There are probably deterministic solutions for calculating the perceived width, without re-running the raycasting phase, or by using the information I am able to store during the raycasting phase. While JavaScript is used in the code example, I consider this question not to be specific to any programming language.
I have found a solution that retains (or even improves) the simplicity and time complexity of the approach in the question.
I added two points to the decal definition: leftDrawStart and rightDrawStart. These are easy to calculate at decal instantiation, based on the real sprite (decal) width and the definition of the grid (block) size. While doing this calculation, I consider leftDrawStart from the camera perspective (not grid coordinates).
When rendering the decal, I calculate screen coordinates for leftDrawStart and rightDrawStart from their grid coordinates, using the transformation matrix (as in the question; code example below):
transform(spritePos) {
    let invDet = 1.0 / (CAMERA.dir.x * PLAYER.dir.y - PLAYER.dir.x * CAMERA.dir.y);
    CAMERA.transformX = invDet * (PLAYER.dir.y * spritePos.x - PLAYER.dir.x * spritePos.y);
    CAMERA.transformDepth = invDet * (-CAMERA.dir.y * spritePos.x + CAMERA.dir.x * spritePos.y);
}
I distinguish between the calculated absolute drawStartX and drawEndX and their adjusted values, clamped to fit the screen boundaries (or I return from the function if they are completely offscreen).
Finally, the perceived width of the decal is not even required, since the texture position can be calculated from the ratio of two differences: the current drawing stripe minus the absolute drawing start, over the absolute drawing end minus the absolute drawing start:
let texX = (((stripe - drawStartX_abs) / (drawEndX_abs - drawStartX_abs)) * imageData.width) | 0;
This approach is completely accurate and considerably faster than an approach where decal casting would be incorporated into the raycasting step.
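The texture-column lookup described above can be sketched as follows (Python used for illustration; the function and parameter names are hypothetical):

```python
def tex_x(stripe, draw_start_abs, draw_end_abs, image_width):
    """Map the current vertical stripe to a texture column by linear
    interpolation between the absolute draw start and draw end."""
    t = (stripe - draw_start_abs) / (draw_end_abs - draw_start_abs)
    return int(t * image_width)  # truncation, like `| 0` in the JS code
```

Because the interpolation uses the absolute (unclamped) start and end, the texture stays correctly aligned even when part of the decal is clipped off the left or right edge of the screen.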
I'm trying to create a program that draws a custom pattern. I have it set up so that if sides = 3 it's a triangle, 4 is a rectangle, and anything above that uses a formula so that, if you really wanted, you could have 25 sides. I'm using lines and rotation: plant, turn, draw, repeat.
angleMeasure = (180 * (sides - 2)) / sides;
println(angleMeasure);
println(radians(angleMeasure));
//creating the 5+ sided shape
pushMatrix();
translate(width/2, height/2); //translating the whole shape/while loop
while (counter < sides) {
  line(0, 0, 170, 0);
  translate(170, 0); //THIS translate is what makes the lines go in the direction they need to.
  rotate(angleMeasure);
  counter = counter + 1;
}
popMatrix();
This works almost correctly: the last and first lines don't connect. Suggestions? Maybe it's a problem with the math, but println shows a correct angle measure in degrees. Here's what it looks like: http://i.stack.imgur.com/TwYMj.png
EDIT: I changed the rotate from rotate(angleMeasure) to rotate(angleMeasure * -1). This rotated the whole shape and made it clear that the angle of the very first line is off. See: http://i.stack.imgur.com/Z1KmY.png
You actually need to turn by the exterior angle, angle = 360°/sides, and convert this angle into radians.
Thus for a pentagon you need angle = 72°. The number you computed is 108, which, when interpreted as radians, amounts to 17 full turns plus an angle of about 68°. This falls about 4° short of the correct angle, so you obtain an almost correct picture with slightly too-wide inner angles, resulting in the gap (rather than a crossing, which would appear if the angle were larger than the correct one).
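A quick numeric check of this answer, with Python standing in for the Processing sketch: turning by the exterior angle 2π/sides after each segment brings the pen back to the start, whereas turning by the interior angle (even correctly converted to radians) does not.

```python
import math

def trace_polygon(sides, turn, length=170.0):
    """Draw `sides` segments of the given length, turning by `turn`
    radians after each one; return the final pen position."""
    x = y = heading = 0.0
    for _ in range(sides):
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        heading += turn
    return x, y

# exterior angle (72° for a pentagon) closes the shape;
# the interior angle (108°) leaves the pen far from the start
closed_end = trace_polygon(5, 2 * math.pi / 5)
open_end = trace_polygon(5, math.radians(108))
```

The first call ends back at (0, 0) to within floating-point error; the second visibly does not, which is exactly the open outline the question's screenshots show.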
I've gone through a lot of questions online over the last few hours trying to figure out how to calculate the rotation between two vectors using the least distance. Also, what is the difference between using Atan2 and using the dot product? I see those two the most often.
The purpose is to rotate around the y axis only, based on differences in X/Z position (think top-down rotation in a 3D world).
Option 1: use Atan2. This seems to work really well, except when it switches the radians from positive to negative.
float radians = (float)Math.Atan2(source.Position.X - target.Position.X, source.Position.Z - target.Position.Z);
npc.ModelPosition.Rotation = new Vector3(0,
    npc.ModelPosition.Rotation.Y + radians,
    0);
This works fine, except that at one point it starts jittering and then swings back the other way.
Option 2: use the dot product. I can't seem to make it work well; it ALWAYS turns in the same direction. Is it possible to get it to swing in the direction of least rotation? I understand the result ranges from 0 to pi; can I use that?
float dot = Vector3.Dot(Vector3.Normalize(source.Position), Vector3.Normalize(target.Position));
float angle = (float)Math.Acos(dot);
npc.ModelPosition.Rotation = new Vector3(0,
    npc.ModelPosition.Rotation.Y + angle,
    0);
Thanks,
James
You are calculating two different angles. The angle alpha is the one calculated with atan2 (from the ratio opposite side / adjacent side); the angle beta is the one calculated with the dot product.
You can achieve similar results (as with atan2) with the dot product via
angle = acos(dot(normalize(target - source), -z))
(where -z is the unit vector in the negative z direction). However, you will lose the sign of the angle.
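To keep atan2's sign information but avoid the jump where the angle flips between positive and negative, you can wrap the difference between the target heading and the current heading into [-pi, pi). A sketch in Python (the argument order mirrors the question's Atan2(dx, dz); the function name is mine):

```python
import math

def shortest_turn(current_yaw, dx, dz):
    """Smallest signed rotation (radians) from current_yaw toward the
    heading that faces the offset (dx, dz) in the X/Z plane."""
    target = math.atan2(dx, dz)                        # same order as the question
    diff = target - current_yaw
    return (diff + math.pi) % (2 * math.pi) - math.pi  # wrap into [-pi, pi)
```

Adding shortest_turn(...) (or a clamped fraction of it per frame) to the current yaw always rotates the short way around, so there is no jitter or long swing when the raw angle crosses the pi/-pi seam.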
Hey math geeks, I've got a problem that's been stumping me for a while now. It's for a personal project.
I've got three dots: red, green, and blue. They're positioned on a cardboard slip such that the red dot is in the lower left (0,0), the blue dot is in the lower right (1,0), and the green dot is in the upper left (0,1). Imagine stepping back and taking a picture of the card from an angle. If you were to find the center of each dot in the picture (let's say the units are pixels), how would you find the normal vector of the card's face in the picture (relative to the camera)?
Now a few things I've picked up about this problem:
The dots (in "real life") are always at a right angle. In the picture, they're only at a right angle if the camera has been rotated around the red dot along an "axis" (the axis being the line through the red and blue dots, or through the red and green dots).
There are dots on only one side of the card. Thus, you know you'll never be looking at the back of it.
The distance from the card to the camera is irrelevant. If I knew the depth of each point, this would be a whole lot easier (just a simple cross product, no?).
The rotation of the card is irrelevant to what I'm looking for. In the tinkering I've done trying to figure this out, it appears the rotation can be recovered with the help of the normal vector at the end. Whether the rotation is part of (or a product of) finding the normal vector is unknown to me.
I hope there's someone out there who has either done this or is a math genius. I've got two of my friends here helping me on it, and so far we've been unsuccessful.
I worked it out in my old version of MathCAD:
Edit: the wording in the MathCAD screenshot is wrong; it should read "Known: g and b are perpendicular to each other".
In MathCAD I forgot the final step of taking the cross product, which I'll copy and paste here from my earlier answer:
Now that we've solved for the X-Y-Z of the translated g and b points, your original question wanted the normal of the plane.

If we cross g x b, we'll get the vector normal to both:

        | u1 u2 u3 |
g x b = | g1 g2 g3 |
        | b1 b2 b3 |

      = (g2b3 - b2g3)u1 + (b1g3 - b3g1)u2 + (g1b2 - b1g2)u3

All the values are known; plug them in (I won't write out the version with g3 and b3 substituted in, since it's just too long and ugly to be helpful).
But in practical terms, I think you'll have to solve it numerically, adjusting gz and bz so as to best fit the conditions:
g · b = 0
and
|g| = |b|
since the pixel measurements are not algebraically perfect.
Example
Using a picture of the Apollo 13 astronauts rigging one of the command module's square lithium hydroxide canisters to work in the LEM, I located the corners:
Using them as my basis for an X-Y plane:
I recorded the pixel locations using Photoshop, with positive X to the right and positive Y down (to keep the right-hand rule, with Z going "into" the picture):
g = (79.5, -48.5, gz)
b = (-110.8, -62.8, bz)
Punching the two starting formulas into Excel, and using the Solver (from the Analysis ToolPak) to minimize the error by adjusting gz and bz, it came up with two Z values:
g = (79.5, -48.5, 102.5)
b = (-110.8, -62.8, 56.2)
This then lets me calculate other interesting values.
The lengths of g and b in pixels:
|g| = 138.5
|b| = 139.2
The normal vector:
g x b = (3710, -15827, -10366)
The unit normal (length 1):
uN = (0.1925, -0.8209, -0.5377)
Scaling the normal to the same length (in pixels) as g and b (138.9):
Normal = (26.7, -114.0, -74.7)
Now that I have a normal of the same length as g and b, I plotted them on the same picture:
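The arithmetic in the example above is easy to reproduce; here is a small Python check using the fitted g and b (the tolerances just absorb the rounding in the posted values):

```python
import math

g = (79.5, -48.5, 102.5)
b = (-110.8, -62.8, 56.2)

# cross product g x b, expanded exactly as in the determinant above
n = (g[1] * b[2] - b[1] * g[2],
     g[2] * b[0] - b[2] * g[0],
     g[0] * b[1] - b[0] * g[1])

length_g = math.hypot(*g)   # ~138.5 pixels
length_b = math.hypot(*b)   # ~139.2 pixels
unit_n = tuple(c / math.hypot(*n) for c in n)  # ~(0.1925, -0.8209, -0.5377)
```

Running this reproduces the posted normal (3710, -15827, -10366) and the unit normal to within the rounding of the pixel measurements.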
I think you're going to have a new problem, though: distortion introduced by the camera lens. The three dots are not perfectly projected onto the 2-dimensional photographic plane; there is a radial distortion that makes straight lines no longer straight, makes equal lengths no longer equal, and puts the normals slightly off.
Microsoft Research has an algorithm to figure out how to correct for the camera's distortion:
A Flexible New Technique for Camera Calibration
But it's beyond me:
We propose a flexible new technique to easily calibrate a camera. It is well suited for use without specialized knowledge of 3D geometry or computer vision. The technique only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique, and very good results have been obtained. Compared with classical techniques which use expensive equipments such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one step from laboratory environments to real world use.
They have a sample image, where you can see the distortion:
(source: microsoft.com)
Note
You don't know whether you're seeing the "top" of the cardboard or the "bottom", so the normal could be mirrored (i.e. z = -z).
Update
Guy found an error in the derived algebraic formulas. Fixing it leads to formulas that, I don't think, have a simple closed form. This isn't too bad, since the system can't be solved exactly anyway, only numerically.
Here's a screenshot from Excel where I start with the two known rules:
g · b = 0
and
|g| = |b|
Writing the 2nd one as a difference (an "error" amount), you can then add both up and use that total as the value for Excel's Solver to minimize:
This means you'll have to write your own numeric iterative solver. I'm staring over at my Numerical Methods for Engineers textbook from university; I know it contains algorithms for solving nonlinear equations that have no simple closed form.
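For reference, here is one way to sketch the solve in code rather than in Excel. For these two constraints taken alone, eliminating bz via g·b = 0 reduces |g| = |b| to a quadratic in gz², which can be solved directly; this is my observation, not part of the answer above. The pixel values are the ones from the example, and the small mismatch with the Solver output (103.3 vs 102.5, 55.8 vs 56.2) reflects the measurement noise the answer mentions.

```python
import math

def solve_depths(gx, gy, bx, by):
    """Solve g . b = 0 and |g| = |b| for the unknown depths gz, bz.
    From g . b = 0: gz * bz = -(gx*bx + gy*by) = -K, so bz = -K / gz.
    Substituting into |g|^2 = |b|^2 gives a quadratic in s = gz^2:
        s^2 + (gx^2 + gy^2 - bx^2 - by^2) * s - K^2 = 0
    We take the positive root (assumes gz != 0; the sign of gz is the
    front/back ambiguity noted above)."""
    K = gx * bx + gy * by
    A = gx ** 2 + gy ** 2 - bx ** 2 - by ** 2
    s = (-A + math.sqrt(A * A + 4 * K * K)) / 2   # s = gz^2 >= 0
    gz = math.sqrt(s)
    bz = -K / gz
    return gz, bz

# pixel coordinates from the Apollo 13 example
gz, bz = solve_depths(79.5, -48.5, -110.8, -62.8)   # ~ (103.3, 55.8)
```

The returned depths satisfy both constraints to machine precision, so a general-purpose iterative solver is only needed once more conditions (like the lens-distortion correction) enter the fit.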
From the sounds of it, you have three points p1, p2, and p3 defining a plane, and you want to find the normal vector to the plane.
Representing the points as vectors from the origin, an equation for a normal vector would be
n = (p2 - p1)x(p3 - p1)
(where x is the cross-product of the two vectors)
If you want the vector to point outwards from the front of the card, then, per the right-hand rule, set
p1 = red (lower-left) dot
p2 = blue (lower-right) dot
p3 = green (upper-left) dot
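A direct transcription of this answer (Python used for illustration):

```python
def normal_from_dots(p1, p2, p3):
    """n = (p2 - p1) x (p3 - p1): a normal of the plane through the three dots."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    return (uy * vz - uz * vy,
            uz * vx - ux * vz,
            ux * vy - uy * vx)

# red lower-left, blue lower-right, green upper-left, all in the card plane;
# by the right-hand rule the normal points out of the front of the card
n = normal_from_dots((0, 0, 0), (1, 0, 0), (0, 1, 0))  # -> (0, 0, 1)
```

With the dots ordered red, blue, green as suggested, a card facing the camera yields a normal pointing along +z, confirming the orientation convention.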
@Ian Boyd: I liked your explanation, but I got stuck on step 2, when you said to solve for bz. You still had bz in your answer, and I don't think bz should appear in your answer...
bz should be ±sqrt( gx^2 + gy^2 + gz^2 - bx^2 - by^2 )
After I did this myself, I found it very difficult to substitute bz into the first equation when solving for gz, because after substituting bz you get:
gz = -(gx·bx + gy·by) / sqrt( gx^2 + gy^2 + gz^2 - bx^2 - by^2 )
The part that makes this difficult is that gz appears inside the square root, so you have to isolate it, combine the gz terms, and solve for gz. I did that, but I don't think the way I solved it was correct, because when I wrote a program to calculate gz and used your gx and gy values to check whether my answer matched yours, it did not.
So I was wondering if you could help me out, because I really need to get this to work for one of my projects. Thanks!
Just thinking on my feet here.
Your effective inputs are the apparent ratio RB/RG [+], the apparent angle BRG, and the angle that (say) RB makes with your screen's y-axis (did I miss anything?). You need as output the components of the normalized normal (heh!) vector, which I believe is only two independent values (though you are left with a front-back ambiguity if the card is see-through).[++]
So I'm guessing that this is possible...
From here on I work under the assumption that the apparent angle of RB is always 0, and that we can rotate the final solution around the z-axis later.
Start with the card positioned parallel to the viewing plane and oriented in the "natural" way (i.e. the upper vs. lower and left vs. right assignments are respected). We can reach all the interesting positions of the card by rotating by θ around the initial x-axis (for -π/2 < θ < π/2), then rotating by φ around the initial y-axis (for -π/2 < φ < π/2). Note that this preserves the apparent direction of the RB vector.
The next step is to compute the apparent ratio and apparent angle in terms of θ and φ, and to invert the result.[+++]
The normal will be R_y(φ) R_x(θ) (0, 0, 1), where R_i is the primitive rotation matrix around axis i.
[+] The absolute lengths don't count, because they only tell you the distance to the card.
[++] One more assumption: that the distance from the card to the viewing plane is much larger than the size of the card.
[+++] Here the projection you use from 3D space to the viewing plane matters. This is the hard part, but not something we can do for you unless you say which projection you are using. If you are using a real camera, this is a perspective projection and is covered in essentially any book on 3D graphics.
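The final step above, N = R_y(φ) R_x(θ) (0, 0, 1), multiplies out to a simple closed form; a Python sketch of it (angles in radians, function name mine):

```python
import math

def card_normal(theta, phi):
    """Normal of the card after rotating (0, 0, 1) by theta about the
    x-axis, then by phi about the y-axis.
    R_x(theta) maps (0, 0, 1) to (0, -sin(theta), cos(theta));
    R_y(phi) then mixes the x and z components."""
    return (math.cos(theta) * math.sin(phi),
            -math.sin(theta),
            math.cos(theta) * math.cos(phi))
```

At theta = phi = 0 this gives (0, 0, 1), the card facing the viewer, and the result is always unit length, matching the "only two independent values" observation.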
Right: the normal vector does not change with distance, but the projection of the cardboard onto the picture does change with distance. (Simply put: if you have a small piece of cardboard, nothing changes. If you have a piece of cardboard 1 mile wide and 1 mile high and you rotate it so that one side is nearer and the other side farther away, the near side is magnified and the far side shortened in the picture. You can see immediately that a rectangle does not remain a rectangle, but becomes a trapezoid.)
The most accurate way, for small angles and with the camera centered on the middle, is to measure the ratio of width/height between the "normal" image and the angled image along the middle lines (because they are not warped).
We define x as left to right, y as down to up, and z as from far to near.
Then
x = arcsin(measuredWidth / normWidth)     (red-blue)
y = arcsin(measuredHeight / normHeight)   (red-green)
z = sqrt(1.0 - x^2 - y^2)
I will calculate a more exact solution tomorrow, but I'm too tired now...
You could use u,v,n coordinates. Set your viewpoint to the position of the "eye" or "camera", then translate your x,y,z coordinates to u,v,n. From there you can determine the normals, as well as perspective and visible surfaces if you want (u',v',n'). Also, bear in mind that 2D = 3D with z = 0. Finally, make sure you use homogeneous coordinates.