I'm trying to achieve smooth shading of triangles in my graphics program, but I'm currently stuck on how to do it exactly. I've got two options.
Option 1 (per vertex):
Create a "zero" Vector.
Add the non-normalized normal of every incident triangle to the created vector.
Scale the resulting vector by 1 / incidentTriangleCount.
Return the normalized version of the resulting vector.
Option 2 (per vertex):
Create a "zero" Vector.
Add the normalized normal of every incident triangle to the created vector.
Scale the resulting vector by 1 / incidentTriangleCount.
Return the non-normalized version of the resulting vector.
The two approaches give me different results, and I don't really know which one to take. Can anyone give me advice on this?
Always work with normalized normals. That way your two options merge into a single one :)
Besides, be careful about using "every" incident triangle, because then your entire model gets smoothed, which is not good: e.g. a model of a pencil that actually has edges will look like a rounded one. Implement a threshold, i.e. only consider triangles whose normals have a relatively small angle between them.
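Putting that together, here is a minimal sketch in plain Python (no particular graphics library assumed); the 60-degree crease threshold and the use of the first incident face as the reference are illustrative simplifications:

import math

def normalize(v):
    # Scale v to unit length (v must not be the zero vector).
    length = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/length, v[1]/length, v[2]/length)

def vertex_normal(face_normals, crease_angle_deg=60.0):
    # face_normals: already-normalized (nx, ny, nz) normals of the triangles
    # incident to this vertex.  Sum the ones that are within the crease angle
    # of a reference normal, then renormalize the sum; dividing by the count
    # is unnecessary because the final normalization removes the scale anyway.
    cos_threshold = math.cos(math.radians(crease_angle_deg))
    ref = face_normals[0]
    acc = [0.0, 0.0, 0.0]
    for n in face_normals:
        if n[0]*ref[0] + n[1]*ref[1] + n[2]*ref[2] >= cos_threshold:
            acc[0] += n[0]
            acc[1] += n[1]
            acc[2] += n[2]
    return normalize(acc)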
I have two concentric circles, and for each circle three points on its circumference are given.
I need an optimized method to check whether a given random point lies between these circles or not.
You can compute (x²+y²), x, y, 1 for each point. The last entry is simply the constant one. Put these terms for four given points into a matrix and compute its determinant. The determinant will be zero if the points are cocircular. Otherwise the sign will tell you which point is on which side with respect to the circle defined by the other three. Use a simple example to check which sign corresponds to which direction. Be prepared for the fact that the three circle-defining points being oriented in a clockwise or counter-clockwise orientation will affect this sign, too.
Computing a 4×4 determinant can be done horribly inefficiently, too. I'd suggest you compute all the 2×2 minors from the first two rows, and all the 2×2 minors from the last two, then you can combine them to form the full determinant. See this Math SE post for details. If you need further mathematical help (as opposed to programming help), you might find more suitable answers there.
Note that the above works for each circle independently: check whether the point is inside the one, then check whether it is outside the other. It does not make use of the fact that the circles are assumed to be concentric.
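As an illustration, a small NumPy sketch of the determinant test (using numpy.linalg.det for brevity rather than the 2×2-minor expansion mentioned above; as noted, verify the sign convention on a simple known example first):

import numpy as np

def incircle_det(a, b, c, p):
    # 4x4 determinant with rows [x*x + y*y, x, y, 1] for the three
    # circle-defining points a, b, c and the query point p.  Zero means p lies
    # on the circle through a, b, c; otherwise the sign says which side it is
    # on, and it flips if the orientation of a, b, c flips.
    m = np.array([[x*x + y*y, x, y, 1.0] for (x, y) in (a, b, c, p)])
    return np.linalg.det(m)

def between_concentric(inner_pts, outer_pts, p):
    # With counter-clockwise defining points, a positive determinant here
    # means "inside"; check this convention on a known case before relying on it.
    inside_outer = incircle_det(*outer_pts, p) > 0
    outside_inner = incircle_det(*inner_pts, p) < 0
    return inside_outer and outside_inner

# Hypothetical usage: unit circle and radius-2 circle, query point in between.
# print(between_concentric(((1, 0), (0, 1), (-1, 0)),
#                          ((2, 0), (0, 2), (-2, 0)), (1.5, 0.0)))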
I'm working on a PyMEL script that allows the user to duplicate a selected object multiple times, using a CV curve and its points coordinates to transform & rotate each copy to a certain point in space.
In order to achieve this, I'm using the two adjacent points of each CV (control vertex) to determine the rotation for the object.
I have managed to retrieve the coordinates of the curve's CVs:
# Add all points of the curve to the cvDict dictionary
from pymel.core import pointPosition

i = 0
cvDict = {}
while i < selSize:
    pointName = 'point%s' % i
    # Query the world-space position of the i-th CV of the curve
    coords = pointPosition('%s.cv[%s]' % (obj, i), w=1)
    # Set up the key for the current point
    cvDict[pointName] = {}
    # Add coords to the x, y, z subkeys of the dict
    cvDict[pointName]['x'] = coords[0]
    cvDict[pointName]['y'] = coords[1]
    cvDict[pointName]['z'] = coords[2]
    i += 1
Now the problem I'm having is figuring out how to get the angle for each CV.
I stumbled upon the angleBetween() function:
http://download.autodesk.com/us/maya/2010help/CommandsPython/angleBetween.html
In theory, this should be my solution, since I could find the "middle vector" (not sure if that's the mathematical term) for each of the curve's CVs (using the adjacent CVs' coordinates to find a fourth point) and use the above-mentioned function to determine how much I'd have to rotate the object using a reference vector, for example on the z axis.
At least theoretically. The issue is that the function only takes one set of coords for each vector, and I have absolutely no idea how to convert my point coords to that format (since I always have at least two sets of coordinates, one for each point).
Thanks.
If you want to go the long way and not grab the world transforms of the curve, definitely make use of PyMEL's datatypes module. It has everything that Python's native math module does, plus a few Maya-specific extras. Also, the math you would need to do this based on CVs can be found here.
Hope that puts you in the right direction.
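For the specific stumbling block with angleBetween: it compares two direction vectors, so you would first turn each pair of CV positions into a single difference vector. A rough sketch, assuming PyMEL is imported as pm, using the vector1/vector2/euler flags from the linked angleBetween page, and with the cvDict lookups below shown only as hypothetical usage:

import pymel.core as pm

def cv_rotation(prev_coords, next_coords, reference=(0.0, 0.0, 1.0)):
    # Euler rotation taking the reference axis onto the direction that runs
    # from the previous CV to the next CV (the "middle vector" around the
    # current CV).  prev_coords / next_coords are (x, y, z) world positions.
    direction = (next_coords[0] - prev_coords[0],
                 next_coords[1] - prev_coords[1],
                 next_coords[2] - prev_coords[2])
    # angleBetween wants two directions, not four points, hence the subtraction above.
    return pm.angleBetween(euler=True, vector1=reference, vector2=direction)

# Hypothetical usage with the dictionary from the question, for the CV 'point1'
# whose neighbours are 'point0' and 'point2':
# p0 = (cvDict['point0']['x'], cvDict['point0']['y'], cvDict['point0']['z'])
# p2 = (cvDict['point2']['x'], cvDict['point2']['y'], cvDict['point2']['z'])
# rot = cv_rotation(p0, p2)
# pm.rotate(copyOfObject, rot)   # copyOfObject being one of the duplicates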
If you're going to skip the math, maybe you should just create a locator, path-animate it along the curve, and then sample the result. That would allow you to get completely continuous orientations along the curve. The midpoint-constraint method you've outlined above is limited to one valid sample per curve segment: if you wanted 1/4 of the way or 3/4 of the way between two CVs, your orientation would be off. Plus you don't have to reinvent all of the many different options for deciding on the secondary axis of rotation, reading curves with funky parameterization, and so forth.
I'm having trouble with Maple.
I have a cosine wave, which I figured out how to plot, but now I have to take samples from that wave and plot those (as dots) on top of the original cosine wave.
Here is the question from the assignment:
"Produce the samples from Q1 above and plot the result (plot the points on a plot of the cosine wave - use different colours for both, it will look like a cosine wave with dots on it)"
The problem is that my samples keep coming out as straight lines at different heights:
http://i197.photobucket.com/albums/aa221/Haseo_Ame/Maple.png
I'm not sure what I'm doing wrong, since I've never used Maple before.
Firstly, try not to build up lists using repeated concatenation (which can incur an O(n^2) cost in resources) when you can use the seq command instead (which incurs only an O(n) cost). You should always reconsider when you find yourself writing something like s:=[op(s),...] in a loop.
Next, a point-plot needs pairs of x-y values. Your list is just a collection of scalar values, and hence is being interpreted as a collection of constant functions to be plotted.
The pairs of x-y values can be in a list of (2-element) lists, such as [[x1,y1],...,[xn,yn]].
It's not clear how you want your x-axis scaled, but you could start off with something like this,
s:=[seq([i, 4*cos(2*Pi*i*70/200+Pi/4)],i=0..20)]:
plot(s, style=point);
# s:=[seq([2*Pi*i*70/200+Pi/4, 4*cos(2*Pi*i*70/200+Pi/4)],i=0..20)]:
ps. Please post source code as text, not as embedded images, so that anyone trying to help needn't type it all in.
I'm working in OpenCV but I don't think there is a function for this. I can find a function for finding affine transformations, but affine transformations include scaling, and I only want to consider rotation + translation.
Imagine I have two sets of points in 2d - let's say each set has exactly 50 points.
E.g. set A = {x1, y1, x2, y2, ... , x50, y50}
set B = {x1', y1', x2', y2', ... , x50', y50'}
I want to find the rotation and translation combination that gets closest to mapping set A onto set B. I guess I would define "closest" as minimising the average distance between points in A and the corresponding points in B, i.e., minimising the average distance between (x1, y1) and (x1', y1'), etc.
I guess I could use brute force, testing all possible translations and rotations, but this would be extremely inefficient. Does anyone know a simpler way?
Thanks!
This problem has a very elegant solution in terms of a singular value decomposition, applied to the cross-covariance matrix built from the two (centred) point sets. The name of this is the orthogonal Procrustes problem, after the Greek legend about a fellow who offered travellers a bed that would fit anyone.
The solution comes from finding the nearest orthogonal matrix to a given (not necessarily orthogonal) matrix.
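A compact sketch of that solution in Python with NumPy, often called the Kabsch algorithm: centre both point sets, take the SVD of the 2x2 cross-covariance matrix, and read off the rotation and translation. A and B are assumed to be (N, 2) arrays with corresponding rows:

import numpy as np

def rigid_align(A, B):
    # Least-squares rotation R and translation t such that R @ A[i] + t is close to B[i].
    cA, cB = A.mean(axis=0), B.mean(axis=0)        # centroids
    H = (A - cA).T @ (B - cB)                      # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cB - R @ cA
    return R, t

# Example: if B is A rotated by 0.3 rad and shifted by (1, 2), this recovers those values.
# A = np.random.rand(50, 2)
# c, s = np.cos(0.3), np.sin(0.3)
# B = A @ np.array([[c, -s], [s, c]]).T + np.array([1.0, 2.0])
# R, t = rigid_align(A, B)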
The way I would do it in Excel is to make a couple of columns representing the points,
cells representing the rotation/translation of one set (no need to rotate and translate both of them),
then columns holding those same points after the rotation/translation,
then another column for the distance between each transformed point and its corresponding target point,
and a cell holding the sum of those distances.
Finally, use Solver to optimize the rotation and translation cells.
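If you want the same idea outside Excel, here is a rough sketch with SciPy's general-purpose optimizer playing the role of Solver; A and B are assumed to be (N, 2) NumPy arrays of corresponding points:

import numpy as np
from scipy.optimize import minimize

def total_distance(params, A, B):
    # params = (theta, tx, ty): rotate A by theta, translate by (tx, ty),
    # then sum the distances to the corresponding points in B.
    theta, tx, ty = params
    c, s = np.cos(theta), np.sin(theta)
    moved = A @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])
    return np.linalg.norm(moved - B, axis=1).sum()

# result = minimize(total_distance, x0=[0.0, 0.0, 0.0], args=(A, B))
# theta, tx, ty = result.x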
If you fix some rotation, you can get an answer using ternary search: run the search in x, and for every tested x run it in y to get the best value (a sketch follows below). This will give you the correct answer, since the objective (the sum of corresponding distances) is convex: each individual distance is a convex function of the translation, and a sum of convex functions is convex. Minimizing a jointly convex function over y also leaves a convex function of x, so the outer search is valid as well.
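A sketch of that nested search in Python for one fixed angle theta; the cost function (sum of corresponding distances after rotating A and translating) and the search bound are illustrative assumptions:

import math

def cost(A, B, theta, tx, ty):
    # Sum of distances between the transformed A-points and their B-points.
    c, s = math.cos(theta), math.sin(theta)
    return sum(math.hypot(c*ax - s*ay + tx - bx, s*ax + c*ay + ty - by)
               for (ax, ay), (bx, by) in zip(A, B))

def ternary(f, lo, hi, iters=60):
    # Minimize a convex one-dimensional function f on [lo, hi].
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def best_translation(A, B, theta, bound=1000.0):
    # Outer search over tx; for each candidate tx, an inner search over ty.
    def best_cost_for_tx(tx):
        ty = ternary(lambda ty: cost(A, B, theta, tx, ty), -bound, bound)
        return cost(A, B, theta, tx, ty)
    tx = ternary(best_cost_for_tx, -bound, bound)
    ty = ternary(lambda ty: cost(A, B, theta, tx, ty), -bound, bound)
    return tx, ty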
Instead of brute force over the angle, I can propose the following method based on the ternary search. Choose some not-very-large step S and compute the target function for every angle in (0, S, 2S, ...). Then, if S is small enough, we can exclude some of the segments (iS, (i + 1)S) from consideration, namely those where the function values at the angles iS and (i + 1)S are relatively large. Implemented carefully, this can give an answer faster than brute force.
If I have a mesh of triangles, how does one go about calculating the normals at each given vertex?
I understand how to find the normal of a single triangle. If I have triangles sharing vertices, I can partially find the answer by finding each triangle's respective normal, normalizing it, adding it to the total, and then normalizing the end result. However, this obviously does not take into account proper weighting of each normal (many tiny triangles can throw off the answer when linked with a large triangle, for example).
I think a good method is to use a weighted average, but with angles instead of areas as the weights. In my opinion this is a better answer because the normal you are computing is a "local" feature, so you don't really care how big the contributing triangle is... you need a sort of "local" measure of the contribution, and the angle between the two sides of the triangle at the specified vertex is exactly such a local measure.
With this approach, a lot of small (thin) triangles won't give you an unbalanced answer.
Using angles is the same as using an area-weighted average if you localize the computation by taking the intersection of the triangles with a small sphere centered at the vertex.
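For illustration, a NumPy sketch of that angle-weighted accumulation; vertices is assumed to be an (N, 3) float array and faces an (M, 3) integer array of vertex indices per triangle:

import numpy as np

def angle_weighted_normals(vertices, faces):
    # Each incident face contributes its unit normal weighted by the corner
    # angle it forms at that vertex.
    normals = np.zeros_like(vertices)
    for i0, i1, i2 in faces:
        p0, p1, p2 = vertices[i0], vertices[i1], vertices[i2]
        n = np.cross(p1 - p0, p2 - p0)
        n = n / np.linalg.norm(n)                     # unit face normal
        for a, b, c in ((i0, i1, i2), (i1, i2, i0), (i2, i0, i1)):
            u = vertices[b] - vertices[a]
            v = vertices[c] - vertices[a]
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            angle = np.arccos(np.clip(cosang, -1.0, 1.0))   # corner angle at vertex a
            normals[a] += angle * n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths == 0, 1.0, lengths)   # renormalize the sums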
The weighted average appears to be the best approach.
But be aware that, depending on your application, sharp corners could still give you problems. In that case, you can compute multiple vertex normals by averaging only those surface normals whose cross product magnitude is less than some threshold (i.e., that are closer to being parallel).
Search for "Offset triangular mesh using the multiple normal vectors of a vertex" by S. J. Kim et al. for more details about this method.
This blog post outlines three different methods and gives a visual example of why the standard and simple method (area weighted average of the normals of all the faces joining at the vertex) might sometimes give poor results.
You can give more weight to big triangles by multiplying the normal by the area of the triangle.
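Conveniently, the raw (un-normalized) cross product of two triangle edges already has a length of twice the triangle's area, so summing the raw face normals per vertex and normalizing at the end gives exactly this area weighting. A small NumPy sketch, assuming vertices is an (N, 3) float array and faces an (M, 3) index array:

import numpy as np

def area_weighted_normals(vertices, faces):
    normals = np.zeros_like(vertices)
    for i0, i1, i2 in faces:
        # Raw cross product: direction = face normal, length = 2 * face area,
        # so simply summing it already area-weights each contribution.
        n = np.cross(vertices[i1] - vertices[i0], vertices[i2] - vertices[i0])
        normals[i0] += n
        normals[i1] += n
        normals[i2] += n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths == 0, 1.0, lengths)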
Check out this paper: Discrete Differential-Geometry Operators for Triangulated 2-Manifolds.
In particular, the "Discrete Mean Curvature Normal Operator" (Section 3.5, Equation 7) gives a robust normal that is independent of tessellation, unlike the methods in the blog post cited by another answer here.
Obviously you need to use a weighted average to get a correct normal, but using the triangle's area won't give you what you need, since the area of each triangle has no relationship to the percentage weight that triangle's normal should represent for a given vertex.
If you base it on the angle between the two sides coming into the vertex, you should get the correct weight for every triangle coming into it. It might be convenient if you could somehow convert it to 2D so you could work from a 360-degree base for your weights, but most likely just using the angle itself as the weight multiplier in 3D space, adding up all the normals produced that way, and normalizing the final result will produce the correct answer.