I'd like to programmatically draw a shape like this where there is an underlying spiral and equally spaced objects along it, placed tangent to the spiral as shown in this sketch:
I found an example of how to determine equally spaced points along the spiral here and am now trying to place hemispheres along the spiral. However, I'm not sure how to calculate the angle the shape needs to be rotated.
This is what I have so far (viewable here):
var totalSegments = 235, hw = 320, hh = 240, segments;
var len = 15;
points = [];

function setup(){
  createCanvas(640, 480);
  smooth();
  colorMode(HSB, 255, 100, 100);
  stroke(0);
  noFill();
  //println("move cursor vertically");
}
function draw(){
  background(0);
  translate(hw, hh);
  segments = floor(totalSegments);
  points = getTheodorus(segments, len);
  angles = getAngles(segments, len);
  for(var i = 0; i < segments; i++){
    let c = color('blue');
    fill(c);
    noStroke();
    // draw shape
    if(i % 2){
      // console.log(i, ' ', angles[i]);
      // try rotating around the object's center
      push();
      // translate(points[i].x, points[i].y)
      rotate(PI/angles[i]);
      arc(points[i].x, points[i].y, len*3, len*3, 0, 0 + PI);
      pop();
    }
    // draw spiral
    strokeWeight(20);
    stroke(0, 0, 100, (20 + i/segments));
    if(i > 0) line(points[i].x, points[i].y, points[i-1].x, points[i-1].y);
  }
}
function getAngles(segments, len){
  let angles = [];
  let radius = 0;
  let angle = 0;
  for(var i = 0; i < segments; i++){
    radius = sqrt(i+1);
    angle += asin(1/radius);
    angles[i] = angle;
  }
  return angles;
}
function getTheodorus(segments, len){
  var result = [];
  var radius = 0;
  var angle = 0;
  for(var i = 0; i < segments; i++){
    radius = sqrt(i+1);
    angle += asin(1/radius);
    result[i] = new p5.Vector(cos(angle) * radius * len, sin(angle) * radius * len);
  }
  return result;
}
Note that your drawing shows an Archimedean spiral, while the link refers to the spiral of Theodorus.
An Archimedean spiral is described by this equation in polar coordinates (r, theta):
r = a + b * Theta
where a is the initial radius (the radius at Theta = 0), b is a scale value that sets the distance between successive arms, and r is the radius.
The tangent direction at the point with parameter Theta is Theta + atan2(r, b), which approaches Theta + Pi/2 as the spiral winds outward, so that is the angle to rotate each shape by; the normal is perpendicular to it.
If you need an approximation that divides the spiral into (almost) equal segments, note that for large Theta the arc length is s ≈ b * Theta^2 / 2, which gives
Theta ≈ Sqrt(2 * s / b)
for arc length s (this is essentially the Clackson scroll formula; example here).
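For illustration, here is a minimal C# sketch along those lines (all names and constants are mine, purely illustrative). It generates equally spaced points along an Archimedean spiral together with the tangent angle to rotate each shape by:
using System;

// Sketch: equally spaced points along an Archimedean spiral r = a + b * theta,
// each paired with the tangent angle for the shape placed there.
class SpiralPlacement
{
    static void Main()
    {
        double a = 0.0;      // initial radius
        double b = 8.0;      // sets the distance between arms
        double step = 30.0;  // desired arc length between objects

        for (int i = 1; i <= 20; i++)
        {
            double s = i * step;                    // arc length from the center
            double theta = Math.Sqrt(2.0 * s / b);  // from s ≈ b * theta^2 / 2
            double r = a + b * theta;
            double x = r * Math.Cos(theta);
            double y = r * Math.Sin(theta);
            // Exact tangent direction; tends to theta + PI/2 as the spiral winds outward.
            double tangent = theta + Math.Atan2(r, b);
            Console.WriteLine($"({x:F1}, {y:F1})  rotate by {tangent:F3} rad");
        }
    }
}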
I am trying to figure out where a bunch of line segments would clip into a window around them. I saw the Liang–Barsky algorithm, but that seems to assume the segments already cross the edges of the window, which these do not.
Say I have a window from (0,0) to (26,16), and the following segments:
(7,6) - (16,3)
(10,6) - (19,6)
(13,10) - (21,3)
(16,12) - (19,14)
Illustration:
I imagine I need to extend the segments to a certain X or Y point, until they hit the edge of the window, but I don't know how.
How would I find the points where these segments (converted to lines?) clip into the edge of the window? I will be implementing this in C#, but this is pretty language-agnostic.
If you have two line segments P and Q with points
P0 - P1
Q0 - Q1
The line equations are
P = P0 + t(P1 - P0)
Q = Q0 + r(Q1 - Q0)
To find out where they intersect after extension, you need to solve the following equation for t and r:
P0 + t(P1 - P0) = Q0 + r(Q1 - Q0)
The following code does this (extracted from my own code base):
public static (double t, double r)? SolveIntersect(this Segment2D P, Segment2D Q)
{
    // a-d are the entries of a 2x2 matrix
    var a = P.P1.X - P.P0.X;
    var b = -Q.P1.X + Q.P0.X;
    var c = P.P1.Y - P.P0.Y;
    var d = -Q.P1.Y + Q.P0.Y;
    var det = a*d - b*c;
    if (Math.Abs(det) < Utility.ZERO_TOLERANCE)
        return null;
    var x = Q.P0.X - P.P0.X;
    var y = Q.P0.Y - P.P0.Y;
    var t = 1/det*(d*x - b*y);
    var r = 1/det*(-c*x + a*y);
    return (t, r);
}
If the function returns null, the lines are parallel and cannot intersect. If a result is returned, then you can do:
var result = SolveIntersect(P, Q);
if (result != null)
{
    var (t, r) = result.Value;
    var p = P.P0 + t * (P.P1 - P.P0);
    var q = Q.P0 + r * (Q.P1 - Q.P0);
    // p and q are the same point of course
}
The extended line will generally intersect more than one of the four edge lines, but only the intersections that actually lie on the box boundary are relevant (one in each direction along the line). You can check this easily:
bool IsInBox(Point corner0, Point corner1, Point test) =>
    test.X >= corner0.X && test.X <= corner1.X &&
    test.Y >= corner0.Y && test.Y <= corner1.Y;
(The comparisons are inclusive, since the intersection points lie exactly on the boundary.) That should give you all you need to extend your lines to the edge of your box.
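As a sketch, the whole procedure for one segment S might look like this (assuming the Segment2D/Point types above support the constructor and arithmetic shown; the edge list uses the question's 26 x 16 window, and the names are illustrative):
// Find where segment S, extended, meets the box (0,0)-(26,16).
var min = new Point(0, 0);
var max = new Point(26, 16);
var edges = new[]
{
    new Segment2D(new Point(0, 0), new Point(26, 0)),   // bottom
    new Segment2D(new Point(26, 0), new Point(26, 16)), // right
    new Segment2D(new Point(26, 16), new Point(0, 16)), // top
    new Segment2D(new Point(0, 16), new Point(0, 0)),   // left
};

foreach (var edge in edges)
{
    var result = S.SolveIntersect(edge);
    if (result == null) continue;            // S is parallel to this edge
    var (t, _) = result.Value;
    var p = S.P0 + t * (S.P1 - S.P0);
    if (!IsInBox(min, max, p)) continue;     // intersection lies off the box
    // t > 1 extends past P1, t < 0 extends behind P0.
    Console.WriteLine($"extended segment meets the box at {p} (t = {t})");
}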
I managed to figure this out.
I can extend my lines to the edge of the box by first finding the equation of each line, then solving it for the X or Y of each of the four sides to get the corresponding point on that side. This means passing the box's min and max X and min and max Y into the functions below, which yields four candidate points; any point outside the bounds of the box can be ignored.
My code is in C# and adds extension methods to EMGU's LineSegment2D (EMGU is a .NET wrapper for OpenCV).
My code:
public static float GetYIntersection(this LineSegment2D line, float x)
{
    Point p1 = line.P1;
    Point p2 = line.P2;
    float dx = p2.X - p1.X;
    if (dx == 0)
    {
        return float.NaN; // vertical line: no unique y for a given x
    }
    float m = (p2.Y - p1.Y) / dx; // slope
    float b = p1.Y - (m * p1.X);  // y-intercept
    return m * x + b;
}

public static float GetXIntersection(this LineSegment2D line, float y)
{
    Point p1 = line.P1;
    Point p2 = line.P2;
    float dx = p2.X - p1.X;
    if (dx == 0)
    {
        return p1.X; // vertical line: x is constant for every y
    }
    float m = (p2.Y - p1.Y) / dx; // slope
    if (m == 0)
    {
        return float.NaN; // horizontal line: no unique x for a given y
    }
    float b = p1.Y - (m * p1.X);  // y-intercept
    return (y - b) / m;
}
I can then take these points, discard the ones outside the bounds of the box, and remove duplicate points (which occur when the line passes exactly through a corner). That leaves two boundary points, each pairing one of the min or max values I passed into the functions with the coordinate the function returned, and those two points form my new segment.
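A sketch of that procedure, using the two extension methods above (LINQ plus the System.Drawing PointF/Point types; variable names are illustrative):
// Collect the four candidate points, keep those on the box, drop duplicates
// (a line through a corner hits two edges), then pair the survivors.
var candidates = new List<PointF>
{
    new PointF(minX, line.GetYIntersection(minX)),  // left edge
    new PointF(maxX, line.GetYIntersection(maxX)),  // right edge
    new PointF(line.GetXIntersection(minY), minY),  // bottom edge
    new PointF(line.GetXIntersection(maxY), maxY),  // top edge
};

var onBox = candidates
    .Where(p => !float.IsNaN(p.X) && !float.IsNaN(p.Y))
    .Where(p => p.X >= minX && p.X <= maxX && p.Y >= minY && p.Y <= maxY)
    .Distinct()
    .ToList();

// A line that actually crosses the box leaves exactly two points here.
var clipped = new LineSegment2D(Point.Round(onBox[0]), Point.Round(onBox[1]));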
The wiki description of the Liang-Barsky algorithm is not bad, but its code is flawed.
Note: this algorithm is intended to reject lines with no intersection as early as possible. If most of your lines intersect the rectangle, then the approach from your answer may be quite effective; otherwise the L-B algorithm wins.
This page describes the approach in detail and contains concise, effective code:
// Liang-Barsky function by Daniel White # http://www.skytopia.com/project/articles/compsci/clipping.html
// This function inputs 8 numbers, and outputs 4 new numbers (plus a boolean value to say whether the clipped line is drawn at all).
//
bool LiangBarsky (double edgeLeft, double edgeRight, double edgeBottom, double edgeTop,   // Define the x/y clipping values for the border.
                  double x0src, double y0src, double x1src, double y1src,                 // Define the start and end points of the line.
                  double &x0clip, double &y0clip, double &x1clip, double &y1clip)         // The output values, so declare these outside.
{
    double t0 = 0.0; double t1 = 1.0;
    double xdelta = x1src - x0src;
    double ydelta = y1src - y0src;
    double p, q, r;

    for (int edge = 0; edge < 4; edge++) {   // Traverse through left, right, bottom, top edges.
        if (edge == 0) { p = -xdelta; q = -(edgeLeft - x0src);   }
        if (edge == 1) { p =  xdelta; q =  (edgeRight - x0src);  }
        if (edge == 2) { p = -ydelta; q = -(edgeBottom - y0src); }
        if (edge == 3) { p =  ydelta; q =  (edgeTop - y0src);    }
        if (p == 0 && q < 0) return false;   // Don't draw line at all. (parallel line outside)
        if (p == 0) continue;                // Parallel line inside: this edge cannot clip it.
        r = q / p;
        if (p < 0) {
            if (r > t1) return false;        // Don't draw line at all.
            else if (r > t0) t0 = r;         // Line is clipped!
        } else if (p > 0) {
            if (r < t0) return false;        // Don't draw line at all.
            else if (r < t1) t1 = r;         // Line is clipped!
        }
    }

    x0clip = x0src + t0 * xdelta;
    y0clip = y0src + t0 * ydelta;
    x1clip = x0src + t1 * xdelta;
    y1clip = y0src + t1 * ydelta;
    return true;   // (clipped) line is drawn
}
I am trying to implement an angular constraint in a simple Verlet-integration-based 2D physics engine. This is the code I am currently using:
int indexA = physicAngularConstraint[i].indexA;
int indexB = physicAngularConstraint[i].indexB;
int indexC = physicAngularConstraint[i].indexC;

CGPoint e = CGPointSubtract(physicParticle[indexB].pos, physicParticle[indexA].pos);
CGPoint f = CGPointSubtract(physicParticle[indexC].pos, physicParticle[indexB].pos);

float dot = CGPointDot(e, f);
float cross = CGPointCross(e, f);
float angle = atan2f(cross, dot);

float da = (angle < physicAngularConstraint[i].minAngle) ? angle - physicAngularConstraint[i].minAngle
         : (angle > physicAngularConstraint[i].maxAngle) ? angle - physicAngularConstraint[i].maxAngle
         : 0.0f;

if (da != 0.0f)
{
    physicParticle[indexA].pos = CGPointRotate(physicParticle[indexA].pos,
                                               physicParticle[indexB].pos, da);
    physicParticle[indexC].pos = CGPointRotate(physicParticle[indexC].pos,
                                               physicParticle[indexB].pos, -da);
}
The CGPointRotate function looks like this:
CGPoint CGPointRotate(CGPoint pt, CGPoint center, float angle)
{
    CGPoint ret;
    pt = CGPointSubtract(pt, center);
    float co = cosf(angle);
    float si = sinf(angle);
    ret.x = pt.x * co - pt.y * si;
    ret.y = pt.x * si + pt.y * co;
    ret = CGPointAdd(ret, center);
    return ret;
}
I am testing this implementation with a row of particles connected by distance constraints; without the angular constraints they act like a rope. I am trying to give the "rope" some stiffness through the angular constraints, but my implementation above gains energy and blows the system up after a few milliseconds. Why does this constraint implementation gain energy?
I'm trying to code correct 2D affine texture mapping in GLSL.
Explanation:
...None of these images is correct for my purposes. The right one (labeled Correct) has perspective correction, which I do not want. So this: Getting to know the Q texture coordinate solution (without further improvements) is not what I'm looking for.
I'd like to simply "stretch" the texture inside the quadrilateral, something like this:
but composed of two triangles. Any advice (GLSL), please?
This works well as long as you have a trapezoid, and its parallel edges are aligned with one of the local axes. I recommend playing around with my Unity package.
GLSL:
varying vec2 shiftedPosition, width_height;

#ifdef VERTEX
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    shiftedPosition = gl_MultiTexCoord0.xy; // left and bottom edges zeroed.
    width_height = gl_MultiTexCoord1.xy;
}
#endif

#ifdef FRAGMENT
uniform sampler2D _MainTex;
void main() {
    gl_FragColor = texture2D(_MainTex, shiftedPosition / width_height);
}
#endif
C#:
// Zero out the left and bottom edges,
// leaving a right trapezoid with two sides on the axes and a vertex at the origin.
var shiftedPositions = new Vector2[] {
    Vector2.zero,
    new Vector2(0, vertices[1].y - vertices[0].y),
    new Vector2(vertices[2].x - vertices[1].x, vertices[2].y - vertices[3].y),
    new Vector2(vertices[3].x - vertices[0].x, 0)
};
mesh.uv = shiftedPositions;

var widths_heights = new Vector2[4];
widths_heights[0].x = widths_heights[3].x = shiftedPositions[3].x;
widths_heights[1].x = widths_heights[2].x = shiftedPositions[2].x;
widths_heights[0].y = widths_heights[1].y = shiftedPositions[1].y;
widths_heights[2].y = widths_heights[3].y = shiftedPositions[2].y;
mesh.uv2 = widths_heights;
I recently managed to come up with a generic solution to this problem for any type of quadrilateral. The calculations and GLSL may be of help. There's a working demo in Java (it runs on Android), but it is compact and readable and should be easily portable to Unity or iOS: http://www.bitlush.com/posts/arbitrary-quadrilaterals-in-opengl-es-2-0
In case anyone's still interested, here's a C# implementation that takes a quad defined by the clockwise screen verts (x0,y0) (x1,y1) ... (x3,y3), an arbitrary pixel at (x,y) and calculates the u and v of that pixel. It was originally written to CPU-render an arbitrary quad to a texture, but it's easy enough to split the algorithm across CPU, Vertex and Pixel shaders; I've commented accordingly in the code.
float Ax, Bx, Cx, Dx, Ay, By, Cy, Dy, A, B, C;

// These are all uniforms for a given quad. Calculate on CPU.
Ax = (x3 - x0) - (x2 - x1);
Bx = (x0 - x1);
Cx = (x2 - x1);
Dx = x1;
Ay = (y3 - y0) - (y2 - y1);
By = (y0 - y1);
Cy = (y2 - y1);
Dy = y1;
float ByCx_plus_AyDx_minus_BxCy_minus_AxDy = (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy);
float ByDx_minus_BxDy = (By * Dx) - (Bx * Dy);
A = (Ay * Cx) - (Ax * Cy);

// These must be calculated per-vertex, and passed through as interpolated values to the pixel-shader
B = (Ax * y) + ByCx_plus_AyDx_minus_BxCy_minus_AxDy - (Ay * x);
C = (Bx * y) + ByDx_minus_BxDy - (By * x);

// These must be calculated per-pixel using the interpolated B, C and x from the vertex shader along with some of the other uniforms.
float u = ((-B) - Mathf.Sqrt((B * B) - (4.0f * A * C))) / (A * 2.0f);
float v = (x - (u * Cx) - Dx) / ((u * Ax) + Bx);
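One caveat with the snippet above: for a parallelogram, Ax and Ay are both zero, so A = 0 and the formula for u divides by zero. Below is a hedged sketch of the same algorithm wrapped as a function (UnityEngine's Mathf/Vector2; the name QuadUV and both guards are my additions, not part of the original answer). It falls back to the linear root when A is tiny and solves v from whichever coordinate has the better-conditioned denominator:
// Same mapping as above, with guards for the degenerate cases.
static Vector2 QuadUV(float x, float y,
                      float x0, float y0, float x1, float y1,
                      float x2, float y2, float x3, float y3)
{
    float Ax = (x3 - x0) - (x2 - x1), Bx = x0 - x1, Cx = x2 - x1, Dx = x1;
    float Ay = (y3 - y0) - (y2 - y1), By = y0 - y1, Cy = y2 - y1, Dy = y1;

    float A = (Ay * Cx) - (Ax * Cy);
    float B = (Ax * y) + (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy) - (Ay * x);
    float C = (Bx * y) + (By * Dx) - (Bx * Dy) - (By * x);

    float u = Mathf.Abs(A) < 1e-6f
        ? -C / B                                             // parallelogram: quadratic degenerates to linear
        : (-B - Mathf.Sqrt(B * B - 4f * A * C)) / (2f * A);  // general quad

    // Solve v from whichever equation has the larger denominator.
    float denX = (u * Ax) + Bx;
    float denY = (u * Ay) + By;
    float v = Mathf.Abs(denX) >= Mathf.Abs(denY)
        ? (x - (u * Cx) - Dx) / denX
        : (y - (u * Cy) - Dy) / denY;

    return new Vector2(u, v);
}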
Tessellation addresses this problem: subdividing the quad gives the rasterizer more, smaller triangles to interpolate across, so the affine error per triangle shrinks.
Check out this link:
https://www.youtube.com/watch?v=8TleepxIORU&feature=youtu.be
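As a rough sketch of that idea (my own illustration, not taken from the video; assumes UnityEngine.Vector2 and System.Collections.Generic), subdividing a quad into an n x n grid of bilinearly interpolated vertices and UVs:
// Subdivide a quad (corners c00, c10, c01, c11) into an n x n grid.
// Smaller triangles mean smaller affine-interpolation error per triangle.
static void Tessellate(Vector2 c00, Vector2 c10, Vector2 c01, Vector2 c11, int n,
                       List<Vector2> positions, List<Vector2> uvs, List<int> indices)
{
    for (int j = 0; j <= n; j++)
    for (int i = 0; i <= n; i++)
    {
        float u = (float)i / n, v = (float)j / n;
        Vector2 bottom = Vector2.Lerp(c00, c10, u);   // along the bottom edge
        Vector2 top = Vector2.Lerp(c01, c11, u);      // along the top edge
        positions.Add(Vector2.Lerp(bottom, top, v));  // bilinear point inside the quad
        uvs.Add(new Vector2(u, v));
    }
    for (int j = 0; j < n; j++)
    for (int i = 0; i < n; i++)
    {
        int k = j * (n + 1) + i;  // two triangles per grid cell
        indices.AddRange(new[] { k, k + 1, k + n + 1, k + 1, k + n + 2, k + n + 1 });
    }
}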
I had a similar question ( https://gamedev.stackexchange.com/questions/174857/mapping-a-texture-to-a-2d-quadrilateral/174871 ), and at gamedev they suggested using an imaginary Z coordinate, which I calculate using the following C code; it appears to work in the general case (not just for trapezoids):
// usual euclidean distance
float distance(int ax, int ay, int bx, int by) {
    int x = ax - bx;
    int y = ay - by;
    return sqrtf((float)(x*x + y*y));
}

void gfx_quad(gfx_t *dst  // destination texture, we are rendering into
             ,gfx_t *src  // source texture
             ,int *quad   // quadrilateral vertices
             )
{
    int *v = quad; // quad vertices
    float top = distance(v[0],v[1], v[2],v[3]); // top
    float bot = distance(v[4],v[5], v[6],v[7]); // bottom
    float lft = distance(v[0],v[1], v[4],v[5]); // left
    float rgt = distance(v[2],v[3], v[6],v[7]); // right

    // By default all vertices lie on the screen plane.
    float az = 1.0;
    float bz = 1.0;
    float cz = 1.0;
    float dz = 1.0;

    // Move Z away from the screen, based on the edge-length ratios.
    if (top < bot) {
        az *= top/bot;
        bz *= top/bot;
    } else {
        cz *= bot/top;
        dz *= bot/top;
    }
    if (lft < rgt) {
        az *= lft/rgt;
        cz *= lft/rgt;
    } else {
        bz *= rgt/lft;
        dz *= rgt/lft;
    }

    // draw our quad as two textured triangles
    gfx_textured(dst, src
                , v[0],v[1],az, v[2],v[3],bz, v[4],v[5],cz
                , 0.0,0.0, 1.0,0.0, 0.0,1.0);
    gfx_textured(dst, src
                , v[2],v[3],bz, v[4],v[5],cz, v[6],v[7],dz
                , 1.0,0.0, 0.0,1.0, 1.0,1.0);
}
I'm doing it in software to scale and rotate 2D sprites; for an OpenGL 3D app you will need to do it in the pixel/fragment shader, unless you can map these imaginary az, bz, cz, dz into your actual 3D space and use the usual pipeline. DMGregory gave exact code for OpenGL shaders: https://gamedev.stackexchange.com/questions/148082/how-can-i-fix-zig-zagging-uv-mapping-artifacts-on-a-generated-mesh-that-tapers
I came up with this issue as I was trying to implement a homography warping in OpenGL. Some of the solutions that I found relied on a notion of depth, but this was not feasible in my case since I am working on 2D coordinates.
I based my solution on this article, and it seems to work for all cases that I could try. I am leaving it here in case it is useful for someone else as I could not find something similar. The solution makes the following assumptions:
The vertex coordinates are the 4 points of a quad in Lower Right, Upper Right, Upper Left, Lower Left order.
The coordinates are given in OpenGL's reference system (range [-1, 1], with origin at bottom left corner).
std::vector<cv::Point2f> points; // the four quad vertices, filled in elsewhere
std::vector<float> distances;    // distance from each vertex to the intersection point

// Convert points to homogeneous coordinates to simplify the problem.
Eigen::Vector3f p0(points[0].x, points[0].y, 1);
Eigen::Vector3f p1(points[1].x, points[1].y, 1);
Eigen::Vector3f p2(points[2].x, points[2].y, 1);
Eigen::Vector3f p3(points[3].x, points[3].y, 1);

// Compute the intersection point between the lines described by opposite vertices using cross products. Normalization is only required at the end.
// See https://leimao.github.io/blog/2D-Line-Mathematics-Homogeneous-Coordinates/ for a quick summary of this approach.
auto line1 = p2.cross(p0);
auto line2 = p3.cross(p1);
auto intersection = line1.cross(line2);
intersection = intersection / intersection(2);

// Compute the distance to each point.
for (const auto &pt : points) {
    auto distance = std::sqrt(std::pow(pt.x - intersection(0), 2) +
                              std::pow(pt.y - intersection(1), 2));
    distances.push_back(distance);
}

// Assumes the same vertex order as above.
std::vector<cv::Point2f> texture_coords_unnormalized = {
    {1.0f, 1.0f},
    {1.0f, 0.0f},
    {0.0f, 0.0f},
    {0.0f, 1.0f}
};

std::vector<float> texture_coords;
for (size_t i = 0; i < texture_coords_unnormalized.size(); ++i) {
    float u_i = texture_coords_unnormalized[i].x;
    float v_i = texture_coords_unnormalized[i].y;
    float d_i = distances.at(i);
    float d_i_2 = distances.at((i + 2) % 4);
    float scale = (d_i + d_i_2) / d_i_2;
    texture_coords.push_back(u_i * scale);
    texture_coords.push_back(v_i * scale);
    texture_coords.push_back(scale);
}
Pass the texture coordinates to your shader (use vec3). Then:
gl_FragColor = vec4(texture2D(textureSampler, textureCoords.xy/textureCoords.z).rgb, 1.0);
Thanks for the answers, but after experimenting I found a solution.
The two triangles on the left have UVs (STRQ) assigned according to this, and the two triangles on the right are a modified version of this perspective correction.
Numbers and shader:
tri1 = [Vec2(-0.5, -1), Vec2(0.5, -1), Vec2(1, 1)]
tri2 = [Vec2(-0.5, -1), Vec2(1, 1), Vec2(-1, 1)]
d1 = length of top edge = 2
d2 = length of bottom edge = 1
tri1_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(d2 / d1, 0, 0, d2 / d1), Vec4(1, 1, 0, 1)]
tri2_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(1, 1, 0, 1), Vec4(0, 1, 0, 1)]
Only the right triangles are rendered using this GLSL shader (the left side is the fixed pipeline):
void main()
{
    gl_FragColor = texture2D(colormap, vec2(gl_TexCoord[0].x / gl_TexCoord[0].w, gl_TexCoord[0].y));
}
So only U is perspective-corrected and V stays linear.
I am experimenting with Kinect on WinRT for a Metro app, and I am trying to obtain the angle at the elbow.
Normally I would do the following:
Vector3D handLeftVector = new Vector3D(HandLeftX, HandLeftY, HandLeftZ);
handLeftVector.Normalize();
Vector3D ElbowLeftEVector = new Vector3D(ElbowLeftX, ElbowLeftY, ElbowLeftZ);
ElbowLeftEVector.Normalize();
Vector3D ShoulderLeftVector = new Vector3D(ShoulderLeftX, ShoulderLeftY, ShoulderLeftZ);
ShoulderLeftVector.Normalize();
Vector3D leftElbowV1 = ShoulderLeftVector - ElbowLeftEVector;
Vector3D leftElbowV2 = handLeftVector - ElbowLeftEVector;
double leftElbowAngle = Vector3D.AngleBetween(leftElbowV1, leftElbowV2);
However, the Vector3D object isn't available in WinRT.
I decided to replicate the Vector3D method as below, but the result doesn't seem to be as expected. Did I make a mistake anywhere?
double leftElbowV1X = ShoulderLeftX - ElbowLeftX;
double leftElbowV1Y = ShoulderLeftY - ElbowLeftY;
double leftElbowV1Z = ShoulderLeftZ - ElbowLeftZ;
double leftElbowV2X = handLeftX - ElbowLeftX;
double leftElbowV2Y = handLeftY - ElbowLeftY;
double leftElbowV2Z = handLeftZ - ElbowLeftZ;
double product = leftElbowV1X * leftElbowV2X + leftElbowV1Y * leftElbowV2Y + leftElbowV1Z * leftElbowV2Z;
double magnitudeA = Math.Sqrt(Math.Pow(leftElbowV1X, 2) + Math.Pow(leftElbowV1Y, 2) + Math.Pow(leftElbowV1Z, 2));
double magnitudeB = Math.Sqrt(Math.Pow(leftElbowV2X, 2) + Math.Pow(leftElbowV2Y, 2) + Math.Pow(leftElbowV2Z, 2));
magnitudeA = Math.Abs(magnitudeA);
magnitudeB = Math.Abs(magnitudeB);
double cosDelta = product / (magnitudeA * magnitudeB);
double angle = Math.Acos(cosDelta) * 180.0 / Math.PI;
And is there a need to normalize it?
I managed to resolve it, but I am wondering whether there is a more efficient way of doing this.
Not sure if this helps, but this is some old angle code I use; it returns the angle in degrees:
float AngleBetween(Vector3 from, Vector3 dest) {
    float len = from.magnitude * dest.magnitude;
    if (len < Mathf.Epsilon) len = Mathf.Epsilon;
    float f = Vector3.Dot(from, dest) / len;
    if (f > 1.0f) f = 1.0f;
    else if (f < -1.0f) f = -1.0f;
    return Mathf.Acos(f) * 180.0f / (float)Math.PI;
}
It's obviously using API-specific syntax, but I think the method is clear.
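Applied back to the elbow question, the call would look something like this (using the difference vectors directly; the division by the two magnitudes inside AngleBetween already takes care of normalization, so no separate normalize step is needed):
Vector3 shoulder = new Vector3(ShoulderLeftX, ShoulderLeftY, ShoulderLeftZ);
Vector3 elbow = new Vector3(ElbowLeftX, ElbowLeftY, ElbowLeftZ);
Vector3 hand = new Vector3(HandLeftX, HandLeftY, HandLeftZ);

// Direction vectors from the elbow joint toward the shoulder and the hand.
float leftElbowAngle = AngleBetween(shoulder - elbow, hand - elbow); // degrees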