I'm trying to code correct 2D affine texture mapping in GLSL.
Explanation:
...none of these images is correct for my purposes. The right one (labeled "Correct") has perspective correction, which I do not want. So this: Getting to know the Q texture coordinate solution (without further improvements) is not what I'm looking for.
I'd like to simply "stretch" the texture inside the quadrilateral, something like this:
but composed from two triangles. Any advice (GLSL) please?
This works well as long as you have a trapezoid, and its parallel edges are aligned with one of the local axes. I recommend playing around with my Unity package.
GLSL:
varying vec2 shiftedPosition, width_height;
#ifdef VERTEX
void main() {
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
shiftedPosition = gl_MultiTexCoord0.xy; // left and bottom edges zeroed.
width_height = gl_MultiTexCoord1.xy;
}
#endif
#ifdef FRAGMENT
uniform sampler2D _MainTex;
void main() {
gl_FragColor = texture2D(_MainTex, shiftedPosition / width_height);
}
#endif
C#:
// Zero out the left and bottom edges,
// leaving a right trapezoid with two sides on the axes and a vertex at the origin.
var shiftedPositions = new Vector2[] {
Vector2.zero,
new Vector2(0, vertices[1].y - vertices[0].y),
new Vector2(vertices[2].x - vertices[1].x, vertices[2].y - vertices[3].y),
new Vector2(vertices[3].x - vertices[0].x, 0)
};
mesh.uv = shiftedPositions;
var widths_heights = new Vector2[4];
widths_heights[0].x = widths_heights[3].x = shiftedPositions[3].x;
widths_heights[1].x = widths_heights[2].x = shiftedPositions[2].x;
widths_heights[0].y = widths_heights[1].y = shiftedPositions[1].y;
widths_heights[2].y = widths_heights[3].y = shiftedPositions[2].y;
mesh.uv2 = widths_heights;
I recently managed to come up with a generic solution to this problem for any type of quadrilateral. The calculations and GLSL may be of help. There's a working demo in Java (that runs on Android), but it is compact and readable and should be easily portable to Unity or iOS: http://www.bitlush.com/posts/arbitrary-quadrilaterals-in-opengl-es-2-0
In case anyone's still interested, here's a C# implementation that takes a quad defined by the clockwise screen verts (x0,y0) (x1,y1) ... (x3,y3), an arbitrary pixel at (x,y) and calculates the u and v of that pixel. It was originally written to CPU-render an arbitrary quad to a texture, but it's easy enough to split the algorithm across CPU, Vertex and Pixel shaders; I've commented accordingly in the code.
float Ax, Bx, Cx, Dx, Ay, By, Cy, Dy, A, B, C;
//These are all uniforms for a given quad. Calculate on CPU.
Ax = (x3 - x0) - (x2 - x1);
Bx = (x0 - x1);
Cx = (x2 - x1);
Dx = x1;
Ay = (y3 - y0) - (y2 - y1);
By = (y0 - y1);
Cy = (y2 - y1);
Dy = y1;
float ByCx_plus_AyDx_minus_BxCy_minus_AxDy = (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy);
float ByDx_minus_BxDy = (By * Dx) - (Bx * Dy);
A = (Ay*Cx)-(Ax*Cy);
//These must be calculated per-vertex, and passed through as interpolated values to the pixel-shader
B = (Ax * y) + ByCx_plus_AyDx_minus_BxCy_minus_AxDy - (Ay * x);
C = (Bx * y) + ByDx_minus_BxDy - (By * x);
//These must be calculated per-pixel using the interpolated B, C and x from the vertex shader along with some of the other uniforms.
u = ((-B) - Mathf.Sqrt((B*B-(4.0f*A*C))))/(A*2.0f);
v = (x - (u * Cx) - Dx)/((u*Ax)+Bx);
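For convenience, here is a small C# wrapper around the maths above. This is my own packaging, not the original author's, so treat it as a sketch; the only addition is a guard for the near-parallelogram case, where A is close to zero and the quadratic degenerates into a linear equation.
// Assumes: using UnityEngine; (for Mathf and Vector2, matching the Mathf.Sqrt call above)
static Vector2 QuadUV(float x, float y,
                      float x0, float y0, float x1, float y1,
                      float x2, float y2, float x3, float y3)
{
    // Same uniforms as above, computed from the clockwise screen verts.
    float Ax = (x3 - x0) - (x2 - x1), Bx = x0 - x1, Cx = x2 - x1, Dx = x1;
    float Ay = (y3 - y0) - (y2 - y1), By = y0 - y1, Cy = y2 - y1, Dy = y1;

    float A = Ay * Cx - Ax * Cy;
    float B = (Ax * y) - (Ay * x) + (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy);
    float C = (Bx * y) - (By * x) + (By * Dx) - (Bx * Dy);

    float u = Mathf.Abs(A) < 1e-6f
        ? -C / B                                              // parallelogram: quadratic collapses to B*u + C = 0
        : (-B - Mathf.Sqrt(B * B - 4f * A * C)) / (2f * A);   // general case, same root as above

    float denom = u * Ax + Bx;
    float v = Mathf.Abs(denom) > 1e-6f
        ? (x - u * Cx - Dx) / denom
        : (y - u * Cy - Dy) / (u * Ay + By);                  // fall back to the y equation if the x one degenerates
    return new Vector2(u, v);
}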
Tessellation solves this problem. Subdividing the quad adds more vertices, which gives the interpolator more hints for the pixels in between.
Check out this link.
https://www.youtube.com/watch?v=8TleepxIORU&feature=youtu.be
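To make that concrete, here is a hedged Unity-flavoured C# sketch (function name and parameters are mine, following the Unity answer earlier in this thread): subdivide the quad into an n-by-n grid by bilinearly interpolating the four corners, with matching linear UVs. The more subdivisions, the less visible the affine distortion of each triangle; in the limit you approach the bilinear "stretch" the question asks for.
// Assumes: using UnityEngine;
Mesh BuildSubdividedQuad(Vector3 c0, Vector3 c1, Vector3 c2, Vector3 c3, int n)
{
    // c0..c3 are the quad corners in order: bottom-left, bottom-right, top-right, top-left.
    var vertices = new Vector3[(n + 1) * (n + 1)];
    var uv = new Vector2[(n + 1) * (n + 1)];
    for (int j = 0; j <= n; j++)
    {
        for (int i = 0; i <= n; i++)
        {
            float u = i / (float)n;
            float v = j / (float)n;
            Vector3 bottom = Vector3.Lerp(c0, c1, u);
            Vector3 top = Vector3.Lerp(c3, c2, u);
            vertices[j * (n + 1) + i] = Vector3.Lerp(bottom, top, v);   // bilinear position
            uv[j * (n + 1) + i] = new Vector2(u, v);                    // linear texture coords
        }
    }
    var triangles = new int[n * n * 6];
    int k = 0;
    for (int j = 0; j < n; j++)
    {
        for (int i = 0; i < n; i++)
        {
            int i00 = j * (n + 1) + i;
            int i10 = i00 + 1;
            int i01 = i00 + (n + 1);
            int i11 = i01 + 1;
            triangles[k++] = i00; triangles[k++] = i11; triangles[k++] = i10;
            triangles[k++] = i00; triangles[k++] = i01; triangles[k++] = i11;
        }
    }
    return new Mesh { vertices = vertices, uv = uv, triangles = triangles };
}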
I had a similar question (https://gamedev.stackexchange.com/questions/174857/mapping-a-texture-to-a-2d-quadrilateral/174871), and at gamedev they suggested using an imaginary Z coordinate, which I calculate with the following C code. It appears to work in the general case (not just trapezoids):
//usual euclidean distance
float distance(int ax, int ay, int bx, int by) {
int x = ax-bx;
int y = ay-by;
return sqrtf((float)(x*x + y*y));
}
void gfx_quad(gfx_t *dst //destination texture, we are rendering into
,gfx_t *src //source texture
,int *quad // quadrilateral vertices
)
{
int *v = quad; //quad vertices
float z = 20.0;
float top = distance(v[0],v[1],v[2],v[3]); //top
float bot = distance(v[4],v[5],v[6],v[7]); //bottom
float lft = distance(v[0],v[1],v[4],v[5]); //left
float rgt = distance(v[2],v[3],v[6],v[7]); //right
// By default all vertices lie on the screen plane
float az = 1.0;
float bz = 1.0;
float cz = 1.0;
float dz = 1.0;
// Move Z away from the screen plane based on the edge-length ratios.
if (top<bot) {
az *= top/bot;
bz *= top/bot;
} else {
cz *= bot/top;
dz *= bot/top;
}
if (lft<rgt) {
az *= lft/rgt;
cz *= lft/rgt;
} else {
bz *= rgt/lft;
dz *= rgt/lft;
}
// draw our quad as two textured triangles
gfx_textured(dst, src
, v[0],v[1],az, v[2],v[3],bz, v[4],v[5],cz
, 0.0,0.0, 1.0,0.0, 0.0,1.0);
gfx_textured(dst, src
, v[2],v[3],bz, v[4],v[5],cz, v[6],v[7],dz
, 1.0,0.0, 0.0,1.0, 1.0,1.0);
}
I'm doing it in software to scale and rotate 2D sprites; for an OpenGL 3D app you will need to do it in the pixel/fragment shader, unless you can map these imaginary az, bz, cz, dz into your actual 3D space and use the usual pipeline. DMGregory gave exact code for OpenGL shaders: https://gamedev.stackexchange.com/questions/148082/how-can-i-fix-zig-zagging-uv-mapping-artifacts-on-a-generated-mesh-that-tapers
I ran into this issue while trying to implement homography warping in OpenGL. Some of the solutions I found relied on a notion of depth, but that was not feasible in my case since I am working with 2D coordinates.
I based my solution on this article, and it seems to work for all cases that I could try. I am leaving it here in case it is useful for someone else as I could not find something similar. The solution makes the following assumptions:
The vertex coordinates are the 4 points of a quad in Lower Right, Upper Right, Upper Left, Lower Left order.
The coordinates are given in OpenGL's reference system (range [-1, 1], with origin at bottom left corner).
std::vector<cv::Point2f> points;
// Convert points to homogeneous coordinates to simplify the problem.
Eigen::Vector3f p0(points[0].x, points[0].y, 1);
Eigen::Vector3f p1(points[1].x, points[1].y, 1);
Eigen::Vector3f p2(points[2].x, points[2].y, 1);
Eigen::Vector3f p3(points[3].x, points[3].y, 1);
// Compute the intersection point between the lines described by opposite vertices using cross products. Normalization is only required at the end.
// See https://leimao.github.io/blog/2D-Line-Mathematics-Homogeneous-Coordinates/ for a quick summary of this approach.
auto line1 = p2.cross(p0);
auto line2 = p3.cross(p1);
auto intersection = line1.cross(line2);
intersection = intersection / intersection(2);
// Compute the distance from the intersection to each point.
std::vector<float> distances;
for (const auto &pt : points) {
auto distance = std::sqrt(std::pow(pt.x - intersection(0), 2) +
std::pow(pt.y - intersection(1), 2));
distances.push_back(distance);
}
// Assumes same order as above.
std::vector<cv::Point2f> texture_coords_unnormalized = {
{1.0f, 1.0f},
{1.0f, 0.0f},
{0.0f, 0.0f},
{0.0f, 1.0f}
};
std::vector<float> texture_coords;
for (int i = 0; i < texture_coords_unnormalized.size(); ++i) {
float u_i = texture_coords_unnormalized[i].x;
float v_i = texture_coords_unnormalized[i].y;
float d_i = distances.at(i);
float d_i_2 = distances.at((i + 2) % 4);
float scale = (d_i + d_i_2) / d_i_2;
texture_coords.push_back(u_i*scale);
texture_coords.push_back(v_i*scale);
texture_coords.push_back(scale);
}
Pass the texture coordinates to your shader (use vec3). Then:
gl_FragColor = vec4(texture2D(textureSampler, textureCoords.xy/textureCoords.z).rgb, 1.0);
Thanks for the answers, but after experimenting I found a solution.
The two triangles on the left have UVs (STRQ) according to this, and the two triangles on the right are a modified version of this perspective correction.
Numbers and shader:
tri1 = [Vec2(-0.5, -1), Vec2(0.5, -1), Vec2(1, 1)]
tri2 = [Vec2(-0.5, -1), Vec2(1, 1), Vec2(-1, 1)]
d1 = length of top edge = 2
d2 = length of bottom edge = 1
tri1_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(d2 / d1, 0, 0, d2 / d1), Vec4(1, 1, 0, 1)]
tri2_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(1, 1, 0, 1), Vec4(0, 1, 0, 1)]
Only the triangles on the right are rendered using this GLSL shader (the left side uses the fixed pipeline):
void main()
{
gl_FragColor = texture2D(colormap, vec2(gl_TexCoord[0].x / gl_TexCoord[0].w, gl_TexCoord[0].y));
}
So only U is perspective-corrected and V stays linear.
Related
The code below compiles fine but I can't run it on Turbo C++. The runtime screen just flashes, even though I have used getch(). I don't know where I am going wrong. What should I do?
#include<conio.h>
#include<math.h>
#include<stdlib.h>
#include<graphics.h>
void main()
{
int gm;
int gd = DETECT; //graphic driver
int x1, x2, x3, y1, y2, y3, x1n, x2n, x3n, y1n, y2n, y3n, c; //vertices of triangle
int r; //rotation angle
float t;
initgraph(&gd, &gm, "C:\TURBOC3:\BGI:");
setcolor(RED);
printf("\t Enter vertices of triangle: ");
scanf("%d%d%d%d%d%d", &x1,&y1,&x2,&y2,&x3,&y3);
line(x1,y1,x2,y2);
line(x2,y2,x3,y3);
line(x3,y3,x1,y1);
printf("\nEnter angle of rotation: ");
scanf("%d",&r);
t = 3.14*r/180; //converting degree into radian
//applying 2D rotation equations
x1n = abs(x1*cos(t)-y1*sin(t));
y1n = abs(x1*sin(t)+y1*cos(t));
x2n = abs(x2*cos(t)-y2*sin(t));
y2n = abs(x2*sin(t)+y2*cos(t));
x3n = abs(x3*cos(t)-y3*sin(t));
y3n = abs(x3*sin(t)+y3*cos(t));
//Drawing the rotated triangle
line(x1n,y1n,x2n,y2n);
line(x2n,y2n,x3n,y3n);
line(x3n,y3n,x1n,y1n);
getch();
}
There are many useful pieces of info in the comments.
The problem (or at least the main one) is clear: the path to the .bgi files ("C:\TURBOC3:\BGI:") is wrong; it's not even a valid Windows (DOS) path.
It contains a bunch of colons (:), when only the drive letter (if present) should contain one
It's always good to escape (double) backslashes (\) in paths. This doesn't affect you in this case, but it's a general guideline
As a consequence, initgraph fails.
Another golden rule when programming is: always check a function's outcome (return code, error flags, ...); don't assume everything just worked fine! In this case, graphresult should be used. I don't know where the official documentation is (or whether it exists), but here's a pretty good substitute: [Colorado.CS]: Borland Graphics Interface (BGI) for Windows.
There are also some minor problems, like printf not functioning in graphics mode (scanf does, but it lets the user's input be displayed in text mode, so it messes up part of the graphics screen).
Here's a modified version of the code (I added the test variable to avoid entering the 7 values every time the program is run).
main00.c:
#include <conio.h>
#include <graphics.h>
#include <math.h>
#include <stdlib.h>
int main() {
int err, gm, gd = DETECT; // Graphic driver
int x1, x2, x3, y1, y2, y3, x1n, x2n, x3n, y1n, y2n, y3n, c; // Vertices of triangle
int r; // Rotation angle
float t;
int test = 1; // Set to: 0 to read from keyboard, or anything else to use predefined values
if (test) {
x1 = 220;
y1 = 200;
x2 = 420;
y2 = 200;
x3 = 320;
y3 = 280;
r = 45;
} else {
printf("\nEnter vertices (x, y) of triangle: ");
scanf("%d%d%d%d%d%d", &x1, &y1, &x2, &y2, &x3, &y3);
printf("\nEnter angle of rotation (degrees): ");
scanf("%d", &r);
}
initgraph(&gd, &gm, "Y:\\BC\\BGI"); // You should use "C:\\TURBOC3\\BGI"
err = graphresult();
if (err != grOk) {
printf("Error initializing graphics: %d\n", err);
getch();
return -1;
}
setcolor(WHITE);
outtextxy(10, 10, "Triangle rotation demo");
setcolor(LIGHTRED);
line(x1, y1, x2, y2);
line(x2, y2, x3, y3);
line(x3, y3, x1, y1);
t = M_PI * r / 180; // Converting degrees into radians
// Applying 2D rotation equations
x1n = abs(x1 * cos(t) - y1 * sin(t));
y1n = abs(x1 * sin(t) + y1 * cos(t));
x2n = abs(x2 * cos(t) - y2 * sin(t));
y2n = abs(x2 * sin(t) + y2 * cos(t));
x3n = abs(x3 * cos(t) - y3 * sin(t));
y3n = abs(x3 * sin(t) + y3 * cos(t));
// Drawing the rotated triangle
setcolor(YELLOW);
line(x1n, y1n, x2n, y2n);
line(x2n, y2n, x3n, y3n);
line(x3n, y3n, x1n, y1n);
getch();
return 0;
}
Output (in a DOSBox emulator):
Build:
Run:
Note: The rotated triangle (yellow) might seem positioned a bit unexpectedly (translated), but that is because no rotation center is explicitly provided, so O(0, 0) (origin - upper left corner) is used, and the 3 points are rotated around it. If choosing one of the triangle vertices (or better: one of its centers) as rotation center, the 2 triangles will overlap, making the rotation more obvious. But that's just (plane) geometry, and it's beyond this question's scope.
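As a side note, rotating about a chosen centre is just translate, rotate, translate back. Here is a tiny C# sketch of the idea (the answer's code is Turbo C, so treat this purely as an illustration; the function name and usage values are mine, taken from the demo triangle above):
// Assumes: using System;
static (double X, double Y) RotateAbout(double x, double y, double cx, double cy, double radians)
{
    double dx = x - cx, dy = y - cy;                              // translate so the centre is at the origin
    double xr = dx * Math.Cos(radians) - dy * Math.Sin(radians);  // standard 2D rotation
    double yr = dx * Math.Sin(radians) + dy * Math.Cos(radians);
    return (cx + xr, cy + yr);                                    // translate back
}
// Usage with the demo triangle and a 45-degree rotation about its centroid:
// centroid = ((220 + 420 + 320) / 3.0, (200 + 200 + 280) / 3.0) = (320, 226.67)
// var p1 = RotateAbout(220, 200, 320, 226.67, Math.PI / 4);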
I am trying to figure out where a bunch of line-segments clip into a window around them. I saw the Liang–Barsky algorithm, but that seems to assume the segments already clip the edges of the window, which these do not.
Say I have a window from (0,0) to (26,16), and the following segments:
(7,6) - (16,3)
(10,6) - (19,6)
(13,10) - (21,3)
(16,12) - (19,14)
Illustration:
I imagine I need to extend the segments to a certain X or Y point, till they hit the edge of the window, but I don't know how.
How would I find the points where these segments (converted to lines?) clip into the edge of the window? I will be implementing this in C#, but this is pretty language-agnostic.
If you have two line segments P and Q with points
P0 - P1
Q0 - Q1
The line equations are
P = P0 + t(P1 - P0)
Q = Q0 + r(Q1 - Q0)
then to find out where they intersect after extension you need to solve the following equation for t and r
P0 + t(P1 - P0) = Q0 + r(Q1 - Q0)
The following code can do this (extracted from my own code base):
public static (double t, double r )? SolveIntersect(this Segment2D P, Segment2D Q)
{
// a-d are the entries of a 2x2 matrix
var a = P.P1.X - P.P0.X;
var b = -Q.P1.X + Q.P0.X;
var c = P.P1.Y - P.P0.Y;
var d = -Q.P1.Y + Q.P0.Y;
var det = a*d - b*c;
if (Math.Abs( det ) < Utility.ZERO_TOLERANCE)
return null;
var x = Q.P0.X - P.P0.X;
var y = Q.P0.Y - P.P0.Y;
var t = 1/det*(d*x - b*y);
var r = 1/det*(-c*x + a*y);
return (t, r);
}
If null is returned from the function, it means the lines are parallel and cannot intersect. If a result is returned, then you can do:
var result = SolveIntersect( P, Q );
if (result != null)
{
var ( t, r) = result.Value;
var p = P.P0 + t * (P.P1 - P.P0);
var q = Q.P0 + r * (Q.P1 - Q.P0);
// p and q are the same point of course
}
The extended line will generally cross the lines through all four box edges, but only the crossings that actually lie on the box boundary (one on each side of the segment) are the ones you want. You can check for that easily:
bool IsInBox(Point corner0, Point corner1, Point test) =>
test.X >= corner0.X && test.X <= corner1.X && test.Y >= corner0.Y && test.Y <= corner1.Y;
That should give you all you need to extend your lines to the edge of your box.
I managed to figure this out.
I can extend my lines to the edge of the box by first finding the equations of my lines, then solving for the X and Y of each of the sides to get their corresponding point. This requires passing the max and min Y and the max and min X into the following functions, returning 4 values. If the point is outside the bounds of the box, it can be ignored.
My code is in C#, and is making extension methods for EMGU's LineSegment2D. This is a .NET wrapper for OpenCv.
My Code:
public static float GetYIntersection(this LineSegment2D line, float x)
{
Point p1 = line.P1;
Point p2 = line.P2;
float dx = p2.X - p1.X;
if(dx == 0)
{
return float.NaN;
}
float m = (p2.Y - p1.Y) / dx; //Slope
float b = p1.Y - (m * p1.X); //Y-Intercept
return m * x + b;
}
public static float GetXIntersection(this LineSegment2D line, float y)
{
Point p1 = line.P1;
Point p2 = line.P2;
float dx = p2.X - p1.X;
if (dx == 0)
{
return float.NaN;
}
float m = (p2.Y - p1.Y) / dx; //Slope
float b = p1.Y - (m * p1.X); //Y-Intercept
return (y - b) / m;
}
I can then take these points, check whether they are within the bounds of the box, discard the ones that are not, and remove duplicate points (which happen when a line goes directly into a corner). This leaves me with one X and one Y value, which I pair with the corresponding min or max Y or X values I passed into the functions to make two points. I can then make my new segment from those two points, as sketched below.
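Here is a hedged sketch of that filtering step, using the two extension methods above together with EMGU's LineSegment2D and System.Drawing points (the helper name and structure are mine, not part of EMGU):
// Assumes: using System.Collections.Generic; using System.Drawing; using System.Linq; using Emgu.CV.Structure;
static List<PointF> EdgeHits(LineSegment2D line, float minX, float minY, float maxX, float maxY)
{
    var hits = new List<PointF>
    {
        new PointF(minX, line.GetYIntersection(minX)),   // left edge
        new PointF(maxX, line.GetYIntersection(maxX)),   // right edge
        new PointF(line.GetXIntersection(minY), minY),   // bottom edge
        new PointF(line.GetXIntersection(maxY), maxY),   // top edge
    };
    // Keep only points that actually lie on the box boundary; drop NaNs/infinities and duplicates (corners).
    return hits.Where(p => !float.IsNaN(p.X) && !float.IsNaN(p.Y)
                        && p.X >= minX && p.X <= maxX
                        && p.Y >= minY && p.Y <= maxY)
               .Distinct()
               .ToList();
}
// e.g. EdgeHits(new LineSegment2D(new Point(7, 6), new Point(16, 3)), 0, 0, 26, 16)
// returns the two boundary points the extended segment reaches.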
The Wikipedia description of the Liang–Barsky algorithm is not bad, but the code there is flawed.
Note: this algorithm is intended to throw out lines without an intersection as early as possible. If most of the lines intersect the rectangle, then the approach from your answer might be rather effective; otherwise the Liang–Barsky algorithm wins.
This page describes the approach in detail and contains concise, effective code:
// Liang-Barsky function by Daniel White # http://www.skytopia.com/project/articles/compsci/clipping.html
// This function inputs 8 numbers, and outputs 4 new numbers (plus a boolean value to say whether the clipped line is drawn at all).
//
bool LiangBarsky (double edgeLeft, double edgeRight, double edgeBottom, double edgeTop, // Define the x/y clipping values for the border.
double x0src, double y0src, double x1src, double y1src, // Define the start and end points of the line.
double &x0clip, double &y0clip, double &x1clip, double &y1clip) // The output values, so declare these outside.
{
double t0 = 0.0; double t1 = 1.0;
double xdelta = x1src-x0src;
double ydelta = y1src-y0src;
double p,q,r;
for(int edge=0; edge<4; edge++) { // Traverse through left, right, bottom, top edges.
if (edge==0) { p = -xdelta; q = -(edgeLeft-x0src); }
if (edge==1) { p = xdelta; q = (edgeRight-x0src); }
if (edge==2) { p = -ydelta; q = -(edgeBottom-y0src);}
if (edge==3) { p = ydelta; q = (edgeTop-y0src); }
if(p==0 && q<0) return false; // Don't draw line at all. (parallel line outside)
r = q/p;
if(p<0) {
if(r>t1) return false; // Don't draw line at all.
else if(r>t0) t0=r; // Line is clipped!
} else if(p>0) {
if(r<t0) return false; // Don't draw line at all.
else if(r<t1) t1=r; // Line is clipped!
}
}
x0clip = x0src + t0*xdelta;
y0clip = y0src + t0*ydelta;
x1clip = x0src + t1*xdelta;
y1clip = y0src + t1*ydelta;
return true; // (clipped) line is drawn
}
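Since the question mentions C#, here is a near-literal C# port of the routine above, offered as a sketch (only the out-parameter plumbing differs from the original; nothing else is changed):
// Returns false if the clipped segment lies entirely outside the window.
static bool LiangBarsky(double edgeLeft, double edgeRight, double edgeBottom, double edgeTop,
                        double x0src, double y0src, double x1src, double y1src,
                        out double x0clip, out double y0clip, out double x1clip, out double y1clip)
{
    x0clip = y0clip = x1clip = y1clip = 0;
    double t0 = 0.0, t1 = 1.0;
    double xdelta = x1src - x0src;
    double ydelta = y1src - y0src;

    for (int edge = 0; edge < 4; edge++)   // traverse left, right, bottom, top edges
    {
        double p = 0, q = 0;
        if (edge == 0) { p = -xdelta; q = -(edgeLeft - x0src); }
        if (edge == 1) { p = xdelta;  q = (edgeRight - x0src); }
        if (edge == 2) { p = -ydelta; q = -(edgeBottom - y0src); }
        if (edge == 3) { p = ydelta;  q = (edgeTop - y0src); }

        if (p == 0 && q < 0) return false;   // parallel line outside this edge
        double r = q / p;
        if (p < 0)
        {
            if (r > t1) return false;
            if (r > t0) t0 = r;              // clip the start point
        }
        else if (p > 0)
        {
            if (r < t0) return false;
            if (r < t1) t1 = r;              // clip the end point
        }
    }

    x0clip = x0src + t0 * xdelta;
    y0clip = y0src + t0 * ydelta;
    x1clip = x0src + t1 * xdelta;
    y1clip = y0src + t1 * ydelta;
    return true;                             // (clipped) line is drawn
}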
I want to project a vector onto vector a and vector c, in Processing.
In my sketch vector a is red and c is blue. I wanted c to be perpendicular to b, but this is where I'm having a lot of trouble. I'm using the JAMA library to try and make this easier. Any help with this is much appreciated, as I have been stumped for about a week now.
float X=200; // Origin : Note we have now centred the origin in the X-direction
float Y=350;
float ax=150; // Vector a resolved into components
float ay=-50;
float bx=0; // Vector b resolved into components
float by=-150;
float cx=150;
float cy=200;
Matrix a;
Matrix b;
Matrix c;
void setup() {
size(400,400); // Create a drawing window
strokeWeight(3); // Make pen 3 pixels wide for all lines
double [][] anums = {{ax},
{ay}};
double [][] bnums = {{bx},
{by}};
double [][] cnums = {{-cy},
{cx}};
a = new Matrix(anums);
b = new Matrix(bnums);
c = new Matrix(cnums);
}
void draw() {
background(255); // Clear screen
// Evaluate equation (1.5)
// STEP1: Insert code here that computes a_unit (i.e. the unit vector in the
// direction of a
double length = a.norm2();
Matrix a_unit= a.times(1/length);
// STEP2: Insert code here to compute the dot product of b and a_unit
Matrix a_unit_T = a_unit.transpose();
Matrix projection = a_unit_T.times(b);
double lp = projection.get(0,0);
// STEP3 Insert code here to compute the vector p using equation 1.5 above
Matrix p = a_unit.times(lp);
float px = (float)p.get(0,0);
float py = (float)p.get(1,0);
float ax = (float)a.get(0,0);
float ay = (float)a.get(1,0);
float bx = (float)b.get(0,0);
float by = (float)b.get(1,0);
float cx = (float)c.get(0,0);
float cy = (float)c.get(1,0);
// Draw the projection of b onto a
stroke(0,0,0); // Use a black pen
ellipse(X+px,Y+py,10,10); // point where b projects onto a
line(X+px,Y+py,X+bx,Y+by); // line from a to point of projection on b
stroke(255,0,0); // Make pen red
arrow(X,Y,X+ax,Y+ay); // Draw vector a starting at (X,Y)
//stroke(0,0,255);
//arrow(X,Y,X-ax,Y+ay);
stroke(0,255,0); // Make pen green
arrow(X,Y,X+bx,Y+by); // Draw vector b starting at (X,Y)
// STEP 4. Insert code here to add a new vector at 90 degrees to the vector a
stroke(0,0,255);
arrow(X,Y,X+cx,Y+cy);
// STEP 5. Insert code here to compute and draw the projection of b onto c
double length1 = c.norm2();
Matrix c_unit= c.times(1/length1);
// STEP2: Insert code here to compute the dot product of b and a_unit
Matrix c_unit_T = c_unit.transpose();
Matrix projection1 = c_unit_T.times(b);
double lp1 = projection.get(0,0);
// STEP3 Insert code here to compute the vector p using equation 1.5 above
Matrix r = c_unit.times(lp1);
float rx = (float)r.get(0,0);
float ry = (float)r.get(1,0);
stroke(0,0,0); // Use a black pen
ellipse(X+rx,Y+ry,10,10); // point where b projects onto a
line(X+rx,Y+ry,X+bx,Y+by); // line from a to point of projection on b
if (mouseButton == RIGHT)
{
a.set(0,0,(double)mouseX-X);
a.set(1,0,(double)mouseY-Y);
}
if (mouseButton == LEFT)
{
b.set(0,0,(double)mouseX-X);
b.set(1,0,(double)mouseY-Y);
}
}
// Draw an arrow from (x1,y1) to (x2,y2)
void arrow(float x1, float y1, float x2, float y2) {
line(x1, y1, x2, y2);
pushMatrix();
translate(x2, y2);
float a = atan2(x1-x2, y2-y1);
rotate(a);
line(0, 0, -8, -8);
line(0, 0, 8, -8);
popMatrix();
}
Here is the code, mate:
float X=200; // Origin : Note we have now centred the origin in the X-direction
float Y=350;
float ax=300; // Vector a resolved into components
float ay=-100;
float bx=0; // Vector b resolved into components
float by=-300;
Matrix a;
Matrix b;
void setup()
{
size(400,400); // Create a drawing window
strokeWeight(3); // Make pen 3 pixels wide for all lines
double [][] anums = {{ax},
{ay}};
double [][] bnums = {{bx},
{by}};
a = new Matrix(anums);
b = new Matrix(bnums);
}
void draw()
{
background(255); // Clear screen
// Evaluate equation (1.5)
// STEP1: Insert code here that computes a_unit (i.e. the unit vector in the
// direction of a
double length = a.norm2();
Matrix a_unit = a.times(1/length);
// STEP2: Insert code here to compute the dot product of b and a_unit
Matrix a_unit_T = a_unit.transpose();
Matrix projection = a_unit_T.times(b);
double lp = projection.get(0,0);
// STEP3: Insert code here to compute the vector p using equation 1.5 above
Matrix p = a_unit.times(lp);
float px = (float)p.get(0,0);
float py = (float)p.get(1,0);
float ax = (float)a.get(0,0);
float ay = (float)a.get(1,0);
float bx = (float)b.get(0,0);
float by = (float)b.get(1,0);
// Draw the projection of b onto a
stroke(0,0,0); // Use a black pen
ellipse(X+px,Y+py,10,10); // point where b projects onto a
line(X+px,Y+py,X+bx,Y+by); // line from a to point of projection on b
stroke(255,0,0); // Make pen red
arrow(X,Y,X+ax,Y+ay); // Draw vector a starting at (X,Y)
stroke(0,255,0); // Make pen green
arrow(X,Y,X+bx,Y+by); // Draw vector b starting at (X,Y)
// STEP 4. Insert code here to add a new vector at 90 degrees to the vector a
double [][] cnums = {{ay},
{-ax}};
Matrix c = new Matrix(cnums);
float cx = (float)c.get(0,0);
float cy = (float)c.get(1,0);
stroke(0,0,255);
arrow(X,Y,X+cx,Y+cy);
// STEP 5. Insert code here to compute and draw the projection of b onto c
double length1 = c.norm2();
Matrix c_unit= c.times(1/length1);
Matrix c_unit_T = c_unit.transpose();
Matrix projection1 = c_unit_T.times(b);
double lp1 = projection1.get(0,0);
Matrix r = c_unit.times(lp1);
float rx = (float)r.get(0,0);
float ry = (float)r.get(1,0);
stroke(0,0,0); // Use a black pen
ellipse(X+rx,Y+ry,10,10); // point where b projects onto a
line(X+rx,Y+ry,X+bx,Y+by); // line from a to point of projection on b
if (mouseButton == RIGHT)
{
a.set(0,0,(double)mouseX-X);
a.set(1,0,(double)mouseY-Y);
}
if (mouseButton == LEFT)
{
b.set(0,0,(double)mouseX-X);
b.set(1,0,(double)mouseY-Y);
}
}
// Draw an arrow from (x1,y1) to (x2,y2)
void arrow(float x1, float y1, float x2, float y2)
{
line(x1, y1, x2, y2);
pushMatrix();
translate(x2, y2);
float a = atan2(x1-x2, y2-y1);
rotate(a);
line(0, 0, -8, -8);
line(0, 0, 8, -8);
popMatrix();
}
I'm developing a Processing sketch that, given a certain angle, draws a dot at the edge of a rhombus.
I know the width of the rhombus, and its position, but I'm not sure how to calculate the x-y coordinates of a dot resting at its edge.
Are there any elegant solutions for this problem? Any help in pseudocode would be welcomed.
Let the square's side length be A and the half-length H = A/2. The angle is Theta and the intersection point is P.
All coordinates are relative to the square center.
Rotate the square by -Pi/4, so the angle becomes Alpha = Theta - Pi/4:
if Alpha lies in range -Pi/4..Pi/4, then intersection point P' = (H, H*Tan(Alpha))
if Alpha lies in range Pi/4..3*Pi/4, then P' = (H*Cotangent(Alpha), H)
if Alpha lies in range 3*Pi/4..5*Pi/4, then P' = (-H, -H*Tan(Alpha))
if Alpha lies in range 5*Pi/4..7*Pi/4, then P' = (-H*Cotangent(Alpha), -H)
Then rotate point P' back by Pi/4:
S = Sqrt(2)/2
P.X = S * (P'.X - P'.Y)
P.Y = S * (P'.X + P'.Y)
Example (data like your sketch):
A = 200, Theta = 5*Pi/12
H = 200/2 = 100, Alpha =Theta-Pi/4 = Pi/6
P'.X = H = 100
P'.Y = H * Tan(Alpha) = 100 * Tan(Pi/6) ~= 57.7
S = 0.707
P.X = 0.707 * (100 - 57.7) = 30
P.Y = 0.707 * (100 + 57.7) = 111
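A C# sketch of the same recipe, in case it helps (the function and variable names are mine; coordinates are relative to the square centre, as above):
// Assumes: using System;
static (double X, double Y) RhombusEdgePoint(double A, double theta)
{
    double H = A / 2.0;
    // Work in the un-rotated square's frame: alpha = theta - Pi/4, wrapped into [-Pi/4, 7*Pi/4).
    double alpha = theta - Math.PI / 4.0;
    alpha -= 2.0 * Math.PI * Math.Floor(alpha / (2.0 * Math.PI));
    if (alpha >= 7.0 * Math.PI / 4.0) alpha -= 2.0 * Math.PI;

    double px, py;
    if (alpha < Math.PI / 4.0)            { px = H;                    py = H * Math.Tan(alpha); }   // right side
    else if (alpha < 3.0 * Math.PI / 4.0) { px = H / Math.Tan(alpha);  py = H; }                     // top side
    else if (alpha < 5.0 * Math.PI / 4.0) { px = -H;                   py = -H * Math.Tan(alpha); }  // left side
    else                                  { px = -H / Math.Tan(alpha); py = -H; }                    // bottom side

    // Rotate P' back by +Pi/4.
    double s = Math.Sqrt(2.0) / 2.0;
    return (s * (px - py), s * (px + py));
}
// RhombusEdgePoint(200, 5.0 * Math.PI / 12.0) gives approximately (30, 111), matching the example.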
Based on your image, you want to find the intersection of two equations, that of the line at angle θ, and that of the side of the square with which it intersects.
Assuming the size of your square is n, the equation of the square is y=±(n*(√2/2))±x (by Pythagoras' theorem). The equation for the side you intersect in your image is y=n*(√2/2)-x.
The equation of the radial line can be calculated using trigonometry to be y=tan(θ)*x, with θ expressed in radians.
You can then solve this as a simultaneous equation to determine the intersection. Please note that it will intersect with both sides of the square (both above and below), so if you only want the one you will have to choose the equation for the correct side of the square. Also guard against the case where θ is π/2, as tan(π/2) is undefined. You can easily work out that case, as x=0 and so it will always intersect at y=±(n*(√2/2)).
In your example, the intersection occurs when x*(1+tan(θ)) = n*(√2/2), or x = (n*(√2/2))/(1+tan(θ)). You can calculate that, plug it back into y, and that is your (x,y) intersection.
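A small C# sketch of that calculation for the upper-right edge only, as in the example above (the function name is mine, and the θ = π/2 case is handled separately as the answer suggests):
// Assumes: using System; valid for theta in [0, Pi/2], where the ray meets the upper-right edge.
static (double X, double Y) UpperRightEdgeIntersection(double n, double theta)
{
    double half = n * Math.Sqrt(2.0) / 2.0;          // distance from centre to a corner
    if (Math.Abs(theta - Math.PI / 2.0) < 1e-9)
        return (0.0, half);                          // tan(Pi/2) is undefined; x = 0 there
    double x = half / (1.0 + Math.Tan(theta));       // from x*(1 + tan(theta)) = n*sqrt(2)/2
    return (x, Math.Tan(theta) * x);
}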
Imagine a circle with a larger radius that will intersect your rhombus at the points you want. One way to draw at that location is to use a nested coordinate system that you translate and rotate. All you need to know is the radius and the angle.
Here's a very basic example:
float angle = radians(-80.31);
float radius = 128;
float centerX,centerY;
void setup(){
size(320,320);
noFill();
rectMode(CENTER);
centerX = width * 0.5;
centerY = height * 0.5;
}
void draw(){
background(255);
noFill();
//small circle
strokeWeight(1);
stroke(95,105,120);
ellipse(centerX,centerY,210,210);
rhombus(centerX,centerY,210);
//large circle
strokeWeight(3);
stroke(95,105,120);
ellipse(centerX,centerY,radius * 2,radius * 2);
//line at angle
pushMatrix();
translate(centerX,centerY);
rotate(angle);
stroke(162,42,32);
line(0,0,radius,0);
popMatrix();
//debug
fill(0);
text("angle: " + degrees(angle),10,15);
}
void rhombus(float x,float y,float size){
pushMatrix();
translate(x,y);
rotate(radians(45));
rect(0,0,size,size);
popMatrix();
}
void mouseDragged(){
angle = atan2(centerY-mouseY,centerX-mouseX)+PI;
}
You can try a demo here (you can drag the mouse to change the angle):
var angle;
var radius = 128;
var centerX,centerY;
function setup(){
createCanvas(320,320);
noFill();
rectMode(CENTER);
angle = radians(-80.31);
centerX = width * 0.5;
centerY = height * 0.5;
}
function draw(){
background(255);
noFill();
//small circle
strokeWeight(1);
stroke(95,105,120);
ellipse(centerX,centerY,210,210);
rhombus(centerX,centerY,210);
//large circle
strokeWeight(3);
stroke(95,105,120);
ellipse(centerX,centerY,radius * 2,radius * 2);
//line at angle
push();
translate(centerX,centerY);
rotate(angle);
stroke(162,42,32);
line(0,0,radius,0);
pop();
//debug
fill(0);
noStroke();
text("angle: " + degrees(angle),10,15);
}
function rhombus(x,y,size){
push();
translate(x,y);
rotate(radians(45));
rect(0,0,size,size);
pop();
}
function mouseDragged(){
angle = atan2(centerY-mouseY,centerX-mouseX)+PI;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.0/p5.min.js"></script>
If you want to calculate the position, you can use the polar to cartesian coordinate conversion formula:
x = cos(angle) * radius
y = sin(angle) * radius
Here's an example using that. Note that drawing is done from the centre, therefore the centre coordinates are added to the above:
float angle = radians(-80.31);
float radius = 128;
float centerX,centerY;
void setup(){
size(320,320);
noFill();
rectMode(CENTER);
centerX = width * 0.5;
centerY = height * 0.5;
}
void draw(){
background(255);
noFill();
//small circle
strokeWeight(1);
stroke(95,105,120);
ellipse(centerX,centerY,210,210);
rhombus(centerX,centerY,210);
//large circle
strokeWeight(3);
stroke(95,105,120);
ellipse(centerX,centerY,radius * 2,radius * 2);
//line at angle
float x = centerX+(cos(angle) * radius);
float y = centerY+(sin(angle) * radius);
stroke(162,42,32);
line(centerX,centerY,x,y);
//debug
fill(0);
text("angle: " + degrees(angle),10,15);
}
void rhombus(float x,float y,float size){
pushMatrix();
translate(x,y);
rotate(radians(45));
rect(0,0,size,size);
popMatrix();
}
void mouseDragged(){
angle = atan2(centerY-mouseY,centerX-mouseX)+PI;
}
Another option would be using transformation matrices
I am trying to display a mathematical surface f(x,y) defined on an XY regular mesh using OpenGL and C++ in an effective manner:
struct XYRegularSurface {
double x0, y0;
double dx, dy;
int nx, ny;
XYRegularSurface(int nx_, int ny_) : nx(nx_), ny(ny_) {
z = new float[nx*ny];
}
~XYRegularSurface() {
delete [] z;
}
float& operator()(int ix, int iy) {
return z[ix*ny + iy];
}
float x(int ix, int iy) {
return x0 + ix*dx;
}
float y(int ix, int iy) {
return y0 + iy*dy;
}
float zmin();
float zmax();
float* z;
};
Here is my OpenGL paint code so far:
void color(QColor & col) {
float r = col.red()/255.0f;
float g = col.green()/255.0f;
float b = col.blue()/255.0f;
glColor3f(r,g,b);
}
void paintGL_XYRegularSurface(XYRegularSurface &surface, float zmin, float zmax) {
float x, y, z;
QColor col;
glBegin(GL_QUADS);
for(int ix = 0; ix < surface.nx - 1; ix++) {
for(int iy = 0; iy < surface.ny - 1; iy++) {
x = surface.x(ix,iy);
y = surface.y(ix,iy);
z = surface(ix,iy);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
x = surface.x(ix + 1, iy);
y = surface.y(ix + 1, iy);
z = surface(ix + 1,iy);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
x = surface.x(ix + 1, iy + 1);
y = surface.y(ix + 1, iy + 1);
z = surface(ix + 1,iy + 1);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
x = surface.x(ix, iy + 1);
y = surface.y(ix, iy + 1);
z = surface(ix,iy + 1);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
}
}
glEnd();
}
The problem is that this is slow: with nx = ny = 1000 I get roughly 1 fps.
How do I optimize this to be faster?
EDIT: following your suggestion (thanks!) regarding VBO
I added:
float* XYRegularSurface::xyz() {
float* data = new float[3*nx*ny];
long i = 0;
for(int ix = 0; ix < nx; ix++) {
for(int iy = 0; iy < ny; iy++) {
data[i++] = x(ix,iy);
data[i++] = y(ix,iy);
data[i++] = z[ix*ny + iy]; // z is indexed per vertex, not by the interleaved data index
}
}
return data;
}
I think I understand how I can create a VBO, initialize it to xyz() and send it to the GPU in one go, but how do I use the VBO when drawing? I understand that this can be done either in the vertex shader or with glDrawElements? I assume the latter is easier? If so: I do not see any QUAD mode in the documentation for glDrawElements!?
Edit2:
So I can loop through all nx*ny quads and draw each by:
GL_UNSIGNED_INT indices[4];
// ... set indices
glDrawElements(GL_QUADS, 1, GL_UNSIGNED_INT, indices);
?
1/. Use display lists, to cache GL commands - avoiding recalculation of the vertices and the expensive per-vertex call overhead. If the data is updated, you need to look at client-side vertex arrays (not to be confused with VAOs). Now ignore this option...
2/. Use vertex buffer objects. Available as of GL 1.5.
Since you need VBOs for core profile anyway (i.e., modern GL), you can at least get to grips with this first.
Well, you've asked a rather open ended question. I'd suggest using modern (3.0+) OpenGL for everything. The point of just about any new OpenGL feature is to provide a faster way to do things. Like everyone else is suggesting, use array (vertex) buffer objects and vertex array objects. Use an element array (index) buffer object too. Most GPUs have a 'post-transform cache', which stores the last few transformed vertices, but this can only be used when you call the glDraw*Elements family of functions. I also suggest you store a flat mesh in your VBO, where y=0 for each vertex. Sample the y from a heightmap texture in your vertex shader. If you do this, whenever the surface changes you will only need to update the heightmap texture, which is easier than updating the VBO. Use one of the floating point or integer texture formats for a heightmap, so you aren't restricted to having your values be between 0 and 1.
If so: I do not see any QUAD mode in the documentation for glDrawElements!?
If you want quads make sure you're looking at the GL 2.1-era docs, not the new stuff.
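To tie those two points together, here is a hedged sketch of the index data you would put in the element array buffer so the grid is drawn as two triangles per cell (core GL has no GL_QUADS). It is written in C# purely for illustration; the index arithmetic is what matters, and it follows the xyz() layout above, where vertex (ix, iy) sits at offset ix*ny + iy in the buffer.
static uint[] BuildGridIndices(int nx, int ny)
{
    var indices = new uint[(nx - 1) * (ny - 1) * 6];      // 2 triangles * 3 indices per cell
    int k = 0;
    for (int ix = 0; ix < nx - 1; ix++)
    {
        for (int iy = 0; iy < ny - 1; iy++)
        {
            uint i00 = (uint)(ix * ny + iy);
            uint i10 = (uint)((ix + 1) * ny + iy);
            uint i11 = (uint)((ix + 1) * ny + iy + 1);
            uint i01 = (uint)(ix * ny + iy + 1);
            indices[k++] = i00; indices[k++] = i10; indices[k++] = i11;   // first triangle of the cell
            indices[k++] = i00; indices[k++] = i11; indices[k++] = i01;   // second triangle of the cell
        }
    }
    return indices;
}
// Upload this once with glBufferData(GL_ELEMENT_ARRAY_BUFFER, ...) and draw the whole surface
// with a single glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0) call.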