This is a computer graphics code for 2D rotation in TURBO C++. It compiles fine but I can't run it. What should I do? - turbo-c++

The code below compiles fine but I can't run it in TURBO C++. The runtime screen just flashes, even though I have also used getch(). I don't know where I am going wrong. What should I do?
#include<conio.h>
#include<math.h>
#include<stdlib.h>
#include<graphics.h>
void main()
{
int gm;
int gd = DETECT; //graphic driver
int x1, x2, x3, y1, y2, y3, x1n, x2n, x3n, y1n, y2n, y3n, c; //vertices of triangle
int r; //rotation angle
float t;
initgraph(&gd, &gm, "C:\TURBOC3:\BGI:");
setcolor(RED);
printf("\t Enter vertices of triangle: ");
scanf("%d%d%d%d%d%d", &x1,&y1,&x2,&y2,&x3,&y3);
line(x1,y1,x2,y2);
line(x2,y2,x3,y3);
line(x3,y3,x1,y1);
printf("\nEnter angle of rotation: ");
scanf("%d",&r);
t = 3.14*r/180; //converting degree into radian
//applying 2D rotation equations
x1n = abs(x1*cos(t)-y1*sin(t));
y1n = abs(x1*sin(t)+y1*cos(t));
x2n = abs(x2*cos(t)-y2*sin(t));
y2n = abs(x2*sin(t)+y2*cos(t));
x3n = abs(x3*cos(t)-y3*sin(t));
y3n = abs(x3*sin(t)+y3*cos(t));
//Drawing the rotated triangle
line(x1n,y1n,x2n,y2n);
line(x2n,y2n,x3n,y3n);
line(x3n,y3n,x1n,y1n);
getch();
}

There are many useful pieces of info in the comments.
The problem (or at least the main one) is clear: the path to the .bgi files ("C:\TURBOC3:\BGI:") is wrong; in fact it's not even a valid Win (DOS) path.
It contains a bunch of colons (:), when only the drive letter (if present) should be followed by one.
It's always good to escape (double) backslashes (\) in paths. This doesn't affect you in this case, but it's a general guideline.
As a consequence, initgraph fails.
Another golden rule when programming is: always check a function's outcome (return code, error flags, ...); don't assume everything just worked fine! In this case, graphresult should be used. I don't know where the official documentation is (or if it exists), but here's a pretty good substitute: [Colorado.CS]: Borland Graphics Interface (BGI) for Windows.
There are also some minor problems, like printf not working in graphics mode (scanf does work, but it displays the user's input in text mode, so it messes up (part of) the graphics screen).
Here's a modified version of the code (I added the test variable to avoid entering the 7 values every time the program is run).
main00.c:
#include <conio.h>
#include <graphics.h>
#include <math.h>
#include <stdlib.h>
int main() {
int err, gm, gd = DETECT; // Graphic driver
int x1, x2, x3, y1, y2, y3, x1n, x2n, x3n, y1n, y2n, y3n, c; // Vertices of triangle
int r; // Rotation angle
float t;
int test = 1; // Set to: 0 to read from keyboard, or anything else to use predefined values
if (test) {
x1 = 220;
y1 = 200;
x2 = 420;
y2 = 200;
x3 = 320;
y3 = 280;
r = 45;
} else {
printf("\nEnter vertices (x, y) of triangle: ");
scanf("%d%d%d%d%d%d", &x1, &y1, &x2, &y2, &x3, &y3);
printf("\nEnter angle of rotation (degrees): ");
scanf("%d", &r);
}
initgraph(&gd, &gm, "Y:\\BC\\BGI"); // You should use "C:\\TURBOC3\\BGI"
err = graphresult();
if (err != grOk) {
printf("Error initializing graphics: %d\n", err);
getch();
return -1;
}
setcolor(WHITE);
outtextxy(10, 10, "Triangle rotation demo");
setcolor(LIGHTRED);
line(x1, y1, x2, y2);
line(x2, y2, x3, y3);
line(x3, y3, x1, y1);
t = M_PI * r / 180; // Converting degrees into radians
// Applying 2D rotation equations
x1n = abs(x1 * cos(t) - y1 * sin(t));
y1n = abs(x1 * sin(t) + y1 * cos(t));
x2n = abs(x2 * cos(t) - y2 * sin(t));
y2n = abs(x2 * sin(t) + y2 * cos(t));
x3n = abs(x3 * cos(t) - y3 * sin(t));
y3n = abs(x3 * sin(t) + y3 * cos(t));
// Drawing the rotated triangle
setcolor(YELLOW);
line(x1n, y1n, x2n, y2n);
line(x2n, y2n, x3n, y3n);
line(x3n, y3n, x1n, y1n);
getch();
return 0;
}
Output (in a DOSBox emulator): [build and run screenshots]
Note: The rotated triangle (yellow) might seem positioned a bit unexpectedly (translated), but that is because no rotation center is explicitly provided, so O(0, 0) (the origin, upper left corner) is used, and the 3 points are rotated around it. If you choose one of the triangle's vertices (or better, one of its centers) as the rotation center, the 2 triangles will overlap, making the rotation more obvious. But that's just (plane) geometry, and it's beyond this question's scope.
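Here's a minimal sketch of that idea (plain C; rotate_about and the centroid choice are illustrative additions, not part of the program above): to rotate around an arbitrary center, translate the center to the origin, rotate, and translate back.
#include <math.h>
void rotate_about(float cx, float cy, float t, float *x, float *y)
{
    float dx = *x - cx;                   /* translate so (cx, cy) sits at the origin */
    float dy = *y - cy;
    *x = cx + dx * cos(t) - dy * sin(t);  /* rotate, then translate back */
    *y = cy + dx * sin(t) + dy * cos(t);
}
Calling this for all three vertices with cx = (x1 + x2 + x3) / 3.0 and cy = (y1 + y2 + y3) / 3.0 (the centroid) makes the rotated triangle overlap the original one.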


Receiving denormalized output texture coordinates in Frag shader

Update
See rationale at the end of my question below
Using WebGL2 I can access a texel by its denormalized coordinates (sorry, I don't know the right lingo for this). That means I don't have to scale them down to 0-1 like I do in texture2D().
However the input to the fragment shader is still the vec2/3 in normalized values.
Is there a way to declare in/out variables in the Vertex and Frag shaders so that I don't have to scale the coordinates?
somewhere in vertex shader:
...
out vec2 TextureCoordinates;
somewhere in frag shader:
...
in vec2 TextureCoordinates;
I would like for TextureCoordinates to be ivec2 and already scaled.
This question and all my other questions on WebGL relate to general computing using WebGL. We are trying to do tensor (multi-D matrix) operations using WebGL.
We map our data in a few ways to a texture. The simplest approach we follow is, assuming we can access our data as a flat array, to lay it out along the texture's width and go up the texture's height until we're done.
Since our thinking, logic, and calculations are all based on tensor/matrix indices, inside the fragment shader we have to map between the X-Y texture coordinates and those indices. The intermediate step is to calculate a flat offset for a given texel position; from that offset we can then calculate the matrix indices using its strides.
Calculating an offset in WebGL 1 for very large textures seems to take much longer than in WebGL 2 using integer coordinates. See below:
WebGL 1 offset calculation
int coordsToOffset(vec2 coords, int width, int height) {
float s = coords.s * float(width);
float t = coords.t * float(height);
int offset = int(t) * width + int(s);
return offset;
}
vec2 offsetToCoords(int offset, int width, int height) {
int t = offset / width;
int s = offset - t*width;
vec2 coords = (vec2(s,t) + vec2(0.5,0.5)) / vec2(width, height);
return coords;
}
WebGL 2 offset calculation in the presence of int coords
int coordsToOffset(ivec2 coords, int width) {
return coords.t * width + coords.s;
}
ivec2 offsetToCoords(int offset, int width) {
int t = offset / width;
int s = offset - t*width;
return ivec2(s,t);
}
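To make the stride step concrete, here is a minimal sketch (plain C, though it ports almost line-for-line to GLSL; the rank-3 shape is a made-up example, not from our actual code) of turning a flat offset back into tensor indices:
/* Hypothetical rank-3 tensor with shape [d0][d1][d2], stored row-major.
   Strides are s0 = d1*d2, s1 = d2, s2 = 1. */
void offsetToIndices(int offset, int d1, int d2, int idx[3]) {
    int s0 = d1 * d2;                /* stride of the outermost dimension */
    idx[0] = offset / s0;
    offset -= idx[0] * s0;
    idx[1] = offset / d2;            /* middle dimension, stride d2 */
    idx[2] = offset - idx[1] * d2;   /* innermost dimension, stride 1 */
}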
It should be clear that for a series of large texture operations we're saving hundreds of thousands of operations just on the offset/coords calculation.
It's not clear why you want to do what you're trying to do. It would be better to ask something like "I'm trying to draw an image/implement post processing glow/do ray tracing/... and to do that I want to use un-normalized texture coordinates because ..." and then we can tell you if your solution is going to work and how to solve it.
In any case, passing int or unsigned int or ivec2/3/4 or uvec2/3/4 as a varying is supported, but interpolation is not: you have to declare them as flat.
Still, you can pass un-normalized values as float or vec2/3/4 and then convert to int or ivec2/3/4 in the fragment shader.
The other issue is that you get no sampling with texelFetch, the function that takes texel coordinates instead of normalized texture coordinates. It just returns the exact value of a single pixel and does not support filtering like the normal texture function.
Example:
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert("need webgl2");
}
const vs = `#version 300 es
in vec4 position;
in ivec2 texelcoord;
out vec2 v_texcoord;
void main() {
v_texcoord = vec2(texelcoord);
gl_Position = position;
}
`;
const fs = `#version 300 es
precision mediump float;
in vec2 v_texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main() {
outColor = texelFetch(tex, ivec2(v_texcoord), 0);
}
`;
// compile shaders, link program, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// create buffers via gl.createBuffer, gl.bindBuffer, gl.bufferData
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
numComponents: 2,
data: [
-.5, -.5,
.5, -.5,
0, .5,
],
},
texelcoord: {
numComponents: 2,
data: new Int32Array([
0, 0,
15, 0,
8, 15,
]),
}
});
// make a 16x16 texture
const ctx = document.createElement('canvas').getContext('2d');
ctx.canvas.width = 16;
ctx.canvas.height = 16;
for (let i = 23; i > 0; --i) {
ctx.fillStyle = `hsl(${i / 23 * 360 | 0}, 100%, ${i % 2 ? 25 : 75}%)`;
ctx.beginPath();
ctx.arc(8, 15, i, 0, Math.PI * 2, false);
ctx.fill();
}
const tex = twgl.createTexture(gl, { src: ctx.canvas });
gl.useProgram(programInfo.program);
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// no need to set uniforms since they default to 0
// and only one texture which is already on texture unit 0
gl.drawArrays(gl.TRIANGLES, 0, 3);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
So in response to your updated question it's still not clear what you want to do. Why do you want to pass varyings to the fragment shader? Can't you just do whatever math you want in the fragment shader itself?
Example:
uniform sampler2D tex;
out float result;
// sum all the values in the texture
vec4 sum4 = vec4(0);
ivec2 texDim = textureSize(tex, 0);
for (int y = 0; y < texDim.y; ++y) {
for (int x = 0; x < texDim.x; ++x) {
sum4 += texelFetch(tex, ivec2(x, y), 0);
}
}
result = sum4.x + sum4.y + sum4.z + sum4.w;
Example 2:
uniform isampler2D indices;
uniform sampler2D data;
out float result;
// sum only the values in data pointed to by indices
vec4 sum4 = vec4(0);
ivec2 texDim = textureSize(indices, 0);
for (int y = 0; y < texDim.y; ++y) {
for (int x = 0; x < texDim.x; ++x) {
ivec2 index = texelFetch(indices, ivec2(x, y), 0).xy;
sum4 += texelFetch(data, index, 0);
}
}
result = sum4.x + sum4.y + sum4.z + sum4.w;
Note that I'm also not an expert in GPGPU, but I have a hunch the code above is not the fastest way, because I believe parallelization happens based on output; the code above has only one output, so no parallelization. It would be easy to change it so that it takes a block ID, tile ID, or area ID as input and computes just the sum for that area. Then you'd write out a larger texture with the sum of each block and finally sum the block sums.
Also, dependent and non-uniform texture reads are a known perf issue. The first example reads the texture in order; that's cache friendly. The second example reads the texture in a random order (specified by indices), and that's not cache friendly.
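For intuition, here's a minimal CPU-side sketch (plain C; the arrays and sizes are made up) of the same two access patterns; GPU texture caches suffer from random-order gathers much like CPU caches do:
#include <stddef.h>
/* Sequential pass: touches memory one cache line after another. */
float sum_sequential(const float *data, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += data[i];
    return s;
}
/* Index-driven gather: jumps wherever indices points, defeating the
   cache whenever the indices are in a random order. */
float sum_gather(const float *data, const int *indices, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += data[indices[i]];
    return s;
}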

Processing 3 improving intensive math calculation

I wrote a very simple sketch to simulate the interference of two planar waves, very easy.
The problem seems to be a little too intensive for the CPU (moreover, Processing uses only one core) and I get only 1 or 2 fps.
Any idea how to improve this sketch?
float x0;
float y0;
float x1;
float y1;
float x2;
float y2;
int t = 0;
void setup() {
//noLoop();
frameRate(30);
size(400, 400, P2D);
x0 = width/2;
y0 = height/2;
x1 = width/4;
y1 = height/2;
x2 = width * 3/4;
y2 = height / 2;
}
void draw() {
background(0);
for (int x = 0; x <= width; x++) {
for (int y = 0; y <= height; y++) {
float d1 = dist(x1, y1, x, y);
float d2 = dist(x2, y2, x, y);
float factorA = 20;
float factorB = 80;
float wave1 = (1 + (sin(TWO_PI * d1/factorA + t)))/2 * exp(-d1/factorB);
float wave2 = (1 + (sin(TWO_PI * d2/factorA + t)))/2 * exp(-d2/factorB);
stroke( (wave1 + wave2) *255);
point(x, y);
}
}
t--; //Wave propagation
//saveFrame("wave-##.png");
}
As Kevin suggested, using point() isn't the most efficient method, since it calls beginShape(), vertex() and endShape() internally. You might be better off using pixels.
Additionally, the nested loops can be written as a single loop, and dist(), which uses a square root behind the scenes, can be avoided (you can use the squared distance with scaled-up factors).
Here's a version using these:
float x1;
float y1;
float x2;
float y2;
int t = 0;
//using larger factors to use squared distance below instead of dist()/sqrt()
float factorA = 20*200;
float factorB = 80*200;
void setup() {
//noLoop();
frameRate(30);
size(400, 400);
x1 = width/4;
y1 = height/2;
x2 = width * 3/4;
y2 = height / 2;
//use pixels, not points()
loadPixels();
}
void draw() {
for (int i = 0; i < pixels.length; i++) {
int x = i % width;
int y = i / height;
float dx1 = x1-x;
float dy1 = y1-y;
float dx2 = x2-x;
float dy2 = y2-y;
//squared distance
float d1 = dx1*dx1+dy1*dy1;//dist(x1, y1, x, y);
float d2 = dx2*dx2+dy2*dy2;//dist(x2, y2, x, y);
float wave1 = (1 + (sin(TWO_PI * d1/factorA + t))) * 0.5 * exp(-d1/factorB);
float wave2 = (1 + (sin(TWO_PI * d2/factorA + t))) * 0.5 * exp(-d2/factorB);
pixels[i] = color((wave1 + wave2) *255);
}
updatePixels();
text((int)frameRate+"fps",10,15);
// endShape();
t--; //Wave propagation
//saveFrame("wave-##.png");
}
This can be sped up further using lookup tables for the more time-consuming functions such as sin() and exp().
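For example, a minimal sketch of a sine lookup table (plain C-style code; the table size and helper names are my own, and exp() can be tabulated the same way over the range of distances actually used):
#include <math.h>
#define TWO_PI 6.28318530718f
#define LUT_SIZE 4096
static float sinLUT[LUT_SIZE];
/* Fill the table once, at setup time. */
void initSinLUT(void) {
    for (int i = 0; i < LUT_SIZE; i++)
        sinLUT[i] = sinf(TWO_PI * i / LUT_SIZE);
}
/* Replace sin(phase) with a table read; the error is at most about
   TWO_PI/LUT_SIZE, invisible in 8-bit grayscale output. */
float fastSin(float phase) {
    int i = (int)(phase * (LUT_SIZE / TWO_PI)) % LUT_SIZE;
    if (i < 0) i += LUT_SIZE;   /* phases go negative because t decreases */
    return sinLUT[i];
}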
You can see a rough (numbers need to be tweaked) preview running even in JavaScript:
var x1;
var y1;
var x2;
var y2;
var t = 0;
var factorA = 20*200;
var factorB = 80*200;
var numPixels;
var scaledWidth;
function setup() {
createCanvas(400, 400);
fill(255);
frameRate(30);
x1 = width /4;
y1 = height /2;
x2 = width * 3/4;
y2 = height / 2;
loadPixels();
numPixels = (width * height) * pixelDensity();
scaledWidth = width * pixelDensity();
}
function draw() {
for (var i = 0, j = 0; i < numPixels; i++, j += 4) {
var x = i % scaledWidth;
var y = floor(i / scaledWidth);
var dx1 = x1 - x;
var dy1 = y1 - y;
var dx2 = x2 - x;
var dy2 = y2 - y;
var d1 = (dx1 * dx1) + (dy1 * dy1);//dist(x1, y1, x, y);
var d2 = (dx2 * dx2) + (dy2 * dy2);//dist(x2, y2, x, y);
var wave1 = (1 + (sin(TWO_PI * d1 / factorA + t))) * 0.5 * exp(-d1 / factorB);
var wave2 = (1 + (sin(TWO_PI * d2 / factorA + t))) * 0.5 * exp(-d2 / factorB);
var gray = (wave1 + wave2) * 255;
pixels[j] = pixels[j+1] = pixels[j+2] = gray;
pixels[j+3] = 255;
}
updatePixels();
text(frameRate().toFixed(2)+"fps",10,15);
t--; //Wave propagation
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.0.0/p5.min.js"></script>
Because you're using math to synthesise the image, it may make more sense to write this as a GLSL shader. Be sure to check out the PShader tutorial for more info.
Update:
Here's a GLSL version: the code is less hacky and a lot more readable:
float t = 0;
float factorA = 0.20;
float factorB = 0.80;
PShader waves;
void setup() {
size(400, 400, P2D);
noStroke();
waves = loadShader("waves.glsl");
waves.set("resolution", float(width), float(height));
waves.set("factorA",factorA);
waves.set("factorB",factorB);
waves.set("pt1",-0.5,0.0);
waves.set("pt2",0.75,0.0);
}
void draw() {
t++;
waves.set("t",t);
shader(waves);
rect(0, 0, width, height);
}
void mouseDragged(){
float x = map(mouseX,0,width,-1.0,1.0);
float y = map(mouseY,0,height,1.0,-1.0);
println(x,y);
if(keyPressed) waves.set("pt2",x,y);
else waves.set("pt1",x,y);
}
void keyPressed(){
float amount = 0.05;
if(keyCode == UP) factorA += amount;
if(keyCode == DOWN) factorA -= amount;
if(keyCode == LEFT) factorB -= amount;
if(keyCode == RIGHT) factorB += amount;
waves.set("factorA",factorA);
waves.set("factorB",factorB);
println(factorA,factorB);
}
And the waves.glsl:
#define PROCESSING_COLOR_SHADER
uniform vec2 pt1;
uniform vec2 pt2;
uniform float t;
uniform float factorA;
uniform float factorB;
const float TWO_PI = 6.283185307179586;
uniform vec2 resolution;
uniform float time;
void main(void) {
vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
float d1 = distance(pt1,p);
float d2 = distance(pt2,p);
float wave1 = (1.0 + (sin(TWO_PI * d1/factorA + t))) * 0.5 * exp(-d1/factorB);
float wave2 = (1.0 + (sin(TWO_PI * d2/factorA + t))) * 0.5 * exp(-d2/factorB);
float gray = wave1 + wave2;
gl_FragColor=vec4(gray,gray,gray,1.0);
}
You can drag to move the first point, and hold any key while dragging to move the second point.
Additionally, use the UP/DOWN and LEFT/RIGHT keys to change factorA and factorB. The results look interesting.
Also, you can grab a bit of code from this answer to save frames using Threads (I recommend saving uncompressed).
Option 1: Pre-render your sketch.
This seems to be a static repeating pattern, so you can pre-render it by running the animation ahead of time and saving each frame to an image. I see that you already had a call to saveFrame() in there. Once you have the images saved, you can then load them into a new sketch and play them one frame at a time. It shouldn't require very many images, since it seems to repeat itself pretty quickly. Think of an animated gif that loops forever.
Option 2: Decrease the resolution of your sketch.
Do you really need pixel-perfect 400x400 resolution? Can you maybe draw to an image that's 100x100 and scale up?
Or you could decrease the resolution of your for loops by incrementing by more than 1:
for (int x = 0; x <= width; x+=2) {
for (int y = 0; y <= height; y+=2) {
You could play with how much you increase and then use the strokeWeight() or rect() function to draw larger pixels.
Option 3: Decrease the time resolution of your sketch.
Instead of moving by 1 pixel every 1 frame, what if you move by 5 pixels every 5 frames? Speed your animation up, but only move it every X frames, that way the overall speed appears to be the same. You can use the modulo operator along with the frameCount variable to only do something every X frames. Note that you'd still want to keep the overall framerate of your sketch to 30 or 60, but you'd only change the animation every X frames.
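As a standalone C-style sketch of that idea (in Processing you'd use the built-in frameCount variable directly; the names here are otherwise illustrative):
/* Advance the animation only every N frames, but by N steps at once,
   so the apparent speed stays the same. */
#define N 5
int frameCount = 0;   /* Processing provides this for you */
int t = 0;            /* wave phase, as in the original sketch */
void drawFrame(void) {
    if (frameCount % N == 0)
        t -= N;       /* one big step instead of N small ones */
    /* ...run the expensive per-pixel pass only when t changed;
       otherwise the previous frame can simply be shown again... */
    frameCount++;
}
On the frames where nothing moves, the per-pixel pass can be skipped entirely, which is where the savings come from.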
Option 4: Simplify your animation.
Do you really need to calculate every single pixel? If all you want to show is a series of circles that increase in size, there are much easier ways to do that. Calling the ellipse() function is much faster than calling the point() function a bunch of times. You can use other functions to create the blurry effect without calling point() half a million times every second (which is how often you're trying to call it).
Option 5: Refactor your code.
If all else fails, then you're going to have to refactor your code. Most of your program's time is being spent in the point() function; you can prove this by drawing an ellipse at mouseX, mouseY at the end of the draw() function and comparing the performance of that when you comment out the call to point() inside your nested for loops.
Computers aren't magic, so calling the point() function half a million times every second isn't free. You're going to have to decrease that number somehow, either by taking one (or more than one) of the above options, or by refactoring your code in some other way.
How you do that really depends on your actual goals, which you haven't stated. If you're just trying to render this animation, then pre-rendering it will work fine. If you need to have user interaction with it, then maybe something like decreasing the resolution will work. You're going to have to sacrifice something, and it's really up to you what that is.

Why does this angle calculation return only values of 0 or -1?

I am trying to calculate some rough angles on the x-axis from the onboard accelerometer on the LightBlue Bean (https://punchthrough.com/bean/). Issue: everything I calculate comes back as 0 or -1, so I am obviously either passing something wrong or converting something wrong, and I am unsure what. Thought I would post here to see if anyone has a suggestion. The Bean docs say to use int16_t, but they also sometimes use uint16_t or int; I'm not sure what to follow. Thanks.
void setup()
{
Serial.begin(57600);
}
void loop()
{
AccelerationReading currentAccel = Bean.getAcceleration();
float xAng = makeXAngles(currentAccel);
String stringMaster = String();
stringMaster = stringMaster + "X-Angle: " + xAng;
Serial.println(stringMaster);
Bean.sleep(100);
}
float makeXAngles(AccelerationReading one) {
float x1 = one.xAxis;
float y1 = one.yAxis;
float z1 = one.zAxis;
float x2 = x1 * x1;
float y2 = y1 * y1;
float z2 = z1 * z1;
float result;
float accel_angle_x;
// X-Axis
result = sqrt(y2+z2);
result = x1/result;
accel_angle_x = atan(result);
// return the x angle
return accel_angle_x;
}
Besides the minor problems, like the use of an uninitialized variable in int y2 = y1 * y2;,
you are using int variables for what is obviously a floating point calculation of the angle, which should be a fractional number in radians (as hinted by the calculation attempt used). You need to work with float or double precision variables here.
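As a minimal sketch (plain C rather than Bean sketch code; the raw readings are made-up values, not Bean.getAcceleration() output), the float-based calculation looks like this, with the radian result also converted to degrees for readability:
#include <math.h>
#include <stdio.h>
int main(void) {
    /* hypothetical raw accelerometer readings */
    float x = 50.0f, y = 10.0f, z = 250.0f;
    /* x tilt angle: atan2 avoids a division by zero when y = z = 0 */
    float angle_rad = atan2f(x, sqrtf(y * y + z * z));
    float angle_deg = angle_rad * 180.0f / 3.14159265f;
    printf("X-Angle: %f rad = %f deg\n", angle_rad, angle_deg);
    return 0;
}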

How do I optimize displaying a large number of quads in OpenGL?

I am trying to display a mathematical surface f(x,y), defined on a regular XY mesh, using OpenGL and C++ in an efficient manner:
struct XYRegularSurface {
double x0, y0;
double dx, dy;
int nx, ny;
XYRegularSurface(int nx_, int ny_) : nx(nx_), ny(ny_) {
z = new float[nx*ny];
}
~XYRegularSurface() {
delete [] z;
}
float& operator()(int ix, int iy) {
return z[ix*ny + iy];
}
float x(int ix, int iy) {
return x0 + ix*dx;
}
float y(int ix, int iy) {
return y0 + iy*dy;
}
float zmin();
float zmax();
float* z;
};
Here is my OpenGL paint code so far:
void color(QColor & col) {
float r = col.red()/255.0f;
float g = col.green()/255.0f;
float b = col.blue()/255.0f;
glColor3f(r,g,b);
}
void paintGL_XYRegularSurface(XYRegularSurface &surface, float zmin, float zmax) {
float x, y, z;
QColor col;
glBegin(GL_QUADS);
for(int ix = 0; ix < surface.nx - 1; ix++) {
for(int iy = 0; iy < surface.ny - 1; iy++) {
x = surface.x(ix,iy);
y = surface.y(ix,iy);
z = surface(ix,iy);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
x = surface.x(ix + 1, iy);
y = surface.y(ix + 1, iy);
z = surface(ix + 1,iy);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
x = surface.x(ix + 1, iy + 1);
y = surface.y(ix + 1, iy + 1);
z = surface(ix + 1,iy + 1);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
x = surface.x(ix, iy + 1);
y = surface.y(ix, iy + 1);
z = surface(ix,iy + 1);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
}
}
glEnd();
}
The problem is that this is slow, nx=ny=1000 and fps ~= 1.
How do I optimize this to be faster?
EDIT: following your suggestion (thanks!) regarding VBO
I added:
float* XYRegularSurface::xyz() {
float* data = new float[3*nx*ny];
long i = 0;
for(int ix = 0; ix < nx; ix++) {
for(int iy = 0; iy < ny; iy++) {
data[i++] = x(ix,iy);
data[i++] = y(ix,iy);
data[i++] = z[ix*ny + iy]; // z has nx*ny entries, indexed separately from data
}
}
return data;
}
I think I understand how I can create a VBO, initialize it with xyz() and send it to the GPU in one go, but how do I use the VBO when drawing? I understand that this can be done either in the vertex shader or by glDrawElements? I assume the latter is easier? If so: I do not see any QUAD mode in the documentation for glDrawElements!?
Edit2:
So I can loop through all nx*ny quads and draw each by:
GLuint indices[4];
// ... set indices
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_INT, indices);
?
1/. Use display lists, to cache GL commands - avoiding recalculation of the vertices and the expensive per-vertex call overhead. If the data is updated, you need to look at client-side vertex arrays (not to be confused with VAOs). Now ignore this option...
2/. Use vertex buffer objects. Available as of GL 1.5.
Since you need VBOs for core profile anyway (i.e., modern GL), you can at least get to grips with this first.
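A minimal sketch of option 2 (C with a GLEW-style header assumed; the function names and triangle split are mine, not a fixed recipe), assuming the xyz() layout above where vertex (ix, iy) lives at index ix*ny + iy. Colors are omitted for brevity; they'd go in a second attribute array. Quads are split into triangles since GL_QUADS is gone from core profile:
#include <GL/glew.h>
#include <stdlib.h>
static GLuint vbo, ibo;
static int indexCount;
void uploadSurface(const float *xyz, int nx, int ny) {
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, 3 * nx * ny * sizeof(float), xyz, GL_STATIC_DRAW);
    /* Two triangles per grid cell instead of one quad. */
    indexCount = 6 * (nx - 1) * (ny - 1);
    GLuint *idx = malloc(indexCount * sizeof(GLuint));
    int k = 0;
    for (int ix = 0; ix < nx - 1; ix++)
        for (int iy = 0; iy < ny - 1; iy++) {
            GLuint i0 = ix * ny + iy;
            GLuint i1 = (ix + 1) * ny + iy;
            GLuint i2 = (ix + 1) * ny + iy + 1;
            GLuint i3 = ix * ny + iy + 1;
            idx[k++] = i0; idx[k++] = i1; idx[k++] = i2;  /* first half  */
            idx[k++] = i0; idx[k++] = i2; idx[k++] = i3;  /* second half */
        }
    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLuint), idx, GL_STATIC_DRAW);
    free(idx);
}
void drawSurface(void) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glEnableClientState(GL_VERTEX_ARRAY);   /* legacy fixed-function path, for brevity */
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
    glDisableClientState(GL_VERTEX_ARRAY);
}
For nx = ny = 1000 this is a single draw call per frame instead of millions of glVertex3f/glColor3f calls, which is where most of the win comes from.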
Well, you've asked a rather open-ended question. I'd suggest using modern (3.0+) OpenGL for everything; the point of just about any new OpenGL feature is to provide a faster way to do things.
Like everyone else is suggesting, use array (vertex) buffer objects and vertex array objects, and use an element array (index) buffer object too. Most GPUs have a 'post-transform cache', which stores the last few transformed vertices, but this can only be used when you call the glDraw*Elements family of functions.
I also suggest you store a flat mesh in your VBO, where y=0 for each vertex, and sample the y from a heightmap texture in your vertex shader. If you do this, whenever the surface changes you will only need to update the heightmap texture, which is easier than updating the VBO. Use one of the floating point or integer texture formats for the heightmap, so you aren't restricted to having your values be between 0 and 1.
If so: I do not see any QUAD mode in the documentation for glDrawElements!?
If you want quads make sure you're looking at the GL 2.1-era docs, not the new stuff.

correct glsl affine texture mapping

I'm trying to code correct 2D affine texture mapping in GLSL.
Explanation:
...none of these images is correct for my purposes. The right one (labeled Correct) has perspective correction, which I do not want. So this: Getting to know the Q texture coordinate solution (without further improvements) is not what I'm looking for.
I'd like to simply "stretch" the texture inside the quadrilateral, something like this:
but composed from two triangles. Any advice (GLSL) please?
This works well as long as you have a trapezoid, and its parallel edges are aligned with one of the local axes. I recommend playing around with my Unity package.
GLSL:
varying vec2 shiftedPosition, width_height;
#ifdef VERTEX
void main() {
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
shiftedPosition = gl_MultiTexCoord0.xy; // left and bottom edges zeroed.
width_height = gl_MultiTexCoord1.xy;
}
#endif
#ifdef FRAGMENT
uniform sampler2D _MainTex;
void main() {
gl_FragColor = texture2D(_MainTex, shiftedPosition / width_height);
}
#endif
C#:
// Zero out the left and bottom edges,
// leaving a right trapezoid with two sides on the axes and a vertex at the origin.
var shiftedPositions = new Vector2[] {
Vector2.zero,
new Vector2(0, vertices[1].y - vertices[0].y),
new Vector2(vertices[2].x - vertices[1].x, vertices[2].y - vertices[3].y),
new Vector2(vertices[3].x - vertices[0].x, 0)
};
mesh.uv = shiftedPositions;
var widths_heights = new Vector2[4];
widths_heights[0].x = widths_heights[3].x = shiftedPositions[3].x;
widths_heights[1].x = widths_heights[2].x = shiftedPositions[2].x;
widths_heights[0].y = widths_heights[1].y = shiftedPositions[1].y;
widths_heights[2].y = widths_heights[3].y = shiftedPositions[2].y;
mesh.uv2 = widths_heights;
I recently managed to come up with a generic solution to this problem for any type of quadrilateral. The calculations and GLSL may be of help. There's a working demo in Java (that runs on Android), but it is compact and readable and should be easily portable to Unity or iOS: http://www.bitlush.com/posts/arbitrary-quadrilaterals-in-opengl-es-2-0
In case anyone's still interested, here's a C# implementation that takes a quad defined by the clockwise screen verts (x0,y0) (x1,y1) ... (x3,y3), an arbitrary pixel at (x,y) and calculates the u and v of that pixel. It was originally written to CPU-render an arbitrary quad to a texture, but it's easy enough to split the algorithm across CPU, Vertex and Pixel shaders; I've commented accordingly in the code.
float Ax, Bx, Cx, Dx, Ay, By, Cy, Dy, A, B, C;
//These are all uniforms for a given quad. Calculate on CPU.
Ax = (x3 - x0) - (x2 - x1);
Bx = (x0 - x1);
Cx = (x2 - x1);
Dx = x1;
Ay = (y3 - y0) - (y2 - y1);
By = (y0 - y1);
Cy = (y2 - y1);
Dy = y1;
float ByCx_plus_AyDx_minus_BxCy_minus_AxDy = (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy);
float ByDx_minus_BxDy = (By * Dx) - (Bx * Dy);
A = (Ay*Cx)-(Ax*Cy);
//These must be calculated per-vertex, and passed through as interpolated values to the pixel-shader
B = (Ax * y) + ByCx_plus_AyDx_minus_BxCy_minus_AxDy - (Ay * x);
C = (Bx * y) + ByDx_minus_BxDy - (By * x);
//These must be calculated per-pixel using the interpolated B, C and x from the vertex shader along with some of the other uniforms.
u = ((-B) - Mathf.Sqrt((B*B-(4.0f*A*C))))/(A*2.0f);
v = (x - (u * Cx) - Dx)/((u*Ax)+Bx);
Tessellation solves this problem: subdividing the quad adds vertices, which give the rasterizer extra hints for interpolating the pixels.
Check out this link.
https://www.youtube.com/watch?v=8TleepxIORU&feature=youtu.be
I had a similar question ( https://gamedev.stackexchange.com/questions/174857/mapping-a-texture-to-a-2d-quadrilateral/174871 ), and at gamedev they suggested using an imaginary Z coordinate, which I calculate using the following C code, which appears to work in the general case (not just trapezoids):
//usual euclidean distance
float distance(int ax, int ay, int bx, int by) {
int x = ax-bx;
int y = ay-by;
return sqrtf((float)(x*x + y*y));
}
void gfx_quad(gfx_t *dst //destination texture, we are rendering into
,gfx_t *src //source texture
,int *quad // quadrilateral vertices
)
{
int *v = quad; //quad vertices
float z = 20.0;
float top = distance(v[0],v[1],v[2],v[3]); //top
float bot = distance(v[4],v[5],v[6],v[7]); //bottom
float lft = distance(v[0],v[1],v[4],v[5]); //left
float rgt = distance(v[2],v[3],v[6],v[7]); //right
// By default all vertices lie on the screen plane
float az = 1.0;
float bz = 1.0;
float cz = 1.0;
float dz = 1.0;
// Move Z away from the screen, based on the distance ratios.
if (top<bot) {
az *= top/bot;
bz *= top/bot;
} else {
cz *= bot/top;
dz *= bot/top;
}
if (lft<rgt) {
az *= lft/rgt;
cz *= lft/rgt;
} else {
bz *= rgt/lft;
dz *= rgt/lft;
}
// draw our quad as two textured triangles
gfx_textured(dst, src
, v[0],v[1],az, v[2],v[3],bz, v[4],v[5],cz
, 0.0,0.0, 1.0,0.0, 0.0,1.0);
gfx_textured(dst, src
, v[2],v[3],bz, v[4],v[5],cz, v[6],v[7],dz
, 1.0,0.0, 0.0,1.0, 1.0,1.0);
}
I'm doing it in software to scale and rotate 2D sprites; for an OpenGL 3D app you will need to do it in the pixel/fragment shader, unless you are able to map these imaginary az, bz, cz, dz into your actual 3D space and use the usual pipeline. DMGregory gave exact code for OpenGL shaders: https://gamedev.stackexchange.com/questions/148082/how-can-i-fix-zig-zagging-uv-mapping-artifacts-on-a-generated-mesh-that-tapers
I ran into this issue while trying to implement homography warping in OpenGL. Some of the solutions that I found relied on a notion of depth, but this was not feasible in my case since I am working in 2D coordinates.
I based my solution on this article, and it seems to work for all cases that I could try. I am leaving it here in case it is useful for someone else as I could not find something similar. The solution makes the following assumptions:
The vertex coordinates are the 4 points of a quad in Lower Right, Upper Right, Upper Left, Lower Left order.
The coordinates are given in OpenGL's reference system (range [-1, 1], with origin at bottom left corner).
std::vector<cv::Point2f> points; // the 4 quad vertices, in the order described above
// Convert points to homogeneous coordinates to simplify the problem.
Eigen::Vector3f p0(points[0].x, points[0].y, 1);
Eigen::Vector3f p1(points[1].x, points[1].y, 1);
Eigen::Vector3f p2(points[2].x, points[2].y, 1);
Eigen::Vector3f p3(points[3].x, points[3].y, 1);
// Compute the intersection point between the lines described by opposite vertices using cross products. Normalization is only required at the end.
// See https://leimao.github.io/blog/2D-Line-Mathematics-Homogeneous-Coordinates/ for a quick summary of this approach.
auto line1 = p2.cross(p0);
auto line2 = p3.cross(p1);
auto intersection = line1.cross(line2);
intersection = intersection / intersection(2);
// Compute the distance from each vertex to the intersection point.
std::vector<float> distances;
for (const auto &pt : points) {
auto distance = std::sqrt(std::pow(pt.x - intersection(0), 2) +
std::pow(pt.y - intersection(1), 2));
distances.push_back(distance);
}
// Assumes same order as above.
std::vector<cv::Point2f> texture_coords_unnormalized = {
{1.0f, 1.0f},
{1.0f, 0.0f},
{0.0f, 0.0f},
{0.0f, 1.0f}
};
std::vector<float> texture_coords;
for (int i = 0; i < texture_coords_unnormalized.size(); ++i) {
float u_i = texture_coords_unnormalized[i].x;
float v_i = texture_coords_unnormalized[i].y;
float d_i = distances.at(i);
float d_i_2 = distances.at((i + 2) % 4);
float scale = (d_i + d_i_2) / d_i_2;
texture_coords.push_back(u_i*scale);
texture_coords.push_back(v_i*scale);
texture_coords.push_back(scale);
}
Pass the texture coordinates to your shader (use vec3). Then:
gl_FragColor = vec4(texture2D(textureSampler, textureCoords.xy/textureCoords.z).rgb, 1.0);
Thanks for the answers, but after experimenting I found a solution.
The two triangles on the left have UVs (STRQ) assigned according to this, and the two triangles on the right are a modified version of this perspective correction.
Numbers and shader:
tri1 = [Vec2(-0.5, -1), Vec2(0.5, -1), Vec2(1, 1)]
tri2 = [Vec2(-0.5, -1), Vec2(1, 1), Vec2(-1, 1)]
d1 = length of top edge = 2
d2 = length of bottom edge = 1
tri1_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(d2 / d1, 0, 0, d2 / d1), Vec4(1, 1, 0, 1)]
tri2_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(1, 1, 0, 1), Vec4(0, 1, 0, 1)]
Only the right triangles are rendered using this GLSL shader (on the left is the fixed pipeline):
void main()
{
gl_FragColor = texture2D(colormap, vec2(gl_TexCoord[0].x / gl_TexCoord[0].w, gl_TexCoord[0].y));
}
So: only U is perspective-corrected and V stays linear.
