I create a wireframe mesh of two lines between three points with these functions:
Qt3DRender::QGeometryRenderer *Utils::createWireframeMesh()
{
Qt3DRender::QGeometryRenderer *mesh = new Qt3DRender::QGeometryRenderer();
Qt3DRender::QGeometry *geometry = new Qt3DRender::QGeometry(mesh);
Qt3DRender::QBuffer *vertexDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer,
geometry);
Qt3DRender::QBuffer *indexDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::IndexBuffer,
geometry);
QByteArray vertexBufferData;
QByteArray indexBufferData;
int vertexCount = 3; // Three vertices at (0, -1, 0) and (1, 0, 0) and (0, 1, 0)
int lineCount = 2; // Two lines between three vertices
vertexBufferData.resize(vertexCount * 3 * sizeof(float));
indexBufferData.resize(lineCount * 2 * sizeof(ushort));
// Arrow triangle is 2D and is inside XY plane
float *vPtr = reinterpret_cast<float *>(vertexBufferData.data());
vPtr[0] = 0.0f; vPtr[1] = -1.0f; vPtr[2] = 0.0f; // First vertex at (0, -1, 0)
vPtr[3] = 1.0f; vPtr[4] = 0.0f; vPtr[5] = 0.0f; // Second vertex at (1, 0, 0)
vPtr[6] = 0.0f; vPtr[7] = +1.0f; vPtr[8] = 0.0f; // Third vertex at (0, 1, 0)
ushort *iPtr = reinterpret_cast<ushort *>(indexBufferData.data());
iPtr[0] = 0; iPtr[1] = 1; // First line from index 0 to index 1
iPtr[2] = 1; iPtr[3] = 2; // Second line from index 1 to index 2
vertexDataBuffer->setData(vertexBufferData);
indexDataBuffer->setData(indexBufferData);
addPositionAttributeToGeometry(geometry, vertexDataBuffer, vertexCount);
addIndexAttributeToGeometry(geometry, indexDataBuffer, lineCount * 2);
mesh->setInstanceCount(1);
mesh->setIndexOffset(0);
mesh->setFirstInstance(0);
// How to set vertex count here?
mesh->setVertexCount(vertexCount);
mesh->setPrimitiveType(Qt3DRender::QGeometryRenderer::Lines);
mesh->setGeometry(geometry);
return mesh;
}
void Utils::addPositionAttributeToGeometry(Qt3DRender::QGeometry *geometry,
Qt3DRender::QBuffer *buffer, int count)
{
Qt3DRender::QAttribute *posAttribute = new Qt3DRender::QAttribute();
posAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
posAttribute->setBuffer(buffer);
posAttribute->setDataType(Qt3DRender::QAttribute::Float);
posAttribute->setDataSize(3);
posAttribute->setByteOffset(0);
posAttribute->setByteStride(0);
posAttribute->setCount(count);
posAttribute->setName(Qt3DRender::QAttribute::defaultPositionAttributeName());
geometry->addAttribute(posAttribute);
}
void Utils::addIndexAttributeToGeometry(Qt3DRender::QGeometry *geometry,
Qt3DRender::QBuffer *buffer, int count)
{
Qt3DRender::QAttribute *indexAttribute = new Qt3DRender::QAttribute();
indexAttribute->setAttributeType(Qt3DRender::QAttribute::IndexAttribute);
indexAttribute->setBuffer(buffer);
indexAttribute->setDataType(Qt3DRender::QAttribute::UnsignedShort);
indexAttribute->setDataSize(1);
indexAttribute->setByteOffset(0);
indexAttribute->setByteStride(0);
indexAttribute->setCount(count);
geometry->addAttribute(indexAttribute);
}
In the above code, I tried three different statements at this line:
// How to set vertex count here?
mesh->setVertexCount(vertexCount);
mesh->setVertexCount(vertexCount * 2);
mesh->setVertexCount(vertexCount * 3);
Each gives different results; surprisingly, the ray casting I do in my 3D scene is affected too.
Documentation explains vertexCount property of Qt3DRender::QGeometryRenderer as:
vertexCount : int
Holds the primitive count.
In my case, the primitive count is the line count, so I tried that, but only one line is drawn:
I'm confused about the setVertexCount API. Can anybody give me a hint?
vertexCount is the same value that you would pass to glDrawArrays or glDrawElements, i.e. it's the number of vertices involved in the drawing. Since you're using indexed rendering, that would typically be the number of indexes (assuming you're drawing all the data in the index array). So in the case above, it should be 4.
Please note we recently fixed a bug with line picking when using primitive restart, but that doesn't affect the code you included above.
Update
See the rationale at the end of my question below.
Using WebGL2 I can access a texel by its denormalized coordinates (sorry, I don't know the right lingo for this). That means I don't have to scale them down to 0-1 like I do with texture2D().
However the input to the fragment shader is still the vec2/3 in normalized values.
Is there a way to declare in/out variables in the Vertex and Frag shaders so that I don't have to scale the coordinates?
somewhere in vertex shader:
...
out vec2 TextureCoordinates;
somewhere in frag shader:
...
in vec2 TextureCoordinates;
I would like for TextureCoordinates to be ivec2 and already scaled.
This question, and all my other questions on WebGL, relate to general-purpose computing using WebGL. We are trying to do tensor (multi-D matrix) operations using WebGL.
We map our data in a few ways to a Texture. The simplest approach we follow is -- assuming we can access our data as a flat array -- to lay it out along the texture's width and go up the texture's height until we're done.
Since our thinking, logic, and calculations are all based on tensor/matrix indices -- inside the fragment shader -- we'd have to map back to/from the X-Y texture coordinates to indices. The intermediate step here is to calculate an offset for a given position of a texel. Then from that offset we can calculate the matrix indices from its strides.
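For concreteness, a minimal sketch of that stride step for a hypothetical rank-3 tensor stored row-major (the helper name, shape, and stride layout here are my own assumptions, not our actual code):
// Hypothetical: recover the (i, j, k) indices of a rank-3 tensor with shape
// (d0, d1, d2), stored row-major, from a flat offset.
// stride0 = d1 * d2, stride1 = d2.
ivec3 offsetToIndices(int offset, int stride0, int stride1) {
int i = offset / stride0;
int rest = offset - i * stride0;
int j = rest / stride1;
int k = rest - j * stride1;
return ivec3(i, j, k);
}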
Calculating an offset in WebGL 1 for very large textures seems to take much longer than in WebGL 2 using the integer coordinates. See below:
WebGL 1 offset calculation
int coordsToOffset(vec2 coords, int width, int height) {
float s = coords.s * float(width);
float t = coords.t * float(height);
int offset = int(t) * width + int(s);
return offset;
}
vec2 offsetToCoords(int offset, int width, int height) {
int t = offset / width;
int s = offset - t*width;
vec2 coords = (vec2(s,t) + vec2(0.5,0.5)) / vec2(width, height);
return coords;
}
WebGL 2 offset calculation in the presence of int coords
int coordsToOffset(ivec2 coords, int width) {
return coords.t * width + coords.s;
}
ivec2 offsetToCoords(int offset, int width) {
int t = offset / width;
int s = offset - t*width;
return ivec2(s,t);
}
It should be clear that for a series of large texture operations we're saving hundreds of thousands of operations just on the offset/coords calculation.
It's not clear why you want to do what you're trying to do. It would be better to ask something like "I'm trying to draw an image / implement post-processing glow / do ray tracing / ... and to do that I want to use un-normalized texture coordinates because ..." and then we can tell you if your solution is going to work and how to solve it.
In any case, passing int or unsigned int or ivec2/3/4 or uvec2/3/4 as a varying is supported, but interpolation is not: you have to declare them as flat.
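For example, a minimal sketch of what those flat declarations could look like (the variable names are mine):
#version 300 es
// vertex shader
in vec4 position;
in ivec2 texelcoord;
flat out ivec2 v_texelcoord; // integer varyings must be declared flat
void main() {
v_texelcoord = texelcoord;
gl_Position = position;
}

#version 300 es
// fragment shader
precision mediump float;
flat in ivec2 v_texelcoord; // must also be flat here
uniform sampler2D tex;
out vec4 outColor;
void main() {
outColor = texelFetch(tex, v_texelcoord, 0);
}
Note that flat means every fragment of a triangle gets the provoking vertex's value, so this only makes sense when the coordinate is constant per primitive.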
Still, you can pass un-normalized values as float or vec2/3/4 and then convert to int or ivec2/3/4 in the fragment shader.
The other issue is that you get no sampling with texelFetch, the function that takes texel coordinates instead of normalized texture coordinates. It just returns the exact value of a single pixel and does not support filtering like the normal texture function.
Example:
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert("need webgl2");
}
const vs = `#version 300 es
in vec4 position;
in ivec2 texelcoord;
out vec2 v_texcoord;
void main() {
v_texcoord = vec2(texelcoord);
gl_Position = position;
}
`;
const fs = `#version 300 es
precision mediump float;
in vec2 v_texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main() {
outColor = texelFetch(tex, ivec2(v_texcoord), 0);
}
`;
// compile shaders, link program, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// create buffers via gl.createBuffer, gl.bindBuffer, gl.bufferData
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
numComponents: 2,
data: [
-.5, -.5,
.5, -.5,
0, .5,
],
},
texelcoord: {
numComponents: 2,
data: new Int32Array([
0, 0,
15, 0,
8, 15,
]),
}
});
// make a 16x16 texture
const ctx = document.createElement('canvas').getContext('2d');
ctx.canvas.width = 16;
ctx.canvas.height = 16;
for (let i = 23; i > 0; --i) {
ctx.fillStyle = `hsl(${i / 23 * 360 | 0}, 100%, ${i % 2 ? 25 : 75}%)`;
ctx.beginPath();
ctx.arc(8, 15, i, 0, Math.PI * 2, false);
ctx.fill();
}
const tex = twgl.createTexture(gl, { src: ctx.canvas });
gl.useProgram(programInfo.program);
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// no need to set uniforms since they default to 0
// and only one texture which is already on texture unit 0
gl.drawArrays(gl.TRIANGLES, 0, 3);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
So in response to your updated question it's still not clear what you want to do. Why do you want to pass varyings to the fragment shader? Can't you just do whatever math you want in the fragment shader itself?
Example:
uniform sampler2D tex;
out float result;
// sum all the values in the texture
vec4 sum4 = vec4(0);
ivec2 texDim = textureSize(tex, 0);
for (int y = 0; y < texDim.y; ++y) {
for (int x = 0; x < texDim.x; ++x) {
sum4 += texelFetch(tex, ivec2(x, y), 0);
}
}
result = sum4.x + sum4.y + sum4.z + sum4.w;
Example2
uniform isampler2D indices;
uniform sampler2D data;
out float result;
// sum only the values in data pointed to by indices
vec4 sum4 = vec4(0);
ivec2 texDim = textureSize(indices, 0);
for (int y = 0; y < texDim.y; ++y) {
for (int x = 0; x < texDim.x; ++x) {
ivec2 index = texelFetch(indices, ivec2(x, y), 0).xy;
sum4 += texelFetch(data, index, 0);
}
}
result = sum4.x + sum4.y + sum4.z + sum4.w;
Note that I'm also not an expert in GPGPU, but I have a hunch the code above is not the fastest way, because I believe parallelization happens based on output. The code above has only 1 output, so no parallelization? It would be easy to change it so that it takes a block ID, tile ID, or area ID as input and computes just the sum for that area. Then you'd write out a larger texture with the sum of each block and finally sum the block sums.
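A rough sketch of that idea, assuming a hypothetical extra pass where each pixel of a smaller output texture sums one 16x16 tile of the source (the block size and names are my own):
#version 300 es
precision highp float;
uniform sampler2D tex;
out vec4 outColor;
const int BLOCK_SIZE = 16;
void main() {
// each fragment of the (smaller) render target sums one 16x16 tile of tex
ivec2 blockOrigin = ivec2(gl_FragCoord.xy) * BLOCK_SIZE;
vec4 sum4 = vec4(0);
for (int y = 0; y < BLOCK_SIZE; ++y) {
for (int x = 0; x < BLOCK_SIZE; ++x) {
sum4 += texelFetch(tex, blockOrigin + ivec2(x, y), 0);
}
}
outColor = sum4;
}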
Also, dependent and non-uniform texture reads are a known perf issue. The first example reads the texture in order; that's cache friendly. The second example reads the texture in a random order (specified by indices), which is not cache friendly.
I am applying a slightly modified version of the classic depth peeling algorithm: basically, I render all the opaque objects first and then use their depth as the minimum depth, because since they are opaque, it doesn't make sense not to discard fragments deeper than them.
I first tested it on a small test case and it works flawlessly.
Now I am applying this algorithm to my main application, but for some unknown reason it doesn't work and I am going crazy. The main problem is that I keep reading the value 0 from the opaque depth texture bound in the fragment shader of the next stage.
To sum up, this is the fbo for the opaque stuff:
opaqueDepthTexture = new int[1];
opaqueColorTexture = new int[1];
opaqueFbo = new int[1];
gl3.glGenTextures(1, opaqueDepthTexture, 0);
gl3.glGenTextures(1, opaqueColorTexture, 0);
gl3.glGenFramebuffers(1, opaqueFbo, 0);
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueDepthTexture[0]);
gl3.glTexImage2D(GL3.GL_TEXTURE_RECTANGLE, 0, GL3.GL_DEPTH_COMPONENT32F, width, height, 0,
GL3.GL_DEPTH_COMPONENT, GL3.GL_FLOAT, null);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_BASE_LEVEL, 0);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_MAX_LEVEL, 0);
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueColorTexture[0]);
gl3.glTexImage2D(GL3.GL_TEXTURE_RECTANGLE, 0, GL3.GL_RGBA, width, height, 0,
GL3.GL_RGBA, GL3.GL_FLOAT, null);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_BASE_LEVEL, 0);
gl3.glTexParameteri(GL3.GL_TEXTURE_RECTANGLE, GL3.GL_TEXTURE_MAX_LEVEL, 0);
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, opaqueFbo[0]);
gl3.glFramebufferTexture2D(GL3.GL_FRAMEBUFFER, GL3.GL_DEPTH_ATTACHMENT, GL3.GL_TEXTURE_RECTANGLE,
opaqueDepthTexture[0], 0);
gl3.glFramebufferTexture2D(GL3.GL_FRAMEBUFFER, GL3.GL_COLOR_ATTACHMENT0, GL3.GL_TEXTURE_RECTANGLE,
opaqueColorTexture[0], 0);
checkBindedFrameBuffer(gl3);
Here I just clear the depth (it defaults to 1); I even commented out the opaque rendering:
/**
* (1) Initialize Opaque FBO.
*/
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, opaqueFbo[0]);
gl3.glDrawBuffer(GL3.GL_COLOR_ATTACHMENT0);
gl3.glClearColor(1, 1, 1, 1);
gl3.glClear(GL3.GL_COLOR_BUFFER_BIT | GL3.GL_DEPTH_BUFFER_BIT);
gl3.glEnable(GL3.GL_DEPTH_TEST);
dpOpaque.bind(gl3);
{
// EC_Graph.instance.getRoot().renderDpOpaque(gl3, dpOpaque, new MatrixStack(), properties);
}
dpOpaque.unbind(gl3);
And I have double confirmation of this:
FloatBuffer fb = FloatBuffer.allocate(1 * GLBuffers.SIZEOF_FLOAT);
gl3.glReadPixels(width / 2, height / 2, 1, 1, GL3.GL_DEPTH_COMPONENT, GL3.GL_FLOAT, fb);
System.out.println("opaque fb.get(0) " + fb.get(0));
If I change the clearDepth to 0.9 for example, I get 0.9, so this is ok.
Now I initialize the minimum depth buffer by rendering all the geometry having alpha < 1, and I bind the previous depth texture, the one used in the opaque rendering, to the following uniform:
uniform sampler2D opaqueDepthTexture;
I temporarily switched the rendering of this pass to the default framebuffer:
/**
* (2) Initialize Min Depth Buffer.
*/
gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, 0);
gl3.glDrawBuffer(GL3.GL_BACK);
// gl3.glBindFramebuffer(GL3.GL_FRAMEBUFFER, blendFbo[0]);
// gl3.glDrawBuffer(GL3.GL_COLOR_ATTACHMENT0);
gl3.glClearColor(0, 0, 0, 1);
gl3.glClear(GL3.GL_COLOR_BUFFER_BIT | GL3.GL_DEPTH_BUFFER_BIT);
gl3.glEnable(GL3.GL_DEPTH_TEST);
if (cullFace) {
gl3.glEnable(GL3.GL_CULL_FACE);
}
dpInit.bind(gl3);
{
gl3.glActiveTexture(GL3.GL_TEXTURE1);
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, opaqueDepthTexture[0]);
gl3.glUniform1i(dpInit.getOpaqueDepthTextureUL(), 1);
gl3.glBindSampler(1, sampler[0]);
{
EC_Graph.instance.getRoot().renderDpTransparent(gl3, dpInit, new MatrixStack(), properties);
}
gl3.glBindTexture(GL3.GL_TEXTURE_RECTANGLE, 0);
gl3.glBindSampler(1, 0);
}
dpInit.unbind(gl3);
This is the dpInit Fragment Shader:
#version 330
out vec4 outputColor;
uniform sampler2D texture0;
in vec2 oUV;
uniform sampler2D opaqueDepthTexture;
/*
* Layout {lighting, normal orientation, active, selected}
*/
uniform ivec4 settings;
const vec3 selectionColor = vec3(1, .5, 0);
const vec4 inactiveColor = vec4(.5, .5, .5, .2);
vec4 CalculateLight();
void main()
{
float opaqueDepth = texture(opaqueDepthTexture, gl_FragCoord.xy).r;
if(gl_FragCoord.z > opaqueDepth) {
//discard;
}
vec4 color = (1 - settings.x) * texture(texture0, oUV) + settings.x * CalculateLight();
if(settings.w == 1) {
if(settings.z == 1) {
color = vec4(selectionColor, color.q);
} else {
color = vec4(selectionColor, inactiveColor.w);
}
} else {
if(settings.z == 0) {
color = inactiveColor;
}
}
outputColor = vec4(color.rgb * color.a, 1.0 - color.a);
outputColor = vec4(.5, 1, 1, 1.0 - color.a);
if(opaqueDepth == 0)
outputColor = vec4(1, 0, 0, 1);
else
outputColor = vec4(0, 1, 0, 1);
}
Ignore the middle; the important parts are at the beginning, where I read the red component of the previous depth texture, and at the end, where I compare it. The geometry I obtain is red, which means the value I read from the opaqueDepthTexture is 0...
The question is: why?
After the dpInit rendering, if I bind the opaqueFbo again and read the depth, it is always the clear depth, so 1 by default or .9 if I cleared it with .9, so that part works.
The problem is really that I read the wrong value in the dpInit fragment shader from a bound depth texture. Why?
For clarification, this is the sampler:
private void initSampler(GL3 gl3) {
sampler = new int[1];
gl3.glGenSamplers(1, sampler, 0);
gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_WRAP_S, GL3.GL_CLAMP_TO_EDGE);
gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_WRAP_T, GL3.GL_CLAMP_TO_EDGE);
gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_MIN_FILTER, GL3.GL_NEAREST);
gl3.glSamplerParameteri(sampler[0], GL3.GL_TEXTURE_MAG_FILTER, GL3.GL_NEAREST);
}
PS: checking all the components, I see that the opaqueDepthTexture always has the values (0, 0, 0, 1).
Oh god, I found it. In the init fragment shader,
uniform sampler2D opaqueDepthTexture;
should be
uniform sampler2DRect opaqueDepthTexture;
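With that declaration the lookup takes unnormalized window coordinates, which is what the shader above already passes; a minimal sketch (GLSL 330 assumed):
uniform sampler2DRect opaqueDepthTexture;
// rectangle textures are sampled with unnormalized texel coordinates,
// so gl_FragCoord.xy can be used directly
float opaqueDepth = texture(opaqueDepthTexture, gl_FragCoord.xy).r;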
In this answer to my recent question, there is some code that draws a graph, but I can't manage to edit it into something that accepts any list of points as a parameter.
I'd like the Drawing method to accept these parameters:
A list of Vector2, Point, or VertexPositionColor; I can work with whichever.
An offset for the whole graph.
These optional requirements would also be appreciated:
A color that may override VertexPositionColor's color and apply to all points.
A size for the graph, so it can be shrunk or expanded, either as a Vector2 multiplier or a Point target size. Maybe even combine this with the offset into a Rectangle.
And if it's possible, I'd like to have it all in a class, so graphs can be used separately from each other, each with its own Effect.World matrix, etc.
Here is that code (by Niko Drašković):
Matrix worldMatrix;
Matrix viewMatrix;
Matrix projectionMatrix;
BasicEffect basicEffect;
VertexPositionColor[] pointList;
short[] lineListIndices;
protected override void Initialize()
{
int n = 300;
//GeneratePoints generates a random graph, implementation irrelevant
pointList = new VertexPositionColor[n];
for (int i = 0; i < n; i++)
pointList[i] = new VertexPositionColor() { Position = new Vector3(i, (float)(Math.Sin((i / 15.0)) * height / 2.0 + height / 2.0 + minY), 0), Color = Color.Blue };
//links the points into a list
lineListIndices = new short[(n * 2) - 2];
for (int i = 0; i < n - 1; i++)
{
lineListIndices[i * 2] = (short)(i);
lineListIndices[(i * 2) + 1] = (short)(i + 1);
}
worldMatrix = Matrix.Identity;
viewMatrix = Matrix.CreateLookAt(new Vector3(0.0f, 0.0f, 1.0f), Vector3.Zero, Vector3.Up);
projectionMatrix = Matrix.CreateOrthographicOffCenter(0, (float)GraphicsDevice.Viewport.Width, (float)GraphicsDevice.Viewport.Height, 0, 1.0f, 1000.0f);
basicEffect = new BasicEffect(graphics.GraphicsDevice);
basicEffect.World = worldMatrix;
basicEffect.View = viewMatrix;
basicEffect.Projection = projectionMatrix;
basicEffect.VertexColorEnabled = true; //important for color
base.Initialize();
}
And the drawing method:
foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
pass.Apply();
GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColor>(
PrimitiveType.LineList,
pointList,
0,
pointList.Length,
lineListIndices,
0,
pointList.Length - 1
);
}
The Graph class that does what was requested can be found here. About 200 lines of code seemed too much to paste here.
The Graph is drawn by passing a list of floats (optionally with colors) to its Draw(..) method.
Graph properties are:
Vector2 Position - the bottom left corner of the graph
Point Size - the width (.X) and height (.Y) of the graph. Horizontally, values will be distributed to exactly fit the width. Vertically, all values will be scaled with Size.Y / MaxValue.
float MaxValue - the value which will be at the top of the graph. All off the chart values (greater than MaxValue) will be set to this value.
GraphType Type - with possible values GraphType.Line and GraphType.Fill, determines if the graph will be drawn line only, or bottom filled.
The graph is drawn with a line list / triangle strip.
I'm trying to code correct 2D affine texture mapping in GLSL.
Explanation:
...NONE of these images is correct for my purposes. The right one (labeled Correct) has perspective correction, which I do not want. So this: Getting to know the Q texture coordinate solution (without further improvements) is not what I'm looking for.
I'd like to simply "stretch" the texture inside the quadrilateral, something like this:
but composed of two triangles. Any advice (GLSL) please?
This works well as long as you have a trapezoid, and its parallel edges are aligned with one of the local axes. I recommend playing around with my Unity package.
GLSL:
varying vec2 shiftedPosition, width_height;
#ifdef VERTEX
void main() {
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
shiftedPosition = gl_MultiTexCoord0.xy; // left and bottom edges zeroed.
width_height = gl_MultiTexCoord1.xy;
}
#endif
#ifdef FRAGMENT
uniform sampler2D _MainTex;
void main() {
gl_FragColor = texture2D(_MainTex, shiftedPosition / width_height);
}
#endif
C#:
// Zero out the left and bottom edges,
// leaving a right trapezoid with two sides on the axes and a vertex at the origin.
var shiftedPositions = new Vector2[] {
Vector2.zero,
new Vector2(0, vertices[1].y - vertices[0].y),
new Vector2(vertices[2].x - vertices[1].x, vertices[2].y - vertices[3].y),
new Vector2(vertices[3].x - vertices[0].x, 0)
};
mesh.uv = shiftedPositions;
var widths_heights = new Vector2[4];
widths_heights[0].x = widths_heights[3].x = shiftedPositions[3].x;
widths_heights[1].x = widths_heights[2].x = shiftedPositions[2].x;
widths_heights[0].y = widths_heights[1].y = shiftedPositions[1].y;
widths_heights[2].y = widths_heights[3].y = shiftedPositions[2].y;
mesh.uv2 = widths_heights;
I recently managed to come up with a generic solution to this problem for any type of quadrilateral. The calculations and GLSL may be of help. There's a working demo in Java (that runs on Android), but it is compact and readable and should be easily portable to Unity or iOS: http://www.bitlush.com/posts/arbitrary-quadrilaterals-in-opengl-es-2-0
In case anyone's still interested, here's a C# implementation that takes a quad defined by the clockwise screen verts (x0,y0) (x1,y1) ... (x3,y3), an arbitrary pixel at (x,y) and calculates the u and v of that pixel. It was originally written to CPU-render an arbitrary quad to a texture, but it's easy enough to split the algorithm across CPU, Vertex and Pixel shaders; I've commented accordingly in the code.
float Ax, Bx, Cx, Dx, Ay, By, Cy, Dy, A, B, C;
//These are all uniforms for a given quad. Calculate on CPU.
Ax = (x3 - x0) - (x2 - x1);
Bx = (x0 - x1);
Cx = (x2 - x1);
Dx = x1;
Ay = (y3 - y0) - (y2 - y1);
By = (y0 - y1);
Cy = (y2 - y1);
Dy = y1;
float ByCx_plus_AyDx_minus_BxCy_minus_AxDy = (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy);
float ByDx_minus_BxDy = (By * Dx) - (Bx * Dy);
A = (Ay*Cx)-(Ax*Cy);
//These must be calculated per-vertex, and passed through as interpolated values to the pixel-shader
B = (Ax * y) + ByCx_plus_AyDx_minus_BxCy_minus_AxDy - (Ay * x);
C = (Bx * y) + ByDx_minus_BxDy - (By * x);
//These must be calculated per-pixel using the interpolated B, C and x from the vertex shader along with some of the other uniforms.
u = ((-B) - Mathf.Sqrt((B*B-(4.0f*A*C))))/(A*2.0f);
v = (x - (u * Cx) - Dx)/((u*Ax)+Bx);
Tessellation solves this problem: subdividing the quad adds vertices, which gives the interpolation more hints to work with.
Check out this link:
https://www.youtube.com/watch?v=8TleepxIORU&feature=youtu.be
I had a similar question (https://gamedev.stackexchange.com/questions/174857/mapping-a-texture-to-a-2d-quadrilateral/174871), and at gamedev they suggested using an imaginary Z coordinate, which I calculate using the following C code; it appears to work in the general case (not just trapezoids):
//usual euclidean distance
float distance(int ax, int ay, int bx, int by) {
int x = ax-bx;
int y = ay-by;
return sqrtf((float)(x*x + y*y));
}
void gfx_quad(gfx_t *dst //destination texture, we are rendering into
,gfx_t *src //source texture
,int *quad // quadrilateral vertices
)
{
int *v = quad; //quad vertices
float z = 20.0;
float top = distance(v[0],v[1],v[2],v[3]); //top
float bot = distance(v[4],v[5],v[6],v[7]); //bottom
float lft = distance(v[0],v[1],v[4],v[5]); //left
float rgt = distance(v[2],v[3],v[6],v[7]); //right
// By default all vertices lie on the screen plane
float az = 1.0;
float bz = 1.0;
float cz = 1.0;
float dz = 1.0;
// Move Z away from the screen plane based on the distance ratios.
if (top<bot) {
az *= top/bot;
bz *= top/bot;
} else {
cz *= bot/top;
dz *= bot/top;
}
if (lft<rgt) {
az *= lft/rgt;
cz *= lft/rgt;
} else {
bz *= rgt/lft;
dz *= rgt/lft;
}
// draw our quad as two textured triangles
gfx_textured(dst, src
, v[0],v[1],az, v[2],v[3],bz, v[4],v[5],cz
, 0.0,0.0, 1.0,0.0, 0.0,1.0);
gfx_textured(dst, src
, v[2],v[3],bz, v[4],v[5],cz, v[6],v[7],dz
, 1.0,0.0, 0.0,1.0, 1.0,1.0);
}
I'm doing it in software to scale and rotate 2D sprites; for an OpenGL 3D app you will need to do it in the pixel/fragment shader, unless you are able to map these imaginary az, bz, cz, dz into your actual 3D space and use the usual pipeline. DMGregory gave exact code for OpenGL shaders: https://gamedev.stackexchange.com/questions/148082/how-can-i-fix-zig-zagging-uv-mapping-artifacts-on-a-generated-mesh-that-tapers
I came across this issue as I was trying to implement a homography warping in OpenGL. Some of the solutions that I found relied on a notion of depth, but this was not feasible in my case since I am working with 2D coordinates.
I based my solution on this article, and it seems to work for all cases that I could try. I am leaving it here in case it is useful for someone else as I could not find something similar. The solution makes the following assumptions:
The vertex coordinates are the 4 points of a quad in Lower Right, Upper Right, Upper Left, Lower Left order.
The coordinates are given in OpenGL's reference system (range [-1, 1], with origin at bottom left corner).
std::vector<cv::Point2f> points; // the 4 quad vertices, in the order described above
// Convert points to homogeneous coordinates to simplify the problem.
Eigen::Vector3f p0(points[0].x, points[0].y, 1);
Eigen::Vector3f p1(points[1].x, points[1].y, 1);
Eigen::Vector3f p2(points[2].x, points[2].y, 1);
Eigen::Vector3f p3(points[3].x, points[3].y, 1);
// Compute the intersection point between the lines described by opposite vertices using cross products. Normalization is only required at the end.
// See https://leimao.github.io/blog/2D-Line-Mathematics-Homogeneous-Coordinates/ for a quick summary of this approach.
auto line1 = p2.cross(p0);
auto line2 = p3.cross(p1);
auto intersection = line1.cross(line2);
intersection = intersection / intersection(2);
// Compute the distance from each vertex to the intersection point.
std::vector<float> distances;
for (const auto &pt : points) {
auto distance = std::sqrt(std::pow(pt.x - intersection(0), 2) +
std::pow(pt.y - intersection(1), 2));
distances.push_back(distance);
}
// Assumes same order as above.
std::vector<cv::Point2f> texture_coords_unnormalized = {
{1.0f, 1.0f},
{1.0f, 0.0f},
{0.0f, 0.0f},
{0.0f, 1.0f}
};
std::vector<float> texture_coords;
for (int i = 0; i < texture_coords_unnormalized.size(); ++i) {
float u_i = texture_coords_unnormalized[i].x;
float v_i = texture_coords_unnormalized[i].y;
float d_i = distances.at(i);
float d_i_2 = distances.at((i + 2) % 4);
float scale = (d_i + d_i_2) / d_i_2;
texture_coords.push_back(u_i*scale);
texture_coords.push_back(v_i*scale);
texture_coords.push_back(scale);
}
Pass the texture coordinates to your shader (use vec3). Then:
gl_FragColor = vec4(texture2D(textureSampler, textureCoords.xy/textureCoords.z).rgb, 1.0);
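A minimal sketch of the corresponding vertex shader under the same assumptions (the attribute and varying names are mine), simply passing the precomputed vec3 through:
attribute vec2 aPosition;
attribute vec3 aTextureCoords; // (u_i * scale, v_i * scale, scale) computed on the CPU above
varying vec3 textureCoords;
void main() {
textureCoords = aTextureCoords;
gl_Position = vec4(aPosition, 0.0, 1.0);
}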
Thanks for the answers, but after experimenting I found a solution.
The two triangles on the left have UVs (strq) set according to this, and the two triangles on the right are a modified version of this perspective correction.
Numbers and shader:
tri1 = [Vec2(-0.5, -1), Vec2(0.5, -1), Vec2(1, 1)]
tri2 = [Vec2(-0.5, -1), Vec2(1, 1), Vec2(-1, 1)]
d1 = length of top edge = 2
d2 = length of bottom edge = 1
tri1_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(d2 / d1, 0, 0, d2 / d1), Vec4(1, 1, 0, 1)]
tri2_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(1, 1, 0, 1), Vec4(0, 1, 0, 1)]
Only the right triangles are rendered using this GLSL shader (the left ones use the fixed pipeline):
void main()
{
gl_FragColor = texture2D(colormap, vec2(gl_TexCoord[0].x / gl_TexCoord[0].w, gl_TexCoord[0].y));
}
So only U is perspective-corrected and V is linear.