I realized that a sprite whose texture is set to "setAliasTexParameters" must be placed at integer positions to avoid pixel bleeding when moved.
That's fine, but if I use a RenderTexture with screen size and scale it by a non-integer value (for example 1.5f), I again get pixel bleeding even though the position of the sprite is rounded.
I guess that I have to take the scale factor of the RenderTexture into account when I calculate the position, but I haven't found the correct formula yet.
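To make the idea concrete, here is the kind of snapping I mean (just a sketch, and I'm not sure it's the right formula; s is the RenderTexture scale factor):

#include <cmath>

// snap to the scaled pixel grid so the sprite lands on whole screen
// pixels after the RenderTexture is scaled by s (assumed formula)
float snapToScaledGrid(float pos, float s)
{
    return std::round(pos * s) / s;
}

// e.g. with s = 1.5f, x = 10.4f becomes round(15.6) / 1.5 = 16 / 1.5 ~= 10.667,
// which maps exactly onto screen pixel 16.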
Really appreciate if someone can help here!
Recently, I tried to make transparency work in JavaFX 3D, as some of the animations I want to play on meshes use transforms that change the alpha of the mesh each keyframe.
However, to my surprise, my TriangleMesh, which uses a material with transparent areas, doesn't look as expected.
Current result (depth buffer enabled): https://i.imgur.com/EIIWY1p.gif
Result with depth buffer disabled for my TriangleMeshView: https://i.imgur.com/1C95tKy.gif (this looks much closer to the result I was expecting)
However, I don't really want to disable depth buffering because it causes other issues.
In case it matters, this is the diffuse map I used for my TriangleMesh: https://i.imgur.com/UqCesXL.png (1 pixel per triangle, as my triangles have a color per face; 128 pixels per column).
I compute UVs for the TriangleMesh like this:
float u = (triangleIndex % width + 0.5f) / width;                     // column of this face's texel, sampled at its centre
float v = (triangleIndex / width + 0.5f) / (float) atlas.getHeight(); // row of this face's texel (integer division gives the row)
and then use these for each vertex of the triangle.
What's the proper way to render a TriangleMesh that uses a transparent material (in my case only part of the image is transparent, as some triangles are opaque)?
After a bit of research, I found this, which potentially explains my issue: https://stackoverflow.com/a/31942840/14999427 However, I am not sure whether this is what I should do or whether there's a better option.
Minimal reproducible example (this includes the exact mesh I showed in my GIFs): https://pastebin.com/ndkbZCcn (I used pastebin because it was 42k characters and the limit on Stack Overflow is 30k). Make sure to copy the raw data, as the preview in pastebin strips a few lines.
Update: a "fix" I found orders the triangles every time the camera moves, in the following way:
Take the camera's position multiplied by some scalar (5 worked for me).
Order all opaque triangles first by their centroids' distance to the camera, and then order all transparent triangles the same way.
I am not sure why multiplying the camera's position is necessary, but it does work; it's the best solution I've found so far.
I have not yet found a 'proper fix' for this problem, but this is what worked pretty well for me (a code sketch follows the steps):
Create an ArrayList that will store the sorted indices.
Take the camera's position multiplied by some scalar (between 5 and 10 worked for me); call that P.
Order all opaque triangles by their centroids' distance to P and add them to the list.
Then order all triangles with transparency by their centroids' distance to P and add them to the list.
Use the sorted triangle indices when creating the TriangleMesh.
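Roughly, as a C++ sketch (Vec3/Tri are made-up helper types; in JavaFX you would apply the resulting order to the faces when building the TriangleMesh, and I sort back-to-front here since that is the usual order for drawing transparency):

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 centroid; bool transparent; int index; };

static float dist2(const Vec3 &a, const Vec3 &b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// camera is the camera position, k the scalar (5-10 worked for me)
std::vector<int> sortedFaceOrder(std::vector<Tri> tris, Vec3 camera, float k) {
    Vec3 p{ camera.x * k, camera.y * k, camera.z * k };          // P from the steps above
    std::stable_sort(tris.begin(), tris.end(), [&](const Tri &a, const Tri &b) {
        if (a.transparent != b.transparent)
            return !a.transparent;                               // opaque faces first
        return dist2(a.centroid, p) > dist2(b.centroid, p);      // then far to near
    });
    std::vector<int> order;
    order.reserve(tris.size());
    for (const Tri &t : tris) order.push_back(t.index);
    return order;
}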
I am not sure how to put this problem into a single sentence; sorry if the title is misleading.
I am currently developing a simple terrain editor with a circle-shaped brush. The image below shows a few cases that represent my problem.
Additional info: the square size is fixed and uniform. In the current version my concern is only to find which squares are hit and which are not (the amount of area covered will matter for weighting the hit, but probably not right now).
My current solution (which is not even correct under certain conditions) is: given a hit at position (x, y) with radius r, loop through all squares from (x - radius, y - radius) to (x + radius, y + radius) and apply a 2D box-to-circle collision test. But I don't think this is optimal (or even correct).
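In code, the loop I describe looks roughly like this (just a sketch; cellSize is the square size and hitCell is a made-up callback that marks a square as hit):

#include <algorithm>
#include <cmath>

void brushHits(float cx, float cy, float r, float cellSize,
               int gridW, int gridH, void (*hitCell)(int, int)) {
    // only visit squares whose bounding box can possibly touch the circle
    int x0 = std::max(0, (int)std::floor((cx - r) / cellSize));
    int y0 = std::max(0, (int)std::floor((cy - r) / cellSize));
    int x1 = std::min(gridW - 1, (int)std::floor((cx + r) / cellSize));
    int y1 = std::min(gridH - 1, (int)std::floor((cy + r) / cellSize));
    for (int gy = y0; gy <= y1; ++gy)
        for (int gx = x0; gx <= x1; ++gx) {
            // closest point of the square to the circle centre
            float nx = std::clamp(cx, gx * cellSize, (gx + 1) * cellSize);
            float ny = std::clamp(cy, gy * cellSize, (gy + 1) * cellSize);
            float dx = cx - nx, dy = cy - ny;
            if (dx * dx + dy * dy <= r * r)   // overlap: the square is hit
                hitCell(gx, gy);
        }
}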
Can anyone help me with this one? Thank you
Since I can't add a simple comment due to bureaucracy on this website, I have to type it out here.
Anyway, you're in luck, since I was trying to do this recently as well! The way I did it: I iterated through the vertex array and checked whether the current vertex falls inside the radius of the circle. But perhaps what you want is to check against each quad's center, and if that center falls inside the radius, add the whole quad as being collided.
Of course, depending on the size of your grid, the performance will vary, so it's good to iterate through as few quads as possible. Accessing those quads from the array is something you'll have to figure out yourself.
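In other words, something like this (a sketch, assuming a uniform cellSize and integer quad coordinates gx, gy):

// the quad is treated as hit when its centre lies inside the brush circle
bool quadCentreInBrush(float cx, float cy, float r,
                       int gx, int gy, float cellSize) {
    float qx = (gx + 0.5f) * cellSize;   // quad centre
    float qy = (gy + 0.5f) * cellSize;
    float dx = cx - qx, dy = cy - qy;
    return dx * dx + dy * dy <= r * r;
}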
Recently I had much fun with the Laplacian pyramid algorithm (http://persci.mit.edu/pub_pdfs/pyramid83.pdf). But one big problem is that the original paper is limited to (2^m+1) x (2^n+1) images. My question is: what is the best way to deal with arbitrary w x h images instead? I can think of a couple of options:
Upsample the input to the next (2^m+1) x (2^n+1) size up front.
Pad even lines, but how exactly? Wouldn't that shift the signal?
Shift even lines by half a sample? Wouldn't that lose half a sample?
Does anybody have experience with this? What is the most practical and efficient approach? Also, any pointers to papers dealing with this would be very welcome.
One approach is to create an image with a width and height equal to the next (2^m+1) x (2^n+1), but instead of up-sampling the image to fill the expanded dimensions, just place it in the top-left corner and fill the empty space to the right and below with a constant value (the average value of the image is a good choice). Then encode in the normal way, storing the original image dimensions along with the pyramid. When decoding, decode and then crop to the original size.
This won't introduce any visual artifacts or degradation because you aren't stretching or offsetting the image in any way.
Because the empty space to the right of and below the original image is a constant value, the high-pass bands at each level in the image pyramid will be all zero in this area. So if you are using a compression scheme like run-length encoding to store each level, this will be taken care of automatically and these areas will compress to almost nothing. If not, you can simply store the top-left (potentially non-zero) area of each level and fill out the rest with zeros when decoding.
You could find the min and max x and y bounding rectangle of the non-zero values for each level and store this along with the level, cropped to include only non-zero values. The decoder could also be optimized so that areas of the image that are going to be cropped away are not actually decoded in the first place, by only processing the top-left of each level.
Here's an illustration of the technique:
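And a rough sketch of the padding and cropping in code (the Image struct and helper names are made up for illustration):

#include <vector>

struct Image { int w, h; std::vector<float> px; };   // row-major pixels

int nextPyramidSize(int s) {                         // smallest 2^k + 1 that is >= s
    int p = 1;
    while (p + 1 < s) p *= 2;
    return p + 1;
}

Image padWithMean(const Image &src) {
    Image dst{ nextPyramidSize(src.w), nextPyramidSize(src.h), {} };
    float mean = 0;
    for (float v : src.px) mean += v;
    mean /= src.px.size();
    dst.px.assign(dst.w * dst.h, mean);              // constant fill (image average)
    for (int y = 0; y < src.h; ++y)                  // original image in the top-left corner
        for (int x = 0; x < src.w; ++x)
            dst.px[y * dst.w + x] = src.px[y * src.w + x];
    return dst;
}

Image cropTopLeft(const Image &src, int w, int h) {  // after decoding, crop back
    Image dst{ w, h, std::vector<float>(w * h) };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            dst.px[y * w + x] = src.px[y * src.w + x];
    return dst;
}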
Instead of just filling the lower-right area with a flat color, you could fill it with horizontally and vertically mirrored copies of the image to the right and below, and a copy mirrored in both directions to the bottom-right, like this:
This will avoid the discontinuities of the first technique, although there will be a discontinuity in dx (e.g. if the value was gradually increasing from left to right it will suddenly be decreasing). Choosing a mirror that keeps dx constant and ddx zero will avoid this second-order discontinuity by linearly extrapolating the values.
Another technique, which is similar to what some JPEG encoders do to pad out an image to a whole number of MCU blocks, is to take the last pixel value of each row and repeat it, and likewise for columns, with the bottom-right-most pixel of the image used to fill the bottom-right area:
This last technique could easily be modified to extrapolate the gradient of values or even the gradient of gradients instead of just repeating the same value for the remainder of the row or column.
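A sketch of that last, repeat-the-edge padding (again just illustrative; pixels are stored row-major):

#include <vector>

std::vector<float> padReplicate(const std::vector<float> &src, int srcW, int srcH,
                                int dstW, int dstH) {
    std::vector<float> dst(dstW * dstH);
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x) {
            int sx = x < srcW ? x : srcW - 1;         // repeat the last pixel of each row
            int sy = y < srcH ? y : srcH - 1;         // repeat the last row downwards
            dst[y * dstW + x] = src[sy * srcW + sx];  // bottom-right area gets the corner pixel
        }
    return dst;
}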
I am working with OpenGL and I wanted to invert the image. So I went here, asked a question and finally I had the following code:
glMatrixMode(GL_PROJECTION);
glScalef(-1,1,1);
glTranslatef(-width(),0,0);
From what I understand, the position of every pixel gets inverted, so the pixels that were on the right of the image keep the same absolute position but are now the left of the image. So I have to move the entire thing back by exactly as many pixels as it is wide: 360 (which is the size of the "canvas", so in the snippets the function width() is used). To undo this process I would invert the image again and then move it back to where it came from:
glMatrixMode(GL_PROJECTION);
glScalef(-1,1,1);
glTranslatef(width(),0,0);
Nope, black screen. I have to do exactly the same thing twice to undo the flipping: I have to translate by -360 every time I flip the image. Why?
It's exactly as Daniel Fischer mentioned in the comment. Here is an illustration of the process.
What you must have in mind is that the transformations operate on the transformed coordinate systems.
We start with the image (grey) on the screen (green):
Then we scale the image. So the origin is preserved, but the x-axis is mirrored.
Now we have to move the image onto the screen again. Because the x-axis points to the left (but we want to move the image to the right), we have to use a negative offset for the translation:
If we flip the image again, the following happens. The origin is preserved and the x-axis is mirrored:
So we must translate the image by a negative offset:
Another way of undoing the flip is undoing the operations (but in the opposite order):
glTranslatef(width(), 0, 0);   // undo the translation first
glScalef(-1, 1, 1);            // then undo the scale
The mathematical reason for this is that inversion reverses the order: if A = B * C, then A^-1 = C^-1 * B^-1.
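Put together (a sketch, assuming the projection matrix is current and width() is the function from the question):

glMatrixMode(GL_PROJECTION);
glScalef(-1, 1, 1);             // flip
glTranslatef(-width(), 0, 0);   // move the mirrored image back onto the screen
/* ... draw the mirrored scene ... */
glTranslatef(width(), 0, 0);    // undo the translation first
glScalef(-1, 1, 1);             // then undo the scale

In practice it is often easier to wrap the mirrored drawing in glPushMatrix() / glPopMatrix(), which restores the previous matrix without applying the inverses by hand.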
I have a QImage of size 12x12 in GIF format. I want to rotate it by a certain angle with very high frequency. My application involves a robot, so when it changes its orientation (which it does very frequently), my QImage in the simulation should also be rotated, but this causes loss of information. I am doing it something like below.
robot_transform.rotate(angle);
*robot2 = robot->transformed(robot_transform, Qt::SmoothTransformation);
*robot2 = robot2->scaled(12, 12, Qt::KeepAspectRatio, Qt::SmoothTransformation);
I need suggestions on what's wrong with this approach and, secondly, whether there is another, more optimal approach for the desired application.
Thanks
I would increase the resolution of the source image to at least double. Rotating an image to non-90-degree angles causes loss of pixel information; a higher-res source can compensate for that.
Most sprite-based animations use pre-rendered images for each possible angle.
The problem is the scaling afterwards; you need to crop the center of the image instead. You can do this with QImage::copy.
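For example (a sketch using the variable names from the question; the source image is 12x12):

QTransform t;
t.rotate(angle);
// transformed() enlarges the canvas to fit the rotated image, so take the
// central 12x12 region instead of scaling the whole thing back down
QImage rotated = robot->transformed(t, Qt::SmoothTransformation);
int x = (rotated.width() - 12) / 2;
int y = (rotated.height() - 12) / 2;
*robot2 = rotated.copy(x, y, 12, 12);
// note: the corners of the rotated sprite fall outside the 12x12 window,
// which is unavoidable if the result has to stay 12x12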