This question isn't language-specific; it's a math problem. I will, however, use some C++ code to explain what I need, as I'm not experienced with the mathematical equations needed to express the problem (but if you know about this, I'd be interested to learn).
Here's how the image is composed:
ImageMatrix image;
image[0][0][0] = 1;
image[0][1][0] = 2;
image[0][2][0] = 1;
image[1][0][0] = 0;
image[1][1][0] = 0;
image[1][2][0] = 0;
image[2][0][0] = -1;
image[2][1][0] = -2;
image[2][2][0] = -1;
Here's the prototype for the function I'm trying to create:
ImageMatrix rotateImage(ImageMatrix image, double angle);
I'd like to rotate only the first two indices (rows and columns) but not the channel.
The usual way to solve this is to do it backwards. Instead of calculating where each pixel in the input image ends up in the output image, you calculate where each pixel in the output image comes from in the input image (by rotating the same amount in the other direction). This way you can be sure that every pixel in the output image gets a value.
output = new Image(input.size())
for each pixel in output:
{
p2 = rotate(pixel, -angle);
value = interpolate(input, p2)
output(pixel) = value
}
There are different ways to do the interpolation. For the rotation formula, you should check https://en.wikipedia.org/wiki/Rotation_matrix#In_two_dimensions
But just to be nice, here it is (rotation of the point (x, y) by angle, in radians):
newX = cos(angle)*x - sin(angle)*y
newY = sin(angle)*x + cos(angle)*y
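Putting the pieces together, here is a minimal C++ sketch of the backward-mapping loop (my own illustration, assuming ImageMatrix is indexed [row][col][channel] as in the question and angle is in radians; nearest-neighbour interpolation keeps it short, bilinear would look smoother):

#include <cmath>
#include <vector>

using ImageMatrix = std::vector<std::vector<std::vector<double>>>;

ImageMatrix rotateImage(const ImageMatrix& image, double angle)
{
    const std::size_t rows = image.size();
    const std::size_t cols = image[0].size();
    const std::size_t chans = image[0][0].size();
    const double cy = (rows - 1) / 2.0, cx = (cols - 1) / 2.0;
    // Rotate backwards: for each output pixel, find its source in the input.
    const double c = std::cos(-angle), s = std::sin(-angle);

    ImageMatrix output(rows, std::vector<std::vector<double>>(
                                 cols, std::vector<double>(chans, 0.0)));
    for (std::size_t y = 0; y < rows; ++y)
        for (std::size_t x = 0; x < cols; ++x) {
            const double sx = c * (x - cx) - s * (y - cy) + cx;
            const double sy = s * (x - cx) + c * (y - cy) + cy;
            // Nearest-neighbour "interpolation": round to the closest source pixel.
            const long ix = std::lround(sx), iy = std::lround(sy);
            if (ix >= 0 && iy >= 0 && ix < (long)cols && iy < (long)rows)
                for (std::size_t ch = 0; ch < chans; ++ch)
                    output[y][x][ch] = image[iy][ix][ch];
        }
    return output;
}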
To rotate an image, you create 3 points:
A----B
|
|
C
and rotate that around A. To get the new rotated image you do this:
rotate ABC around A in 2D, so this is a single Euler rotation
traverse in the rotated state from A to B; for every pixel you traverse, also step from left to right over the horizontal line in the original image. So if the image is 100 pixels wide and 50 high, you'll traverse from A to B in 100 steps and from A to C in 50 steps, drawing 50 lines of 100 pixels each in the area formed by ABC in its rotated state (a sketch follows below).
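A rough C++ sketch of that traversal (my own reconstruction, not the linked rotoZoomer source; the src/dest buffers and the rotated position (ax, ay) of corner A are hypothetical):

#include <cmath>
#include <vector>

void rotoBlit(const std::vector<std::vector<int>>& src,
              std::vector<std::vector<int>>& dest,
              double ax, double ay, double angle)
{
    const int height = (int)src.size(), width = (int)src[0].size();
    const int destHeight = (int)dest.size(), destWidth = (int)dest[0].size();
    const double ux = std::cos(angle), uy = std::sin(angle);   // step along A -> B
    const double vx = -std::sin(angle), vy = std::cos(angle);  // step along A -> C
    for (int row = 0; row < height; ++row)        // 50 lines for a 50-high image
        for (int col = 0; col < width; ++col) {   // 100 steps for a 100-wide image
            const int dx = (int)std::lround(ax + col * ux + row * vx);
            const int dy = (int)std::lround(ay + col * uy + row * vy);
            if (dx >= 0 && dy >= 0 && dx < destWidth && dy < destHeight)
                dest[dy][dx] = src[row][col];
        }
}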
This might sound complicated but it's not. Please see this C# code I wrote some time ago:
rotoZoomer by me
When drawing, I alter the source pointers a bit to get a rubber-like effect; if you disable that, you'll see the code rotates the image without problems. Of course, at some angles you'll get an image which looks slightly distorted. The source code contains comments explaining what's going on, so you should be able to grasp the math/logic behind it easily.
If you like Java better, I also made a Java version once, 14 or so years ago ;) ->
http://www.xs4all.nl/~perseus/zoom/zoom.java
Note that there's another solution apart from rotation matrices, one that doesn't lose image information through aliasing.
You can separate a 2D image rotation into skews and scalings, which preserve the image quality.
Here's a simpler explanation
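One well-known instance of this idea (not necessarily the one in the linked explanation) is the three-shear decomposition often attributed to Paeth, which needs no scaling at all:

[ cos θ  -sin θ ]   [ 1  -tan(θ/2) ]   [ 1      0 ]   [ 1  -tan(θ/2) ]
[ sin θ   cos θ ] = [ 0      1     ] * [ sin θ  1 ] * [ 0      1     ]

Each factor is a shear along a single axis, so every step only shifts rows or columns (a 1D operation), which is what preserves the quality.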
It seems like the example you've provided is some edge detection kernel. So if what you want is to detect edges at different angles, you'd do better to choose some continuous function (which in your case might be a parametrized Gaussian of x1 multiplied by x2) and then rotate it according to the formulae provided by kigurai. As a result you would be able to produce a discrete kernel more efficiently and without aliasing.
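To make the "rotate a continuous function" step concrete (my own sketch; sigma and the exact kernel are illustrative assumptions): take a Gaussian-derivative edge kernel and sample it at rotated coordinates:

k(x1, x2)       = -x2 * exp(-(x1^2 + x2^2) / (2*sigma^2))   // responds to vertical gradients, like the kernel in the question
k_theta(x1, x2) = k(x1*cos(theta) + x2*sin(theta), -x1*sin(theta) + x2*cos(theta))

Evaluating k_theta on an integer grid then gives an oriented discrete kernel directly, with no resampling of an already-discrete one.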
Recently, I tried to make transparency work in JavaFX 3D, as some of the animations I want to play on meshes use transforms that change the alpha of the mesh each keyframe.
However, to my surprise, my TriangleMesh that uses a material with transparent areas doesn't look as expected.
Current result (depth buffer enabled): https://i.imgur.com/EIIWY1p.gif
Result with depth buffer disabled for my TriangleMeshView: https://i.imgur.com/1C95tKy.gif (this looks much closer to the result I was expecting)
However, I don't really want to disable depth buffering because it causes other issues.
In case it matters, this is the diffuse map i used for my TriangleMesh: https://i.imgur.com/UqCesXL.png (1 pixel per triangle as my triangles have a color per face, 128 pixels per column).
I compute UV's for the TriangleMesh like this:
float u = (triangleIndex % width + 0.5f) / width;
float v = (triangleIndex / width + 0.5f) / (float) atlas.getHeight();
and then use these for each vertex of the triangle.
What's the proper way to render a TriangleMesh that uses a transparent material (in my case only part of the image is transparent, as some triangles are opaque)?
After a bit of research, I've found this answer, which potentially explains my issue: https://stackoverflow.com/a/31942840/14999427 However, I am not sure whether this is what I should do or whether there's a better option.
Minimal reproducible example (this includes the same exact mesh shown in my GIFs): https://pastebin.com/ndkbZCcn (used Pastebin because it was 42k characters and the Stack Overflow limit is 30k). Make sure to copy the raw data, as the preview in Pastebin stripped a few lines.
Update: a "fix" I found reorders the triangles every time the camera moves, as follows:
Take the camera's position multiplied by some scalar (5 worked for me)
Order all opaque triangles first by their centroids' distance to the camera, then order all transparent triangles the same way.
I am not sure why multiplying the camera's position is necessary, but it does work; it's the best solution I've found so far.
I have not yet found a 'proper fix' for this problem, but this worked for me pretty well (a sketch follows the list):
Create an ArrayList that'll store the sorted indices
Take the camera's position multiplied by some scalar (between 5 and 10 worked for me); call that P
Order all opaque triangles first by their centroids' distance to P and add them to the list
Order all triangles with transparency by their centroids' distance to P and add them to the list
Use the sorted triangle indices when creating the TriangleMesh
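A sketch of that ordering (my own illustration in C++ rather than JavaFX code; Vec3, Triangle and the far-to-near direction are assumptions, since transparent geometry is usually blended back to front):

#include <algorithm>
#include <vector>

struct Vec3 { double x, y, z; };
struct Triangle { Vec3 centroid; bool transparent; int index; };

static double dist2(const Vec3& a, const Vec3& b) {
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Returns triangle indices: opaque first, then transparent, each group
// sorted far-to-near by squared centroid distance to P.
std::vector<int> sortForRendering(std::vector<Triangle> tris,
                                  const Vec3& camera, double scale = 5.0) {
    const Vec3 p{camera.x * scale, camera.y * scale, camera.z * scale};
    std::stable_sort(tris.begin(), tris.end(),
        [&](const Triangle& a, const Triangle& b) {
            if (a.transparent != b.transparent) return !a.transparent;
            return dist2(a.centroid, p) > dist2(b.centroid, p);
        });
    std::vector<int> order;
    order.reserve(tris.size());
    for (const auto& t : tris) order.push_back(t.index);
    return order;
}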
I've recently been venturing into conversion of 3D points in space to a 2D pixel position on a screen, and almost every single answer I've found has been something like "do X with your world-to-camera matrix, and multiply by your viewport height to get it in pixels".
Now, that's all fine and good, but oftentimes these questions were about programming for video game engines, where a function to get a camera's view matrix is often built into a library and called on-command. But in my case, I can't do that - I need to know how to, given an FOV (say, 78 degrees) and a position and angle (of the format pitch = x, yaw = y, roll = z) it's facing, calculate the view matrix of a virtual camera.
Does anybody know what I need to do? I'm working with Lua (with built-in userdata for things like 3D vectors, angles, and 4x4 matrices exposed via the C interface), if that helps.
I am using gluPerspective
where:
fovw,fovh // are FOV in width and height of screen angles [rad]
zn,zf // are znear,zfar distances from focal point of camera
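For reference, a standard OpenGL-style perspective matrix built from those symbols looks like this (my reconstruction; row-major, acting on column vectors, and not necessarily the exact form the author uses):

[ 1/tan(fovw/2)   0               0                   0                 ]
[ 0               1/tan(fovh/2)   0                   0                 ]
[ 0               0              -(zf+zn)/(zf-zn)    -2*zf*zn/(zf-zn)   ]
[ 0               0              -1                   0                 ]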
When using FOVy notation from OpenGL then:
aspect = width/height
fovh = FOVy
fovw = FOVx = FOVy*aspect
so just feed your 4x4 matrix with the values in order defined by notations you use (column or row major order).
I get the feeling you are doing a SW render on your own, so do not forget to do the perspective divide! Also take a look at the matrix link above, and at:
3D graphic pipeline
First let me say that I'm not very good with math. I have a canvas with multiple text "boxes" that are rotated to 300°, which basically makes them parallelograms. They are very similar to this:
I'm trying to detect if the mouse is over one of them, but I don't know how to do that. Please help. Thank you!
The simplest method is to use the inverse transform on the mouse point and then do simple rectangle testing on the transformed point. As long as the affine transform you're using doesn't map everything to a line, it will have a well-defined inverse.
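A short sketch of that approach (my own illustration; the 2x3 Affine struct and its field names are hypothetical):

// Forward map: (x, y) -> (a*x + c*y + tx, b*x + d*y + ty)
struct Affine { double a, b, c, d, tx, ty; };

// True if the mouse point (mx, my) lands inside the w-by-h box once the
// box's transform has been undone.
bool hitTest(const Affine& m, double mx, double my, double w, double h) {
    const double det = m.a * m.d - m.b * m.c;   // non-zero for invertible transforms
    const double ix = ( m.d * (mx - m.tx) - m.c * (my - m.ty)) / det;
    const double iy = (-m.b * (mx - m.tx) + m.a * (my - m.ty)) / det;
    return ix >= 0 && ix <= w && iy >= 0 && iy <= h;
}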
Each parallelogram can first of all be contained in a rectangular bounding box like the one illustrated above. If the mouse is not within that rectangle, then it is definitely not a hit. You have many easy tests for that already. The rest of the space can be decomposed into the parallelogram of interest in green, and the areas you don't want. So we just need to test if the mouse is in the red areas with the following tests:
Left: x < a - (a/h)*y
Right: x > (a+b) - (a/h)*y
If either of those conditions is true, then the mouse is outside the parallelogram.
Note, in this case I am assuming y is 0 at the top and increases as you move down, and x is zero at the left and increases as you move right.
For more information about the value of a, we can turn to trig.
If we know the angle θ and h, then
a = h·tan(θ)
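Putting those tests into code (a direct transcription of the conditions above; the parameter names follow the answer's description):

// y grows downward from the top edge of the bounding box (0 <= y <= h),
// a is the horizontal offset of the slanted edge, b the parallelogram's width.
bool insideParallelogram(double x, double y, double a, double b, double h) {
    if (y < 0 || y > h) return false;                // outside the box vertically
    if (x < a - (a / h) * y) return false;           // left red area
    if (x > (a + b) - (a / h) * y) return false;     // right red area
    return true;
}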
I'm trying to teach myself some machine learning, and have been using the MNIST database (http://yann.lecun.com/exdb/mnist/) to do so. The author of that site wrote a paper in '98 on all different kinds of handwriting recognition techniques, available at http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf.
The 10th method mentioned is a "Tangent Distance Classifier". The idea is that if you place each image in an (N×M)-dimensional vector space, you can compute the distance between two images as the distance between the hyperplanes formed by each, where the hyperplane is given by taking the point and rotating the image, rescaling the image, translating the image, etc.
I can't figure out enough to fill in the missing details. I understand that most of these are indeed linear operators, so how does one use that fact to then create the hyperplane? And once we have a hyperplane, how do we take its distance with other hyperplanes?
I will give you some hints. You need some background knowledge in image processing. Please refer to [2] and [3] for details.
[2] is a C implementation of tangent distance
[3] is a paper that describes tangent distance in more detail
Image Convolution
According to [3], the first step you need to do is to smooth the picture (check section 4 of [3] for 3 different smoothing operations, with the resulting images shown alongside the originals and the convolution operators). This step maps the discrete vector to a continuous one so that it is differentiable. The author suggests using a Gaussian function. If you need more background on image convolution, here is an example.
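If it helps, here's a minimal sketch of such a convolution (my own illustration, with a small normalized Gaussian-like kernel; the paper [3] specifies the exact operators):

#include <vector>

// Convolve a w-by-h grayscale image (row-major) with a 3x3 kernel,
// leaving the one-pixel border untouched.
std::vector<double> convolve3x3(const std::vector<double>& img, int w, int h,
                                const double k[3][3]) {
    std::vector<double> out(img.size(), 0.0);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            double s = 0.0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    s += k[dy + 1][dx + 1] * img[(y + dy) * w + (x + dx)];
            out[y * w + x] = s;
        }
    return out;
}

// Example kernel: const double gauss[3][3] =
//     {{1/16., 2/16., 1/16.}, {2/16., 4/16., 2/16.}, {1/16., 2/16., 1/16.}};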
After this step is done, you will have calculated the horizontal and vertical derivatives of the smoothed image, x1 and x2 (used below).
Calculating Scaling Tangent
Here I show you one of the tangent calculations implemented in [2]: the scaling tangent. From [3], we know the transformation is as below:
/* scaling */
for (k = 0; k < height; k++)
    for (j = 0; j < width; j++) {
        currentTangent[ind] = ((j + offsetW) * x1[ind] + (k + offsetH) * x2[ind]) * factor;
        ind++;
    }
At the beginning of td.c in [2]'s implementation, we find the definitions below:
factorW=((double)width*0.5);
offsetW=0.5-factorW;
factorW=1.0/factorW;
factorH=((double)height*0.5);
offsetH=0.5-factorH;
factorH=1.0/factorH;
factor=(factorH<factorW)?factorH:factorW; //min
The author is using images of size 16x16. So we know
factor=factorW=factorH=1/8,
and
offsetH=offsetW = 0.5-8 = -7.5
Also note that we have already computed
x1[ind] = the horizontal derivative of the smoothed image at pixel ind,
x2[ind] = the vertical derivative of the smoothed image at pixel ind.
So we plug in those constants:
currentTangent[ind] = ((j-7.5)*x1[ind] + (k-7.5)*x2[ind])/8
= x1 * (j-7.5)/8 + x2 * (k-7.5)/8.
Since j (and also k) is an integer between 0 and 15 inclusive (the width and the height of the image are 16 pixels), (j-7.5)/8 is just a fraction between -0.9375 and 0.9375.
So I guess (j+offsetW)*factor is the displacement for each pixel, which is proportional to the horizontal distance from the pixel to the center of the image. Similarly, the vertical displacement is (k+offsetH)*factor.
Calculating Rotation Tangent
The rotation tangent is defined as below in [3]:
/* rotation */
for (k = 0; k < height; k++)
    for (j = 0; j < width; j++) {
        currentTangent[ind] = ((k + offsetH) * x1[ind] - (j + offsetW) * x2[ind]) * factor;
        ind++;
    }
Using the conclusion from the previous section, we know that (k+offsetH)*factor corresponds to y, and -(j+offsetW)*factor corresponds to -x. So you can see this is exactly the formula used in [3].
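In symbols (a short chain-rule derivation, my own addition): rotating the content by a small angle a sends a centred pixel (x, y) to approximately (x + a*y, y - a*x), so

d/da I(x + a*y, y - a*x) at a = 0  =  y * dI/dx - x * dI/dy  =  y*x1 - x*x2

which is exactly what the loop computes, up to the factor scaling.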
You can find all the other tangents described in [3] implemented in [2]. I like the image below from [3], which clearly shows the displacement effects of the different transformation tangents.
Calculating the tangent distance between images
Just follow the implementation in the tangentDistance function:
// determine the tangents of the first image
calculateTangents(imageOne, tangents, numTangents, height, width, choice, background);
// find the orthonormal tangent subspace
numTangentsRemaining = normalizeTangents(tangents, numTangents, height, width);
// determine the distance to the closest point in the subspace
dist=calculateDistance(imageOne, imageTwo, (const double **) tangents, numTangentsRemaining, height, width);
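For intuition (a standard projection formula, my own summary rather than a line-by-line reading of calculateDistance): once the tangents t_1..t_m are orthonormal, the squared distance from image P to the tangent subspace around image E is

d^2 = ||P - E||^2 - sum_i ((P - E) . t_i)^2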
I think the above should be enough to get you started; if anything is missing, please read [3] carefully and see the corresponding implementations in [2]. Good luck!
I need to know what the visible height of a display object will be after I change its rotationX value.
I have an application that allows users to lay out a floor in 3D space. I want the size of the floor to automatically stretch after a 3D rotation so that it always covers a certain area.
Anyone know a formula for working this out?
EDIT: I guess what I am really trying to do is convert degrees to pixels.
On a 2D plane say 100 x 100 pixels, a -10 degree change on rotationX means that the plane has a gap at the top where it is no longer visible. I want to know how many pixels this gap will be so that I can stretch the plane.
In Flex, the value of the display object's height property remains the same both before and after applying the rotation, which may in fact be a bug.
EDIT 2: There must be a general math formula to work this out, rather than something Flash/Flex-specific. When viewing an object in 3D space, if the object rotates backwards (the top of the object somersaults away from the viewer), what would the new visible height be, based on the degrees of rotation? This could be in pixels, metres, cubits or whatever.
I don't have a test case, but off the top of my head I'd guess something like:
var d:DisplayObject;
var rotationRadians:Number = d.rotationX * Math.PI / 180;
var visibleHeight:Number = d.height * Math.cos(rotationRadians);
This doesn't take any other transformations into account, though.
Have you tried using the object's bounding rectangle and testing that?
var dO:DisplayObject = new DisplayObject();
dO.rotation = 10;
var rect:Rectangle = dO.getRect(dO.parent);
// rect.topLeft.y is now the new top point.
// rect.width is the new width.
// rect.height is the new height.
As to the floor, I would need more information, but have you tried setting floor.percentWidth = 100? That might work.
Have you checked DisplayObject.transform.pixelBounds? I haven't tried it, but it might be more likely to take the rotation into account.
Rotation actually changes the DisplayObject's axes (i.e. the x and y axes are rotated). That is why you are not seeing the difference in height. So to get the visual height and y, you might try this:
var dO:DisplayObject = new DisplayObject();
addChild(dO);
var rect1:Rectangle = dO.getRect(dO.parent);
dO.rotation = 10;
var rect2:Rectangle = dO.getRect(dO.parent);
rect1 and rect2 should be different in this case. If you want to check the visual coordinates of the dO, just replace dO.parent with root.