Mathematical game wondering

Imagine an arm that is 50 px long.
It is placed at 100,100.
The rotation center is at 100, 100.
The arm rotates all the time.
On the arm there is a hook that travels back and forth along the full length of the arm.
My variables:
X = 100;
Y = 100;
RotationAngle = 120; // Loops up to 360.
HookDistanceFromCenter = 25; // Goes 0 -> 50 -> 0 by a loop.
How do I get the position (x,y) of the hook?

From your specific data:
x = 100 - HookDistanceFromCenter * cos(180 - RotationAngle)
y = 100 + HookDistanceFromCenter * sin(180 - RotationAngle)
Since -cos(180 - a) = cos(a) and sin(180 - a) = sin(a), this simplifies to x = X + HookDistanceFromCenter * cos(RotationAngle) and y = Y + HookDistanceFromCenter * sin(RotationAngle), which works in every quadrant. This is basic trigonometry; the info here applies: http://en.wikipedia.org/wiki/Unit_circle except that the radius of your circle is HookDistanceFromCenter and you have to add your rotation-center coordinates to the result to get the actual (x,y).
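As a concrete illustration, here is a minimal C++ sketch of that calculation (names follow the question's variables; the angle is converted to radians because the standard library trigonometric functions expect radians):

#include <cmath>

struct Point { double x, y; };

// Position of the hook, given the rotation center (cx, cy), the current
// rotation angle in degrees, and the hook's distance from the center.
Point hookPosition(double cx, double cy, double rotationAngleDeg,
                   double hookDistanceFromCenter)
{
    const double PI = 3.14159265358979323846;
    const double rad = rotationAngleDeg * PI / 180.0;
    return { cx + hookDistanceFromCenter * std::cos(rad),
             cy + hookDistanceFromCenter * std::sin(rad) };
}

// e.g. Point hook = hookPosition(100, 100, 120, 25);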

Related

How can I seamlessly wrap map tiles around cylindrically?

I'm creating a game that takes place on a map, and the player should be able to scroll around the map. I'm using real-world data from NASA as a 5400 by 2700 pixel image split into 4 smaller ones, each corresponding to a hemisphere.
(Image omitted: how the image is split into four tiles.)
The player will be viewing the world through a camera, currently in a 4:3 aspect ratio, which can be moved around. Its width and height can be described as two variables x and y, currently 480 and 360 respectively.
(Image omitted: model of the camera.)
In practice, the camera is "fixed" and instead the tiles move. The camera's center is described as two variables: xcam and ycam.
Currently, the 4 tiles move and hide flawlessly. The problem arises when the camera passes over the "edge" at 180 degrees longitude. What should happen is that the tiles on the other side should show and move as if the world were a cylinder, without any noticeable gaps. I update xcam by applying this equation to it:
xcam = ((xcam + (2700 - x)) mod (5400 - x)) - (2700 - x)
And the tiles' centers update according to these equations (I will focus only on tiles 1 and 2 for simplicity):
tile1_x = xcam - 1350
tile1_y = ycam + 650
tile2_x = xcam + 1350
tile2_y = ycam + 650
Using this, whenever the camera moves past the leftmost edge of tile 1, the view "skips": instead of tile 1 remaining visible with tile 2 coming into view, everything jumps so that tile 2's rightmost edge lands at the camera's rightmost edge.
(Two images omitted: what actually happens, and the desired seamless wrapping.)
So, is there any way to update the equations I'm using (or even completely redo everything) so that I can get smooth wrapping?
I think you are unnecessarily hard-coding the number of tiles and their sizes, and thus binding your code to those data. In my opinion it would be better to store them in variables, so they can easily be modified in one place if the data ever change. This also allows us to write more flexible code.
So, let's assume we have variables:
// logical size of the whole Earth's map in tiles,
// currently 2 and 2
int ncols, nrows;
// single tile's size, currently 2700 and 1350
int wtile, htile;
// the whole Earth map's size
// always ncols*wtile and nrows*htile
int wmap, hmap;
Tile tiles[nrows][ncols];
// viewport's center in map coordinates
int xcam, ycam;
// viewport's size in map coordinates, currently 480 and 360
int wcam, hcam;
Whenever we update the player's position, we need to make sure the position falls within the allowed range. But we need to establish the coordinate system first in order to define that range. For example, if x values span from 0 to wmap-1, increasing rightwards (towards East), and y values span from 0 to hmap-1, increasing downwards (towards South), then:
// player's displacement
int dx, dy;
xcam = (xcam + dx) mod wmap
ycam = (ycam + dy) mod hmap
assures the camera position is always within the map. (This assumes the mod operator always returns a non-negative value. Should it work like the C language % operator, which returns a negative result for a negative dividend, one needs to add the divisor first to make sure the first argument is non-negative: xcam = (xcam + dx + wmap) mod wmap, etc.)
If you'd rather have xcam,ycam = 0,0 at the center of the map (that is, at the Greenwich meridian and the equator), then the allowed range would be -wmap/2 through wmap/2-1 for x and -hmap/2 through hmap/2-1 for y. Then:
xcam = (xcam + dx + wmap/2) mod wmap - wmap/2
ycam = (ycam + dy + hmap/2) mod hmap - hmap/2
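In C-family languages the two variants above can be wrapped in a small helper; a minimal C++ sketch (the function names are my own):

// Wraps value + delta into [0, size), compensating for the C/C++ %
// operator returning negative results for negative dividends.
int wrap(int value, int delta, int size)
{
    return ((value + delta) % size + size) % size;
}

// Centered variant: wraps into [-size/2, size/2).
int wrapCentered(int value, int delta, int size)
{
    return wrap(value + size / 2, delta, size) - size / 2;
}

// e.g. xcam = wrap(xcam, dx, wmap); or xcam = wrapCentered(xcam, dx, wmap);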
More generally, let x0, y0 denote the 'zero' position of the camera relative to the upper-left corner of the map. Then we can update the camera position by transforming it to the map's coordinates, then shifting and wrapping, and finally transforming back to the camera's coordinates:
xmap = xcam + x0
ymap = ycam + y0
xmap = (xmap + dx) mod wmap
ymap = (ymap + dy) mod hmap
xcam = xmap - x0
ycam = ymap - y0
or, more compactly:
xcam = (xcam + dx + x0) mod wmap - x0
ycam = (ycam + dy + y0) mod hmap - y0
Now, when we know the position of the viewport (camera) relative to the map, we need to fill it with the map tiles. And a new decision must be made here.
When we travel north from Anchorage, Alaska (western hemisphere), we eventually reach the North Pole and then find ourselves in the eastern hemisphere, heading South. If we proceed in the same direction, we'll get to Kuusamo, Finland, then Saint Petersburg, Russia, then Kiev, Ukraine... But that would be travel to the South! We usually do not describe it as the next part of the initial northward route. Consequently, we do not show the part 'past the pole' as an upside-down extension of the map. Hence the map should never show tiles above row number 0 or below row nrows-1.
On the other hand, when we travel along circles of latitude, we smoothly cross the 0 and 180 meridians and switch between the eastern and western hemisphere. So if the camera view covers area on both sides of the left or right edge of the map, we need to continue filling the view with tiles from the other end of the tiles array. If we use a map scaled down, so that it is smaller than the viewport, we may even need to iterate that more than once!
The left edge of the camera view corresponds to the 'longitude' xleft = xcam - wcam/2 and the right one to xrght = xcam + wcam/2. So we can step across the viewport by the tile's width to find the appropriate columns and show them:
x = xleft
repeat
show a column at x
x = x + wtile
until x >= xrght
The 'show a column at x' part requires finding the appropriate column, then iterating down the column to show the corresponding tiles. Let's find out which tiles fit the camera view:
ytop = ycam - hcam/2
ybot = ycam + hcam/2
y=ytop
repeat
show a tile at x,y
y = y + htile
until y >= ybot
To show a tile we need to locate the appropriate tile in the array and then draw it at the appropriate position in the camera view.
However, we treat the column number differently from the row number: columns wrap while rows do not:
row = y/htile
if (0 <= row) and (row < nrows) then
col = (x/wtile) mod ncols
xtile = x - (x mod wtile)
ytile = y - (y mod htile)
display tile[row][col] at xtile,ytile
endif
Of course xtile and ytile are our map-scale longitude and latitude, so the 'display tile at' routine must transform them to the camera view coordinates by subtracting the camera position from them:
xinview = xtile - xcam
yinview = ytile - ycam
and then apply the resulting values relative to the camera view's center on the display device (screen).
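Putting the pieces together, here is a minimal C++ sketch of the whole fill loop. It assumes the variables defined earlier and a displayTile(row, col, xinview, yinview) drawing routine supplied by your engine (hypothetical):

// Non-negative modulo and floor division, needed because C++'s / and %
// truncate toward zero for negative operands.
int mod(int a, int b)      { return (a % b + b) % b; }
int floorDiv(int a, int b) { return (a - mod(a, b)) / b; }

void displayTile(int row, int col, int xinview, int yinview); // your engine

// Fills the camera view with tiles: columns wrap around, rows do not.
void drawVisibleTiles(int xcam, int ycam, int wcam, int hcam,
                      int wtile, int htile, int ncols, int nrows)
{
    int xleft = xcam - wcam / 2, xrght = xcam + wcam / 2;
    int ytop  = ycam - hcam / 2, ybot  = ycam + hcam / 2;

    // Start at the tile boundary at or before the view's left/top edge;
    // x and y then stay tile-aligned, so xtile == x and ytile == y.
    for (int x = xleft - mod(xleft, wtile); x < xrght; x += wtile) {
        for (int y = ytop - mod(ytop, htile); y < ybot; y += htile) {
            int row = floorDiv(y, htile);             // rows do not wrap
            if (row < 0 || row >= nrows) continue;
            int col = mod(floorDiv(x, wtile), ncols); // columns wrap
            displayTile(row, col, x - xcam, y - ycam);
        }
    }
}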
Another level of complication appears if you want to implement zooming the view in and out, that is, dynamic scaling of the map, but I'm sure you'll work out yourself which calculations need the zoom factor applied for correct results. :)

Find angle between two points

I am trying to make an image move towards my mouse pointer. Basically, I get the angle between the points, and move along the x axis by the cosine of the angle, and move along the y axis the sine of the angle.
However, I don't have a good way of calculating the angle. I get the difference in x and the difference in y, and use the arctangent of Δy/Δx. The resulting angle in quadrant 1 is correct, but the other three quadrants are wrong. Quadrant 2 ranges from -1 to -90 degrees. Quadrant 3 is always equal to quadrant 1, and quadrant 4 always equals quadrant 2. Is there an equation I can use to find the angle between the two points across the full 0-360 degrees?
Note: I cannot use atan2(), and I do not know what a vector is.
// This working code is for Windows HDC mouse coordinates and gives back the
// angle in the form Windows uses. It assumes point 1 is your origin point.
// Tested and working on Visual Studio 2017 using two mouse coordinates in HDC.

#include <cmath>

// Code to call our function:
// float angler = get_angle_2points(Point1X, Point1Y, Point2X, Point2Y);

// Takes two window coordinates (points), treats point 1 as the origin, and
// calculates the angle of point 2 around it relative to the x-axis.
// This can be used for any HDC window, e.g. two mouse points.
float get_angle_2points(int p1x, int p1y, int p2x, int p2y)
{
    // Make point 1 the origin and make point 2 relative to it (point2 - point1).
    // The angle is unchanged; we have only translated the pair to the origin.
    int deltaY = p2y - p1y;
    int deltaX = p2x - p1x;
    float angleInDegrees = std::atan2((float)deltaY, (float)deltaX)
                           * 180.0f / 3.14159265f;
    angleInDegrees *= -1; // The y-axis points down in window coordinates, so
                          // invert the angle.
    // Angle returned as:
    //           90
    //     135        45
    //
    //  180   Origin     0
    //
    //    -135        -45
    //          -90
    // The returned angle can be used in the C++ window function for text angle
    // alignment, e.g. plf->lfEscapement = angle * 10;
    return angleInDegrees;
}
The answers regarding atan2 are correct. (For reference, an image of atan2 built from Scratch blocks appeared here; image omitted.)
If you're unable to use atan2() directly, you can implement its internal calculation yourself:
atan2(y,x) = atan(y/x)        if x > 0
             atan(y/x) + π    if x < 0 and y >= 0
             atan(y/x) - π    if x < 0 and y < 0
             +π/2             if x = 0 and y > 0
             -π/2             if x = 0 and y < 0
             undefined        if x = 0 and y = 0
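A minimal C++ sketch of that piecewise definition (the function name atan2_deg is mine; it returns the angle in degrees in [0, 360), as the question asks):

#include <cmath>

// atan2 built from plain atan, returned in degrees in [0, 360).
float atan2_deg(float y, float x)
{
    const float PI = 3.14159265f;
    float rad;
    if (x > 0.0f)                   rad = std::atan(y / x);
    else if (x < 0.0f && y >= 0.0f) rad = std::atan(y / x) + PI;
    else if (x < 0.0f)              rad = std::atan(y / x) - PI; // y < 0
    else if (y > 0.0f)              rad = PI / 2.0f;   // x == 0
    else if (y < 0.0f)              rad = -PI / 2.0f;  // x == 0
    else                            rad = 0.0f;        // undefined at (0,0)
    float deg = rad * 180.0f / PI;
    return deg < 0.0f ? deg + 360.0f : deg; // map (-180, 180] to [0, 360)
}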
This is the code I use, and it seems to work fine (note it measures the angle from the vertical axis, since it takes atan of x over y):
atan(x/y) + (180*(y<0))
where x is the difference between the points' x-coordinates (x2 - x1), and y is the difference between their y-coordinates taken in reverse (y1 - y2):
atan((x2-x1)/(y1-y2)) + (180*((y1-y2)<0))

Rotate 3D vectors on 2D plane

I have two Vec3s, Camera Forward and Turret Forward. Both of these vectors are on different planes where Camera Forward is based on a free-look camera and Turret Forward is determined by the tank it sits on, the terrain the tank is on, etc. Turret Up and Camera Up are rarely ever going to match.
My issue is as follows: I want the turret to be able to rotate using a fixed velocity (44 degrees per second) so that it always converges with the direction that the camera is pointed. If the tank is at a weird angle where it simply cannot converge with the camera, it should find the closest place and sit there instead of jitter around indefinitely.
I cannot for the life of me seem to solve this problem. I've tried several methods I found online that always produce weird results.
local forward = player.direction:rotate(player.turret, player.up)
local side = forward:cross(player.up)
local projection = self.camera.direction:dot(forward) * forward + self.camera.direction:dot(side) * side
local angle = math.atan2(forward.y, forward.x) - math.atan2(projection.y, projection.x)
if angle ~= 0 then
local dt = love.timer.getDelta()
if angle <= turret_speed * dt then
player.turret_velocity = turret_speed
elseif angle >= -turret_speed * dt then
player.turret_velocity = -turret_speed
else
player.turret_velocity = 0
player.turret = player.turret + angle
end
end
I would do it differently
obtain camera direction vector c in GCS (global coordinate system)
I use Z axis as viewing axis so just extract z axis from transform matrix
for more info look here understanding transform matrices
obtain turret direction vector t in GCS
the same as bullet 1.
compute rotated turret direction vectors in both directions
t0=rotation(-44.0deg/s)*t
t1=rotation(+44.0deg/s)*t
now compute the dot products
a =dot(c,t)
a0=dot(c,t0)
a1=dot(c,t1)
determine turret rotation
if max(a0,a,a1)==a0 rotate(-44.0deg/s)
if max(a0,a,a1)==a1 rotate(+44.0deg/s)
[Notes]
this should converge to the desired direction
the angle step should be scaled by the time interval used for the update
you can use any common coordinate system for steps 1 and 2, not just the GCS
in this case the dot product is cos(angle between vectors), because both c and t are unit vectors (if taken from a standard transform matrix)
so if cos(angle) == 1, the directions are the same
but your camera can be rotated about a different axis, so just find the maximum of cos(angle)
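A minimal C++ sketch of that decision, assuming glm and unit-length direction vectors; t0 and t1 are the current turret direction t pre-rotated by minus and plus one step (44 deg/s * dt), as in the steps above:

#include <glm/glm.hpp>

// Returns -1, 0 or +1: the sign of the rotation step that brings the
// turret direction closest to the camera direction this frame.
float turretTurnSign(const glm::vec3& c,   // camera direction (unit)
                     const glm::vec3& t,   // current turret direction (unit)
                     const glm::vec3& t0,  // t rotated by -step
                     const glm::vec3& t1)  // t rotated by +step
{
    float a  = glm::dot(c, t);   // cos(angle) if we stay put
    float a0 = glm::dot(c, t0);  // cos(angle) after rotating by -step
    float a1 = glm::dot(c, t1);  // cos(angle) after rotating by +step
    if (a0 > a && a0 >= a1) return -1.0f;
    if (a1 > a)             return +1.0f;
    return 0.0f;                 // as close as we can get; stop turning
}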
After some more research and testing, I ended up with the following solution. It works swimmingly!
function Gameplay:moved_axisright(joystick, x, y)
if not self.manager.id then return end
local turret_speed = math.rad(44)
local stick = cpml.vec2(-x, -y)
local player = self.players[self.manager.id]
-- Mouse and axis control camera view
self.camera:rotateXY(stick.x * 18, stick.y * 9)
-- Get angle between Camera Forward and Turret Forward
local fwd = cpml.vec2(0, 1):rotate(player.orientation.z + player.turret)
local cam = cpml.vec2(1, 0):rotate(math.atan2(self.camera.direction.y, self.camera.direction.x))
local angle = fwd:angle_to(cam)
-- If the turret is not aligned with the camera, adjust it
if math.abs(angle) > 0 then
local function new_angle(direction)
local dt = love.timer.getDelta()
local velocity = direction * turret_speed * dt
return cpml.vec2(0, 1):rotate(player.orientation.z + player.turret + velocity):angle_to(cam)
end
-- Rotate turret into the correct direction
if new_angle(1) < 0 then
player.turret_velocity = turret_speed
elseif new_angle(-1) > 0 then
player.turret_velocity = -turret_speed
else
-- If rotating the turret a full frame will overshoot, set turret to camera position
-- atan2 starts from the left and we need to also add half a rotation. subtract player orientation to convert to local space.
player.turret = math.atan2(self.camera.direction.y, self.camera.direction.x) + (math.pi * 1.5) - player.orientation.z
player.turret_velocity = 0
end
end
local direction = cpml.mat4():rotate(player.turret, { 0, 0, 1 }) * cpml.mat4():rotate(player.orientation.z, { 0, 0, 1 })
player.turret_direction = cpml.vec3(direction * { 0, 1, 0, 1 })
end

How to calculate the z-distance of a camera to view an image at 100% of its original scale in a 3D space

How can one calculate the camera distance from an object in 3D space (an image in this case) such that the image appears at its original pixel width?
Am I right in assuming that this is possible given the aspect ratio of the camera, fov, and the original width/height of the image in pixels?
(In case it is relevant, I am using THREE.js in this particular instance).
Thanks to anyone who can help or lead me in the right direction!
Thanks everyone for all the input!
After doing some digging and then working out how this all fits into the exact problem I was trying to solve with THREE.js, this was the answer I came up with in JavaScript as the target Z distance for displaying things at their original scale:
var vFOV = this.camera.fov * (Math.PI / 180); // convert VERTICAL fov to radians
var targetZ = window.innerHeight / (2 * Math.tan(vFOV / 2));
I was trying to figure out which one to mark as the answer but I kind of combined all of them into this solution.
Trigonometrically:
A line segment of length l parallel to the view plane, at a perpendicular distance n from the camera, will subtend arctan(l/n) degrees at the camera. You can arrive at that result by simple trigonometry.
Hence, if your field of view in the direction of the line is q degrees and amounts to p pixels, the segment will occupy p*arctan(l/n)/q pixels.
So, using y as the output number of pixels:
y = p*arctan(l/n)/q
y*q/p = arctan(l/n)
l/tan(y*q/p) = n
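As a quick C++ sketch (solving the last line for n, the distance at which a segment of length l covers y pixels; the function name is mine):

#include <cmath>

// Distance n at which a segment of length l, parallel to the view plane,
// covers y pixels, given a field of view of q (radians) spanning p pixels.
double distanceForPixelCoverage(double l, double y, double q, double p)
{
    return l / std::tan(y * q / p);
}

// e.g. to show a 512-unit-wide quad at 512 px in a 60-degree view that is
// 1024 px wide: n = distanceForPixelCoverage(512, 512, 60 * 3.14159265 / 180, 1024);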
Linear algebra:
In a camera with a field-of-view of 90 degrees and a viewport of 2w pixels wide, the projection into screen space is equivalent to:
x' = w - w*x/z
When the segment is parallel to the view plane, its on-screen length is the difference between two such x' values, so by normal associativity and commutativity rules the constant w cancels:
l' = w*l/z
Hence:
z = w*l/l'
If your field of view is actually q degrees rather than 90 then you can use the cotangent to scale appropriately.
In your original question you said that you're using css3D. I suggest that you do the following:
Set up an orthographic camera with fov = 1..179 degrees, where left = screenWidth / 2, right = screenWidth / - 2, top = screenHeight / 2, bottom = screenHeight / - 2. Near and far planes do not affect CSS3D rendering as far as I can tell from experience.
camera = new THREE.OrthographicCamera(left, right, top, bottom, near, far);
camera.fov = 75;
now you need to calculate the distance between the camera and the object in such a way that, when the object is projected using the camera settings above, it has 1:1 coordinate correspondence on screen. This can be done as follows:
var camscale = Math.tan(( camera.fov / 2 ) / 180 * Math.PI);
var camfix = screenHeight / 2 / camscale;
place your div at position x, y, z
set the camera's position to 0, 0, z + camfix
This should give you a 1:1 coordinate correspondence between the rendered result and your pixel values in the CSS / div styles. Remember that the origin is in the center and the object's position is the object's center, so you need to adjust if you want coordinates measured from the top-left corner, for example:
object.x = ( screenWidth - objectWidth ) / 2 + positionLeft
object.y = ( screenHeight - objectHeight ) / 2 + positionTop
object.z = 0
I hope this helps; I was struggling with the same thing (exact control of a CSS3D scene) but managed to figure out that the orthographic camera plus a viewport-size-adjusted distance from the object did the trick. Don't alter the camera rotation or its x and y coordinates; just fiddle with the z and you're safe.

Radius of projected Sphere

I want to refine a previous question:
How do i project a sphere onto the screen?
(2) gives a simple solution:
approximate radius on screen[CLIP SPACE] = world radius * cot(fov / 2) / Z
with:
fov = field of view angle
Z = z distance from camera to sphere
result is in clipspace, multiply by viewport size to get size in pixels
Now my problem is that I don't have the FOV. Only the view and projection matrices are known. (And the viewport size, if that helps.)
Anyone knows how to extract the FOV from the projection matrix?
Update:
This approximation works better in my case:
float angularRadius = glm::atan(radius / distance);
float radiusPixels = angularRadius * glm::max(viewPort.width, viewPort.height) / glm::radians(fov);
I'm a bit late to this party, but I came across this thread when I was looking into the same problem. I spent a day looking into this and worked through some excellent articles I found here:
http://www.antongerdelan.net/opengl/virtualcamera.html
I ended up starting with the projection matrix and working backwards. I got the same formula you mention in your post above. ( where cot(x) = 1/tan(x) )
radius_pixels = (radius_worldspace / {tan(fovy/2) * D}) * (screen_height_pixels / 2)
(where D is the distance from camera to the target's bounding sphere)
I'm using this approach to determine the radius of an imaginary trackball that I use to rotate my object.
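In C++ that formula is a one-liner (names are mine):

#include <cmath>

// Projected radius in pixels of a sphere of world-space radius `radius`
// at distance D from the camera, given fovy (radians) and screen height.
float radiusInPixels(float radius, float D, float fovy, float screenHeightPx)
{
    return (radius / (std::tan(fovy / 2.0f) * D)) * (screenHeightPx / 2.0f);
}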
Btw Florian, you can extract the fovy from the Projection matrix as follows:
If you take the Sy component from the Projection matrix as shown here:
Sx 0 0 0
0 Sy 0 0
0 0 Sz Pz
0 0 -1 0
where Sy = near / range
and where range = tan(fovy/2) * near
(you can find these definitions at the page I linked above)
if you substitute range in the Sy eqn above you get:
Sy = 1 / tan(fovy/2) = cot(fovy/2)
rearranging:
tan(fovy/2) = 1 / Sy
taking arctan (the inverse of tan) of both sides we get:
fovy/2 = arctan(1/Sy)
so,
fovy = 2 * arctan(1/Sy)
Not sure if you still care - it's been a while! - but maybe this will help someone else.
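As a sketch of that extraction in C++, assuming a glm::mat4 perspective projection matrix (glm is column-major, so proj[1][1] is the Sy element):

#include <cmath>
#include <glm/glm.hpp>

// Recovers the vertical field of view (in radians) from a standard
// perspective projection matrix: fovy = 2 * arctan(1 / Sy).
float extractFovy(const glm::mat4& proj)
{
    return 2.0f * std::atan(1.0f / proj[1][1]);
}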
Update: see below.
Since you have the view and projection matrices, here's one way to do it, though it's probably not the shortest:
transform the sphere's center into view space using the view matrix: call the result point C
transform a point on the sphere's surface, e.g. the world-space center plus (r, 0, 0) where r is the sphere's world radius, into view space; call the result point S
compute rv = distance from C to S (in view space)
let point S1 in view coordinates be C + (rv, 0, 0) - i.e. another point on the surface of the sphere in view space, for which the line C -> S1 is perpendicular to the "look" vector
project C and S1 into screen coords using the projection matrix as Cs and S1s
compute screen radius = distance between Cs and S1s
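A C++ sketch of those steps, assuming glm (the function name and parameters are mine):

#include <glm/glm.hpp>

// Screen-space radius in pixels of a sphere with world-space center
// `center` and radius `r`, for the given view/projection and viewport.
float projectedSphereRadius(const glm::mat4& view, const glm::mat4& proj,
                            const glm::vec3& center, float r,
                            float vpWidth, float vpHeight)
{
    // Steps 1-3: transform the center and a surface point into view space.
    glm::vec3 C = glm::vec3(view * glm::vec4(center, 1.0f));
    glm::vec3 S = glm::vec3(view * glm::vec4(center + glm::vec3(r, 0, 0), 1.0f));
    float rv = glm::distance(C, S);          // view-space radius

    // Step 4: a surface point whose offset is perpendicular to the look vector.
    glm::vec3 S1 = C + glm::vec3(rv, 0.0f, 0.0f);

    // Step 5: project both points and divide by w to reach NDC.
    glm::vec4 Cc  = proj * glm::vec4(C, 1.0f);
    glm::vec4 S1c = proj * glm::vec4(S1, 1.0f);
    glm::vec2 Cn  = glm::vec2(Cc)  / Cc.w;
    glm::vec2 S1n = glm::vec2(S1c) / S1c.w;

    // Step 6: NDC spans [-1, 1], so scale by half the viewport size.
    glm::vec2 halfVp(vpWidth * 0.5f, vpHeight * 0.5f);
    return glm::distance(Cn * halfVp, S1n * halfVp);
}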
But yeah, like Brandorf said, if you can preserve the camera variables, like FOVy, it would be a lot easier. :-)
Update:
Here's a more efficient variant on the above: take the inverse of the projection matrix and use it to transform the viewport edges back into view space, so you don't have to project every sphere into screen coordinates.
Even better, do the same with the view matrix and transform the camera frustum back into world space. That would be more efficient for testing many spheres against, though the math is harder to work out.
The answer posted at your link, radiusClipSpace = radius * cot(fov / 2) / Z, where fov is the field-of-view angle and Z is the z-distance to the sphere, definitely works. However, keep in mind that radiusClipSpace must be multiplied by the viewport's width to get a pixel measure. The value measured in radiusClipSpace will be between 0 and 1 if the object fits on the screen.
An alternative solution may be to use the solid angle of the sphere. The solid angle subtended by a sphere in a sky is basically the area it covers when projected to the unit sphere.
The formulae are given at this link but roughly what I'm doing is:
if( (!radius && !distance) || fabsf(radius) > fabsf(distance) )
    ; // NaN conditions: do something special.
float theta = asinf( radius / distance );     // angular radius of the sphere
float sphereSolidAngle = 1.f - cosf( theta ); // not multiplying by 2*PI,
    // since only the ratio below is used
float frustumSolidAngle = ( 1.f - cosf( fovy / 2 ) ) / M_PI; // I cheated here: I assumed
    // the solid angle of the frustum is conical, then divided by PI
    // to turn it into a square (area of unit square = area of unit circle / PI)
float numPxCovered = 768.f * 768.f * sphereSolidAngle / frustumSolidAngle; // 768x768 screen
float radiusEstimate = sqrtf( numPxCovered / (float)M_PI ); // area = pi*r*r
This works out to roughly the same numbers as radius * cot(fov / 2) / Z. If you only want an estimate of the area covered by the sphere's projection in px, this may be an easy way to go.
I'm not sure whether a better estimate of the frustum's solid angle can be found easily. This method involves more computation than radius * cot(fov / 2) / Z.
The FOV is not directly stored in the projection matrix, but rather used when you call gluPerspective to build the resulting matrix.
The best approach would be to simply keep all of your camera variables in their own class, such as a frustum class, whose member variables are used when you call gluPerspective or similar.
It may be possible to get the FOVy back out of the matrix, but the math required eludes me.
