Dynamic Sizing of Circles Along a Spiral - math

I have created a logarithmic spiral in canvas and plotted circles along it. Using your mouse scroll wheel you can zoom in and out of the spiral (which works), but I am having problems updating the size of the circles to match the zoom level... I'm no math expert!
There are two values I am using when trying to calculate the circle radius:
The initial size: This makes circles smaller the further down the spiral they are. I think I have this pretty close.
The growth size: This is the amount that each circle must increase to accurately grow in size as it gets closer to the viewer. Currently circles seem to be the correct size at the beginning and end of the spiral, but are too small in the middle.
I have hacked together some janky math and I'm sure there is an actual formula for this sort of sizing. Any help would be greatly appreciated – I just want the circles to feel "attached" to the spiral and scale appropriately.
Here is the jsFiddle for reference
// a = starting radius of spiral
// b = spiral growth factor (assuming the usual form r = a * e^(b * theta))
// spiralNum = spiral length (100.6)
// timeOffset = scroll position
// node_count_visible = number of total circles
offset = (this.spiralNum - 0.05 - this.node_count_visible) + id + (this.timeOffset / 30);
var initial = Math.exp(b * offset) / 4;
var growth = a / 8.5;
node.radius = initial + growth;
Thank you in advance for any help provided...

I was able to get the effect you are looking for by doing
node.radius = a * Math.exp(b * offset) / 6;
The 6 is an arbitrary divisor to adjust the overall size of the circles.
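For intuition, here is a minimal self-contained sketch (not the original fiddle; the values of a, b, the node spacing and the divisor 6 are assumptions) showing why tying each circle's radius to the local spiral radius keeps the circles "attached": a logarithmic spiral is self-similar, so scaling the positions and the radii by the same zoom factor preserves their relationship.
// Illustrative sketch: radius proportional to the local spiral radius.
// All constants are example values, not taken from the fiddle.
var a = 1;       // starting radius (assumed)
var b = 0.17;    // growth factor (assumed)
var zoom = 2.5;  // zoom factor driven by the scroll wheel
for (var i = 0; i < 40; i++) {
    var theta = i * 0.5;                   // angle of node i along the spiral
    var spiralR = a * Math.exp(b * theta); // r = a * e^(b * theta)
    var x = zoom * spiralR * Math.cos(theta);
    var y = zoom * spiralR * Math.sin(theta);
    // Keeping the radius a fixed fraction of spiralR, scaled by the same zoom,
    // makes each circle grow and shrink with the spiral underneath it.
    var radius = zoom * spiralR / 6;
    // drawCircle(x, y, radius); // hypothetical canvas helper
}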

Related

What should I use for the diameter of the camera rotation around an object

I am using graphics.py to make 3D graphics. Yes, I know it is stupid and super difficult, but that is what makes it fun. Anyway, I use the center of the screen as the origin point to rotate the camera around, and use the window height and width to determine the radius of the orbit you can travel around. When I do this and the window height is different from the width, I get two different-sized circles for rotating around the X axis and the Y axis. What should I do to get the circles the same size?
Current math:
RadiusY = winHeight/2
RadiusX = winWidth/2
RadiusZ = (RadiusX+RadiusY)/2

Perceived width of a decal depending on the rotation angle of the wall

I am creating a raycasting game from scratch using JavaScript canvas.
Part of the challenge (for me) is to decorate walls with random images (pictures). I have already implemented drawing of walls, floors, ceilings and sprites.
While drawing walls, I store for each screen column x the distance to the wall (Z-BUFFER), the height of the wall (H-BUFFER), and the actual coordinates of the pixel in the underlying 2D grid (GRID_BUFFER).
My approach for painting the decals (pictures) on the wall is then the following (after identifying a list of decals that could theoretically be visible):
the distance to the decal's position is calculated (the position is defined as the middle of the grid edge facing the observer)
screen coordinate decalScreenX is calculated based on the transformation matrix from grid coordinates to screen coordinates. This works correctly:
let decalScreenX = Math.floor((RAYCAST.SCREEN_WIDTH / 2) * (1 + CAMERA.transformX /CAMERA.transformDepth));
Then I retrieve the image data for the decal in question and get its width and height
And based on the distance and the observed angle, I calculate the perceived width of the decal. This is where the real issue lies, as I can see that I don't calculate this width completely accurately.
With all this information, it is then easy to calculate the left and right screen coordinates - where to begin and where to end drawing the decal - use H-BUFFER to calculate the height factor, and use GRID_BUFFER to draw only on the grid cell belonging to this decal.
I pictured the width calculation in terms of the decal being rotated from the player's direction vector by an angle, when the player's direction is not opposite to the direction the decal faces (example):
or, if the player's direction is directly opposite to the decal's direction, this angle is 0° (example):
My first approach was to use dot product of the reversed player direction and decal facing direction, thus getting cosine of the angle between vectors and use this as a factor to reduce perceived width:
let CosA = PLAYER.dir.mirror().dot(decal.facingDir);
let widthScale = CosA * (CAMERA.transformDepth / decal.distance);
The problem with this solution is that when the two are perpendicular, the factor is 0 and the decal is not drawn at all; but since the walls are drawn with perspective, this should not be the case. So I began improvising. I defined a CAMERA.minPerspective factor as seen below. The field of vision (FOV) is 70°.
CAMERA.minPerspective = Math.cos(Math.radians((90 + this.FOV) / 2));
My intuition was (as I lack knowledge of perspective and geometry, alas) that for small angles the factor should remain 1, and for angles close to 90° there should be some minimal factor so that the decal remains visible. So I came up with this "improved" code:
let CosA = PLAYER.dir.mirror().dot(decal.facingDir);
let FACTOR = Math.min(1, CosA + CAMERA.minPerspective);
FACTOR = Math.max(FACTOR, CAMERA.minPerspective);
let widthScale = FACTOR * (CAMERA.transformDepth / decal.distance);
This works considerably better, but it has some flaws. Visually, for angles of 0-50° the reduction factor is too great. This can be observed if I use decals of such width that they should cover the complete grid surface (see image below; left of the stairs, the wall underneath is visible - the decal should cover the complete grid, but it doesn't, because the FACTOR is too small).
I have searched Stack Overflow and the rest of the Web for a better solution, but it seems that my lack of geometry knowledge also prevents me from recognizing proper solutions when they are out of this context.
So, please: there are probably deterministic solutions for calculating the perceived width, without running the raycasting phase again, or by using the information I am able to store during the raycasting phase. While JavaScript is used in the code example, I consider this question not to be specific to any programming language.
I have found a solution that retains (or even improves) the simplicity and time complexity of the approach in the question.
I have added two points to the decal definition - leftDrawStart and rightStartDraw. Those are easy to calculate at the point of decal instantiation, based on the real sprite (decal) width and the definition of the grid (block) size. While doing this calculation, I consider leftDrawStart from the camera perspective (not grid coordinates).
When rendering the decal, I use the transformation matrix (as in the question; code example below) to calculate screen coordinates for leftDrawStart and rightStartDraw from their grid coordinates:
transform(spritePos) {
    let invDet = 1.0 / (CAMERA.dir.x * PLAYER.dir.y - PLAYER.dir.x * CAMERA.dir.y);
    CAMERA.transformX = invDet * (PLAYER.dir.y * spritePos.x - PLAYER.dir.x * spritePos.y);
    CAMERA.transformDepth = invDet * (-CAMERA.dir.y * spritePos.x + CAMERA.dir.x * spritePos.y);
}
I distinguish between the calculated absolute drawStartX and drawEndX and their adjusted values, clamped so that they fit the screen boundaries (or I return from the function if the decal is completely offscreen).
Finally, the perceived width of the decal is not even required, since the texture position can be calculated from a ratio of differences: the current drawing stripe minus the absolute drawing start, divided by the absolute drawing end minus the absolute drawing start:
let texX = (((stripe - drawStartX_abs) / (drawEndX_abs - drawStartX_abs)) * imageData.width) | 0;
The approach is completely accurate and considerably faster compared to an approach where decal casting would be incorporated into the raycasting step.
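To make the rendering flow concrete, here is a hedged sketch of the per-stripe loop implied above (screenX, ZBUFFER and drawStripe are assumed helper names that do not appear in the answer; the clamping and the texX calculation follow the steps described):
// Hypothetical sketch of the decal rendering described above.
// screenX(), ZBUFFER and drawStripe() are assumed helpers, not from the answer.
function renderDecal(decal, imageData) {
    // Project both precomputed decal endpoints to screen space (via transform() above).
    let drawStartX_abs = screenX(decal.leftDrawStart);
    let drawEndX_abs = screenX(decal.rightStartDraw);
    if (drawEndX_abs < 0 || drawStartX_abs >= RAYCAST.SCREEN_WIDTH) return; // fully offscreen

    // Clamp to the screen while keeping the absolute values for texturing.
    let drawStartX = Math.max(0, Math.floor(drawStartX_abs));
    let drawEndX = Math.min(RAYCAST.SCREEN_WIDTH - 1, Math.ceil(drawEndX_abs));

    for (let stripe = drawStartX; stripe <= drawEndX; stripe++) {
        if (decal.distance > ZBUFFER[stripe]) continue; // occluded by a nearer wall
        // Texture x from the ratio of differences - no perceived width needed.
        let texX = (((stripe - drawStartX_abs) / (drawEndX_abs - drawStartX_abs)) * imageData.width) | 0;
        drawStripe(stripe, texX, imageData, decal.distance);
    }
}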

Centering Perspective Camera on two objects by panning

In Unity, I have a perspective camera, and I've got two transforms in my scene that I want the camera to perfectly center on screen. The camera will pan left/right/up/down to the appropriate location.
So far my approach has been to convert the transform positions to screen positions using Camera.WorldToScreenPoint, and taking their average to find the screen midpoint. From there, I know I want to pan the camera a certain number of units toward that midpoint. What I'm having trouble with is figuring out the formula for deciding how much to pan (or, maybe this isn't even the preferred way to determine this).
I think your approach is great. Let me expand the idea.
So this is your screen :D. The blue circle is where you want your objects to be. There are two scenarios: we will use the green dots as an example of the zooming scenario, and the red dots for the panning scenario.
The trick is, you want to keep the dots as close as possible to the circumference of the blue circle.
Let's say you get the red dots as your objects' screen positions. You have to shift them towards the center. Let's calculate CenterOfDots, then calculate its difference from CenterOfBlueCircle. That's how much pan you need, in screen coordinates.
So you have calculated the pan. Now you want to know how much you need to zoom. Let's say you get the green dots this time. Calculate DistanceBetweenDots and compare it to DiameterOfBlueCircle. You want them to be the same, so their difference is how much zoom you need, in screen coordinates.
There comes the tricky part. Now you know how much to pan and zoom in screen space, but you need to move the camera in world space. Trying to solve it using geometry magic is fine. But I hate headaches :D
So instead, I would iteratively shift my camera using the data calculated above. Just shift the camera along its local x-y axes towards HowMuchPan, multiplied by a manually chosen coefficient PanSpeed. This will give a smooth transition to the camera. The same goes for the zoom: this time you shift the camera along its local z axis using HowMuchZoom multiplied by your manually chosen coefficient ZoomSpeed.
Hope it helps. Have fun :)
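A rough engine-agnostic sketch of that iterative idea (plain JavaScript with illustrative names, not Unity API; in Unity these shifts would be transform.Translate calls in local space):
// Engine-agnostic sketch of the iterative pan/zoom described above.
// camera.right/up/forward are assumed unit vectors of its local axes.
function updateCamera(camera, howMuchPan, howMuchZoom, panSpeed, zoomSpeed) {
    // Pan: nudge the camera along its local x-y axes by the screen-space error.
    camera.position.x += (camera.right.x * howMuchPan.x + camera.up.x * howMuchPan.y) * panSpeed;
    camera.position.y += (camera.right.y * howMuchPan.x + camera.up.y * howMuchPan.y) * panSpeed;
    camera.position.z += (camera.right.z * howMuchPan.x + camera.up.z * howMuchPan.y) * panSpeed;
    // Zoom: nudge along the local z axis until DistanceBetweenDots matches the circle.
    camera.position.x += camera.forward.x * howMuchZoom * zoomSpeed;
    camera.position.y += camera.forward.y * howMuchZoom * zoomSpeed;
    camera.position.z += camera.forward.z * howMuchZoom * zoomSpeed;
}
// Called once per frame, this converges smoothly instead of solving the geometry exactly.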
I figured out the mathy approach!
For panning, you want to figure out the average screen position of your objects (i.e. the middle). Then you want to generate a couple of world points against an arbitrary plane some distance away from the camera. The difference between these points is how much to pan the camera:
// cam is your Camera reference (e.g. Camera.main)
Vector3 center = cam.ScreenToWorldPoint(new Vector3(Screen.width * 0.5f, Screen.height * 0.5f, 10f));
Vector3 mid = cam.ScreenToWorldPoint(new Vector3(averageScreenPoint.x, averageScreenPoint.y, 10f));
cam.transform.Translate(mid - center);
Zooming is a bit more complicated, but very similar to the panning approach. You want to use Camera.ScreenToWorldPoint against an arbitrary plane, but you want to do this for 4 points, which will help you figure out a scale to apply to your camera's z position. Pseudocode -
Vector3 screenMin = cam.ScreenToWorldPoint(new Vector3(0f, 0f, 10f));
Vector3 screenMax = cam.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height, 10f));
Vector3 objMin = cam.ScreenToWorldPoint(new Vector3(screenPosMin.x, screenPosMin.y, 10f));
Vector3 objMax = cam.ScreenToWorldPoint(new Vector3(screenPosMax.x, screenPosMax.y, 10f));
Vector3 screenDiff = screenMax - screenMin;
Vector3 objDiff = objMax - objMin;
Vector3 scale = new Vector3(objDiff.x / screenDiff.x, objDiff.y / screenDiff.y, 0f);
// pick the ratio that best fits both objects on screen
float ratio = scale.x < scale.y ? scale.y : scale.x;
// localPosition is a struct, so modify a copy and assign it back
Vector3 camPos = cam.transform.localPosition;
camPos.z = Mathf.Min(ZoomMin, camPos.z * ratio);
cam.transform.localPosition = camPos;

Rotating a Rectangle around a Grid To Calculate the Player's View

I have a player who can rotate and move around a 2D Cartesian grid, and I need to calculate where to draw the enemies on screen.
The player should have a certain viewpoint, which is the size of the screen in front of the direction the player is facing (and a little behind).
I've tried tons of ways to implement this, messing with bipolar coordinates and trig, but I haven't been able to solve the problem of calculating where on the screen the enemies should be drawn.
The problem is best represented in the form of a graph, with green being the viewpoint - a rectangle that can rotate and move around the grid - and dots representing the player and an enemy.
So I need to work out the positions of the enemies on screen relative to the player's rotation and position.
If you're going for a Doom-like perspective, you should imagine the viewing area as a parallelogram, rather than a rectangle. Imagine that behind your character is a camera man with its own position and angle.
The enemy's screen position is related to the angle between the camera and the enemy.
//indicates where on the screen an enemy should be drawn.
//-1 represents the leftmost part of the screen,
//and 1 is the rightmost.
//Anything larger or smaller is off the edge of the screen and should not be drawn.
float calculateXPosition(camera, enemy){
    //the camera man can see anything 30 degrees to the left or right of its line of sight.
    //This number is arbitrary; adjust to your own tastes.
    frustumWidth = 60;
    //the angle between the enemy and the camera, in relation to the x axis.
    //atan2 returns radians, so convert to degrees to match frustumWidth
    //(camera.angle is assumed to be measured in degrees too).
    angle = atan2(enemy.y - camera.y, enemy.x - camera.x) * 180 / PI;
    //the angle of the enemy, in relation to the camera's line of sight.
    //If the enemy is on-camera, this should be less than frustumWidth/2.
    objectiveAngle = camera.angle - angle;
    //scale down from [-frustumWidth/2, frustumWidth/2] to [-1, 1]
    return objectiveAngle / (frustumWidth / 2);
}
These diagrams visualize what the variables I'm using here represent:
Once you have an "X position" in the range of [-1, 1], it should be easy enough to convert that into pixel coordinates. For example, if your screen is 500 pixels wide, you can do something like ((calculateXPosition(camera, enemy) + 1) / 2) * 500;
Edit:
You can do something similar to find the y-coordinate of a point, based on the point's height and distance from the camera.
(I'm not sure how you should define the height of the enemy and camera - any number should be fine as long as they somewhat match the scale set by the x and y dimensions of the cartesian grid.)
//this gives you a number between -1 and 1, just as calculateXPosition does.
//-1 is the bottom of the screen, 1 is the top.
float getYPosition(pointHeight, cameraHeight, distanceFromCamera){
    frustumWidth = 60;
    relativeHeight = pointHeight - cameraHeight;
    //as above, atan2 returns radians, so convert to degrees.
    angle = atan2(relativeHeight, distanceFromCamera) * 180 / PI;
    return angle / (frustumWidth / 2);
}
You can call the method twice to determine the y position of both the top and the bottom of the enemy:
distanceFromCamera = sqrt((enemy.x - camera.x)^2 + (enemy.y - camera.y)^2);
topBoundary = convertToPixels(getYPosition(enemy.height, camera.height, distanceFromCamera));
bottomBoundary = convertToPixels(getYPosition(0, camera.height, distanceFromCamera));
That should give you enough information to properly scale and position the enemy's sprite.
(Aside: the frustumWidths in the two methods don't need to be the same - in fact, they should be different if the screen you are drawing to is rectangular. The ratio of the x frustum to the y frustum should equal the ratio of the screen's width to its height.)
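Putting the two helpers together, here is an illustrative JavaScript version of the whole projection (the screen size, frustum angles and example entities are assumptions, not values from the answer):
// Illustrative JavaScript version of the projection described above.
// Screen size and frustum angles are assumed example values (ratio 500:400).
const SCREEN_W = 500, SCREEN_H = 400;
const FRUSTUM_X = 60, FRUSTUM_Y = 48; // degrees

function toDegrees(rad) { return rad * 180 / Math.PI; }

function projectEnemy(camera, enemy) {
    // Horizontal: angle between the camera's line of sight and the enemy.
    const angle = toDegrees(Math.atan2(enemy.y - camera.y, enemy.x - camera.x));
    const xNorm = (camera.angle - angle) / (FRUSTUM_X / 2); // [-1, 1] across the screen
    const dist = Math.hypot(enemy.x - camera.x, enemy.y - camera.y);

    // Vertical: top and bottom of the sprite from the relative heights.
    const topNorm = toDegrees(Math.atan2(enemy.height - camera.height, dist)) / (FRUSTUM_Y / 2);
    const bottomNorm = toDegrees(Math.atan2(0 - camera.height, dist)) / (FRUSTUM_Y / 2);

    return {
        x: ((xNorm + 1) / 2) * SCREEN_W,      // pixel column
        top: ((1 - topNorm) / 2) * SCREEN_H,  // screen y grows downward
        bottom: ((1 - bottomNorm) / 2) * SCREEN_H,
        visible: Math.abs(xNorm) <= 1
    };
}

// Example: a camera at the origin facing along +x (angle 0, in degrees).
console.log(projectEnemy({x: 0, y: 0, angle: 0, height: 1.5}, {x: 10, y: 2, height: 2}));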

3D, AS3, Flex - Convert degrees of rotation to visible height

I need to know what the visible height of a display object will be after I change its rotationX value.
I have an application that allows users to lay out a floor in 3D space. I want the size of the floor to automatically stretch after a 3D rotation so that it always covers a certain area.
Anyone know a formula for working this out?
EDIT: I guess what I am really trying to do is convert degrees to pixels.
On a 2D plane of, say, 100 x 100 pixels, a -10 degree change to rotationX means that the plane has a gap at the top where it is no longer visible. I want to know how many pixels this gap will be so that I can stretch the plane.
In Flex, the value of the display object's height property remains the same both before and after applying the rotation, which may in fact be a bug.
EDIT 2: There must be a general math formula to work this out rather than something Flash/Flex-specific. When viewing an object in 3D space, if the object rotates backwards (the top of the object somersaults away from the viewer), what would the new visible height be based on the degrees of rotation? This could be in pixels, metres, cubits or whatever.
I don't have a test case, but off the top of my head I'd guess something like:
var d:DisplayObject;
var rotationRadians:Number = d.rotationX * Math.PI / 180;
var visibleHeight:Number = d.height * Math.cos(rotationRadians);
This doesn't take any other transformations into account, though.
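As a sanity check against the 100 x 100 pixel example from the question, that guess predicts a gap of about 1.5 pixels at -10 degrees (ignoring perspective, which would make the exact gap depend on the viewing distance):
// Orthographic approximation for the question's 100 x 100 plane.
var height = 100;
var rotationX = -10; // degrees
var rotationRadians = rotationX * Math.PI / 180;
var visibleHeight = height * Math.cos(rotationRadians); // ~98.48
var gap = height - visibleHeight;                       // ~1.52 pixels
console.log(visibleHeight, gap);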
Have you tried using the object's bounding rectangle and testing that?
var dO:DisplayObject = new DisplayObject();
dO.rotation = 10;
var rect:Rectangle = dO.getRect(dO.parent); // getRect needs a target coordinate space
// rect.topLeft.y is now the new top point.
// rect.width is the new width.
// rect.height is the new height.
As to the floor, I would need more information, but have you tried setting floor.percentWidth = 100? That might work.
Have you checked DisplayObject.transform.pixelBounds? I haven't tried it, but it might be more likely to take the rotation into account.
Rotation actually changes the DisplayObject's axes (i.e. the x and y axes are rotated). That is why you are not seeing the difference in height. So to get the visual height and y, you might try this:
var dO:DisplayObject = new DisplayObject();
addChild(dO);
var rect1:Rectangle = dO.getRect(dO.parent);
dO.rotation = 10;
var rect2:Rectangle = dO.getRect(dO.parent);
rect1 and rect2 should be different in this case. If you want to check the visual coordinates of dO, then just replace dO.parent with root.
