Fixing "hiccups" in ground-based collision code? - grid

I've written some code which lets an arbitrarily sized rectangle collide with a grid-based terrain setup (for a platformer game). The way I do it is something like this:
For each tile the rectangle intersects with, do:
Calculate the primary axis that this tile is on with respect to the rectangle
Calculate the interpenetration of this tile into the rectangle along the primary axis (factoring in previous position offsets from other tiles)
If this tile is solid, add that interpenetration to a total collision resolution vector
Adjust the rectangle's position by the total calculated collision resolution vector
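In code, the per-tile loop looks roughly like this (a sketch only; the tile size and the tiles_overlapping / is_solid helpers are placeholders for your own grid queries):

TILE = 32  # assumed tile edge length

def resolve_against_grid(rect, tiles_overlapping, is_solid):
    # rect is (x, y, w, h); returns the rect moved out of all solid tiles it overlaps
    x, y, w, h = rect
    total_dx, total_dy = 0.0, 0.0              # accumulated resolution vector
    for tx, ty in tiles_overlapping(rect):
        if not is_solid(tx, ty):
            continue
        # penetration along each axis, factoring in the offset applied so far
        push_pos_x = (tx + 1) * TILE - (x + total_dx)        # push rect towards +x
        push_neg_x = (x + total_dx + w) - tx * TILE          # push rect towards -x
        push_pos_y = (ty + 1) * TILE - (y + total_dy)        # push rect towards +y
        push_neg_y = (y + total_dy + h) - ty * TILE          # push rect towards -y
        dx = push_pos_x if push_pos_x < push_neg_x else -push_neg_x
        dy = push_pos_y if push_pos_y < push_neg_y else -push_neg_y
        # "primary axis" = the axis with the smaller overlap
        if abs(dx) < abs(dy):
            total_dx += dx
        else:
            total_dy += dy
    return (x + total_dx, y + total_dy, w, h)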
This works just fine, except that I run into random "hang-ups": as my rectangle gets pulled into the ground just over the border of two tiles, my code decides that it needs to resolve the collision with this new tile by pushing along the X axis, which stops the rectangle's motion unless it is manually pushed out of the terrain to get over it.
I've tried only resolving the collision on one axis at a time (ignoring any X-axis resolution if the Y-axis resolution is larger, and vice versa), but that results in jittering when the rectangle is pressed into a corner (a situation that actually needs both axes resolved at once).
In short, what method can I use to fix both of these problems at once?

This is a very hard problem because in some cases it involves an effectively unbounded number of interactions.
To trade accuracy for speed:
1. Add an interaction counter to every object (rectangle), and reset all counters to zero before collision detection.
2. If any collision is detected, increment the counters of all objects involved in that collision.
3. If a counter exceeds a limit value, stop computing interactions for that object.
Beware that this approach can also create some hiccups if pushed to its limit, but it will not hang up.
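A rough sketch of that counter idea (detect_collisions and resolve_pair are hypothetical helpers):

MAX_INTERACTIONS = 8   # tune to trade accuracy for speed

def physics_step(objects, detect_collisions, resolve_pair):
    for obj in objects:
        obj.interaction_count = 0                 # 1. reset every counter before detection
    for a, b in detect_collisions(objects):
        a.interaction_count += 1                  # 2. count each detected collision
        b.interaction_count += 1
        if a.interaction_count > MAX_INTERACTIONS or b.interaction_count > MAX_INTERACTIONS:
            continue                              # 3. past the limit: stop computing interactions
        resolve_pair(a, b)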

Related

Rendering highly granular and "zoomed out" data

There was a GIF on the internet where someone used some sort of CAD software and drew multiple vector pictures in it. On the first frame they zoom in on a tiny dot, revealing a whole different vector picture at another scale, and then they proceed to zoom in further on another tiny dot, revealing another detailed picture, repeating several times. Here is the link to the gif.
Or another similar example: imagine you have a time series with a granularity of a millisecond per sample and you zoom out to reveal years' worth of data.
My questions are: how does such finely detailed data get rendered in the end, when a huge amount of data ends up aliased into a single pixel?
Do you have to go through the whole dataset to render that pixel (in the case of the time series, walk through millions of records just to average them into one line; in the case of the CAD drawing, render the whole vector picture and shrink it down to a tiny dot), or are there level-of-detail optimizations that can be applied so that you don't have to?
If so, how do they work, and where can one learn about them?
This is a very well-known problem in game development. In the following I am assuming you are using a scene graph, i.e. a node-based tree of objects.
Typical solutions involve a mix of these techniques:
Level Of Detail (LOD): multiple resolutions of the same model, which are shown or hidden so that only one is "visible" at any time. When to hide and show is usually determined by the distance between camera and object, but you could also include the scale of the object as a factor. Modern 3D/CAD software will sometimes offer automatic "simplification" of models, which can be used as the low-res LOD models.
At the lowest level, you could even just use the object's bounding box. Checking whether a bounding box is in view is only around 1-7 point checks depending on how you check, and you can utilise object parenting for transitive bounding boxes.
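For instance, the distance-based LOD switch could be as simple as this sketch (the thresholds and function name are made up):

def pick_lod(camera_distance, thresholds=(10.0, 50.0, 200.0)):
    # returns 0 for the full-detail model, higher numbers for coarser models,
    # and len(thresholds) for "bounding box only"
    for lod, limit in enumerate(thresholds):
        if camera_distance < limit:
            return lod
    return len(thresholds)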
Clipping: if a polygon is not visible in the viewport at all, there's no need to render it. In the GIF you posted, when the camera zooms in on a new scene, what is left of the larger model is a single polygon in the background.
Re-scaling of world coordinates: as you zoom in, the coordinates of vertices become very small fractional floating point numbers. Given that you want all coordinates to be as precise as possible, and that modern CPUs handle floats with at most 64-bit precision (and often use only 32 bits for better performance), it's a good idea to reset the scaling of the visible objects. What I mean by that is that as your camera zooms in to, say, 1/1000 of the previous view, you can scale up the bigger objects by a factor of 1000 and at the same time adjust the camera position and focal length. Any newly attached small model would use its original scale, thus preserving its precision.
This transition would be invisible to the viewer, but allows you to stay within well-defined 3d coordinates while being able to zoom in infinitely.
On a higher level: As you zoom into something and the camera gets closer to an object, it appears as if the world grows bigger relative to the view. While normally the camera space is moving and the world gets multiplied by the camera's matrix, the same effect can be achieved by changing the world coordinates instead of the camera.
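A sketch of that re-scaling step, assuming hypothetical camera and object structures with position, scale, and zoom fields:

RESCALE_TRIGGER = 1e-3   # re-scale once the view covers less than 1/1000 of the original scale
RESCALE_FACTOR = 1e3

def maybe_rescale_world(camera, objects):
    if camera.zoom >= RESCALE_TRIGGER:
        return
    for obj in objects:
        obj.scale *= RESCALE_FACTOR
        obj.position = tuple(c * RESCALE_FACTOR for c in obj.position)
    camera.position = tuple(c * RESCALE_FACTOR for c in camera.position)
    camera.zoom *= RESCALE_FACTOR   # back near 1.0; what is on screen does not change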
First, you can use caching with tiles, like it's done in cartography. You'll still need to go over all the points once, but after that you'll be able to zoom in and out quite rapidly.
But if you don't have extra memory for the cache (it's not that much, actually, much less than the data itself), or don't have time to go over all the points, you can use a probabilistic approach.
It can be as simple as picking only every other point (or every 10th point, or whatever suits you). That yields decent results for some data. Again, in cartography it works quite well for shorelines, but not so well for houses or administrative borders - anything with a lot of straight lines.
Or you can take a more hardcore probabilistic approach: randomly sample some points, and if, for example, 100 sampled points hit pixel one and only 50 hit pixel two, you can more or less safely assume that if you kept sampling, pixel one would still be about twice as likely to be hit as pixel two. So you can just stop there and draw pixel one with a color twice as heavy.
Also consider how much data you can and want to put into a pixel. If you draw a pixel in grayscale, there are only 256 possible values, and you don't need to be more precise than that. And if you're going to draw a pixel in full color, you still need to ask yourself: will anyone notice the difference between something like rgb(123,12,54) and rgb(123,11,54)?
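The random-sampling version might look something like this sketch (to_pixel is a placeholder for your own data-to-pixel mapping):

import random

def shade_by_sampling(points, to_pixel, sample_size=10_000):
    # sample a subset of the data, bin the hits into pixels,
    # and shade each pixel by its relative hit count (0..255 grayscale)
    hits = {}
    for point in random.sample(points, min(sample_size, len(points))):
        pixel = to_pixel(point)
        hits[pixel] = hits.get(pixel, 0) + 1
    peak = max(hits.values(), default=1)
    return {pixel: round(255 * count / peak) for pixel, count in hits.items()}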

Calculate a dynamic iteration value when zooming into a Mandelbrot

I'm trying to figure out how to automatically adjust the maximum iteration value when moving around in the Mandelbrot fractal.
All the examples I've found use a constant of 1000 or less, but that's not enough when zooming deeper into the set.
Is there a way to determine max_iterations based on, for example, where you are in the Mandelbrot space (x_start, x_end, y_start, y_end)?
One method I tried was to repeatedly pre-process a small area in the region of the Mset boundary with increasing iteration counts, until the percentage change in status from one repetition to the next was small. The problem was that this varied in different places on the current map, since the "depth" varies across it. How do you find the right place to do it? By logging the "deepest" boundary area during the previous generation (one that will still be within the next zoom area).
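In code, that probing step might look roughly like this sketch (escapes(c, limit) stands in for your own per-point iteration test):

def estimate_max_iterations(boundary_sample, escapes, start=256, factor=2,
                            tolerance=0.01, hard_cap=1_000_000):
    # keep raising the iteration limit on a small sample of points near the boundary
    # until the fraction that escapes stops changing by more than `tolerance`
    limit = start
    prev = sum(escapes(c, limit) for c in boundary_sample) / len(boundary_sample)
    while limit * factor <= hard_cap:
        limit *= factor
        ratio = sum(escapes(c, limit) for c in boundary_sample) / len(boundary_sample)
        if abs(ratio - prev) < tolerance:
            break
        prev = ratio
    return limit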
But my best strategy was to avoid iterating wherever possible:
Away from the boundary of the Mset, areas of equal depth can be "contoured" and then filled with that depth. It was not an easy algorithm. Basically I followed a raster scan, but when I detected a boundary of iteration change (examining all the neighbours to ensure I wasn't close to the edge of the Mset), I would switch to a curve-stitching method to iterate around a contour back to where it started (obviously not recalculating spots I had already done), and then make a second pass filling in the raster lines within the contour with that iteration level. It was fraught with leaks, but eventually I cracked it.
Within the Mset, I followed the same approach, because the very last thing you want to do is to plough across vast areas and hit the iteration limit.
The difficult area is close to the boundary, where the iteration results can't be related to smooth contours with the neighbours. The contour-stitching method won't work here, since there is only ever one pixel of a particular depth.
The contour method will also have faults on the lower or Mset sides of this region, but since this area looks chaotic until you zoom deeper, I lived with that.
So having said all that, I simply set the iteration depth as high as I can tolerate, but perhaps you can combine my first paragraph with the area-filling techniques.
BTW, colouring the region adjacent to the Mset looks terrible when smooth animated playback of the zoom is attempted. For that reason I coloured this area in greyscale, by comparing with neighbours. If there was too much difference, I coloured it 0x808080 at first, then adapted that depending on the predominance of the neighbours' depth. All of this required fine-tuning!

determine rectangle rotation point

I would like to know how to compute the rotation components of a rectangle in space from four given points in a projection plane.
It's hard to describe in a single sentence, so let me explain what I need.
I have a 3D world viewed from a static camera (located at <0,0,0>).
I have a known rectangular shape (a picture, actually) that I want to place in that space.
I can only define points (up to four) in a spherical/rectangular frame of reference (the camera looking at <0°,0°> (spherical) or <0,0,1000> (rectangular)).
I consider the given polygon to be my rectangular shape rotated by (rX,rY,rZ). Three points are supposed to be enough; four points may be over-constrained. I'm not sure yet.
I want to determine rX, rY and rZ, the rectangle rotation about its center.
--- My first attempt at solving this constraint problem was to fix the first point: given its spherical coordinates, I "project" it onto a camera-facing plane at z=1000. Quite easy; this gives me a point.
Then the second point is considered to lie on a segment starting at <0,0,0>, which leaves an infinity of solutions; but I fix this by knowing the width (w) and height (h) of my rectangle: I then get two solutions for my second point; one is "in front of" the first point, and the other is "far away"... I now have an edge of my rectangle. Two, in fact.
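In code, those two steps could look something like this sketch (camera at the origin looking along +Z, y up; all names are illustrative):

import math

def ray_from_angles(azimuth_deg, elevation_deg):
    # unit direction of the view ray for a given spherical coordinate
    a, e = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(e) * math.sin(a), math.sin(e), math.cos(e) * math.cos(a))

def first_point(direction, z_plane=1000.0):
    # "project" the first point onto the camera-facing plane z = z_plane
    s = z_plane / direction[2]
    return tuple(c * s for c in direction)

def second_point_candidates(p1, direction2, edge_length):
    # the second point lies at t * direction2 for some t > 0, with |t*d2 - p1| = edge_length,
    # i.e. a quadratic in t: t^2 - 2 t (d2 . p1) + |p1|^2 - edge_length^2 = 0
    b = sum(d * p for d, p in zip(direction2, p1))
    disc = b * b - (sum(p * p for p in p1) - edge_length ** 2)
    if disc < 0:
        return []                       # no edge of that length fits on this ray
    r = math.sqrt(disc)
    return [tuple(d * t for d in direction2) for t in (b - r, b + r)]   # "in front" / "far away"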
And from there, I don't know what to do. Even if in the end I have my four points, I don't have a clue how to calculate the equivalent rotation...
It's hard to be lost in Mathematics...
To give an idea of the goal of all this: I make photospheres and I want to "insert" images into them. For instance, my photo contains a TV screen, and I want to place a picture on the screen. I know my screen size (or I can guess it), I know the size of the image I want to place (actually, it has the same aspect ratio), and I know the four screen-corner positions in my space (spherical or Euclidean). My software allows me to place an image in the scene and to rotate it as I want. I can zoom it (to give a feeling of depth)... So I can do all this manually, but it is a long trial-and-error process and never exact. I would like to be able to type in the screen corner positions and get the final image placement and rotation attributes in one click...
The question in pictures:
Images presenting steps of the problem
Note that on that page I present actual images from my app. I had to manually rotate and scale the picture to get it to fit the screen, but it is not photoshopped. The parameters found are:
Scale: 0.86362
rX = 18.9375
rY = -12.5875
rZ = -0.105881
center position: <-9.55, 18.76, 1000>
Note: rotation is not enough to set the picture up: we also need scale and translation. I assume the scale can be found once a first edge is fixed (the first two points give two candidate solutions as initial constraints, and since I then know the edge length and the picture's width and height, I can deduce the scale). But the software kindly allows me to modify the picture's width and height, so the constraint is just to make sure the four points describe a rectangle in space, which is simple to check with vectors. Here, the problem seems to be placing the fourth point as a valid rectangle corner, and then deducing the rotation from that rectangle. As for the translation, it is the centre (diagonal cross) of the points once they are fixed.

Rectangle physics in 2D. Am I doing this right?

I'm writing a 2D game, in which I would like to have crate-like objects. These objects would move around, like real crates do. I have a hypothetical idea of how I would like to achieve that:
Basically I'd store the boxes' corners' coordinates with their force and velocity unit vectors, and in every update I'd basically do the following steps:
1. Apply the forces(gravity, from collisions, etc..) accordingly.
2. Modify velocity vector based on the force.
3. Move every corner of the box, like so:
4. Repeat step 3 for every corner, so I get the real movement of the box.
My questions are: Is this approach heading in the right direction? Is this theory even correct? If not, what would be the correct way to move a box around based on vectors in a 2D environment?
Just to clarify: I'm only dragging corner "A" in the picture, but I want to repeat the dragging for every other corner, with their own vectors. By "dragging" I mean the algorithm I just stated.
Keeping each corner's coordinate and speed makes no sense as you would be storing lots of redundant information. Boxes are rigid objects, which means that there are constraints that must be satisfied at any time instant, namely the distance between any two given corners is fixed. This also translates to a constraint that links the velocities of all four corners and so they are not independent values. With rigid bodies the movement of any point is the sum of two independent movements - the linear movement of the centre of mass (CM) and the rotation around a fixed axis - often, but not always, chosen to be the one that goes through the CM. Hence you only need to store the position and the velocity of the crate's CM (which coincides with the geometric centre of the crate) as well as the angle of rotation and the rate of rotation around the CM.
As to the motion, the gravity field is a constant vector field and hence cannot induce rotation in symmetric objects like those rectangular crates. Instead it only produces accelerated vertical motion of the CM. This is also what happens with all external forces - one has to take their vector sum and apply it to the CM. Only external forces whose direction does not pass through the CM produce torque and so cause rotation. Such forces are any external pushes/pulls or reaction forces that arise when crates collide with each other or hit the ground / a wall. Computing the torque due to external forces is easy, but computing reaction forces can be quite an involved process because of the constrained dynamics that has to be employed. Once the torque has been computed, one has to divide it by the moment of inertia of the crate in order to get the angular acceleration. Often it is more convenient to use another axis and not the one that goes through the CM - Steiner's theorem (the parallel axis theorem) can be employed in this case in order to compute the moment of inertia around that other axis.
To summarise:
all forces acting on the crate are first added together (as vectors) and the resultant force (divided by the mass of the crate) determines the linear acceleration of the CM;
the torque of all forces is computed and then used to determine the angular acceleration around a given axis.
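A minimal sketch of that state and per-step update for a rectangular crate, assuming a fixed timestep and forces given as (force, application point) pairs (names are illustrative, not from any particular engine):

class Crate:
    def __init__(self, x, y, width, height, mass):
        self.x, self.y = x, y                    # centre of mass (CM)
        self.vx, self.vy = 0.0, 0.0              # linear velocity of the CM
        self.angle, self.omega = 0.0, 0.0        # rotation and angular velocity about the CM
        self.mass = mass
        # moment of inertia of a solid rectangle about its CM
        self.inertia = mass * (width ** 2 + height ** 2) / 12.0

    def step(self, forces, dt, gravity=(0.0, -9.81)):
        fx, fy = self.mass * gravity[0], self.mass * gravity[1]
        torque = 0.0
        for (f_x, f_y), (px, py) in forces:
            fx += f_x
            fy += f_y
            rx, ry = px - self.x, py - self.y    # lever arm from the CM
            torque += rx * f_y - ry * f_x        # 2D cross product: forces through the CM give 0
        self.vx += (fx / self.mass) * dt         # resultant force -> linear acceleration of the CM
        self.vy += (fy / self.mass) * dt
        self.omega += (torque / self.inertia) * dt   # torque -> angular acceleration
        self.x += self.vx * dt
        self.y += self.vy * dt
        self.angle += self.omega * dt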
See here for some sample problems of rigid body motion and how the physics is actually worked out.
Given your algorithm, if by "velocity vector" you actually mean "the velocity of the CM", then 1 would be correct - all corners move in the same direction (the linear motion of the CM). But 2 would not always be correct - the proper angle of rotation depends on the time over which the torque was applied (e.g. the simulation timestep), and one has to take into account that the lever arm length changes in the meantime as the crate rotates.

Implementing z-axis in a 2D side-scroller

I'm making a side scroller similar to Castle Crashers and right now I'm using SAT for collision detection. That works great, but I want to simulate level "depth" by allowing objects to move up and down on the screen, basically along a z-axis (like this screenshot http://favoniangamers.files.wordpress.com/2009/07/castle-crashers-ps3.jpg). This isn't an isometric game, but rather uses parallax scrolling.
I added a z component to my vector class, and I plan to cull collisions based on the 'thickness' of a shape and its z position. I'm just not sure how to calculate the positions of shapes for rendering, or how to add jumping with gravity. How do I calculate the maximum y value (for the ground) as the z position changes? Basically it's the relationship between the z and y axes that confuses me.
I'd appreciate links to resources if anyone knows of this topic.
Thanks!
It's actually possible to make your collision detection algorithm dimensionally agnostic. Just have a collision detector that works along one dimension, use that to check each dimension, and your answer to "are these colliding or not" is the logical AND of the collision detection along each of the dimensions.
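For example, a sketch with boxes given as per-axis (min, max) intervals:

def overlap_1d(a_min, a_max, b_min, b_max):
    # the single 1D test: two intervals overlap iff neither is entirely past the other
    return a_min <= b_max and b_min <= a_max

def boxes_collide(a, b):
    # a and b map axis name -> (min, max), e.g. {"x": (0, 2), "y": (1, 3), "z": (0, 1)}
    return all(overlap_1d(*a[axis], *b[axis]) for axis in a)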
Your game should be organised to keep the interaction of game objects and the rendering of the game to the screen completely separate. You can think of these two sections of the program as the "model" and the "view". In the model, you have a full 3D world with three axes. You can't go halfsies on this point without some level of pain. Your model must be proper 3D.
The view will read the location of all the game objects and project them onto the screen using the camera definition. For this part you don't need a full 3D rendering engine. The correct technical term for the perspective you're talking about is "oblique", and it can be seen in many ancient Chinese and Japanese scroll paintings and prints - in particular, look for images of "The Tale of Genji".
The on screen position of an object (including the ground surface!) goes something like this:
DEPTH_RATIO = 0.5;
view_x = model_x - model_z * DEPTH_RATIO - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
You can modify this for a straight orthographic front projection:
DEPTH_RATIO = 0.5;
view_x = model_x - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
And of course don't forget to cull objects outside the volume defined by the camera.
You can also use this mechanism to handle the positioning of parallax layers for you. This is, of course, a matter of changing your camera to a 1-point perspective projection instead of an orthographic projection. You don't have to use this to change the rendered size of your sprites, but it will help you manage the x position of objects realistically. If you're up for a challenge, you could even mix projections - use 1-point perspective for deep backgrounds and the orthographic approach for the foreground.
You should separate the conceptual Y axis used by your physics calculations (collision detection etc.) from the Y axis you actually draw on the screen. That way it becomes less confusing.
Just do the calculations as normal, pretending there is no relationship between the Y and Z axes; then, when you actually draw the object on the screen, you simulate the Z axis using the Y axis:
screen_Y = Y + Z/some_fudge_factor;
Actually, this is how real 3D engines work. After all the world calculations are done, the X, Y and Z coordinates are mapped onto screen_X and screen_Y via a function (usually a bit more complicated than the equation above, but only a bit).
For example, to implement a pseudo-isometric view in your game, you can even apply Z to the screen_X axis so objects are displaced diagonally instead of vertically.
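A sketch of that separation, with physics keeping its own (x, y, z) and the drawing code doing the mapping (the fudge factor here is arbitrary):

Z_FUDGE = 2.0   # how strongly depth shifts the sprite on screen

def to_screen(x, y, z, pseudo_isometric=False):
    screen_x = x + (z / Z_FUDGE if pseudo_isometric else 0.0)   # optional diagonal displacement
    screen_y = y + z / Z_FUDGE                                  # depth simulated on the Y axis
    return screen_x, screen_y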

Resources