I'm making a side scroller similar to Castle Crashers and right now I'm using SAT for collision detection. That works great, but I want to simulate level "depth" by allowing objects to move up and down on the screen, basically along a z-axis (like this screenshot http://favoniangamers.files.wordpress.com/2009/07/castle-crashers-ps3.jpg). This isn't an isometric game, but rather uses parallax scrolling.
I added a z component to my vector class, and I plan to cull collisions based on the 'thickness' of a shape and its z position. I'm just not sure how to calculate the positions of shapes for rendering, or how to add jumping with gravity. How do I calculate the max y value (for the ground) as the z position changes? Basically, it's the relationship between the z and y axes that confuses me.
I'd appreciate links to resources if anyone knows of any on this topic.
Thanks!
It's actually possible to make your collision detection algorithm dimensionally agnostic. Just have a collision detector that works along one dimension, use that to check each dimension, and your answer to "are these colliding or not" is the logical AND of the collision detection along each of the dimensions.
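For axis-aligned bounding boxes, a minimal sketch of that idea might look like this (all names are illustrative, not from the question's code):

/* 1D overlap test: two intervals collide iff each starts before the other ends. */
static int overlap_1d(float min_a, float max_a, float min_b, float max_b)
{
    return min_a <= max_b && min_b <= max_a;
}

/* 3D collision is the logical AND of the three 1D tests. */
static int collide_3d(const float min_a[3], const float max_a[3],
                      const float min_b[3], const float max_b[3])
{
    for (int axis = 0; axis < 3; ++axis)
        if (!overlap_1d(min_a[axis], max_a[axis], min_b[axis], max_b[axis]))
            return 0;
    return 1;
}

For SAT with arbitrary convex shapes the same pattern applies, except the intervals come from projecting each shape onto the candidate separating axes rather than using the coordinate axes directly.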
Your game should be organised to keep the interaction of game objects and the rendering of the game to the screen completely separate. You can think of these two sections of the program as the "model" and the "view". In the model, you have a full 3D world, with 3 axes. You can't go halvesies on this point without some level of pain. Your model must be proper 3D.
The view will read the location of all the game objects and project them onto the screen using the camera definition. For this part you don't need a full 3D rendering engine. The correct technical term for the perspective you're talking about is "oblique", and it can be seen in many ancient Chinese and Japanese scroll paintings and prints; in particular, look for images of "The Tale of Genji".
The on screen position of an object (including the ground surface!) goes something like this:
DEPTH_RATIO = 0.5;   /* how far one unit of z shifts the object on screen */
view_x = model_x - model_z * DEPTH_RATIO - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
You can modify this for a straight orthographic front projection:
DEPTH_RATIO = 0.5;
view_x = model_x - camera_x;
view_y = model_y + model_z * DEPTH_RATIO - camera_y;
And of course don't forget to cull objects outside the volume defined by the camera.
You can also use this mechanism to handle the positioning of parallax layers for you. This is, of course, a matter of changing your camera to a 1-point perspective projection instead of an orthographic projection. You don't have to use this to change the rendered size of your sprites, but it will help you manage the x position of objects realistically. If you're up for a challenge, you could even mix projections: use 1-point perspective for deep backgrounds, and the orthographic projection for the foreground.
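As a rough sketch, the 1-point perspective version of the projection above might look like this (FOCAL_LENGTH is an illustrative tuning constant, not something from the formulas above, and the camera is assumed to sit at model_z = 0):

FOCAL_LENGTH = 256;
scale = FOCAL_LENGTH / (FOCAL_LENGTH + model_z);
view_x = (model_x - camera_x) * scale;
view_y = (model_y - camera_y) * scale;

Objects with a larger model_z shrink toward the vanishing point, which is exactly what makes distant parallax layers drift more slowly across the screen.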
You should separate the conceptual Y axis used by your physics calculations (collision detection etc.) from the Y axis you actually draw on the screen. That way it becomes less confusing.
Just do the calculations as normal, pretending there is no relationship between the Y and Z axes; then, when you actually draw the object on the screen, simulate the Z axis using the Y axis:
screen_Y = Y + Z/some_fudge_factor;
Actually, this is how real 3d engines work. After all the world calculations are done, the X, Y and Z coordinates are mapped onto screen_X and screen_Y via a function (usually a bit more complicated than the equation above, but only a bit).
For example, to implement a pseudo-isometric view in your game, you can even apply Z to the screen_X axis, so objects are displaced diagonally instead of vertically.
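Reusing the fudge factor from above, a sketch of that diagonal displacement would be:

screen_X = X + Z/some_fudge_factor;
screen_Y = Y + Z/some_fudge_factor;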
There was a gif on the internet where someone used some sort of CAD program and drew multiple vector pictures in it. On the first frame they zoom in on a tiny dot, revealing a whole different vector picture at a much smaller scale, and then they proceed to zoom in further on another tiny dot, revealing another detailed picture, repeating several times. Here is the link to the gif.
Or another similar example: imagine you have a time series with a granularity of a millisecond per sample, and you zoom out to reveal years' worth of data.
My question is: how does such finely detailed data get rendered in the end, when a huge amount of it ends up aliased into a single pixel?
Do you have to go through the whole dataset to render that pixel (i.e., in the case of the time series, go through a million records just to average them out into one line; or, in the case of CAD, render the whole vector picture and blur it into a tiny dot), or are there certain level-of-detail optimizations that can be applied so that you don't have to do this?
If so, how do they work, and where can one learn about them?
This is a very well known problem in games development. In the following I am assuming you are using a scene graph, a node-based tree of objects.
Typical solutions involve a mix of these techniques:
Level Of Detail (LOD): multiple resolutions of the same model, which are shown or hidden so that only one is "visible" at any time. When to hide and show is usually determined by the distance between camera and object, but you could also include the scale of the object as a factor (see the sketch after this list). Modern 3d/CAD software will sometimes offer automatic "simplification" of models, which can be used as the low-res LOD models.
At the lowest level, you could even just use the object's bounding box. Checking whether a bounding box is in view takes only around 1-7 point checks, depending on how you check. And you can utilise object parenting for transitive bounding boxes.
Clipping: if a polygon does not appear in the view port at all, there is no need to render it. In the GIF you posted, when the camera zooms in on a new scene, all that is left of the larger model is a single polygon in the background.
Re-scaling of world coordinates: as you zoom in, the coordinates of vertices become tiny fractional floating point numbers. Given that you want all coordinates as precise as possible, and given that modern CPUs can only handle floats with 64 bits of precision (and often use only 32 for better performance), it's a good idea to reset the scaling of the visible objects. What I mean by that is: as your camera zooms in to, say, 1/1000 of the previous view, you can scale up the bigger objects by a factor of 1000, and at the same time adjust the camera position and focal length. Any newly attached small model would use its original scale, thus preserving its precision.
This transition would be invisible to the viewer, but allows you to stay within well-defined 3d coordinates while being able to zoom in infinitely.
On a higher level: As you zoom into something and the camera gets closer to an object, it appears as if the world grows bigger relative to the view. While normally the camera space is moving and the world gets multiplied by the camera's matrix, the same effect can be achieved by changing the world coordinates instead of the camera.
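As a concrete illustration of the LOD technique from the list above, here is a minimal sketch of distance-based selection (the SceneNode layout, the Mesh type and the thresholds are assumptions made up for this example):

typedef struct Mesh Mesh;   /* opaque model data, stands in for your engine's mesh type */

typedef struct {
    Mesh  **lod_meshes;     /* [0] = full detail ... [lod_count-1] = coarsest (e.g. a box) */
    float  *lod_distances;  /* camera distance at which each level takes over */
    int     lod_count;
} SceneNode;

/* Walk from coarsest to finest and return the first level whose
   distance threshold the camera has crossed. */
Mesh *select_lod(const SceneNode *node, float camera_distance)
{
    for (int i = node->lod_count - 1; i > 0; --i)
        if (camera_distance >= node->lod_distances[i])
            return node->lod_meshes[i];
    return node->lod_meshes[0];   /* close up: full detail */
}

As noted above, you could fold the object's scale into camera_distance to make big objects keep their detail longer.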
First, you can use caching, with tiles, like it's done in cartography. You'll still need to go over all the points, but after that you'll be able to zoom in and out quite rapidly.
But if you don't have extra memory for the cache (not that much, actually; much less than the data itself), or don't have time to go over all the points, you can use a probabilistic approach.
It can be as simple as taking only every other point (or every 10th point, or whatever suits you), and it yields decent results for some data. Again, in cartography it works quite well for shorelines, but not so well for houses or administrative borders - anything with a lot of straight lines.
Or you can take a more hardcore probabilistic approach: randomly sample some points, and if, for example, 100 data points hit pixel one and only 50 hit pixel two, then you can more or less safely assume that if you continue to sample points, pixel one will still be twice as likely to be hit as pixel two. So you can just stop there and draw pixel one with twice the intensity.
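A minimal sketch of the every-Nth-point variant mentioned above (plot_point is a hypothetical rasteriser call, not a real API):

/* Draw a long sample array into `width` pixel columns,
   looking at only every `stride`-th point. */
void draw_decimated(const float *samples, long count, int width, int stride)
{
    for (long i = 0; i < count; i += stride) {
        int column = (int)((double)i / (double)count * width);  /* which pixel column */
        plot_point(column, samples[i]);
    }
}

The cost drops from O(count) to O(count / stride), at the price of possibly missing narrow spikes between the sampled points.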
Also consider how much data you can and want to put into a pixel. If you draw a pixel in black and white, then there are only 256 shades of gray, and you don't need to be more precise than that. Or if you're going to draw a pixel in full color, then you still need to ask yourself: will anyone notice the difference between something like rgb(123,12,54) and rgb(123,11,54)?
I have a problem: my 3d plot gets distorted if I rotate or pan the plot. I know this behaviour is intended to show the user as much as possible at all times, but it looks silly, so I want an orthogonal view or axes which are equally long.
(image: the distorted Earth plot)
I have a WPF application where the ILN Form/Control is hosted via WindowsFormsHost.
I have tried every possibility regarding Plotcube.Projection, have set Plotcube.Limits, changed the ILN_Panel autosize, etc. My next idea was that maybe I need to configure the WPF and/or Forms window itself.
Thanks!
I think that is not possible, or at least not easy, within PlotCube. If you do not need PlotCube, just put the object into the Camera node and set Projection to Orthogonal, and you are done.
With PlotCube it is much more complicated. You must make sure that all containers have an equal aspect ratio (width / height): WinformsHostControl, PlotCube, PlotCube.ScreenRect, PlotCube.DataScreenRect, PlotCube.Plots (data group), and its Limits. If you want to rotate everything freely, you must make sure to have an equal aspect ratio on all 3 dimensions where applicable.
Vulkan uses a coordinate system where (-1, -1) is in the top left quadrant, instead of the bottom left quadrant as in the standard cartesian coordinate system one typically learns about in school. So (-1, 1) is in the bottom left quadrant in Vulkan's coordinate system.
(image from: http://vulkano.rs/guide/vertex-input)
What are the advantages of using Vulkan's coordinate system? One plain advantage I can see is pedagogical: it forces people to realize that coordinate systems are arbitrary, and one can easily map between them. However, I doubt that's the design reason.
So what are the design reasons for this choice?
Many coordinate systems in computer graphics put the origin at the top-left and point the y axis down.
This is because in early televisions and monitors, the electron beam that draws the picture starts at the top-left of the screen and progresses downward.
The pixels on the screen were generally made by reading memory in sequential addresses as the beam moved down the screen, and modulating that electron beam in accordance with each byte read in sequence. So the y axis corresponds to time, which corresponds to memory address.
Even today, virtually all representations of a bitmap in memory, or in a bitmapped file, start at the top-left.
It is natural when drawing bitmaps in such a medium to use a coordinate system that starts at the top-left too.
Things become a little more complicated when you use a bottom-left origin because finding the byte that corresponds to a pixel requires a little more math and needs to account for the height of the bitmap. There is usually just no reason to introduce the extra complexity.
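For example, assuming one byte per pixel and a row stride in bytes (variable names are illustrative):

offset = y * stride + x;                   /* top-left origin: height never needed */
offset = (height - 1 - y) * stride + x;    /* bottom-left origin: must know the height */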
When you start to introduce matrix transformations however, it becomes much more convenient to work with an upward-pointing y axis, because that lets you use all the vector algebra you learned in school without having to reverse the y axis and all the rotations in your thinking.
So what you'll usually find is that when you are working in a system that lets you do matrix operations, translations, rotations, etc., then you will have an upward-pointing y axis. At some point deep inside, however, the calculations will transform the coordinates into a downward-pointing y axis for the low-level operations.
One of the common sources of confusion and bugs in OpenGL was that NDC and window coordinates had y increasing upwards, which is opposite of the convention used in nearly all window systems and many (but not all) image formats, where y is [0..1] increasing downwards. Developers ended up having to insert a y-flip in their transformation pipeline in many cases, and it wasn't always clear when they did and didn't.
So Vulkan decided to make it so you could load an image from a y-downwards image format directly into memory and draw it to the screen without any explicit y flips, to avoid this source of errors.
Other coordinate systems were then chosen to be consistent with that, in the sense that the y direction never flips direction in the standard Vulkan transformation pipeline. That meant that clip space vertex coordinates also had y increasing downwards.
This ended up meaning that Vulkan clip coordinates have a different orientation than D3D clip coordinates, which was an annoyance for developers supporting both APIs. So the VK_KHR_maintenance1 extension adds the ability to specify a negative viewport height, which essentially introduces a y-flip to the clip-space to framebuffer coordinate transform. (D3D has essentially always had an implicit y-flip here.)
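With VK_KHR_maintenance1 (core in Vulkan 1.1), that y-flip looks something like this; cmd and extent are assumed to be your command buffer and swapchain extent:

#include <vulkan/vulkan.h>

VkViewport viewport = {
    .x        = 0.0f,
    .y        = (float)extent.height,    /* origin moved to the bottom edge */
    .width    = (float)extent.width,
    .height   = -(float)extent.height,   /* negative height flips y */
    .minDepth = 0.0f,
    .maxDepth = 1.0f,
};
vkCmdSetViewport(cmd, 0, 1, &viewport);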
This is how I remember the reasoning in the Vulkan Working Group, anyway. I don't think there's an authoritative public source anywhere.
I'm trying to build an Augmented Reality Demonstration, like this iPhone App:
http://www.acrossair.com/acrossair_app_augmented_reality_nearesttube_london_for_iPhone_3GS.htm
However my geometry/math is a bit rusty nowadays.
This is what I know:
If I have my Android phone in landscape mode (with the home button on the left), my z axis points in the direction I'm looking.
From the sensors of my phone I know the angle my z axis makes with the north axis; let's call this angle theta.
If I have a vector from my current position to the point I want to show on my screen, I can calculate the angle this vector makes with my z axis. Let's call this angle alpha.
So, based on the alpha angle I have a perception of where the point is, and I'm able to show it on the screen (like the Nearest Tube app).
This is the basic theory of a simple demonstration (of course it's nothing like the App, but it's the first step).
Can someone shed some light on this matter?
[Update]
I've found this very interesting example; however, I need to have movement on both the x and y axes. Any hints?
The basics are easy. You need the angle between your location and your destination (arctangent), and the heading (from the digital compass in your phone). See this answer: Augmented Reality movement. There is some Objective-C code down there that you can read if you come from Java.
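The core of it is just an atan2 and a subtraction. A minimal sketch, assuming a flat-earth approximation with dx/dy being the target's offset from you in metres (east, north) and all angles in radians:

#include <math.h>

#define PI 3.14159265358979323846

/* Angle of a target relative to where the phone is pointing.
   heading: compass heading of the phone's z axis, clockwise from north. */
double relative_bearing(double dx, double dy, double heading)
{
    double bearing = atan2(dx, dy);   /* clockwise angle from north to the target */
    double alpha   = bearing - heading;
    while (alpha <= -PI) alpha += 2.0 * PI;   /* wrap into (-pi, pi] */
    while (alpha >   PI) alpha -= 2.0 * PI;
    return alpha;   /* negative: target left of centre; positive: right */
}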
What you want is a 3d space-filling curve, for example a Hilbert curve. That is a spatial index over 3 coordinates, comparable to an octree. You want to store the objects in that octree and do a depth-first search using the coordinate you have recorded with your iPhone as the fixed coordinate, probably the center of the screen. An octree subdivides the space continuously into eight octants, and a 3d space-filling curve is a Hamiltonian path through the space, which is like a fractal, but maps distinctly onto the regions of the octree. I use a 2d Hilbert curve to speed up searches in geospatial databases. Maybe you want to start with this first?
I'm looking for information on how to move (and animate) 2D sprites across an isometric game world, but have their movement animated smoothly as they travel from tile to tile, as opposed to having them jump from the confines of one tile to the confines of the next.
An example of this would be in the Transport Tycoon Game, where the trains and carriages are often half in one tile and half in the other.
Drawing the sprites in the right place isn't too difficult. The projection formulas are:
screen_x = sprite_x - sprite_y
screen_y = (sprite_x + sprite_y) / 2 + sprite_z
sprite_x and sprite_y are fixed point values (or floating point if you want). Usually, the precision of the fixed point is the number of pixels on a tile - so if your tile graphic was 32x16 (a projected 32x32 square), you would have 5 bits of precision, i.e. 1/32nd of a tile.
The really hard part is to sort the sprites into an order that renders correctly. If you use OpenGL for drawing, you can use a z-buffer to make this really easy. Using GDI, DirectX, etc., it is really hard. Transport Tycoon doesn't correctly render the sprites in all instances. The original Transport Tycoon had the most horrendous rendering engine you've ever seen. It implemented the three zoom levels as three instantiations of a massive MASM macro. TT was written entirely in assembler. I know, because I ported it to the Mac many years ago (and did a cool version for the PS1 dev kit as well; it needed 6Mb though).
P.S. One of the small bungalow graphics in the game was based on the house Chris Sawyer was living in at the time. We were tempted to add a Ferrari parked in the driveway for the Mac version as that was the car he bought with the royalties.
Look up how to do linear interpolation (it's a pretty simple formula). You can then use this to parameterise the transition over a single [0, 1] range. You then simply need some state in your sprites to store the following facts:
That they are moving
Start and end points
Start and end times (or start time and duration)
and then each frame you can draw it in the correct position using an interpolation from the start point to the end point. Once the duration has been exceeded, the sprite gets updated to be not-moving and positioned at the end point/tile.
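A minimal sketch of that per-frame update (the Sprite fields are illustrative):

typedef struct {
    int   moving;                /* 1 while travelling between tiles */
    float start_x, start_y;      /* where the move began */
    float end_x, end_y;          /* centre of the destination tile */
    float start_time, duration;  /* when the move began, how long it takes */
    float x, y;                  /* current drawing position */
} Sprite;

static float lerp(float a, float b, float t) { return a + (b - a) * t; }

void update_sprite(Sprite *s, float now)
{
    if (!s->moving)
        return;
    float t = (now - s->start_time) / s->duration;
    if (t >= 1.0f) {             /* duration exceeded: snap to the end tile */
        s->x = s->end_x;
        s->y = s->end_y;
        s->moving = 0;
    } else {
        s->x = lerp(s->start_x, s->end_x, t);
        s->y = lerp(s->start_y, s->end_y, t);
    }
}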
Why do you think it'll jump from tile to tile? You can position your sprite at any x,y co-ordinate.
First create your background screen buffer and then place your sprites on top of it.