Vector projection in game development

Where would you use vector projection in game development? I know that it projects one vector onto another, but I don't know where I would use that.
Regards

Here are a few examples:
Vector projection is common in computer graphics, which many games depend on.
In 3D games, during the rendering process the renderer has access to the 3D coordinates of every vertex of every mesh in the game world. These vertices need to be mapped onto a 2D rectangle that's the same shape as your screen. A matrix called, fittingly, the projection matrix does this.
Sometimes projection matrices are used to make objects cast shadows onto the surfaces of other objects.
Or suppose you're making a homing missile with a 60-degree field of view. You could say that the missile sees the world through a circular screen, and it loses track of its target if its target goes off the screen. You could use a projection matrix to map the 3D position of the target onto the homing missile's screen, and then decide whether the missile can see the target.
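For the field-of-view part of that missile example, you don't even need a full projection matrix; the dot product at the heart of vector projection is enough. A minimal sketch with a small hand-rolled Vec3 (the names here are made up, not from any particular engine):

type Vec3 = { x: number; y: number; z: number };

const subtract = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const normalize = (v: Vec3): Vec3 => {
  const len = Math.sqrt(dot(v, v));
  return { x: v.x / len, y: v.y / len, z: v.z / len };
};

// Returns true if the target is inside the missile's cone of vision.
// halfFovRadians would be 30 degrees (in radians) for a 60-degree field of view.
function canSeeTarget(missilePos: Vec3, missileForward: Vec3,
                      targetPos: Vec3, halfFovRadians: number): boolean {
  const toTarget = normalize(subtract(targetPos, missilePos));
  // For unit vectors, dot(a, b) = cos(angle between them); it is also the
  // scalar projection of toTarget onto missileForward.
  const cosAngle = dot(missileForward, toTarget);
  return cosAngle >= Math.cos(halfFovRadians);
}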

Related

Using Geo-coordinates Instead of Cartesian to Draw in Argon and A-Frame

I would like to create a GPS drawing program in Argon and A-Frame which draws lines based upon people's movements.
Lines can be drawn in A-Frame with, for example, the meshline component which uses Cartesian points:
<a-entity meshline="lineWidth: 20; path: -2 -1 0, 0 -2 0"></a-entity>
If I were to do this with a GPS device, I would take the GPS coordinates and map them directly to something like Google maps. Does Argon have any similar functionality such that I can use the GPS coordinates directly as the path like so:
<a-entity meshline="lineWidth: 20; path: 37.32299 -122.04185 0, 37.32298 -122.03224"></a-entity>
Since one can specify an LLA point for a reference frame I suppose one way to do this would be to conceive of the center LLA point as "0, 0, 0" and then use a function to map the LLA domain to a Cartesian range.
It would be preferable, however, to use the geo-coordinates directly. Is this possible in Argon?
To understand the answer, you need to first understand the various frames of reference used by Argon.
First, Argon makes use of cesiumjs.org's geospatial math libraries and Entities, so that all "locations" in Argon must either be expressed geospatially OR be relative to a geospatial entity. These are rooted at the center of the earth, in what Cesium calls FIXED coordinates, also known as ECEF or ECF coordinates. In that system, coordinates are in meters, with up/down going through the poles and east/west going through the meridian (I believe). Any point on the surface of the earth is represented with pretty large numbers.
This coordinate system is nice because we can represent anything on or near the earth precisely using it. Cesium also supports INERTIAL coordinates, which are used to represent near-earth orbital objects, and can convert between the two frames.
But, it is inconvenient when doing AR for a few reasons:
The numbers used to represent the position of the viewer and objects near them are quite large, even if the objects are very close to the viewer, which can lead to mathematical accuracy issues, especially in the 3D graphics system.
The coordinates we "think about" when we think about the world around us have the ground as "flat" and "up" as pointing ... well, up. So, in 3D graphics, an object above another object typically has the same X and Z values, but a bigger Y. In ECEF coordinates, all the numbers change, because what we perceive as "up" is really a vector from the center of the earth through us, and is only "up" if we're on the north (or south, depending on your +/-) pole. Most 3D graphics libraries you might want to use (physics libraries, for example) assume a world in which the ground is one plane (typically the XZ plane) and Y is up (some aeronautics and other engineering applications use Z as up and have XY as the ground, but the issue is the same).
Argon deals with this, as do many geospatial AR systems, by creating a local coordinate system for the graphics and application to use. There are really three options for this:
Pick some arbitrary (but fixed) local place as the origin. Some systems, which are built to work in one place, have this hard-coded. Others let the application set it. We don't do this because it would encourage applications to take the easy path and only work in one place (we've seen this in the past).
Set the local place to the camera. This has the advantage that the math is the most "accurate" because all points are expressed relative to the camera. But, this causes two issues. First, the camera tends to move continuously (even if only due to sensor noise) in AR apps. Second, many libraries (again, like physics libraries) assume that the origin of the system is stable and on the earth, with the camera/user moving through it. These issues can be worked around, but they are tedious for application developers to deal with.
Set the origin of the local coordinates to an arbitrary location near the user, and if the user moves far from it, recenter automatically. The advantage of this is the program doesn't necessarily have to do much to deal with it, and it meshes nicely with 3D graphics libraries. The disadvantage is the local coordinates are arbitrary, and might be different each time a program is run. However, the application developer may have to pay attention to when the origin is recentered.
Argon uses option 3. When the app starts, we create a new local coordinate frame at the user's location, on the plane tangent to the earth. If the user moves far from that location, we update the origin and emit an event to the application (currently, we recenter if you are 5km away from the origin). In many simple apps, with only a few frames of reference expressed in geospatial coordinates (and the rest of the application data expressed relative to known geospatial locations), the conversion from geospatial to local can just be done each frame, allowing the app developer to ignore the recentering problem. The programmer is free to use either ENU (east-north-up) or EUS (east-up-south) as their coordinate system; we tend to use EUS because it's similar to what most 3D graphics systems use (Y is up, Z points south, and X is east).
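For reference, converting an offset between those two conventions is just an axis swap; a quick sketch, assuming an ENU offset in meters:

// ENU = (east, north, up); EUS = (east, up, south), and south is just -north.
function enuToEus(east: number, north: number, up: number) {
  return { x: east, y: up, z: -north };
}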
One of the reasons we chose this approach is that we've found in the past that if we had predictable local coordinates, application developers would store data using those coordinates, even though that's not a good idea (your data is now tied to some relatively arbitrary, application-specific coordinate system, and will only work in that location).
So, now to your question. Your issue is that you want to use geospatial coordinates (Cesium's, which Argon uses) in A-Frame. The short answer is that you can't use them directly, since A-Frame is built assuming a local 3D graphics coordinate system. The argon-aframe package binds A-Frame to Argon by letting you specify referenceframe components that position an a-entity at an Argon/Cesium geospatial location and take care of all the internal conversions for you.
The assumption when I wrote that code was that authors would then create their content using the local 3D graphics coordinates, and attach those hunks of graphics to a-entity elements that were located in the world with referenceframe components.
In order to have individual coordinates in AFrame correspond to geospatial places, you will need to manage that yourself, perhaps by creating a component to do it for you, or (if the data is known at the start) by converting it up front.
Here's what I'd do.
Assuming you have a list of geospatial coordinates (expressed as LLA), I'd convert each to local coordinates (by first converting from LLA to Cesium's FIXED ECEF coordinates and creating a Cesium Entity, and then calling Argon's context.getEntityPose() on that entity, which will return its local coordinates). I would pick one geospatial location in the set (perhaps the first one?) and then subtract its local coordinates from each of the others, so that they are all expressed in local coordinates relative to that known geospatial location.
Then, I'd create an A-Frame entity attached to the referenceframe of that unique geospatial entity, and create your graphics content inside of it, using the local coordinates that are expressed relative to it. For example, let's say the geospatial location is LongLat = "-84.398881 33.778463" and you have stored those points (local coordinates, relative to LongLat) in userPath; you could then do something like this:
<ar-scene>
<ar-geopose id="GT" lla=" -84.398881 33.778463" userotation="false">
<a-entity meshline="lineWidth: 20; path: userPath; color: #E20049"></a-entity>
</ar-geopose>
</ar-scene>
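For completeness, here is a rough sketch of the conversion described above (LLA to Cesium FIXED, then through context.getEntityPose() to local coordinates, then re-expressed relative to the first point). The Argon and Cesium names beyond getEntityPose() are my best guesses at the API, so double-check them against the versions you're using:

// Hypothetical sketch: convert one LLA point into Argon's local coordinates.
// "app" is the Argon app object; Argon re-exports Cesium as Argon.Cesium.
function llaToLocal(app: any, lon: number, lat: number, alt: number) {
  // LLA -> ECEF (Cesium FIXED) position, wrapped in a Cesium Entity
  const fixedPosition = Argon.Cesium.Cartesian3.fromDegrees(lon, lat, alt);
  const entity = new Argon.Cesium.Entity({
    position: new Argon.Cesium.ConstantPositionProperty(fixedPosition)
  });
  // ask Argon for that entity's pose in the local coordinate frame
  return app.context.getEntityPose(entity).position; // has x, y, z in meters
}

// Convert the whole path, then re-express each point relative to the first
// one so the data can live inside the <ar-geopose> anchored at that point.
const localPoints = llaPoints.map(p => llaToLocal(app, p.lon, p.lat, p.alt));
const origin = localPoints[0];
const userPath = localPoints.map(p => ({
  x: p.x - origin.x, y: p.y - origin.y, z: p.z - origin.z
}));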

Point cloud: project city square to ground plane

I have a city square with people, cars, trees and buildings in PCL format. I need to automatically determine the ground plane and project these objects onto that ground plane to get a 2D map of occupied places.
Any idea?
I think the best thing to do here would be to familiarise yourself with the following two PCL tutorials:
http://pointclouds.org/documentation/tutorials/planar_segmentation.php
http://pointclouds.org/documentation/tutorials/project_inliers.php
The first tutorial makes use of the RANSAC algorithm to find a dominant plane in a scene. I use it to find tables and floors in robotics scenarios. You would use it to find your dominant ground plane.
The second tutorial shows how to project points directly onto a plane. This is what you would use to make your 3D point cloud into a 2D one. Note that, despite the "inlier" keyword, you can pass your whole point cloud to be projected onto the plane.
Actually, if you are after "occupied" places, you might want to project all of the points that aren't in the ground plane (i.e. the outliers) and that are above it (you can use a PCL filter such as PlaneClipper3D, for example, or just take the outliers from the plane-segmentation operation, i.e. the complement of the inliers).
If the plane that you end up with (containing all your projected points) is not in the coordinate frame you want, you may wish to rotate the whole lot, for example, to align with the coordinate axes so that all z-coordinates are zero. See pcl::transformPointCloud for this (the transform will be obtainable from the plane coefficients returned from the plane segmentation).
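If it helps to see the math that the projection tutorial wraps up: projecting a point onto the plane a*x + b*y + c*z + d = 0 just moves the point along the plane normal by its signed distance. A small sketch, independent of PCL, assuming (a, b, c) is the unit normal returned by the segmentation step:

// Project a point onto the plane a*x + b*y + c*z + d = 0.
// (a, b, c) is assumed to be a unit normal.
function projectOntoPlane(p: {x: number, y: number, z: number},
                          a: number, b: number, c: number, d: number) {
  const dist = a * p.x + b * p.y + c * p.z + d; // signed distance to the plane
  return { x: p.x - dist * a, y: p.y - dist * b, z: p.z - dist * c };
}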
I hope this is helpful and not at too basic a level, though the question was rather general so I suppose it should be okay.

Generating a 3D prism from any 2D polygon

I am creating a 2D sprite game in Unity, which is a 3D game development environment.
I have constrained all translation of objects to the XY-plane and rotation to the Z-axis.
My problem is that the meshes that are used to detect collisions between objects must still be in 3D. I have the need to detect collisions between the player object (a capsule collider) and a sprite (that has its collision volume defined by a polygonal prism).
I am currently writing the level editor and I have the need to let the user define the collision area for any given tile. In the image below the user clicks the points P1, P2, P3, P4 in that order.
Obviously the points join up to form a quadrilateral. This is the collision area I want; however, I must then convert it to a 3D mesh. Basically I need to generate an extrusion of the polygon, then assign the vertex winding and triangles etc. The vertex positions are not a problem to figure out, as they are merely a translation of the polygon down the z-axis.
I am having trouble creating an algorithm for assigning the winding order of the vertices, especially since the mesh must consist only of triangles.
Obviously the structure I have illustrated is not important, the polygon may be any 2d shape and will always need to form a prism.
Does anyone know any methods for this?
Thank you all very much for your time.
A simple algorithm that comes to mind is something like this:
// multiply the face normal by the extrusion amount = move along the normal
var extrudedNormal = faceNormal.clone().multiplyScalar(sizeOfExtrusion);
for (var i = 0; i < face.vertices.length; i++) {
    // copy the position of each vertex to a new object to be modified
    var vPrime = face.vertices[i].clone();
    // add a translation in the direction of the normal, by the extrusion amount
    vPrime.add(extrudedNormal);
}
So the idea is basic:
clone the face normal and move it in the same direction by the amount you want to extrude by;
clone the face vertices and move them using the moved (extruded) normal position.
For a more complete, feature rich example, refer to the Procedural Modeling Unity samples. They include a nice Mesh extrusion sample too (see ExtrudedMeshTrail.js which uses MeshExtrusion.cs).
Good luck!
To create the extruded walls:
For each vertex a (with coordinates ax, ay) in your polygon:
- call the next vertex 'b' (with coordinates bx, by)
- create the extruded rectangle corresponding to the line from 'a' to 'b':
- The rectangle has vertices (ax,ay,z0), (ax,ay,z1), (bx,by,z0), (bx,by,z1)
- This rectangle can be created from two triangles:
- (ax,ay,z0), (ax,ay,z1), (bx,by,z0) and (ax,ay,z1), (bx,by,z0), (bx,by,z1)
If you want to create a triangle strip instead, it's even simpler. For each vertex a, just add (ax,ay,z0) and (ax,ay,z1). Whichever vertex you processed first will also need to be processed again after looping over all other vertices.
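As a concrete version of the walls described above, here is a sketch that turns an ordered 2D polygon outline into a side-wall triangle list (flat arrays, so it maps easily onto most mesh APIs):

// Build the side walls of a prism from a 2D polygon outline.
// points: polygon vertices in order; z0/z1: the two extrusion depths.
function buildPrismWalls(points: {x: number, y: number}[], z0: number, z1: number) {
  const vertices: number[] = [];  // x, y, z triplets
  const triangles: number[] = []; // indices into the vertex list
  for (const p of points) {
    vertices.push(p.x, p.y, z0);  // vertex 2*i
    vertices.push(p.x, p.y, z1);  // vertex 2*i + 1
  }
  const n = points.length;
  for (let i = 0; i < n; i++) {
    const a0 = 2 * i;              // (ax, ay, z0)
    const a1 = 2 * i + 1;          // (ax, ay, z1)
    const b0 = 2 * ((i + 1) % n);  // (bx, by, z0)
    const b1 = b0 + 1;             // (bx, by, z1)
    // the two triangles covering this edge's wall, ordered for consistent winding;
    // swap two indices in each if the faces come out pointing the wrong way
    triangles.push(a0, a1, b0);
    triangles.push(b0, a1, b1);
  }
  return { vertices, triangles };
}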
To create the end-caps:
This step is probably unnecessary for collision purposes. But, one simple technique is here: http://www.siggraph.org/education/materials/HyperGraph/scanline/outprims/polygon1.htm
Each resulting triangle should be added at depth z0 and z1.

Matrix multiplication - view/projection, world/projection, etc

In HLSL there's a lot of matrix multiplication, and while I understand how and where to use the matrices, I'm not sure how they are derived or what their actual goals are.
So I was wondering if there is a resource online that explains this. I'm particularly curious about the purpose of multiplying a world matrix by a view matrix, and a world+view matrix by a projection matrix.
You can get some info, from a mathematical viewpoint, on this wikipedia article or on msdn.
Essentially, when you render a 3D model to the screen, you start with a simple collection of vertices scattered in 3D space. These vertices all have their own positions expressed in "object space". That is, they usually have coordinates which have no meaning in the scene that is being rendered, but only express the relations between the vertices of the same model.
For instance, the positions of the vertices of a model could only range from -1 to 1 (or similar, it depends on how the model has been created).
In order to render the model in the correct position, you have to scale, rotate and translate it to the "real" position in your scene. This position you are moving to is expressed in "world space" coordinates which also express the real relationships between vertices in your scene. To do so, you simply multiply each vertex' position with its World matrix. This matrix must be created to include the translation/rotation/scale parameters you need to apply, in order for the object to appear in the correct position in the scene.
At this point (after multiplying all vertices of all your models with a world matrix) your vertices are expressed in world coordinates, but you still cannot render them correctly because their position is not relative to your "view" (i.e. your camera). So, this time you multiply everything using a View matrix which reflects the position and orientation of the viewpoint from which you are rendering the scene.
All vertices are now in the correct position, but in order to simulate perspective you still have to multiply everything with a Projection matrix. This last multiplication determines how the position of the vertices changes based on distance from the camera.
And now finally all vertices, starting from their position in "object space", have been moved to the final position on the screen, where they will be rendered, rasterized and then presented.
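Putting it all together, with the row-vector convention that HLSL/DirectX samples typically use, the whole chain for a single vertex is:

finalPosition = objectPosition * World * View * Projection

which is also why the three matrices are often pre-multiplied on the CPU into a single WorldViewProjection matrix and applied in one multiplication per vertex in the shader.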
Online resources: Direct3D Matrices, Projection Matrices, Direct3D Transformation, The Importance of Matrices in the DirectX API.

Coordinate system Transitions

I have a game world with lots of irregular objects with varying coordinate systems controlling how objects on their surface work. However the camera and these objects can leave and move out into open empty space, where a normal Cartesian coordinate system is used. How do I manage mapping between the two?
One idea I had was to wrap these objects in a bounding volume such as a sphere or box, within which that coordinate system would be used. However, this becomes problematic if those bounding volumes overlap, at which point I'm unsure whether the idea is fundamentally flawed or a solution can be found, since these objects are moving and could overlap at some point.
I think you should place all your objects in the Cartesian 'empty space' coordinate system by composing each irregular object's coordinate system with its position matrix.
It adds a level, but will make everything easier.
Regarding the use of bounds, I had an idea where the object would use the coordinate system of the smallest bounds it occupied, and then transform according to the hierarchy of systems from top to bottom.
Thus, let's say, stick figures on a cylinder adjacent to a large object would follow the cylinder rather than flitting between the two objects and their coordinate systems.
Regardless of the local coordinate system around each of the irregular objects, all points will still map to the global world coordinates at one point or another, because eventually, when you want to render your objects, they'll have to get mapped into world space and then camera space. You can use the same object-space-to-world-space transform matrices to do the mapping.
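If you're using a scene-graph library, that composition is usually done for you by parenting objects to the frame they live in. A minimal sketch, using three.js purely as an example (the object names are made up):

import * as THREE from 'three';

// A stick figure parented to a cylinder: the figure's coordinates are local
// to the cylinder, and the cylinder's transform maps them into world space.
const cylinder = new THREE.Object3D();
const figure = new THREE.Object3D();
cylinder.add(figure);
figure.position.set(0, 2, 0);        // local to the cylinder's frame

cylinder.updateMatrixWorld(true);    // compose local matrices down the hierarchy
const worldPos = figure.getWorldPosition(new THREE.Vector3());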
You can use Lamé coefficients to transform the dimensions of different coordinate systems.
You can transform any kind of coordinate system, your own as well. The only condition is that the dimensions are orthogonal (every dimension has to be independent of the other dimensions).
Here is some document I found: link text.
Hope it helps.
