I'm developing a game. Let's say player A puts a bomb at location x=100, y=100 and the explosion radius is 100 units... It is pretty easy for me to find all the "items" in the game that were hit by the blast of the bomb (I just need to check that their distance from the bomb is lower than 100).
But now I want to take into consideration the obstacles I have in the game. The obstacles are squares, always 64*64 pixels, always aligned to the axes (not rotated). I want to know whether an item was "hidden" behind an obstacle, so I know it wasn't hit...
Something like this:
The dude on the right wasn't hit, but the dude on the bottom was hit. I filled the hit area in gray, and the hidden area in green...
My idea is:
1. Find all the items in the scene whose distance from the bomb is lower than 100.
2. Find all the obstacles in the scene whose distance from the bomb is lower than 100.
3. Calculate the lines from the items to the center of the bomb, then check whether those lines intersect any obstacle; if they don't, the item was hit.
Finally, the questions:
1. Does anyone have a better idea?
2. Are there free, open-source, C#-compatible engines that can help me? Can Box2D help me here?
Thanks
It is quite simple and, as Jongware mentioned in the comments, you should use two lines of visibility.
You should compute the visibility lines from each "side" of the items in the picture. The origin of each visibility line can be approximated by computing the line from the center of the bomb and taking the direction normal to that vector. Your two visibility points are then located one radius out from the center of the item, in the normal and negative normal directions. This circle approximation might not represent all possible shapes very well, but it is generally a good enough approximation for simple games (and your items look circular in the drawing).
Java-ish pseudocode using 2D vectors:
// bombCenter and itemCenter are 2D-vectors
bombDirectionVector = bombCenter.minus(itemCenter);
normal = bombDirectionVector.getNormal() // normal vector of length 1
viewPoint1 = itemCenter.plus(normal.times(itemRadius));
viewPoint2 = itemCenter.minus(normal.times(itemRadius));
// Check obstacle intersection with the lines from viewPoint{1,2} to bombCenter
// ...
The visibility lines then go from those points on the sides of each item to the bomb center. So, for each item, you check whether the two visibility lines intersect either the same obstacle or two connected obstacles.
There are no free, open-source, C#-compatible engines I know of that do this, but the only part that can be a bit tricky is the obstacle intersection check. So if you just find something to help you with the intersection check, the rest should be very straightforward to implement.
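If you end up writing that intersection check yourself, the classic "slab" test works well for axis-aligned squares. Below is a rough C# sketch; the method and parameter names are mine for illustration (nothing here comes from a particular engine), and each obstacle is assumed to be given by its min/max corners:

static bool SegmentIntersectsAabb(
    float px, float py,      // segment start (e.g. a visibility point)
    float qx, float qy,      // segment end   (e.g. the bomb center)
    float minX, float minY,  // obstacle lower-left corner
    float maxX, float maxY)  // obstacle upper-right corner
{
    float dx = qx - px, dy = qy - py;
    float tMin = 0f, tMax = 1f; // clip the parametric segment p + t*d, with t in [0, 1]

    // Clip against the two X slabs, then the two Y slabs.
    if (!ClipSlab(px, dx, minX, maxX, ref tMin, ref tMax)) return false;
    if (!ClipSlab(py, dy, minY, maxY, ref tMin, ref tMax)) return false;
    return true; // some part of the segment lies inside the box
}

static bool ClipSlab(float start, float delta, float slabMin, float slabMax,
                     ref float tMin, ref float tMax)
{
    if (Math.Abs(delta) < 1e-6f)
        return start >= slabMin && start <= slabMax; // parallel: either inside the slab or it never enters

    float t1 = (slabMin - start) / delta;
    float t2 = (slabMax - start) / delta;
    if (t1 > t2) { var tmp = t1; t1 = t2; t2 = tmp; }

    tMin = Math.Max(tMin, t1);
    tMax = Math.Min(tMax, t2);
    return tMin <= tMax;
}

If this returns true for either visibility line against any nearby obstacle, that line is blocked.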
I hope this helps, and let me know if anything is unclear and I'll clarify the answer accordingly.
Here is a nice demo/write-up on the subject:
http://www.redblobgames.com/articles/visibility/
If it is a tile-based game and you know the tile coordinates of all of the objects, you could use Bresenham's Line Algorithm: http://roguebasin.roguelikedevelopment.org/index.php?title=Bresenham%27s_Line_Algorithm. Here is an excerpt:
// Author: Jason Morley (Source: http://www.morleydev.co.uk/blog/2010/11/18/generic-bresenhams-line-algorithm-in-visual-basic-net/)
using System;

namespace Bresenhams
{
    /// <summary>
    /// The Bresenham algorithm collection
    /// </summary>
    public static class Algorithms
    {
        private static void Swap<T>(ref T lhs, ref T rhs) { T temp; temp = lhs; lhs = rhs; rhs = temp; }

        /// <summary>
        /// The plot function delegate
        /// </summary>
        /// <param name="x">The x co-ord being plotted</param>
        /// <param name="y">The y co-ord being plotted</param>
        /// <returns>True to continue, false to stop the algorithm</returns>
        public delegate bool PlotFunction(int x, int y);

        /// <summary>
        /// Plot the line from (x0, y0) to (x1, y1)
        /// </summary>
        /// <param name="x0">The start x</param>
        /// <param name="y0">The start y</param>
        /// <param name="x1">The end x</param>
        /// <param name="y1">The end y</param>
        /// <param name="plot">The plotting function (if this returns false, the algorithm stops early)</param>
        public static void Line(int x0, int y0, int x1, int y1, PlotFunction plot)
        {
            bool steep = Math.Abs(y1 - y0) > Math.Abs(x1 - x0);
            if (steep) { Swap<int>(ref x0, ref y0); Swap<int>(ref x1, ref y1); }
            if (x0 > x1) { Swap<int>(ref x0, ref x1); Swap<int>(ref y0, ref y1); }
            int dX = (x1 - x0), dY = Math.Abs(y1 - y0), err = (dX / 2), ystep = (y0 < y1 ? 1 : -1), y = y0;

            for (int x = x0; x <= x1; ++x)
            {
                if (!(steep ? plot(y, x) : plot(x, y))) return;
                err = err - dY;
                if (err < 0) { y += ystep; err += dX; }
            }
        }
    }
}
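To tie this back to the line-of-sight question, you can walk the tiles between the bomb and an item and stop as soon as a blocked tile is hit. A small sketch, assuming a hypothetical blocked[x, y] grid of obstacle tiles (the array and the tile coordinates are assumptions for illustration, not part of the excerpt; it also assumes both endpoints lie inside the grid):

// Returns true if no obstacle tile lies between the bomb and the item.
static bool HasLineOfSight(int bombX, int bombY, int itemX, int itemY, bool[,] blocked)
{
    bool clear = true;
    Algorithms.Line(bombX, bombY, itemX, itemY, (x, y) =>
    {
        if (blocked[x, y])
        {
            clear = false;
            return false; // stop walking the line early
        }
        return true;      // keep going
    });
    return clear;
}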
I am working on Xamarin.Forms with Maps. I need to get a certain area within the map and check whether the user is inside that area. For example, there is a certain village for which I want to get the whole area and its coordinates. I have only tried pin locations, but a pin gives only two coordinates and cannot really determine whether the user is inside an area.
I am going to save the area in my database, and I have already figured out how to do that, but I don't know how to get the coordinates of the area itself. How can I achieve this in Xamarin.Forms?
I added an image as an example. How can I get the boxed area?
First of all, thank you @Jason for the provided links to answers.
I came up with this solution by referencing this question:
Checking if a longitude/latitude coordinate resides inside a complex polygon in an embedded device?
As I have mentioned in the comments above, I have a set of coordinates (lat/lng) in my database and I draw them on my map in Android to form a rectangular shape using a polyline.
To check whether the user is inside the drawn rectangle, I modified the answer by Drew Noakes from the question linked above.
CODE
/// <summary>
/// Check if the user's location is inside the rectangle
/// </summary>
/// <param name="location">Current user's location</param>
/// <param name="_vertices">Rectangular shape for checking</param>
/// <returns>True if the location is inside the polygon, otherwise false</returns>
private bool CheckUserLocation(Location location, Polyline _vertices)
{
    var lastPoint = _vertices.Geopath[_vertices.Geopath.Count - 1];
    var isInside = false;
    var x = location.Longitude;

    foreach (var point in _vertices.Geopath)
    {
        var x1 = lastPoint.Longitude;
        var x2 = point.Longitude;
        var dx = x2 - x1;

        // Handle edges that cross the antimeridian (±180°) by shifting both ends
        // onto the same side before interpolating.
        if (Math.Abs(dx) > 180.0)
        {
            if (x > 0)
            {
                while (x1 < 0)
                    x1 += 360;
                while (x2 < 0)
                    x2 += 360;
            }
            else
            {
                while (x1 > 0)
                    x1 -= 360;
                while (x2 > 0)
                    x2 -= 360;
            }
            dx = x2 - x1;
        }

        // Ray casting: count the edges that cross the user's longitude above the
        // user's latitude; an odd count means the point is inside.
        if ((x1 <= x && x2 > x) || (x1 >= x && x2 < x))
        {
            var grad = (point.Latitude - lastPoint.Latitude) / dx;
            var intersectAtLat = lastPoint.Latitude + ((x - x1) * grad);
            if (intersectAtLat > location.Latitude)
                isInside = !isInside;
        }
        lastPoint = point;
    }
    return isInside;
}
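For completeness, here is a hypothetical usage sketch (not part of the original answer): it builds the rectangle from four saved corner positions and tests the current location against it. The corner values and the Xamarin.Essentials Geolocation call are placeholders for illustration; use whatever source your coordinates and location actually come from.

var area = new Polyline();
area.Geopath.Add(new Position(14.5995, 120.9842)); // placeholder corners; load yours from the database
area.Geopath.Add(new Position(14.5995, 120.9900));
area.Geopath.Add(new Position(14.6050, 120.9900));
area.Geopath.Add(new Position(14.6050, 120.9842));

Location current = await Geolocation.GetLastKnownLocationAsync(); // assumes an async context
bool insideVillage = CheckUserLocation(current, area);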
If you are curious about how I tested it, here is what I did.
First, I retrieved my current location and manually drew a rectangle surrounding that location.
Second, I drew a second rectangle not far from my current location, but outside the first drawn rectangle.
Third, I passed the first drawn Polyline to the function:
CheckUserLocation(myCurrentLocation, firstRectangle) //THIS RETURNS TRUE
For another test, I passed the second rectangle to the function:
CheckUserLocation(myCurrentLocation, secondRectangle) //THIS RETURNS FALSE
I will test it again if there turns out to be something wrong with this answer, but I hope someone will find it helpful.
I'm using the Qwt library for my widget; there are some curves on the canvas, added like this:
void Plot::addCurve1( double x, double y, const char *CurveName,
                      const char *CurveColor, const char *CurveType )
{
    ...
    *points1 << QPointF(x, y);
    curve1->setSamples( *points1 );
    curve1->attach( this );
    ...
}
So all my curves share the same coordinate system. I'm trying to build a navigation interface, so I could enter a step into a TextEdit (for example) and move by that step, or jump to the start/end of my defined curve.
I've found a method in the QwtPlotPanner class that gives me that opportunity:
double QWT_widget::move_XLeft()
{
    // getting step from TextEdit
    QString xValStr = _XNavDiscrepancies->toPlainText();
    double xVal = xValStr.toDouble();

    // moveCanvas(int dx, int dy) - the method of QwtPlotPanner
    plot->panner->moveCanvas(xVal, 0);
    x_storage = x_storage - xVal;
    return x_storage;
}
So it works OK, but the displacement is in pixels, and I need to tie it to my defined curve and its coordinate system.
The Qwt User's Guide says:
Adjust the enabled axes according to dx/dy
Parameters
dx Pixel offset in x direction
dy Pixel offset in y direction
And this is the only information I've found. How can I convert a pixel step into a step in my coordinate system? I need to go to the end of my curve, so should I take the last QPointF(x, y) of my curve and convert it to a pixel step? Or maybe I'm using the wrong class/method?
Thank you very much :)
Thanks to @Pavel Gridin:
(https://ru.stackoverflow.com/a/876184/251026)
"For conversion from pixels to coordinates and back there are two
methods: QwtPlot::transform and QwtPlot::invTransform"
I have a drone following a path for movement. That is, it doesn't use a rigidbody, so I don't have access to velocity or magnitude and such. It follows the path just fine, but I would like to add banking when it turns left or right. I use a dummy object in front of the drone, thinking I could calculate the bank/tilt amount using the transform vectors of the two objects.
I've been working on this for days, as I don't have a lot of math skills. Basically I've been copying pieces of code trying to get things to work. Nothing I do makes the drone bank. The following code manages to spin (not bank).
// Update is called once per frame
void Update () {
    Quaternion rotation = Quaternion.identity;
    Vector3 dir = (dummyObject.transform.position - this.transform.position).normalized;
    float angle = Vector3.Angle( dir, transform.up );
    float rollAngle = CalculateRollAngle(angle);
    rotation.SetLookRotation(dir, transform.right);// + rollIntensity * smoothRoll * right);
    rotation *= Quaternion.Euler(new Vector3(0, 0, rollAngle));
    transform.rotation = rotation;
}
/// <summary>
/// Calculates the roll and smoothes it (to compensate for the non-C2-continuous control points algorithm)
/// </summary>
/// <returns>The roll angle.</returns>
/// <param name="rollFactor">Roll factor.</param>
float CalculateRollAngle(float rollFactor)
{
    smoothRoll = Mathf.Lerp(smoothRoll, rollFactor, rollSmoothing * Time.deltaTime);
    float angle = Mathf.Atan2(1, smoothRoll * rollIntensity);
    angle *= Mathf.Rad2Deg;
    angle -= 90;
    TurnRollAngle = angle;
    angle += RollOffset;
    return angle;
}
Assuming you have waypoints the drone is following, you should figure out the angle between the last two directions (i.e. your "now facing" and "will be facing" directions). The easy way is to use Vector2.Angle.
I would use this angle to determine the amount I'd tilt the drone's body: the sharper the turn, the harder the banking. I would use a ratio value (public initially, so I can tweak it from the editor).
Next, instead of doing any math I would rely on the engine to do the rotation for me, so I would go for the Transform.Rotate function. In case banking can go too high and look silly, I would set a maximum for it and Clamp the calculated banking angle between zero and that max.
Without knowing exactly what you do and how, it's not easy to give perfect code, but for a better understanding of the above, here's some (untested, i.e. pseudo) code for the solution I have in mind:
public float turnSpeed = 7.0f;      //the drone will "rotate toward the new waypoint" by this speed
//bankSpeed+turnBankRatio must be two times "faster" (and/or smaller degree) than turning, see details in 'EDIT' as of why:
public float bankSpeed = 14.0f;     //banking speed
public float turnBankRatio = .5f;   //90 degree turn == 45 degree banking

private float turnAngle = 0.0f;     //this is the 'x' degree turning angle we'll "Lerp"
private float turnAngleABS = 0.0f;  //same as turnAngle but it's an absolute value. Storing to avoid Mathf.Abs() in Update()!
private float bankAngle = 0.0f;     //banking degree
private bool isTurning = false;     //are we turning right now?

//when the action is fired and the drone should go for the next waypoint, call this guy
private void TurningTrigger() {
    //remove this line after testing, it's some extra safety
    if (isTurning) { Debug.LogError("oups! must not be possible!"); return; }

    Vector2 droneOLD2DAngle = GetGO2DPos(transform.position);
    //do the code you do for the turning/rotation of drone here!
    //or use the next waypoint's .position as the new angle if you are OK
    //with the snippet doing the turning for you along with banking. then:
    Vector2 droneNEW2DAngle = GetGO2DPos(transform.position);

    turnAngle = Vector2.Angle(droneOLD2DAngle, droneNEW2DAngle); //turn degree
    turnAngleABS = Mathf.Abs(turnAngle); //avoiding Mathf.Abs() in Update()
    bankAngle = turnAngle * turnBankRatio; //bank angle

    //you can remove this after testing. This is to make sure banking can
    //do a full run before the drone hits the next waypoint!
    if ((turnAngle * turnSpeed) < (bankAngle * bankSpeed)) {
        Debug.LogError("Banking degree too high, or banking speed too low to complete maneuver!");
    }
    //you can clamp or set turnAngle based on a min/max here

    isTurning = true; //all values were set, turning and banking can start!
}

//get 2D position of a GO (simplified)
private Vector2 GetGO2DPos(Vector3 worldPos) {
    return new Vector2(worldPos.x, worldPos.z);
}

private void Update() {
    if (isTurning) {
        //assuming the drone is banking to the "side" and "side" only
        transform.Rotate(0, 0, bankAngle * Time.deltaTime * bankSpeed, Space.Self); //banking
        //if the drone is facing the next waypoint already, set
        //isTurning to false
    } else if (turnAngleABS > 0.0f) {
        //reset back to original position (with same speed as above)
        //at least "normal speed" is a must, otherwise drone might hit the
        //next waypoint before the banking reset can finish!
        float bankAngle_delta = bankAngle * Time.deltaTime * bankSpeed;
        transform.Rotate(0, 0, -1 * bankAngle_delta, Space.Self);
        turnAngleABS -= (bankAngle_delta > 0.0f) ? bankAngle_delta : -1 * bankAngle_delta;
    }
    //the banking was probably not set back to exactly 0, as Time.deltaTime
    //is not a fixed value. if this happened and looks ugly, reset
    //drone's "z" to Quaternion.identity.z. if it also looks ugly,
    //you need to test if you don't """over bank""" in the above code
    //by comparing bankAngle_delta + 'calculated banking angle' against
    //the identity.z value, and reset bankAngle_delta if it's too high/low.
    //when you are done, your turning animation is over, so:
}
Again, this code might not perfectly fit your needs (or compile :P), so focus on the idea and the approach, not the code itself. Sorry for not being able to put something together and test it myself right now, but I hope I helped. Cheers!
EDIT: Instead of a wall of text, I tried to answer your question in code (still not perfect, but the goal is not to do the job for you, rather to help with some snippets and ideas :)
So. Basically, what you have is a distance and "angle" between two waypoints. This distance and your drone's flight/walk/whatever speed (which I don't know) give the maximum amount of time available for:
1. Turning, so the drone will face in the new direction
2. Banking to the side, and back to zero/"normal"
As there's two times more action on the banking side, it either has to be done faster (bankSpeed), or over a smaller angle (turnBankRatio), or both, depending on what looks nice and feels real, what your preference is, etc. So it's 100% subjective. It's also your call whether the drone turns and banks quickly as it approaches the next waypoint, or does things at a slow pace and turns just a little if it has a lot of time/distance, doing things fast only if it has to.
As for isTurning:
You set it to true when the drone has reached a waypoint and heads out to the next one AND the variables to (turn and) bank were set properly. When do you set it to false? It's up to you, but the goal is to do so when the maneuver is finished (this was buggy in the snippet the first time, as this "optimal status" could never be reached), so the drone can "reset banking". For further details on what's going on, see the code comments. Again, this is just a snippet to support you with a possible solution for your problem. Give it some time and understand what's going on. It really is easy, you just need some time to cope ;) Hope this helps! Enjoy and cheers! :)
I was trying to plot 8 points in 3D space, the 8 vertices of the above 3D shape.
I used the following code:
#include "Coordinates2d.h"
#include "Point3d.h"
const double zoom = 500;
int main()
{
Coordinates2d::ShowWindow("3D Primitives!");
std::vector<Point3d> points;
points.push_back(Point3d(0,0,20));
points.push_back(Point3d(0,100,20));
points.push_back(Point3d(120,100,20));
points.push_back(Point3d(120,0,20));
points.push_back(Point3d(0,0,120));
points.push_back(Point3d(0,100,120));
points.push_back(Point3d(120,100,120));
points.push_back(Point3d(120,0,120));
for(int i=0 ; i<points.size() ; i++)
{
Coordinates2d::Draw(points[i], zoom);
}
Coordinates2d::Wait();
}
where Point3d is like the following:
#ifndef _POINT_3D_
#define _POINT_3D_

#include "graphics.h"
#include "Matrix.h"
#include "Point2d.h"
#include <cmath>
#include <iostream>

struct Point3d
{
    double x;
    double y;
    double z;

public:
    Point3d();
    Point3d(double x, double y, double z);
    Point3d(Point3d const & point);

    Point3d & operator=(Point3d const & point);
    Point3d & operator+(int scalar);
    bool operator==(Point3d const & point);
    bool operator!=(Point3d const & point);

    Point3d Round()
    {
        return Point3d(floor(this->x + 0.5), floor(this->y + 0.5), floor(this->z + 0.5));
    }

    void Show()
    {
        std::cout<<"("<<x<<", "<<y<<", "<<z<<")";
    }

    bool IsValid();
    double Distance(Point3d & point);

    void SetMatrix(const Matrix & mat);
    Matrix GetMatrix() const;

    Point2d ConvertTo2d(double zoom)
    {
        return Point2d(x*zoom/(zoom-z), y*zoom/(zoom-z));
    }
};

#endif
#ifndef _COORDINATES_2D_
#define _COORDINATES_2D_

#include "graphics.h"
#include "Point2d.h"
#include "Point3d.h"
#include "Line3d.h"

class Coordinates2d
{
private:
    static Point2d origin;

public:
    static void Wait();
    static void ShowWindow(char str[]);

private:
    static void Draw(Point2d & pt);

public:
    static void Draw(Point3d & pt, double zoom)
    {
        Coordinates2d::Draw(pt.ConvertTo2d(zoom));
    }
};

#endif
I was expecting the output to be the following:
But the output became like the following:
I am actually interested in moving my viewing camera.
How can I achieve my desired result?
I see from the comments that you achieved your desired result with a clever formula. If you're interested in doing it the 'standard' graphics way, using matrices, I hope this post will help you.
I found an excellent page explaining projection matrices for OpenGL, which also extends to the general mathematics of projection.
If you want to go in depth, here is the very well written article; it explains its steps in detail and is overall highly commendable.
The below image shows the first part of what you're trying to do.
So the image on the left is the 'viewing volume' that you want your camera to see. You can see that in this case, the Center of Projection (basically the focal point of the camera) is at the origin.
But wait, you say, I don't WANT the center of projection to be at the origin! I know, we'll cover that later.
What we're doing here is taking the strangely shaped volume on the left and converting it to what we call 'normalized coordinates' on the right. So we're mapping our viewing volume onto the range of -1 to 1 in each direction. Basically, we mathematically stretch the irregularly shaped viewing volume into this 2x2x2 cube centered at the origin.
This operation is accomplished through the following matrix, again, from the excellent article I linked above.
So note that you have six variables:
t = top
b = bottom
l = left
r = right
n = near
f = far
Those six variables define your viewing volume. Far is not labeled in the above image, but it is the distance of the furthest plane from the origin in the image.
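Since the article's image of that matrix isn't reproduced here, this is the standard OpenGL perspective projection matrix written in terms of those six values (labeled P for reference below):

\[
P =
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
\]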
This projection matrix puts our viewing volume into normalized coordinates. Once coordinates are in this form, you can make the scene flat by simply ignoring the z coordinate, which is similar to some of the work you have done (nice work!).
So we're all set with that for viewing things from the origin. But let's say we don't want to view from the origin, and would prefer to view from, say somewhere behind and to the side.
Well, we can do that! But instead of moving our viewing area (we have the math all nicely worked out right here), it is, perhaps counterintuitively, easier to move all the points we are trying to view.
This can be done by multiplying all of the points by a translation matrix.
Here is the wikipedia page for translation, from which I took the following matrix.
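That matrix is the standard homogeneous translation matrix (labeled T for reference below):

\[
T =
\begin{pmatrix}
1 & 0 & 0 & V_x \\
0 & 1 & 0 & V_y \\
0 & 0 & 1 & V_z \\
0 & 0 & 0 & 1
\end{pmatrix}
\]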
Vx, Vy, and Vz are the amount we want to move things in the x, y, and z directions. Keep in mind, if we want to move the camera in the positive x direction, we need a negative Vx, and vice versa. This is because we are moving the points instead of the camera. Feel free to try it and see, if you want.
You may also have noticed that both of the matrices I showed are 4x4, and your coordinates are 3x1. This is because the matrices are meant to be used with homogeneous coordinates. These seem strange because they use 4 variables to represent a 3D point, but it's just x, y, z, and w, where you set w = 1 for your points. I believe this variable is used for depth buffers, among other things, but it is basically ubiquitous in graphics matrix math, so you'll want to get used to using it.
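Concretely (this step is implied above rather than spelled out): a point (x, y, z) becomes the column vector (x, y, z, 1), and after multiplying by the translation and projection matrices (T and P above) you divide by the resulting w to get back to normalized coordinates:

\[
\begin{pmatrix} x' \\ y' \\ z' \\ w' \end{pmatrix}
= P \, T \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix},
\qquad
(x_{ndc},\; y_{ndc},\; z_{ndc}) = \left( \frac{x'}{w'},\; \frac{y'}{w'},\; \frac{z'}{w'} \right)
\]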
Now that you have these matrices, you can apply the translation one to your points, then apply the perspective one to those points you got out. Then simply ignore the z components, and there you are! You have a 2D image from -1 to 1 in the x and y directions.
I render isosurfaces with marching cubes (or perhaps marching squares, as this is 2D), and I want to do set operations like set difference, intersection and union. I thought this would be easy to implement by simply choosing between two vertex scalars from two different implicit surfaces, but it is not.
For my initial testing, I tried with two circles and the set difference operation, i.e. A - B. One circle is moving and the other one is stationary. Here's the approach I tried when picking vertex scalars and when classifying corner vertices as inside or outside. The code is written in C++. OpenGL is used for rendering, but that's not important. Normal rendering without any CSG operations gives the expected result.
void march(const vec2& cmin, //min x and y for the grid cell
           const vec2& cmax, //max x and y for the grid cell
           std::vector<vec2>& tri,
           float iso,
           float (*cmp1)(const vec2&), //distance from stationary circle
           float (*cmp2)(const vec2&)  //distance from moving circle
           )
{
    unsigned int squareindex = 0;
    float scalar[4];
    vec2 verts[8];

    /* initial setup of the grid cell */
    verts[0] = vec2(cmax.x, cmax.y);
    verts[2] = vec2(cmin.x, cmax.y);
    verts[4] = vec2(cmin.x, cmin.y);
    verts[6] = vec2(cmax.x, cmin.y);

    float s1, s2;

    /**********************************
    ********For-loop of interest******
    *******Set difference between ****
    *******two implicit surfaces******
    **********************************/
    for(int i=0, j=0; i<4; ++i, j+=2){
        s1 = cmp1(verts[j]);
        s2 = cmp2(verts[j]);

        if((s1 < iso)){             //if inside circle1
            if((s2 < iso)){         //if inside circle2
                scalar[i] = s2;     //then set the scalar to the moving circle
            } else {
                scalar[i] = s1;     //only inside circle1
                squareindex |= (1<<i); //mark as inside
            }
        }
        else {
            scalar[i] = s1;         //inside neither circle
        }
    }

    if(squareindex == 0)
        return;

    /* Usual interpolation between edge points to compute
       the new intersection points */
    verts[1] = mix(iso, verts[0], verts[2], scalar[0], scalar[1]);
    verts[3] = mix(iso, verts[2], verts[4], scalar[1], scalar[2]);
    verts[5] = mix(iso, verts[4], verts[6], scalar[2], scalar[3]);
    verts[7] = mix(iso, verts[6], verts[0], scalar[3], scalar[0]);

    for(int i=0; i<10; ++i){ //10 = maximum 3 triangles, + one end token
        int index = triTable[squareindex][i]; //look up our indices for triangulation
        if(index == -1)
            break;
        tri.push_back(verts[index]);
    }
}
This gives me weird jaggies:
(source: mechcore.net)
It looks like the CSG operation is done without interpolation. It just "discards" the whole triangle. Do I need to interpolate in some other way, or combine the vertex scalar values? I'd love some help with this.
A full testcase can be downloaded HERE
EDIT: Basically, my implementation of marching squares works fine. It is my scalar field that is broken, and I wonder what the correct way would look like. Preferably, I'm looking for a general approach to implement the three set operations I discussed above, for the usual primitives (circle, rectangle/square, plane).
EDIT 2: Here are some new images after implementing the answerer's whitepaper:
1.Difference
2.Intersection
3.Union
EDIT 3: I implemented this in 3D too, with proper shading/lighting:
1.Difference between a greater sphere and a smaller sphere
2.Difference between a greater sphere and a smaller sphere in the center, clipped by two planes on both sides, and then union with a sphere in the center.
3.Union between two cylinders.
This is not how you mix the scalar fields. Your scalars say one thing, but your inside/outside flags say another. First merge the fields, then render as if you were rendering a single compound object:
for(int i=0, j=0; i<4; ++i, j+=2){
    s1 = cmp1(verts[j]);
    s2 = cmp2(verts[j]);
    s = max(s1, iso - s2);  // This is the secret sauce
    if(s < iso) {           // inside circle1, but not inside circle2
        squareindex |= (1<<i);
    }
    scalar[i] = s;
}
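For the general approach asked about in the edit, the same idea covers all three set operations. With the common signed-field convention (iso = 0, negative values meaning inside), the textbook combinations are (this is the standard implicit-surface rule, not something taken from the code above):

\[
f_{A \cup B} = \min(f_A, f_B), \qquad
f_{A \cap B} = \max(f_A, f_B), \qquad
f_{A \setminus B} = \max(f_A, -f_B)
\]

With iso = 0, the difference rule reduces to exactly the max(s1, -s2) form used in the snippet above.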
This article might be helpful: Combining CSG modeling with soft blending using Lipschitz-based implicit surfaces.