I'm trying to make a grid-based movement game, but the character is also affected by gravity when it jumps. Also, the character doesn't receive instructions from the keyboard; instead, it executes instructions step by step when the player pushes a "GO" button.
I have gravity affecting the character, but I don't know how to move the character step by step with grid-based movement. Does anyone have an idea or a video tutorial?
The variable moviendose is changed to true when the player presses GO:
```
extends KinematicBody2D

var moviendose = false
var lista_habilidades_jugador = []
onready var tweene = $AnimatedSprite
const GRAVITY = 9.8
var velocity = Vector2.ZERO
const def_habilidades = {
    "Habilidad1": Vector2(16, 0),
    "Habilidad2": Vector2(-16, 0),
    "Habilidad3": Vector2(0, -GRAVITY * 16)
}

func _physics_process(delta):
    velocity.y += GRAVITY
    if lista_habilidades_jugador.size() == 0:
        moviendose = false
    if moviendose == true:
        movimiento()
    velocity = move_and_slide(velocity)

func movimiento():
    for mov in lista_habilidades_jugador:
        if mov == "Habilidad1":
            velocity = velocity.move_toward(def_habilidades[mov], 5)
            tweene.flip_h = false
            tweene.play("correr")
        if mov == "Habilidad2":
            velocity = lerp(velocity + def_habilidades[mov], Vector2.ZERO, 20)
            tweene.flip_h = true
        if mov == "Habilidad3":
            velocity = velocity + def_habilidades[mov]
        self.lista_habilidades_jugador.pop_front()
```
![bar of movement](https://i.stack.imgur.com/cnwqA.png)
I'll break this into issues:
Units
Sequencing movement
Grid movement
Overshooting
Jump
Units
Presumably, this is an earth-like gravity:
const GRAVITY = 9.8
Which means this is an acceleration, in meters per second squared.
But this is a 2D environment:
extends KinematicBody2D
So movement units are pixels, not meters. Thus the velocity here is in pixels per second:
velocity = move_and_slide(velocity)
And thus here you are adding meters per second squared to pixels per second, and that is not right:
velocity.y += GRAVITY
I bet it looks about right anyway, I'll get back to why.
Do you remember physics? Velocity is displacement over time (speed is distance over time). And we have that time; we call it delta. So this would be meters per second:
GRAVITY * delta
And if we add a conversion constant for pixels to meters, we have:
GRAVITY * delta * pixels_per_meter
How big is that constant? As big as you want. Presumably something noticeable; you need to check the size of your sprites to come up with a pixel scaling that works for you.
That means the constant is big, but the time between frames, which we call delta, is small: less than a second. So delta makes the value small and the pixels-per-meter constant makes it big, and I expect these effects to roughly compensate each other, so it kind of looks good anyway… But having them 1. makes it correct, 2. makes it frame-rate independent, 3. gives you control over the scaling.
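To get a feel for why they compensate (with a hypothetical scale of 16 pixels per meter and 60 physics frames per second): 9.8 * 16 / 60 ≈ 2.6 pixels per second of velocity change per frame, which is the same order of magnitude as the bare 9.8 the code currently adds every frame.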
The alternative is to come up with the gravity value in pixels per second squared. In other words, instead of figuring out a conversion constant from pixels to meters, tweak the gravity value until it works like you want (which is equivalent to having it pre-multiplied by the constant), and just keep delta:
GRAVITY * delta
Now, what is this?
velocity = velocity.move_toward(def_habilidades[mov] , 5)
We are changing the velocity towards some known value in steps of 5. That 5 is a delta velocity (delta meaning change here). You would want an acceleration here to make this frame-rate independent. Something like this:
velocity = velocity.move_toward(def_habilidades[mov] , 5 * delta)
Which also implies passing delta to the movement method.
And what the heck is this?
velocity = lerp(velocity + def_habilidades[mov], Vector2.ZERO,20)
I'll assume the value you are adding to the velocity is also a velocity. I'll challenge that later; it is not the point here.
The point is that you are using lerp in a weird way. The lerp function is meant to give you a value between the first two arguments. The third argument controls how close the result is to the first or the second argument. When the third argument is zero, you get the first argument as the result. And when the third argument is one, you get the second argument. But the third argument is 20?
I don't know what the intention here is. Do you want to go to zero and overshoot?
I don't know. But as far as units are concerned, you are changing the velocity not by a delta but by a factor. This is not an acceleration, this is a jerk (look it up).
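For what it's worth, lerp(a, b, t) = a + t * (b - a), so with the second argument Vector2.ZERO and t = 20 the result is a * (1 - 20) = -19 * a: the velocity flips direction and gets multiplied by 19 each frame, which is almost certainly not what you want.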
Sequencing movement
Here you have an unusual structure:
func movimiento(): # you probably will add delta as parameter here
    for mov in lista_habilidades_jugador:
        # PROCESS THE MOVEMENT
        self.lista_habilidades_jugador.pop_front()
You iterate over a list, and on each iteration - after processing the movement - you remove an element from the list. Yes, the same list.
If the list starts with three items:
The first iteration looks at the first element, then removes it from the front. The list now has two elements.
The second iteration looks at the second index, which now holds the original third element, then removes the front again (the original second element). The list now has one element.
There is no third iteration.
Don't do that. In general you want to take one element and process it. The minimum would be like this:
func movimiento():
    var mov = self.lista_habilidades_jugador.pop_front()
    # PROCESS THE MOVEMENT
The next time you call the movement method it will pull the next element, and so on.
Ah, but you are calling the movement method each physics frame, and presumably the motion should take more than one physics frame. So you should hold off until the current movement finishes.
This is how I would structure it:
func _physics_process(delta):
    velocity.y += GRAVITY
    movimiento()
    velocity = move_and_slide(velocity)
Then the movement method can set the variable:
func movimiento():
    # if there are no movements we do nothing
    if lista_habilidades_jugador.size() == 0:
        return
    # get the first movement
    var mov = self.lista_habilidades_jugador[0]
    moviendose = true
    # PROCESS THE MOVEMENT
    if not moviendose: # we will come back to this
        # we are done moving
        self.lista_habilidades_jugador.pop_front()
Grid movement
To process each movement we need to update the velocity and know when it reached the destination. We can't do that if we just have a velocity.
In general we don't want to define grid movement with velocities, but with displacements. That will be an issue for the jump, but we will get to that.
Thus, I'll assume that the values you have in your dictionary are not velocities but displacements. In fact, we are going to store a target:
var target := Vector2.ZERO
So we can keep track of where we have to move towards. And we need to update that when we pull a new movement from the list:
func movimiento():
    # if there are no movements we do nothing
    if lista_habilidades_jugador.size() == 0:
        return
    # get the first movement
    var mov = self.lista_habilidades_jugador[0]
    if not moviendose:
        # just starting a new movement
        var displacement = def_habilidades[mov]
        target = position + displacement
        moviendose = true
    # PROCESS THE MOVEMENT
    if not moviendose: # we will come back to this
        # we are done moving
        self.lista_habilidades_jugador.pop_front()
The other thing we need is how long it should take to move. I'll come up with some value, you tweak it as you see fit:
var step_time := 0.5
Do you remember physics (again)? The velocity is displacement over time:
velocity = displacement / step_time
So:
func movimiento():
    # if there are no movements we do nothing
    if lista_habilidades_jugador.size() == 0:
        return
    # get the first movement
    var mov = self.lista_habilidades_jugador[0]
    if not moviendose:
        # just starting a new movement
        var displacement = def_habilidades[mov]
        velocity = displacement / step_time
        target = position + displacement
        moviendose = true
    # PROCESS THE MOVEMENT
    if not moviendose: # we will come back to this
        # we are done moving
        self.lista_habilidades_jugador.pop_front()
And what is left is finding out if we reached the target. A first approximation is this:
func movimiento():
    # if there are no movements we do nothing
    if lista_habilidades_jugador.size() == 0:
        return
    # get the first movement
    var mov = self.lista_habilidades_jugador[0]
    if not moviendose:
        # just starting a new movement
        var displacement = def_habilidades[mov]
        velocity = displacement / step_time
        target = position + displacement
        moviendose = position.distance_to(target) > 0
    if not moviendose:
        # we are done moving
        self.lista_habilidades_jugador.pop_front()
And we are done. Right? RIGHT?
Overshooting
We are not done. We move some distance each frame, so the distance to the target will likely never hit exactly zero. And that is without even talking about floating point errors.
Thus, instead of finding out if we reached the target, we will find out if we will overshoot the target. And to do that we need to compute how we will move. Start by bringing delta in (I'm also adding type information):
func _physics_process(delta:float) -> void:
    velocity.y += GRAVITY
    movimiento(delta)
    velocity = move_and_slide(velocity)

func movimiento(delta:float) -> void:
    # if there are no movements we do nothing
    if lista_habilidades_jugador.size() == 0:
        return
    # get the first movement
    var mov:String = self.lista_habilidades_jugador[0]
    if not moviendose:
        # just starting a new movement
        var displacement:Vector2 = def_habilidades[mov]
        velocity = displacement / step_time
        target = position + displacement
        moviendose = position.distance_to(target) > 0
    if not moviendose:
        # we are done moving
        self.lista_habilidades_jugador.pop_front()
Now we can compute how much we will move this frame:
velocity.length() * delta
And compare that with the distance to the target:
moviendose = position.distance_to(target) > velocity.length() * delta
Now that we know we will overshoot, we should prevent it. The first idea is to snap to the target:
if not moviendose:
    # we are done moving
    position = target
    self.lista_habilidades_jugador.pop_front()
For reference I'll also mention that we can compute the velocity to reach the target this frame (within floating point error):
if not moviendose:
    # we are done moving
    velocity = velocity.normalized() * position.distance_to(target) / delta
    self.lista_habilidades_jugador.pop_front()
Here we normalize the velocity to get just its direction. And do you remember physics? Yes, yes. That is distance over time.
However we will not use this one, because it would mess with the jump…
Jump
But the jump does not work like that. You cannot define a jump the same way. There are six ways to define a vertical jump:
By Gravity and Time
By Gravity and Speed
By Gravity and Height
By Time and Speed
By Time and Height
By Speed and Height
Defining the jump by gravity and speed is very common, at least among beginners, because you already have a gravity and then you define some upward velocity and you have a jump.
It is, however, a good idea to define the jump using height, because then you know - by design - how high the character can jump, which can be useful for scenario design.
In fact, in the spirit of keeping the representation of movement by displacement, a definition by gravity and height is convenient. So I'll go with that. Yet we will still need to compute the velocity, which we will do like this:
half_jump_time = sqrt(-2.0 * max_height / gravity)
jump_time = half_jump_time * 2.0
vertical_speed = -gravity * half_jump_time
That comes from the equations of motion.
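A quick sketch of where those formulas come from (standard constant-acceleration kinematics; note that Godot's y axis points down, so an upward jump has negative vertical speed and a negative height $h$):

$$v(t) = v_0 + g\,t \qquad y(t) = v_0\,t + \tfrac{1}{2} g\,t^2$$

At the peak the vertical speed is zero, so $v_0 = -g\,t_{half}$. Substituting into $y(t_{half}) = h$ gives $h = -\tfrac{1}{2} g\,t_{half}^2$, hence $t_{half} = \sqrt{-2h/g}$, matching the code above.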
However, note that the vertical component will not be the destination's vertical position, but the height of the jump, which is reached halfway through the jump. So for consistency's sake, when you specify a jump, the displacement will be to the highest point of the jump…
We will know it is a jump because it has a vertical component at all.
if not moviendose:
    # just starting a new movement
    var displacement:Vector2 = def_habilidades[mov]
    target = position + displacement
    if displacement.y == 0:
        velocity = displacement / step_time
    else:
        var half_jump_time = sqrt(-2.0 * displacement.y / gravity)
        var jump_time = half_jump_time * 2.0
        velocity.y = -gravity * half_jump_time
        velocity.x = displacement.x / half_jump_time
And that highlights another problem: we cannot say the motion ended when we reached the target; we also need to check whether the character is on the ground.
For that, I'll add one extra variable (at the top of the file):
var target_reached := false
So we can say:
var distance:float = velocity.length() * delta
target_reached = target_reached or position.distance_to(target) <= distance
moviendose = not (target_reached and is_on_floor())
if not moviendose:
    # we are done moving
    target_reached = false
    position = target
    self.lista_habilidades_jugador.pop_front()
Unless I forgot to include something, that should be it.
Addendum: Yes, I forgot something. Where the character lands is not the target position (because the target position is the midpoint of the jump). You could, in theory, snap mid-jump, but I don't think that is of much use. Instead, consider snapping the position (or issuing a move) once landed, to the nearest position on the grid. You can use the snapped method on the vector (e.g. position.snapped(Vector2(16, 16)) for a 16-pixel grid) to find out where it aligns to the grid size.
I am using a Google Tango tablet to acquire point cloud data and RGB camera images, and I want to create a 3D scan of the room. For that I need to map 2D image pixels to point cloud points. I will be doing this with a lot of point clouds and corresponding images, so I need to write a script with two inputs, 1. a point cloud and 2. an image taken from the same point in the same direction, that outputs a colored point cloud. How should I approach this, and which platforms would be simplest to use?
Here is the math to map a 3D point v to 2D pixel space in the camera image (assuming that v already incorporates the extrinsic camera position and orientation, see note at bottom*):
// Project to tangent space.
vec2 imageCoords = v.xy/v.z;
// Apply radial distortion.
float r2 = dot(imageCoords, imageCoords);
float r4 = r2*r2;
float r6 = r2*r4;
imageCoords *= 1.0 + k1*r2 + k2*r4 + k3*r6;
// Map to pixel space.
vec3 pixelCoords = cameraTransform*vec3(imageCoords, 1);
Where cameraTransform is the 3x3 matrix:
[ fx 0 cx ]
[ 0 fy cy ]
[ 0 0 1 ]
with fx, fy, cx, cy, k1, k2, k3 from TangoCameraIntrinsics.
pixelCoords is declared vec3 but is actually 2D in homogeneous coordinates. The third coordinate is always 1 and so can be ignored for practical purposes.
Note that if you want texture coordinates instead of pixel coordinates, that is just another linear transform that can be premultiplied onto cameraTransform ahead of time (as is any top-to-bottom vs. bottom-to-top scanline addressing).
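As a sketch of what that premultiplication looks like (assuming an image w pixels wide and h pixels tall, with no scanline flip):

$$T_{uv} = \begin{bmatrix} 1/w & 0 & 0 \\ 0 & 1/h & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \text{cameraTransform}$$

Using $T_{uv}$ in place of cameraTransform then yields normalized texture coordinates directly.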
As for what "platform" (which I loosely interpreted as "language") is simplest, the native API seems to be the most straightforward way to get your hands on camera pixels, though it appears people have also succeeded with Unity and Java.
* Points delivered by TangoXYZij already incorporate the depth camera extrinsic transform. Technically, because the current developer tablet shares the same hardware between depth and color image acquisition, you won't be able to get a color image that exactly matches unless both your device and your scene are stationary. Fortunately in practice, most applications can probably assume that neither the camera pose nor the scene changes enough in one frame time to significantly affect color lookup.
This answer is not original; it is simply meant as a convenience for Unity users who would like the correct answer, as provided by @rhashimoto, worked out for them. My contribution (hopefully) is providing code that reduces the normal 16 multiplies and 12 adds (given Unity only does 4x4 matrices) to 2 multiplies and 2 adds by dropping all of the zero results. I ran a little under a million points through the test, checking each time that my calculations agreed with the basic matrix calculations - defined as the absolute difference between the two results being less than machine epsilon. I'm as comfortable with this as I can be, knowing that @rhashimoto may show up and poke a giant hole in it :-)
If you want to switch back and forth, remember this is C#, so the USEMATRIXMATH define must appear at the beginning of the file.
Given there's only one Tango device right now, and I'm assuming the intrinsics are constant across all of the devices, I just dumped them in as constants, such that
fx = 1042.73999023438
fy = 1042.96997070313
cx = 637.273986816406
cy = 352.928985595703
k1 = 0.228532999753952
k2 = -0.663019001483917
k3 = 0.642908990383148
Yes, they could be declared as named constants, which would make things more readable, and C# is probably smart enough to optimize them out - however, I spent too much of my life in Agner Fog's stuff, and will always be paranoid.
The commented out code at the bottom is for testing the difference, should you desire. You'll have to uncomment some other stuff, and comment out the returns if you want to test the results.
My thanks again to @rhashimoto; this is far, far better than what I had.
I have stayed true to his logic. Remember these are pixel coordinates, not UV coordinates - he is correct that you can premultiply the transform to get normalized UV values, but since he schooled me on this once already, I will stick with exactly the math he presented before I fiddle with it too much :-)
static public Vector2 PictureUV(Vector3 tangoDepthPoint)
{
    Vector2 imageCoords = new Vector2(tangoDepthPoint.x / tangoDepthPoint.z, tangoDepthPoint.y / tangoDepthPoint.z);
    float r2 = Vector2.Dot(imageCoords, imageCoords);
    float r4 = r2*r2;
    float r6 = r2*r4;
    imageCoords *= 1.0f + 0.228532999753952f*r2 + -0.663019001483917f*r4 + 0.642908990383148f*r6;
    Vector3 ic3 = new Vector3(imageCoords.x, imageCoords.y, 1);
#if USEMATRIXMATH
    Matrix4x4 cameraTransform = new Matrix4x4();
    cameraTransform.SetRow(0, new Vector4(1042.73999023438f, 0, 637.273986816406f, 0));
    cameraTransform.SetRow(1, new Vector4(0, 1042.96997070313f, 352.928985595703f, 0));
    cameraTransform.SetRow(2, new Vector4(0, 0, 1, 0));
    cameraTransform.SetRow(3, new Vector4(0, 0, 0, 1));
    Vector3 pixelCoords = cameraTransform * ic3;
    return new Vector2(pixelCoords.x, pixelCoords.y);
#else
    //float v1 = 1042.73999023438f * imageCoords.x + 637.273986816406f;
    //float v2 = 1042.96997070313f * imageCoords.y + 352.928985595703f;
    //float v3 = 1;
    return new Vector2(1042.73999023438f * imageCoords.x + 637.273986816406f, 1042.96997070313f * imageCoords.y + 352.928985595703f);
#endif
    //float dx = Math.Abs(v1 - pixelCoords.x);
    //float dy = Math.Abs(v2 - pixelCoords.y);
    //float dz = Math.Abs(v3 - pixelCoords.z);
    //if (dx > float.Epsilon || dy > float.Epsilon || dz > float.Epsilon)
    //    UnityEngine.Debug.Log("Well, that didn't work");
    //return new Vector2(v1, v2);
}
One final note: the code he provided is GLSL - if you're just using this for pretty pictures, use that. This version is for those who actually need to perform additional processing.
I am looking into the possibility of using multiple iBeacons to do a 'rough' indoor position location. The application is a kind of 'museum' setting, and it would be easier to be able to form a grid of locations for the different objects than to use individual beacons (although that might not be impossible either).
Are there examples of, or experiences with, using multiple beacons to triangulate some kind of location, or some logic to help me on the way to writing it myself?
I have been running some experiments to get a precise position using three beacons.
Results of trilateration
Unfortunately, the results were very disappointing in terms of quality. There were mainly two issues:
In non-controlled environments, where you can find metals and other objects that affect the signal, the received signal strength of the beacons changes so often that it seems impossible to get an error range below 5 meters.
Depending on the way the user is handling the receiver device, the readings can change a lot as well. If the user puts his/her hand over the Bluetooth antenna, the algorithm gets low signals as input and concludes that the beacons are very far from the device. See this image for the precise location of the Bluetooth antenna.
Possible solutions
After talking with an Apple engineer who actively discouraged me from going down this path, the option I feel more inclined to use right now is brute force: set up a beacon every X meters (X being the maximum error tolerated in the system), so we can track the position of a given device on this grid of beacons by calculating which beacon is closest and assuming the device is at the same position.
Trilateration algorithm
However, for the sake of completeness, I share below the core function of the trilateration algorithm. It's based on paragraph 3 ("Three distances known") of this article.
- (CGPoint)getCoordinateWithBeaconA:(CGPoint)a beaconB:(CGPoint)b beaconC:(CGPoint)c distanceA:(CGFloat)dA distanceB:(CGFloat)dB distanceC:(CGFloat)dC {
    CGFloat W, Z, x, y, y2;
    W = dA*dA - dB*dB - a.x*a.x - a.y*a.y + b.x*b.x + b.y*b.y;
    Z = dB*dB - dC*dC - b.x*b.x - b.y*b.y + c.x*c.x + c.y*c.y;
    x = (W*(c.y-b.y) - Z*(b.y-a.y)) / (2 * ((b.x-a.x)*(c.y-b.y) - (c.x-b.x)*(b.y-a.y)));
    y = (W - 2*x*(b.x-a.x)) / (2*(b.y-a.y));
    // y2 is a second measure of y to mitigate errors
    y2 = (Z - 2*x*(c.x-b.x)) / (2*(c.y-b.y));
    y = (y + y2) / 2;
    return CGPointMake(x, y);
}
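For reference, here is where W and Z come from (a sketch of the standard derivation, with (x, y) the unknown position):

$$(x - a_x)^2 + (y - a_y)^2 = d_A^2 \qquad (x - b_x)^2 + (y - b_y)^2 = d_B^2 \qquad (x - c_x)^2 + (y - c_y)^2 = d_C^2$$

Subtracting the second equation from the first, and the third from the second, cancels the quadratic terms and leaves two linear equations:

$$2(b_x - a_x)\,x + 2(b_y - a_y)\,y = W \qquad 2(c_x - b_x)\,x + 2(c_y - b_y)\,y = Z$$

This is the 2x2 linear system the function solves, computing y twice and averaging to mitigate error.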
Here is an open-source Java library that will perform the trilateration/multilateration:
https://github.com/lemmingapex/Trilateration
It uses a popular nonlinear least squares optimizer, the Levenberg-Marquardt algorithm, from Apache Commons Math.
double[][] positions = new double[][] { { 5.0, -6.0 }, { 13.0, -15.0 }, { 21.0, -3.0 }, { 12.42, -21.2 } };
double[] distances = new double[] { 8.06, 13.97, 23.32, 15.31 };
NonLinearLeastSquaresSolver solver = new NonLinearLeastSquaresSolver(new TrilaterationFunction(positions, distances), new LevenbergMarquardtOptimizer());
Optimum optimum = solver.solve();
// the answer
double[] calculatedPosition = optimum.getPoint().toArray();
// error and geometry information
RealVector standardDeviation = optimum.getSigma(0);
RealMatrix covarianceMatrix = optimum.getCovariances(0);
Most scholarly examples, like the one on Wikipedia, deal with exactly three circles and assume perfectly accurate information. These circumstances allow for much simpler problem formulations with exact answers, and are usually not satisfactory for practical situations.
For the problem in R2 or R3 Euclidean space with distances that contain measurement error, an area (ellipse) or volume (ellipsoid) of interest is usually obtained instead of a point. If a point estimate is desired instead of a region, the area centroid or volume centroid should be used. R2 space requires at least 3 non-degenerate points and distances to obtain a unique region; similarly, R3 space requires at least 4 non-degenerate points and distances to obtain a unique region.
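If you do want a point estimate, the solver above is minimizing the sum of squared range residuals, typically an objective of the form

$$\hat{p} = \arg\min_{p} \sum_{i} \left( \lVert p - p_i \rVert - d_i \right)^2$$

where the $p_i$ are the known beacon positions and the $d_i$ are the measured distances.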
I looked into this. The term you want is trilateration. (In triangulation you have angles from 3 known points. In trilateration you have distances from 3 known points.) If you Google it you should find several articles, including one on Wikipedia. It involves solving a set of 3 simultaneous equations. The documents I saw were for 3D trilateration - 2D is easier because you can just drop the Z term.
What I found was abstract math. I haven't taken the time yet to map the general algorithm into specific code, but I plan on tackling it at some point.
Note that the results you get will be VERY crude, especially in anything but an empty room. The signals are weak enough that a person, a statue, or anything that blocks line of sight will increase your distance readings pretty significantly. You might even have places in a building where constructive interference (mostly from the walls) makes some places read as much closer than they actually are.
Accurate indoor positioning with iBeacon will be challenging for the following reasons:
As pointed out in earlier comments, the iBeacon signal tends to fluctuate a lot. The reasons include the multipath effect, dynamic obstructions between the phone and the iBeacon when a person is moving, other 2.4 GHz interference, and more. So ideally you don't want to trust any single packet's data, and instead do some averaging over several packets from the same beacon. That requires the phone/beacon distance not to change too much across those packets. General BLE beacons (like those from StickNFind) can easily be set to a 10 Hz beaconing rate. However, for iBeacon that will be hard, because:
iBeacon's beaconing frequency probably cannot be higher than 1 Hz. I will be glad if anyone can point to a source that says otherwise, but all the information I've seen so far confirms this assertion. That actually makes sense, since most iBeacons are battery powered and a high frequency significantly impacts battery life. Considering that people's average walking speed is 5.3 km/h (~1.5 m/s), even if you use a modest 3 beacon packets for the averaging, it will be hard to get ~5 m accuracy.
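To make that concrete: at a 1 Hz beaconing rate, averaging 3 packets spans roughly 3 seconds, and 3 s × 1.5 m/s = 4.5 m of user movement during the averaging window alone, which is already comparable to a 5 m error budget.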
On the other hand, if you could increase the iBeacon frequency to more than 10 Hz (which I doubt is possible), then it is possible to get 5 m or better accuracy with a suitable processing method. First, trivial solutions based on the inverse-square law, like trilateration, often do not perform well, because in practice the distance/RSSI relationship for different beacons is often way off from the inverse-square law, for reason 1 above. But as long as the RSSI is relatively stable for a certain beacon in any certain location (which usually is the case), you can use an approach called fingerprinting to achieve higher accuracy. A common method used for fingerprinting is kNN (k-Nearest Neighbors).
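For illustration, here is a minimal sketch of RSSI fingerprinting with kNN in Python (not from the original answer; the survey data and layout are hypothetical, and in practice you would average several packets per beacon and survey many reference points):

```python
import math

# Offline phase: reference points with their averaged RSSI per beacon ID.
# Hypothetical survey data: {(x, y): {beacon_id: mean_rssi}}
fingerprints = {
    (0.0, 0.0): {"b1": -55, "b2": -70, "b3": -78},
    (5.0, 0.0): {"b1": -68, "b2": -58, "b3": -75},
    (0.0, 5.0): {"b1": -72, "b2": -74, "b3": -56},
}

def estimate_position(live_rssi, k=2):
    """Online phase: average the k reference points whose stored
    signatures are closest (in RSSI space) to the live reading."""
    def signature_distance(stored):
        # Euclidean distance between RSSI vectors over shared beacons.
        common = set(stored) & set(live_rssi)
        if not common:
            return float("inf")
        return math.sqrt(sum((stored[b] - live_rssi[b]) ** 2 for b in common))

    nearest = sorted(fingerprints, key=lambda p: signature_distance(fingerprints[p]))[:k]
    x = sum(p[0] for p in nearest) / len(nearest)
    y = sum(p[1] for p in nearest) / len(nearest)
    return (x, y)

print(estimate_position({"b1": -60, "b2": -62, "b3": -76}))
```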
Update 2014-04-24
Some iBeacons can broadcast at more than 1 Hz; Estimote, for example, uses 5 Hz as the default. However, according to this link: "This is Apple restriction. IOS returns beacons update every second, no matter how frequently device is advertising.". There is another comment there (likely from the Estimote vendor) saying "Our beacons can broadcast much faster and it may improve results and measurement". So whether a higher iBeacon frequency is beneficial is not clear.
For those who need @Javier Chávarri's trilateration function for Android devices (to save some time):
public static Location getLocationWithTrilateration(Location beaconA, Location beaconB, Location beaconC, double distanceA, double distanceB, double distanceC){
    double bAlat = beaconA.getLatitude();
    double bAlong = beaconA.getLongitude();
    double bBlat = beaconB.getLatitude();
    double bBlong = beaconB.getLongitude();
    double bClat = beaconC.getLatitude();
    double bClong = beaconC.getLongitude();
    double W, Z, foundBeaconLat, foundBeaconLong, foundBeaconLongFilter;
    W = distanceA * distanceA - distanceB * distanceB - bAlat * bAlat - bAlong * bAlong + bBlat * bBlat + bBlong * bBlong;
    Z = distanceB * distanceB - distanceC * distanceC - bBlat * bBlat - bBlong * bBlong + bClat * bClat + bClong * bClong;
    foundBeaconLat = (W * (bClong - bBlong) - Z * (bBlong - bAlong)) / (2 * ((bBlat - bAlat) * (bClong - bBlong) - (bClat - bBlat) * (bBlong - bAlong)));
    foundBeaconLong = (W - 2 * foundBeaconLat * (bBlat - bAlat)) / (2 * (bBlong - bAlong));
    //`foundBeaconLongFilter` is a second measure of `foundBeaconLong` to mitigate errors
    foundBeaconLongFilter = (Z - 2 * foundBeaconLat * (bClat - bBlat)) / (2 * (bClong - bBlong));
    foundBeaconLong = (foundBeaconLong + foundBeaconLongFilter) / 2;
    Location foundLocation = new Location("Location");
    foundLocation.setLatitude(foundBeaconLat);
    foundLocation.setLongitude(foundBeaconLong);
    return foundLocation;
}
If you're anything like me and don't like maths, you might want to do a quick search for "indoor positioning sdk". There are lots of companies offering indoor positioning as a service.
Shameless plug: I work for indoo.rs and can recommend this service. It also includes routing and such on top of "just" indoor positioning.
My architect/manager wrote the following algorithm:
public static Location getLocationWithCenterOfGravity(Location beaconA, Location beaconB, Location beaconC, double distanceA, double distanceB, double distanceC) {
    //Every meter there are approx 4.5 points
    double METERS_IN_COORDINATE_UNITS_RATIO = 4.5;

    //http://stackoverflow.com/a/524770/663941
    //Find Center of Gravity
    double cogX = (beaconA.getLatitude() + beaconB.getLatitude() + beaconC.getLatitude()) / 3;
    double cogY = (beaconA.getLongitude() + beaconB.getLongitude() + beaconC.getLongitude()) / 3;
    Location cog = new Location("Cog");
    cog.setLatitude(cogX);
    cog.setLongitude(cogY);

    //Nearest Beacon
    Location nearestBeacon;
    double shortestDistanceInMeters;
    if (distanceA < distanceB && distanceA < distanceC) {
        nearestBeacon = beaconA;
        shortestDistanceInMeters = distanceA;
    } else if (distanceB < distanceC) {
        nearestBeacon = beaconB;
        shortestDistanceInMeters = distanceB;
    } else {
        nearestBeacon = beaconC;
        shortestDistanceInMeters = distanceC;
    }

    //http://www.mathplanet.com/education/algebra-2/conic-sections/distance-between-two-points-and-the-midpoint
    //Distance between nearest beacon and COG
    double distanceToCog = Math.sqrt(Math.pow(cog.getLatitude() - nearestBeacon.getLatitude(), 2)
            + Math.pow(cog.getLongitude() - nearestBeacon.getLongitude(), 2));

    //Convert shortest distance in meters into coordinates units.
    double shortestDistanceInCoordinationUnits = shortestDistanceInMeters * METERS_IN_COORDINATE_UNITS_RATIO;

    //http://math.stackexchange.com/questions/46527/coordinates-of-point-on-a-line-defined-by-two-other-points-with-a-known-distance?rq=1
    //On the line between Nearest Beacon and COG find shortestDistance point apart from Nearest Beacon
    double t = shortestDistanceInCoordinationUnits / distanceToCog;
    Location pointsDiff = new Location("PointsDiff");
    pointsDiff.setLatitude(cog.getLatitude() - nearestBeacon.getLatitude());
    pointsDiff.setLongitude(cog.getLongitude() - nearestBeacon.getLongitude());

    Location tTimesDiff = new Location("tTimesDiff");
    tTimesDiff.setLatitude(pointsDiff.getLatitude() * t);
    tTimesDiff.setLongitude(pointsDiff.getLongitude() * t);

    //Add t times diff with nearestBeacon to find coordinates at a distance from nearest beacon in line to COG.
    Location userLocation = new Location("UserLocation");
    userLocation.setLatitude(nearestBeacon.getLatitude() + tTimesDiff.getLatitude());
    userLocation.setLongitude(nearestBeacon.getLongitude() + tTimesDiff.getLongitude());

    return userLocation;
}
Calculate the centre of gravity for the triangle (3 beacons)
Calculate the shortest distance / nearest beacon
Calculate the distance between the nearest beacon and the centre of gravity
Convert the shortest distance to coordinate units, which is just a constant ratio he used to tune accuracy; you can test by varying the constant
Calculate the distance delta
Add the delta to the nearest beacon's x,y.
After testing it, I found it accurate to 5 meters.
Please comment with your test results, so we can refine it.
I've implemented a very simple fingerprint algorithm for Android 4.4, tested in a relatively 'bad' environment:
nearly 10 Wi-Fi APs nearby.
several other Bluetooth signals nearby.
The accuracy seems to be 5-8 meters, and it depends on how I placed the 3 iBeacon broadcasters.
The algorithm is quite simple, and I think you can implement one yourself; the steps are:
load the indoor map.
sample the map at all the pending positioning points.
record all the sampling data; the data should include:
map coordinates, beacon signals and their RSSI.
Then, when you start positioning, it's just a reverse of the preceding steps.
We are also trying to find the best way to precisely locate someone in a room using iBeacons. The thing is that the beacon signal power is not constant, and is affected by other 2.4 GHz signals, metal objects, etc., so to achieve maximum precision it is necessary to calibrate each beacon individually, once it has been set in its desired position (and to run some field tests to see the signal fluctuations when other Bluetooth devices are present).
We also have some iBeacons from Estimote (the same as in Konrad Dzwinel's video), and they have already developed some tech demos of what can be done with the iBeacons. Within their app it is possible to see a radar on which the iBeacons are shown. Sometimes it is pretty accurate, but sometimes it is not (and it seems phone movement is not considered when calculating positions). Check out the demo in the video we made here: http://goo.gl/98hiza
Although in theory 3 iBeacons should be enough to achieve a good precision, maybe in real world situations more beacons are needed to ensure the precision you are looking for.
The thing that really helped me was this project on code.google.com: https://code.google.com/p/wsnlocalizationscala/ It contains lots of code, including several trilateration algorithms, all written in C#. It's a big library, but not really meant to be used "out of the box".
Please check the reference https://proximi.io/accurate-indoor-positioning-bluetooth-beacons/
The Proximi SDK will take care of the triangulation. This SDK provides libraries that handle all the logic for beacon positioning, triangulation and filtering automatically in the background. In addition to beacons, you can combine IndoorAtlas, Wi-Fi, GPS and cellular positioning.
I found Vishnu Prahbu's solution very useful. I ported it to C#, in case anybody needs it.
public static PointF GetLocationWithCenterOfGravity(PointF a, PointF b, PointF c, float dA, float dB, float dC)
{
    //http://stackoverflow.com/questions/20332856/triangulate-example-for-ibeacons
    var METERS_IN_COORDINATE_UNITS_RATIO = 1.0f;

    //http://stackoverflow.com/a/524770/663941
    //Find Center of Gravity
    var cogX = (a.X + b.X + c.X) / 3;
    var cogY = (a.Y + b.Y + c.Y) / 3;
    var cog = new PointF(cogX, cogY);

    //Nearest Beacon
    PointF nearestBeacon;
    float shortestDistanceInMeters;
    if (dA < dB && dA < dC)
    {
        nearestBeacon = a;
        shortestDistanceInMeters = dA;
    }
    else if (dB < dC)
    {
        nearestBeacon = b;
        shortestDistanceInMeters = dB;
    }
    else
    {
        nearestBeacon = c;
        shortestDistanceInMeters = dC;
    }

    //http://www.mathplanet.com/education/algebra-2/conic-sections/distance-between-two-points-and-the-midpoint
    //Distance between nearest beacon and COG
    var distanceToCog = (float)(Math.Sqrt(Math.Pow(cog.X - nearestBeacon.X, 2)
        + Math.Pow(cog.Y - nearestBeacon.Y, 2)));

    //Convert shortest distance in meters into coordinates units.
    var shortestDistanceInCoordinationUnits = shortestDistanceInMeters * METERS_IN_COORDINATE_UNITS_RATIO;

    //http://math.stackexchange.com/questions/46527/coordinates-of-point-on-a-line-defined-by-two-other-points-with-a-known-distance?rq=1
    //On the line between Nearest Beacon and COG find shortestDistance point apart from Nearest Beacon
    var t = shortestDistanceInCoordinationUnits / distanceToCog;
    var pointsDiff = new PointF(cog.X - nearestBeacon.X, cog.Y - nearestBeacon.Y);
    var tTimesDiff = new PointF(pointsDiff.X * t, pointsDiff.Y * t);

    //Add t times diff with nearestBeacon to find coordinates at a distance from nearest beacon in line to COG.
    var userLocation = new PointF(nearestBeacon.X + tTimesDiff.X, nearestBeacon.Y + tTimesDiff.Y);

    return userLocation;
}
Alternative Equation
- (CGPoint)getCoordinateWithBeaconA:(CGPoint)a beaconB:(CGPoint)b beaconC:(CGPoint)c distanceA:(CGFloat)dA distanceB:(CGFloat)dB distanceC:(CGFloat)dC {
    CGFloat x, y;
    x = ( ( (pow(dA,2)-pow(dB,2)) + (pow(b.x,2)-pow(a.x,2)) + (pow(b.y,2)-pow(a.y,2)) ) * (2*c.y-2*b.y) - ( (pow(dB,2)-pow(dC,2)) + (pow(c.x,2)-pow(b.x,2)) + (pow(c.y,2)-pow(b.y,2)) ) * (2*b.y-2*a.y) ) / ( (2*b.x-2*c.x)*(2*b.y-2*a.y) - (2*a.x-2*b.x)*(2*c.y-2*b.y) );
    y = ( (pow(dA,2)-pow(dB,2)) + (pow(b.x,2)-pow(a.x,2)) + (pow(b.y,2)-pow(a.y,2)) + x*(2*a.x-2*b.x) ) / (2*b.y-2*a.y);
    return CGPointMake(x, y);
}
Whenever I have a character and I want him to move to an object, I always have to convert to an angle and back, e.g.:
int adjacent = myPosition.X - thatPosition.X;
int opposite = myPosition.Y - thatPosition.Y;
double angle = Math.atan2(adjacent, opposite);
myPosition.X += Math.cos(angle);
myPosition.Y += Math.sin(angle);
Is there an easier way to move an object toward another using just vectors, without converting to an angle and back? If so, I would appreciate it if you showed how and/or pointed me to a site that could show me.
P.S. I am coding in XNA.
Yes, you can minimize the trig and take a more linear-algebra approach. That would look something like this:
//class scope fields
Vector2 myPosition, velocity, myDestination;
float speed;
//in the initialize method
myPosition = new Vector2(?, ?);
myDestination = new Vector2(?, ?);
speed = ?f;//usually refers to "units (or pixels) per second"
//in the update method to change the direction traveled by changing the destination
float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
myDestination = new Vector2(?', ?');//change direction by changing destination
velocity = Vector2.Normalize(myDestination - myPosition);
velocity *= speed;
myPosition += velocity * elapsed;
//or instead of changing destination to change velocity direction, you can simply rotate the velocity vector
velocity = Vector2.Transform(velocity, Matrix.CreateRotationZ(someAngle));
velocity.Normalize();
velocity *= speed;
myPosition += velocity * elapsed;
The only angles/trig used here is the trig embedded in the CreateRotationZ() method, and that happens behind the scenes in the XNA framework.
Mornin' SO!
I'm just trying to hone my math-fu, and I have some questions regarding Cocos2D in particular. Since Cocos2D wants to 'simplify' things, all sprites have a rotation property, ranging from 0-360 (359?) CW. This forces you to do some rather (for me) mind-humping conversions when dealing with functions like atan.
So, for example, this method:
- (void)rotateTowardsPoint:(CGPoint)point
{
    // vector from me to the point
    CGPoint v = ccpSub(self.position, point);
    // ccpToAngle is just a cute wrapper for atan2f
    // the macro is self explanatory and the - is to flip the direction I guess
    float angle = -CC_RADIANS_TO_DEGREES(ccpToAngle(v));
    // just to get it all in the range of 0-360
    if (angle < 0.f)
        angle += 360.0f;
    // but since '0' means east in Cocos..
    angle += 180.0f;
    // get us in the range of 0-360 again
    if (angle > 360.0f)
        angle -= 360.0f;
    self.rotation = angle;
}
works as intended. But to me it looks kind of brute forced. Is there a cleaner way to achieve the same effect?
It is enough to do
float angle = -CC_RADIANS_TO_DEGREES(ccpToAngle(v));
self.rotation = angle + 180.0f;
to get an equivalent transformation.
// vector from me to the point
CGPoint v = ccpSub(self.position, point);
Actually, that's the vector from the point to you.
// just to get it all in the range of 0-360
you don't need to do that.
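Both observations boil down to the identity $\operatorname{atan2}(-y, -x) = \operatorname{atan2}(y, x) \pm 180°$: negating the vector (taking it from you to the point instead) has the same effect as the + 180 step, and since rotation is periodic, the explicit wrapping into the 0-360 range is unnecessary.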