Triangulate example for iBeacons - bluetooth-lowenergy

I am looking into the possibility of using multiple iBeacons to do a 'rough' indoor position location. The application is a kind of 'museum' setting, and it would be easier to be able to form a grid with locations for the different objects than to use individual beacons (although that might be possible too).
Are there any examples of, or experiences with, using multiple beacons to triangulate some kind of location, or some logic to help me on my way to writing it myself?

I have been running some experiments to get a precise position using three beacons.
Results of trilateration
Unfortunately, the results were very disappointing in terms of quality. There were two main issues:
In non-controlled environments, where you can find metal and other objects that affect the signal, the received signal strength of the beacons changes so often that it seems impossible to get an error range below 5 meters.
Depending on the way the user is handling the receiver device, the readings can change a lot as well. If the user puts his/her hand over the Bluetooth antenna, the algorithm gets weak signals as input and will assume the beacons are very far from the device. See this image for the precise location of the Bluetooth antenna.
Possible solutions
After talking with an Apple engineer who actively discouraged me from going down this path, the option I feel most inclined to use right now is brute force: set up a beacon every X meters (X being the maximum error tolerated in the system), then track a device's position on this beacon grid by finding which beacon on the grid is closest to the device and assuming the device is at the same position.
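For illustration, here is a minimal sketch of that nearest-beacon lookup (not the author's code; the types and names are made up for the example): keep the known grid position of each beacon together with its latest estimated distance, and report the position of the beacon with the smallest distance.
using System.Collections.Generic;

// Hypothetical types for the sketch: a grid position and one beacon reading.
public struct GridPoint { public float X; public float Y; }

public class BeaconReading
{
    public GridPoint Position;  // known position of this beacon on the grid
    public double Distance;     // distance estimated from RSSI (noisy)
}

public static class NearestBeaconLocator
{
    // Returns the grid position of the beacon with the smallest estimated
    // distance, i.e. the cell the device is assumed to occupy.
    public static GridPoint? Locate(IEnumerable<BeaconReading> readings)
    {
        BeaconReading best = null;
        foreach (var reading in readings)
        {
            if (best == null || reading.Distance < best.Distance)
                best = reading;
        }
        return best == null ? (GridPoint?)null : best.Position;
    }
}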
Trilateration algorithm
However, for the sake of completeness, I share below the core function of the trilateration algorithm. It's based on paragraph 3 ("Three distances known") of this article.
- (CGPoint)getCoordinateWithBeaconA:(CGPoint)a beaconB:(CGPoint)b beaconC:(CGPoint)c distanceA:(CGFloat)dA distanceB:(CGFloat)dB distanceC:(CGFloat)dC {
CGFloat W, Z, x, y, y2;
W = dA*dA - dB*dB - a.x*a.x - a.y*a.y + b.x*b.x + b.y*b.y;
Z = dB*dB - dC*dC - b.x*b.x - b.y*b.y + c.x*c.x + c.y*c.y;
x = (W*(c.y-b.y) - Z*(b.y-a.y)) / (2 * ((b.x-a.x)*(c.y-b.y) - (c.x-b.x)*(b.y-a.y)));
y = (W - 2*x*(b.x-a.x)) / (2*(b.y-a.y));
//y2 is a second measure of y to mitigate errors
y2 = (Z - 2*x*(c.x-b.x)) / (2*(c.y-b.y));
y = (y + y2) / 2;
return CGPointMake(x, y);
}

Here is an open-source Java library that will perform the trilateration/multilateration:
https://github.com/lemmingapex/Trilateration
It uses a popular nonlinear least squares optimizer, the Levenberg-Marquardt algorithm, from Apache Commons Math.
double[][] positions = new double[][] { { 5.0, -6.0 }, { 13.0, -15.0 }, { 21.0, -3.0 }, { 12.42, -21.2 } };
double[] distances = new double[] { 8.06, 13.97, 23.32, 15.31 };
NonLinearLeastSquaresSolver solver = new NonLinearLeastSquaresSolver(new TrilaterationFunction(positions, distances), new LevenbergMarquardtOptimizer());
Optimum optimum = solver.solve();
// the answer
double[] calculatedPosition = optimum.getPoint().toArray();
// error and geometry information
RealVector standardDeviation = optimum.getSigma(0);
RealMatrix covarianceMatrix = optimum.getCovariances(0);
Most scholarly examples, like the one on Wikipedia, deal with exactly three circles and assume perfectly accurate information. These circumstances allow for much simpler problem formulations with exact answers, and are usually not satisfactory for practical situations.
For the problem in R2 or R3 Euclidean space with distances that contain measurement error, an area (ellipse) or volume (ellipsoid) of interest is usually obtained instead of a point. If a point estimate is desired instead of a region, the area centroid or volume centroid should be used. R2 space requires at least 3 non-degenerate points and distances to obtain a unique region; similarly, R3 space requires at least 4 non-degenerate points and distances to obtain a unique region.

I looked into this. The term you want is trilateration. (In triangulation you have angles from 3 known points; in trilateration you have distances from 3 known points.) If you Google it you should find several articles, including one on Wikipedia. It involves solving a set of 3 simultaneous equations. The documents I saw were for 3D trilateration; 2D is easier because you can just drop the Z term.
What I found was abstract math. I haven't taken the time yet to map the general algorithm into specific code, but I plan on tackling it at some point.
Note that the results you get will be VERY crude, especially in anything but an empty room. The signals are weak enough that a person, a statue, or anything that blocks line of sight will increase your distance readings pretty significantly. You might even have places in a building where constructive interference (mostly from the walls) makes some places read as much closer than they actually are.

Accurate indoor positioning with iBeacon will be challenging for the following reasons:
As pointed out in earlier comments, the iBeacon signal tends to fluctuate a lot. The reasons include the multipath effect, dynamic obstructions between the phone and the iBeacon when the person is moving, other 2.4 GHz interference, and more. So ideally you don't want to trust a single packet's data; instead, do some averaging over several packets from the same beacon. That requires the phone/beacon distance not to change too much between those packets. Generic BLE beacons (like those from StickNFind) can easily be set to a 10 Hz beaconing rate. For iBeacon, however, that will be hard, because
iBeacon's beaconing frequency probably cannot be higher than 1 Hz. I will be glad if anyone can point to a source that says otherwise, but all the information I've seen so far confirms this assertion. That actually makes sense, since most iBeacons will be battery powered and a high frequency significantly impacts battery life. Considering that people's average walking speed is about 5.3 km/h (~1.5 m/s), even if you use just a modest 3 beacon packets for the averaging, it will be hard to get ~5 m accuracy.
On the other hand, if you could increase the iBeacon frequency to more than 10 Hz (which I doubt is possible), then it is possible to get 5 m or better accuracy with a suitable processing method. Trivial solutions based on the inverse-square law, like trilateration, often do not perform well, because in practice the distance/RSSI relationship for different beacons is often way off from the inverse-square law, for reason 1 above. But as long as the RSSI is relatively stable for a given beacon at a given location (which is usually the case), you can use an approach called fingerprinting to achieve higher accuracy. A common method used for fingerprinting is kNN (k-Nearest Neighbor).
Update 2014-04-24
Some iBeacons can broadcast at more than 1 Hz; for example, Estimote uses 5 Hz as the default. However, according to this link: "This is Apple restriction. IOS returns beacons update every second, no matter how frequently device is advertising." There is another comment there (likely from the Estimote vendor) saying "Our beacons can broadcast much faster and it may improve results and measurement". So whether a higher iBeacon frequency is beneficial is not clear.
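As a rough sketch of the packet-averaging idea from reason 1 (my own illustration, not from this answer; the window size and the beacon identifier type are assumptions), you can keep a short sliding window of RSSI samples per beacon and feed the window average, rather than the latest packet, into whatever distance estimate you use:
using System.Collections.Generic;
using System.Linq;

// Smooths RSSI per beacon over the last N packets to damp fluctuations.
public class RssiSmoother
{
    private readonly int windowSize;
    private readonly Dictionary<string, Queue<double>> samples = new Dictionary<string, Queue<double>>();

    public RssiSmoother(int windowSize = 5) { this.windowSize = windowSize; }

    // Call for every received advertisement; returns the current smoothed RSSI.
    public double AddSample(string beaconId, double rssi)
    {
        if (!samples.TryGetValue(beaconId, out var window))
        {
            window = new Queue<double>();
            samples[beaconId] = window;
        }
        window.Enqueue(rssi);
        if (window.Count > windowSize)
            window.Dequeue();
        return window.Average();
    }
}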

For those who need @Javier Chávarri's trilateration function for Android devices (to save some time):
public static Location getLocationWithTrilateration(Location beaconA, Location beaconB, Location beaconC, double distanceA, double distanceB, double distanceC){
double bAlat = beaconA.getLatitude();
double bAlong = beaconA.getLongitude();
double bBlat = beaconB.getLatitude();
double bBlong = beaconB.getLongitude();
double bClat = beaconC.getLatitude();
double bClong = beaconC.getLongitude();
double W, Z, foundBeaconLat, foundBeaconLong, foundBeaconLongFilter;
W = distanceA * distanceA - distanceB * distanceB - bAlat * bAlat - bAlong * bAlong + bBlat * bBlat + bBlong * bBlong;
Z = distanceB * distanceB - distanceC * distanceC - bBlat * bBlat - bBlong * bBlong + bClat * bClat + bClong * bClong;
foundBeaconLat = (W * (bClong - bBlong) - Z * (bBlong - bAlong)) / (2 * ((bBlat - bAlat) * (bClong - bBlong) - (bClat - bBlat) * (bBlong - bAlong)));
foundBeaconLong = (W - 2 * foundBeaconLat * (bBlat - bAlat)) / (2 * (bBlong - bAlong));
//`foundBeaconLongFilter` is a second measure of `foundBeaconLong` to mitigate errors
foundBeaconLongFilter = (Z - 2 * foundBeaconLat * (bClat - bBlat)) / (2 * (bClong - bBlong));
foundBeaconLong = (foundBeaconLong + foundBeaconLongFilter) / 2;
Location foundLocation = new Location("Location");
foundLocation.setLatitude(foundBeaconLat);
foundLocation.setLongitude(foundBeaconLong);
return foundLocation;
}

If you're anything like me and don't like maths, you might want to do a quick search for "indoor positioning sdk". There are lots of companies offering indoor positioning as a service.
Shameless plug: I work for indoo.rs and can recommend this service. It also includes routing and such on top of "just" indoor positioning.

My architect/manager wrote the following algorithm:
public static Location getLocationWithCenterOfGravity(Location beaconA, Location beaconB, Location beaconC, double distanceA, double distanceB, double distanceC) {
//Every meter there are approx 4.5 points
double METERS_IN_COORDINATE_UNITS_RATIO = 4.5;
//http://stackoverflow.com/a/524770/663941
//Find Center of Gravity
double cogX = (beaconA.getLatitude() + beaconB.getLatitude() + beaconC.getLatitude()) / 3;
double cogY = (beaconA.getLongitude() + beaconB.getLongitude() + beaconC.getLongitude()) / 3;
Location cog = new Location("Cog");
cog.setLatitude(cogX);
cog.setLongitude(cogY);
//Nearest Beacon
Location nearestBeacon;
double shortestDistanceInMeters;
if (distanceA < distanceB && distanceA < distanceC) {
nearestBeacon = beaconA;
shortestDistanceInMeters = distanceA;
} else if (distanceB < distanceC) {
nearestBeacon = beaconB;
shortestDistanceInMeters = distanceB;
} else {
nearestBeacon = beaconC;
shortestDistanceInMeters = distanceC;
}
//http://www.mathplanet.com/education/algebra-2/conic-sections/distance-between-two-points-and-the-midpoint
//Distance between nearest beacon and COG
double distanceToCog = Math.sqrt(Math.pow(cog.getLatitude() - nearestBeacon.getLatitude(),2)
+ Math.pow(cog.getLongitude() - nearestBeacon.getLongitude(),2));
//Convert shortest distance in meters into coordinates units.
double shortestDistanceInCoordinationUnits = shortestDistanceInMeters * METERS_IN_COORDINATE_UNITS_RATIO;
//http://math.stackexchange.com/questions/46527/coordinates-of-point-on-a-line-defined-by-two-other-points-with-a-known-distance?rq=1
//On the line between Nearest Beacon and COG find shortestDistance point apart from Nearest Beacon
double t = shortestDistanceInCoordinationUnits/distanceToCog;
Location pointsDiff = new Location("PointsDiff");
pointsDiff.setLatitude(cog.getLatitude() - nearestBeacon.getLatitude());
pointsDiff.setLongitude(cog.getLongitude() - nearestBeacon.getLongitude());
Location tTimesDiff = new Location("tTimesDiff");
tTimesDiff.setLatitude( pointsDiff.getLatitude() * t );
tTimesDiff.setLongitude(pointsDiff.getLongitude() * t);
//Add t times diff with nearestBeacon to find coordinates at a distance from nearest beacon in line to COG.
Location userLocation = new Location("UserLocation");
userLocation.setLatitude(nearestBeacon.getLatitude() + tTimesDiff.getLatitude());
userLocation.setLongitude(nearestBeacon.getLongitude() + tTimesDiff.getLongitude());
return userLocation;
}
Calculate the centre of gravity of the triangle formed by the 3 beacons.
Calculate the shortest distance / nearest beacon.
Calculate the distance between that beacon and the centre of gravity.
Convert the shortest distance to coordinate units; this is just a constant he used to tune accuracy, and you can experiment by varying it.
Calculate the distance delta.
Add the delta to the nearest beacon's x, y.
After testing it, I found it accurate to within 5 meters.
Please comment with your test results so we can refine it.

I've implemented a very simple fingerprinting algorithm for Android 4.4, tested in a relatively 'bad' environment:
nearly 10 Wi-Fi APs nearby.
several other Bluetooth signals nearby.
The accuracy seems to be 5-8 meters, depending on how I placed the 3 iBeacon broadcasters.
The algorithm is quite simple and I think you can implement one yourself; the steps are:
load the indoor map.
sample the map at every point you want to be able to position against.
record all the sampling data; the data should include:
the map coordinate, the visible signals and their RSSI.
When you start positioning, it's just the reverse of the preceding steps; a sketch follows below.
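Here is a minimal sketch of those steps plus the kNN matching mentioned in an earlier answer (my own illustration; the data layout and names are assumptions): during the survey you store, for each map coordinate, the RSSI seen from each signal source; during positioning you take the k stored fingerprints whose RSSI vectors are closest to the live readings and average their coordinates.
using System;
using System.Collections.Generic;
using System.Linq;

// One surveyed point: a map coordinate plus the RSSI seen from each signal source there.
public class Fingerprint
{
    public double X, Y;
    public Dictionary<string, double> Rssi = new Dictionary<string, double>();
}

public static class KnnFingerprinting
{
    // Euclidean distance in "signal space"; sources missing from either side
    // are treated as a very weak -100 dBm reading.
    static double SignalDistance(Dictionary<string, double> a, Dictionary<string, double> b)
    {
        double sum = 0;
        foreach (var key in a.Keys.Union(b.Keys))
        {
            double va = a.TryGetValue(key, out var x) ? x : -100;
            double vb = b.TryGetValue(key, out var y) ? y : -100;
            sum += (va - vb) * (va - vb);
        }
        return Math.Sqrt(sum);
    }

    // Estimate the position as the centroid of the k nearest fingerprints.
    public static (double X, double Y) Locate(List<Fingerprint> radioMap,
                                              Dictionary<string, double> liveRssi,
                                              int k = 3)
    {
        var nearest = radioMap.OrderBy(f => SignalDistance(f.Rssi, liveRssi)).Take(k).ToList();
        return (nearest.Average(f => f.X), nearest.Average(f => f.Y));
    }
}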

We are also trying to find the best way to precisely locate someone in a room using iBeacons. The thing is that the beacon signal power is not constant; it is affected by other 2.4 GHz signals, metal objects, etc., so to achieve maximum precision it is necessary to calibrate each beacon individually, once it has been set in its desired position (and to run some field tests to see signal fluctuations when other Bluetooth devices are present).
We also have some iBeacons from Estimote (the same as in Konrad Dzwinel's video), and they have already developed some tech demos of what can be done with iBeacons. Within their app it is possible to see a radar view in which the iBeacons are shown. Sometimes it is pretty accurate, but sometimes it is not (and it seems phone movement is not considered when calculating positions). Check the demo in the video we made here: http://goo.gl/98hiza
Although in theory 3 iBeacons should be enough to achieve good precision, in real-world situations more beacons may be needed to ensure the precision you are looking for.

The thing that really helped me was this project on Code.Google.com: https://code.google.com/p/wsnlocalizationscala/ it contains lots of code, several trilateration algorithms, all written in C#. It's a big library, but not really meant to be used "out-of-the-box".

Please check this reference: https://proximi.io/accurate-indoor-positioning-bluetooth-beacons/
The Proximi SDK will take care of the triangulation. This SDK provides libraries that handle all the logic for beacon positioning, triangulation and filtering automatically in the background. In addition to beacons, you can combine IndoorAtlas, Wi-Fi, GPS and cellular positioning.

I found Vishnu Prahbu's solution very useful. I ported it to C#, in case anybody needs it.
public static PointF GetLocationWithCenterOfGravity(PointF a, PointF b, PointF c, float dA, float dB, float dC)
{
//http://stackoverflow.com/questions/20332856/triangulate-example-for-ibeacons
var METERS_IN_COORDINATE_UNITS_RATIO = 1.0f;
//http://stackoverflow.com/a/524770/663941
//Find Center of Gravity
var cogX = (a.X + b.X + c.X) / 3;
var cogY = (a.Y + b.Y + c.Y) / 3;
var cog = new PointF(cogX,cogY);
//Nearest Beacon
PointF nearestBeacon;
float shortestDistanceInMeters;
if (dA < dB && dA < dC)
{
nearestBeacon = a;
shortestDistanceInMeters = dA;
}
else if (dB < dC)
{
nearestBeacon = b;
shortestDistanceInMeters = dB;
}
else
{
nearestBeacon = c;
shortestDistanceInMeters = dC;
}
//http://www.mathplanet.com/education/algebra-2/conic-sections/distance-between-two-points-and-the-midpoint
//Distance between nearest beacon and COG
var distanceToCog = (float)(Math.Sqrt(Math.Pow(cog.X - nearestBeacon.X, 2)
+ Math.Pow(cog.Y - nearestBeacon.Y, 2)));
//Convert shortest distance in meters into coordinates units.
var shortestDistanceInCoordinationUnits = shortestDistanceInMeters * METERS_IN_COORDINATE_UNITS_RATIO;
//http://math.stackexchange.com/questions/46527/coordinates-of-point-on-a-line-defined-by-two-other-points-with-a-known-distance?rq=1
//On the line between Nearest Beacon and COG find shortestDistance point apart from Nearest Beacon
var t = shortestDistanceInCoordinationUnits / distanceToCog;
var pointsDiff = new PointF(cog.X - nearestBeacon.X, cog.Y - nearestBeacon.Y);
var tTimesDiff = new PointF(pointsDiff.X * t, pointsDiff.Y * t);
//Add t times diff with nearestBeacon to find coordinates at a distance from nearest beacon in line to COG.
var userLocation = new PointF(nearestBeacon.X + tTimesDiff.X, nearestBeacon.Y + tTimesDiff.Y);
return userLocation;
}

Alternative Equation
- (CGPoint)getCoordinateWithBeaconA:(CGPoint)a beaconB:(CGPoint)b beaconC:(CGPoint)c distanceA:(CGFloat)dA distanceB:(CGFloat)dB distanceC:(CGFloat)dC {
CGFloat x, y;
x = ( ( (pow(dA,2)-pow(dB,2)) + (pow(b.x,2)-pow(a.x,2)) + (pow(b.y,2)-pow(a.y,2)) ) * (2*c.y-2*b.y) - ( (pow(dB,2)-pow(dC,2)) + (pow(c.x,2)-pow(b.x,2)) + (pow(c.y,2)-pow(b.y,2)) ) *(2*b.y-2*a.y) ) / ( (2*b.x-2*c.x)*(2*b.y-2*a.y)-(2*a.x-2*b.x)*(2*c.y-2*b.y) );
y = ( (pow(dA,2)-pow(dB,2)) + (pow(b.x,2)-pow(a.x,2)) + (pow(b.y,2)-pow(a.y,2)) + x*(2*a.x-2*b.x)) / (2*b.y-2*a.y);
return CGPointMake(x, y);
}

Related

Calculating bank angle between two objects

I have a drone following a path for movement. That is, it doesn't use a rigidbody so I don't have access to velocity or magnitude and such. It follows the path just fine, but I would like to add banking to it when it turns left or right. I use a dummy object in front of the drone, thinking I could calculate the bank/tilt amount using the transform vectors from the two objects.
I've been working on this for days as I don't have a lot of math skills. Basically I've been copying pieces of code trying to get things to work. Nothing I do works to make the drone bank. The following code manages to spin (not bank).
// Update is called once per frame
void Update () {
Quaternion rotation = Quaternion.identity;
Vector3 dir = (dummyObject.transform.position - this.transform.position).normalized;
float angle = Vector3.Angle( dir, transform.up );
float rollAngle = CalculateRollAngle(angle);
rotation.SetLookRotation(dir, transform.right);// + rollIntensity * smoothRoll * right);
rotation *= Quaternion.Euler(new Vector3(0, 0, rollAngle));
transform.rotation = rotation;
}
/// <summary>
/// Calculates roll and smooths it (to compensate for the non-C2-continuous control points algorithm)
/// </summary>
/// <returns>The roll angle.</returns>
/// <param name="rollFactor">Roll factor.</param>
float CalculateRollAngle(float rollFactor)
{
smoothRoll = Mathf.Lerp(smoothRoll, rollFactor, rollSmoothing * Time.deltaTime);
float angle = Mathf.Atan2(1, smoothRoll * rollIntensity);
angle *= Mathf.Rad2Deg;
angle -= 90;
TurnRollAngle = angle;
angle += RollOffset;
return angle;
}
Assuming you have waypoints the drone is following, you should figure out the angle between the last two (i.e. your "now-facing" and "will be facing" directions). The easy way is to use Vector2.Angle.
I would use this angle to determine the amount I'll tilt the drone's body: the sharper the turn, the harder the banking. I would use a ratio value (public initially so I can manipulate it from the editor).
Next, instead of doing any math I would rely on the engine to do the rotation for me - so I would go for the Transform.Rotate function. In case banking can go too high and look silly, I would set a maximum for it and Clamp my calculated banking angle between zero and that max.
Without knowing exactly what you do and how, it's not easy to give perfect code, but for a better understanding of the above, here's some (untested, i.e. pseudo) code for the solution I visualize:
public float turnSpeed = 7.0f; //the drone will "rotate toward the new waypoint" by this speed
//bankSpeed+turnBankRatio must be two times "faster" (and/or smaller degree) than turning, see details in 'EDIT' as of why:
public float bankSpeed = 14.0f; //banking speed
public float turnBankRatio = .5f; //90 degree turn == 45 degree banking
private float turnAngle = 0.0f; //this is the 'x' degree turning angle we'll "Lerp"
private float turnAngleABS = 0.0f; //same as turnAngle but it's an absolute value. Storing to avoid Mathf.Abs() in Update()!
private float bankAngle = 0.0f; //banking degree
private bool isTurning = false; //are we turning right now?
//when the action is fired for the drone it should go for the next waypoint, call this guy
private void TurningTrigger() {
//remove this line after testing, it's some extra safety
if (isTurning) { Debug.LogError("oups! must not be possible!"); return; }
Vector2 droneOLD2DAngle = GetGO2DPos(transform.position);
//do the code you do for the turning/rotation of drone here!
//or use the next waypoint's .position as the new angle if you are OK
//with the snippet doing the turning for you along with banking. then:
Vector2 droneNEW2DAngle = GetGO2DPos(transform.position);
turnAngle = Vector2.Angle(droneOLD2DAngle, droneNEW2DAngle); //turn degree
turnAngleABS = Mathf.Abs(turnAngle); //avoiding Mathf.Abs() in Update()
bankAngle = turnAngle * turnBankRatio; //bank angle
//you can remove this after testing. This is to make sure banking can
//do a full run before the drone hits the next waypoint!
if ((turnAngle * turnSpeed) < (bankAngle * bankSpeed)) {
Debug.LogError("Banking degree too high, or banking speed too low to complete maneuver!");
}
//you can clamp or set turnAngle based on a min/max here
isTurning = true; //all values were set, turning and banking can start!
}
//get 2D position of a GO (simplified)
private Vector2 GetGO2DPos(Vector3 worldPos) {
return new Vector2(worldPos.x, worldPos.z);
}
private void Update() {
if (isTurning) {
//assuming the drone is banking to the "side" and "side" only
transform.Rotate(0, 0, bankAngle * Time.deltaTime * bankSpeed, Space.Self); //banking
//if the drone is facing the next waypoint already, set
//isTurning to false
} else if (turnAngleABS > 0.0f) {
//reset back to original position (with same speed as above)
//at least "normal speed" is a must, otherwise drone might hit the
//next waypoint before the banking reset can finish!
float bankAngle_delta = bankAngle * Time.deltaTime * bankSpeed;
transform.Rotate(0, 0, -1 * bankAngle_delta, Space.Self);
turnAngleABS -= (bankAngle_delta > 0.0f) ? bankAngle_delta : -1 * bankAngle_delta;
}
//the banking was probably not set back to exactly 0, as Time.deltaTime
//is not a fixed value. if this happened and looks ugly, reset
//drone's "z" to Quaternion.identity.z. if it also looks ugly,
//you need to test if you don't """over bank""" in the above code
//by comparing bankAngle_delta + 'calculated banking angle' against
//the identity.z value, and reset bankAngle_delta if it's too high/low.
//when you are done, your turning animation is over, so:
}
Again, this code might not perfectly fit your needs (or compile :P), so focus on the idea and the approach, not the code itself. Sorry for not being able to put something together and test it myself right now - but I hope I helped. Cheers!
EDIT: Instead of a wall of text, I tried to answer your question in code (still not perfect, but the goal is not to do the job for you, but to help with some snippets and ideas :)
So. Basically, what you have is a distance and "angle" between two waypoints. This distance and your drone's flight/walk/whatever speed (which I don't know) is the maximum amount of time available for:
1. Turning, so the drone will face in the new direction
2. Banking to the side, and back to zero/"normal"
As there is twice as much action on the banking side, it either has to be done faster (bankSpeed), or through a smaller angle (turnBankRatio), or both, depending on what looks nice and feels real, what your preference is, etc. So it's 100% subjective. It's also your call whether the drone turns and banks quickly as it approaches the next waypoint, or takes things at a slow pace, turning just a little when it has a lot of time/distance and acting fast only when it has to.
As for isTurning:
You set it to true when the drone has reached a waypoint and heads out to the next one AND the variables to (turn and) bank have been set properly. When do you set it to false? It's up to you, but the goal is to do so when the maneuver is finished (this was buggy in the snippet the first time, as this "optimal status" could never be reached) so the drone can "reset banking". For further details on what's going on, see the code comments. Again, this is just a snippet to support you with a possible solution for your problem. Give it some time and understand what's going on. It really is easy, you just need some time to cope ;) Hope this helps! Enjoy and cheers! :)

Find lead to hit a moving target considering gravity in 3D

I'm trying to find a point in 3D space at which I'd have to aim in order to hit a moving target with a projectile which is affected by gravity.
To give you a better picture: imagine an anti-aircraft gun trying to hit an aircraft flying above.
We can assume that the target and the projectile move at a constant rate, other than gravity in the case of the projectile. We can also assume that the shooter is stationary, since if he's not we can just use relative speeds.
After some research I found this article.
I was able to implement his first solution, since he was so kind to give a code example. That looks like this:
public static Vector3 CalculateLead(Vector3 targetVelocity, Vector3 targetPosition, Vector3 gunPosition, float projectileSpeed)
{
Vector3 direction = targetPosition - gunPosition;
float a = targetVelocity.sqrMagnitude - projectileSpeed * projectileSpeed;
float b = 2 * Vector3.Dot(direction, targetVelocity);
float c = direction.sqrMagnitude;
if (a >= 0)
return targetPosition;
else
{
float rt = Mathf.Sqrt(b * b - 4 * a * c);
float dt1 = (-b + rt) / (2 * a);
float dt2 = (-b - rt) / (2 * a);
float dt = (dt1 > 0 ? dt1 : dt2);
return targetPosition + targetVelocity * dt;
}
}
With this code I'm able to perfectly hit the target, as long as the projectile isn't affected by gravity. However, I'd like it to be. Unfortunately I'm not even remotely close to understanding the math posted in the article so I wasn't able to translate it into working code. And after spending several hours trying to find a solution which includes gravity I figured I'd just ask you guys for help.
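(Not from the linked article, and a sketch rather than a definitive implementation: one common way to avoid the closed-form quartic is to iterate on the flight time - guess a time, predict where the target will be, work out the launch velocity that reaches that point under gravity, and update the time from the distance that velocity has to cover. The Unity-style names below are illustrative.)
using UnityEngine;

public static class BallisticLead
{
    // Iteratively finds the launch velocity (|v| == projectileSpeed) needed to hit a
    // target moving at constant velocity while the projectile falls under gravity.
    // Returns false if the iteration does not settle (e.g. the target is too fast).
    public static bool CalculateLaunchVelocity(
        Vector3 targetPosition, Vector3 targetVelocity,
        Vector3 gunPosition, float projectileSpeed, Vector3 gravity,
        out Vector3 launchVelocity)
    {
        // First guess: time of a straight shot at the target's current position.
        float t = (targetPosition - gunPosition).magnitude / projectileSpeed;

        for (int i = 0; i < 30; i++)
        {
            // Where the target will be after t seconds.
            Vector3 predicted = targetPosition + targetVelocity * t;

            // predicted = gun + v0*t + 0.5*g*t^2  =>  v0 = (predicted - gun - 0.5*g*t^2) / t
            Vector3 d = predicted - gunPosition - 0.5f * t * t * gravity;

            // For |v0| to equal projectileSpeed, the flight time must be |d| / projectileSpeed.
            float newT = d.magnitude / projectileSpeed;
            if (newT < 1e-5f)
                break; // target is essentially at the gun position

            if (Mathf.Abs(newT - t) < 1e-4f)
            {
                launchVelocity = d / newT;
                return true;
            }
            t = newT;
        }

        launchVelocity = Vector3.zero;
        return false;
    }
}
With gravity set to Vector3.zero, the iteration converges to the same lead as the quadratic solution above.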

How to color a point cloud from image pixels?

I am using a Google Tango tablet to acquire point cloud data and RGB camera images. I want to create a 3D scan of the room. For that I need to map 2D image pixels to point cloud points. I will be doing this with a lot of point clouds and corresponding images. Thus I need to write a script which takes two inputs, 1. a point cloud and 2. an image taken from the same point in the same direction, and outputs a colored point cloud. How should I approach this, and which platforms would be simplest to use?
Here is the math to map a 3D point v to 2D pixel space in the camera image (assuming that v already incorporates the extrinsic camera position and orientation, see note at bottom*):
// Project to tangent space.
vec2 imageCoords = v.xy/v.z;
// Apply radial distortion.
float r2 = dot(imageCoords, imageCoords);
float r4 = r2*r2;
float r6 = r2*r4;
imageCoords *= 1.0 + k1*r2 + k2*r4 + k3*r6;
// Map to pixel space.
vec3 pixelCoords = cameraTransform*vec3(imageCoords, 1);
Where cameraTransform is the 3x3 matrix:
[ fx 0 cx ]
[ 0 fy cy ]
[ 0 0 1 ]
with fx, fy, cx, cy, k1, k2, k3 from TangoCameraIntrinsics.
pixelCoords is declared vec3 but is actually 2D in homogeneous coordinates. The third coordinate is always 1 and so can be ignored for practical purposes.
Note that if you want texture coordinates instead of pixel coordinates, that is just another linear transform that can be premultiplied onto cameraTransform ahead of time (as is any top-to-bottom vs. bottom-to-top scanline addressing).
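For example, here is a sketch of that premultiplication (my own illustration; the 1280x720 color image size is an assumption, and the intrinsics are the ones quoted in the next answer): dividing the first row by the image width and the second row by the image height gives a matrix that produces normalized texture coordinates directly.
// Sketch: fold the pixel->UV normalization into the 3x3 camera matrix.
// fx, fy, cx, cy are the intrinsics quoted below; width/height are assumed values.
static float[,] BuildUvTransform()
{
    const float fx = 1042.73999023438f, fy = 1042.96997070313f;
    const float cx = 637.273986816406f, cy = 352.928985595703f;
    const float width = 1280.0f, height = 720.0f; // assumed color image size

    // Row-major [ fx 0 cx ; 0 fy cy ; 0 0 1 ] scaled row-wise by (1/width, 1/height, 1).
    // A bottom-to-top flip can be folded in similarly (negate fy/height and use 1 - cy/height).
    return new float[,]
    {
        { fx / width, 0.0f,        cx / width  },
        { 0.0f,       fy / height, cy / height },
        { 0.0f,       0.0f,        1.0f        }
    };
}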
As for what "platform" (which I loosely interpreted as "language") is simplest, the native API seems to be the most straightforward way to get your hands on camera pixels, though it appears people have also succeeded with Unity and Java.
* Points delivered by TangoXYZij already incorporate the depth camera extrinsic transform. Technically, because the current developer tablet shares the same hardware between depth and color image acquisition, you won't be able to get a color image that exactly matches unless both your device and your scene are stationary. Fortunately in practice, most applications can probably assume that neither the camera pose nor the scene changes enough in one frame time to significantly affect color lookup.
This answer is not original; it is simply meant as a convenience for Unity users who would like the correct answer, as provided by @rhashimoto, worked out for them. My contribution (hopefully) is providing code that reduces the normal 16 multiplies and 12 adds (given Unity only does 4x4 matrices) to 2 multiplies and 2 adds by dropping out all of the zero results. I ran a little under a million points through the test, checking each time that my calculations agreed with the basic matrix calculations - defined as the absolute difference between the two results being less than machine epsilon - and I'm as comfortable with this as I can be, knowing that @rhashimoto may show up and poke a giant hole in it :-)
If you want to switch back and forth, remember this is C#, so the USEMATRIXMATH define must appear at the beginning of the file.
Given there's only one Tango device right now, and I'm assuming the intrinsics are constant across all of the devices, I just dumped them in as constants, such that
fx = 1042.73999023438
fy = 1042.96997070313
cx = 637.273986816406
cy = 352.928985595703
k1 = 0.228532999753952
k2 = -0.663019001483917
k3 = 0.642908990383148
Yes, they could be pulled out as named constants, which would make things more readable, and C# is probably smart enough to optimize it out - however, I spent too much of my life in Agner Fog's stuff, and will always be paranoid.
The commented out code at the bottom is for testing the difference, should you desire. You'll have to uncomment some other stuff, and comment out the returns if you want to test the results.
My thanks again to @rhashimoto; this is far, far better than what I had.
I have stayed true to his logic; remember these are pixel coordinates, not UV coordinates - he is correct that you can premultiply the transform to get normalized UV values, but since he schooled me on this once already, I will stick with exactly the math he presented before I fiddle with it too much :-)
static public Vector2 PictureUV(Vector3 tangoDepthPoint)
{
Vector2 imageCoords = new Vector2(tangoDepthPoint.x / tangoDepthPoint.z, tangoDepthPoint.y / tangoDepthPoint.z);
float r2 = Vector2.Dot(imageCoords, imageCoords);
float r4 = r2*r2;
float r6 = r2*r4;
imageCoords *= 1.0f + 0.228532999753952f*r2 + -0.663019001483917f*r4 + 0.642908990383148f*r6;
Vector3 ic3 = new Vector3(imageCoords.x,imageCoords.y,1);
#if USEMATRIXMATH
Matrix4x4 cameraTransform = new Matrix4x4();
cameraTransform.SetRow(0,new Vector4(1042.73999023438f,0,637.273986816406f,0));
cameraTransform.SetRow(1, new Vector4(0, 1042.96997070313f, 352.928985595703f, 0));
cameraTransform.SetRow(2, new Vector4(0, 0, 1, 0));
cameraTransform.SetRow(3, new Vector4(0, 0, 0, 1));
Vector3 pixelCoords = cameraTransform * ic3;
return new Vector2(pixelCoords.x, pixelCoords.y);
#else
//float v1 = 1042.73999023438f * imageCoords.x + 637.273986816406f;
//float v2 = 1042.96997070313f * imageCoords.y + 352.928985595703f;
//float v3 = 1;
return new Vector2(1042.73999023438f * imageCoords.x + 637.273986816406f, 1042.96997070313f * imageCoords.y + 352.928985595703f);
#endif
//float dx = Math.Abs(v1 - pixelCoords.x);
//float dy = Math.Abs(v2 - pixelCoords.y);
//float dz = Math.Abs(v3 - pixelCoords.z);
//if (dx > float.Epsilon || dy > float.Epsilon || dz > float.Epsilon)
// UnityEngine.Debug.Log("Well, that didn't work");
//return new Vector2(v1, v2);
}
As one final note, do note the code he provided is GLSL - if you're just using this for pretty pictures, use it - this is for those that actually need to perform additional processing.

How to make an object move in the path of an arc?

I'm making a game where there should be a robot throwing ball-shaped objects at another robot.
The balls thrown should fly in the shape of a symmetrical arc. Pretty sure the math-word for this is a parabola.
Both robots are on the x axis.
How can I implement such a thing in my game? I tried different approaches, none worked.
The current system of moving things in my game is like so: every object has x and y coordinates (variables), and dx and dy variables.
Every object has a move() method that gets called every cycle of the game loop. It simply adds dx to x and dy to y.
How can I implement what I described, into this system?
If there is a lot of math involved, please try to explain it in a simple way, because I'm not great with math.
My situation:
Thanks a lot
You should add velocity to your missiles.
Velocity is a vector, which means it says how fast the missile moves along the x-axis and how fast along the y-axis. Now, instead of using Move(), use something like Update(). Something like this:
void Update()
{
position.X += velocity.X;
position.Y += velocity.Y;
}
Now let's think about what happens to the missile once it is shot:
In the beginning it has some start velocity. For example, somebody shot the missile with a speed of 1 m/s in x and -0.5 m/s in y. Then, as it flies, the missile will be pulled toward the ground - its Y velocity will keep growing toward the ground.
void Update()
{
velocity.Y += gravity;
position.X += velocity.X;
position.Y += velocity.Y;
}
This will make your missile move accordingly to physics (excluding air resistance) and will generate a nice-looking parabola.
Edit:
You might ask how to calculate the initial velocity. Let's assume we have a given angle of shot (between the line of shot and the ground) and the initial speed (we may know how fast the missiles are after the shot, just not the X and Y components). Then:
velocity.X = cos(angle) * speed;
velocity.Y = sin(angle) * speed;
Adding to Michal's answer, to make sure the missile hits the robot (if you want it to track the robot), you need to adjust its x velocity.
void Update()
{
ball.dy += gravity; // gravity = -9.8 or whatever makes sense in your game
ball.dx = (target.x - ball.x); // this needs to be normalized.
double ballNorm = sqrt(ball.dx * ball.dx + ball.dy * ball.dy);
ball.dx /= ballNorm;
ball.x += ball.dx;
ball.y += ball.dy;
}
This will cause the missile to track your target. Normalizing the x component of your vector ensures that it will never go above a velocity of one. It's not fully "normalizing" the vector, because normally you would have to do this to the y component too. If we didn't normalize here, we would end up with a ball that jumps all the way to the target on the first update. If you want to make the missile travel faster, scale ball.dx by some speed factor after dividing by ballNorm.
You can get everything you need from these few equations, from max height to flight time to distance.
g = gravitational acceleration (9.81 m/s^2; it does not depend on the object's mass)
v = launch speed in m/s
a = launch angle in degrees
time = 2 * v * sin(a) / g
range = v^2 * sin(2 * a) / g
height = v^2 * sin(a)^2 / (2 * g)
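For instance, here is a quick check of those formulas with made-up numbers (a throw at 10 m/s and 45 degrees; the values are arbitrary):
using System;

class ProjectileFormulas
{
    static void Main()
    {
        double g = 9.81;                        // gravitational acceleration, m/s^2
        double v = 10.0;                        // launch speed, m/s (arbitrary)
        double a = 45.0 * Math.PI / 180.0;      // launch angle converted to radians

        double timeOfFlight = 2 * v * Math.Sin(a) / g;                     // ~1.44 s
        double range        = v * v * Math.Sin(2 * a) / g;                 // ~10.19 m
        double maxHeight    = v * v * Math.Sin(a) * Math.Sin(a) / (2 * g); // ~2.55 m

        Console.WriteLine($"time={timeOfFlight:F2}s range={range:F2}m height={maxHeight:F2}m");
    }
}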

Potential floating point issue with cosine acceleration curve

I am using a cosine curve to apply a force to an object over the range [0, pi]. By my calculations, that should give me a sine curve for the velocity which, at t = pi/2, should have a value of 1.0f.
However, for the simplest of examples, I get a top speed of 0.753.
Now if this is a floating point issue, that is fine, but that is a very significant error so I am having trouble accepting that it is (and if it is, why is there such a huge error computing these values).
Some code:
// the function that gives the force to apply (totalTime = pi, maxForce = 1.0 in this example)
return ((Mathf.Cos(time * (Mathf.PI / totalTime)) * maxForce));
// the engine stores this value and in the next fixed update applies it to the rigidbody
// the mass is 1 so isn't affecting the result
engine.ApplyAccelerateForce(applyingForce * ship.rigidbody2D.mass);
Update
There is no gravity being applied to the object, no other objects in the world for it to interact with and no drag. I'm also using a RigidBody2D so the object is only moving on the plane.
Update 2
OK, I have tried a super simple example and I get the result I am expecting, so there must be something in my code. Will update once I have isolated what is different.
For the record, super simple code:
float forceThisFrame;
float startTime;
// Use this for initialization
void Start () {
forceThisFrame = 0.0f;
startTime = Time.fixedTime;
}
// Update is called once per frame
void Update () {
float time = Time.fixedTime - startTime;
if(time <= Mathf.PI)
{
forceThisFrame = Mathf.Cos (time);
if(time >= (Mathf.PI /2.0f)- 0.01f && time <= (Mathf.PI /2.0f) + 0.01f)
{
print ("Speed: " + rigidbody2D.velocity);
}
}
else
{
forceThisFrame = 0.0f;
}
}
void FixedUpdate()
{
rigidbody2D.AddForce(forceThisFrame * Vector2.up);
}
Update 3
I have changed my original code to match the above example as near as I can (remaining differences listed below) and I still get the discrepancy.
Here are my results of velocity against time. Neither of them makes sense to me: with a constant force of 1 N, that should result in a linear velocity function v(t) = t, but that isn't quite what either example produces.
Remaining differences:
The code that is "calculating" the force (now just returning 1) is being run via a non-Unity DLL, though the code itself resides within a Unity DLL (I can explain more, but I can't believe this is relevant!)
The behaviour that is applying the force to the rigid body is a separate behaviour.
One is moving a cube in an empty environment, the other is moving a Model3D and there is a plane nearby - I tried a cube with the same code in the broken project, same problem.
Other than that, I can't see any difference and I certainly can't see why any of those things would affect it. They both apply a force of 1 on an object every fixed update.
For the cosine case this isn't a floating point issue, per se, it's an integration issue.
[In your 'fixed' acceleration case there are clearly also minor floating point issues].
Obviously acceleration is proportional to force (F = ma) but you can't just simply add the acceleration to get the velocity, especially if the time interval between frames is not constant.
Simplifying things by assuming that the inter-frame acceleration is constant, and therefore following v = u + at (or alternately ∂v = a.∂t) you need to scale the effect of the acceleration in proportion to the time elapsed since the last frame. It follows that the smaller ∂t is, the more accurate your integration.
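Here is a minimal sketch of that, under the assumption that you integrate by hand instead of letting the physics engine do it (the names are illustrative, not from the question's project):
using UnityEngine;

// Explicit integration: the acceleration is scaled by the elapsed time of each
// physics step before being accumulated into the velocity, and likewise for position.
public class ManualIntegrator : MonoBehaviour
{
    public Vector2 velocity = Vector2.zero;
    public float forceThisFrame = 1.0f;  // e.g. the cosine force from the question
    public float mass = 1.0f;

    void FixedUpdate()
    {
        float dt = Time.fixedDeltaTime;
        Vector2 acceleration = (forceThisFrame / mass) * Vector2.up;  // a = F / m

        velocity += acceleration * dt;                    // v = u + a * dt
        transform.position += (Vector3)(velocity * dt);   // x = x0 + v * dt
    }
}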
This was a multi-part problem that started with me not fully understanding Update vs. FixedUpdate in Unity, see this question on GameDev.SE for more info on that part.
My "fix" from that was advancing a timer that went with the fixed update so as to not apply the force wrong. The problem, as demonstrated by Eric Postpischil was because the FixedUpdate, despite its name, is not called every 0.02s but instead at most every 0.02s. The fix for this was, in my update to apply some scaling to the force to apply to accomodate for missed fixed updates. My code ended up looking something like:
Called From Update
float oldTime = time;
time = Time.fixedTime - startTime;
float variableFixedDeltaTime = time - oldTime;
float fixedRatio = variableFixedDeltaTime / Time.fixedDeltaTime;
if(time <= totalTime)
{
applyingForce = forceFunction.GetValue(time) * fixedRatio;
Vector2 currentVelocity = ship.rigidbody2D.velocity;
Vector2 direction = new Vector2(ship.transform.right.x, ship.transform.right.y);
float velocityAlongDir = Vector2.Dot(currentVelocity, direction);
float velocityPrediction = velocityAlongDir + (applyingForce * lDeltaTime);
if(time > 0.0f && // we are not interested if we are just starting
((velocityPrediction < 0.0f && velocityAlongDir > 0.0f ) ||
(velocityPrediction > 0.0f && velocityAlongDir < 0.0f ) ))
{
float ratio = Mathf.Abs((velocityAlongDir / (applyingForce * lDeltaTime)));
applyingForce = applyingForce * ratio;
// We have reversed the direction so we must have arrived
Deactivate();
}
engine.ApplyAccelerateForce(applyingForce);
}
Where ApplyAccelerateForce does:
public void ApplyAccelerateForce(float requestedForce)
{
forceToApply += requestedForce;
}
Called from FixedUpdate
rigidbody2D.AddForce(forceToApply * new Vector2(transform.right.x, transform.right.y));
forceToApply = 0.0f;
