How to color a point cloud from image pixels? - point-cloud-library

I am using a Google Tango tablet to acquire point cloud data and RGB camera images, and I want to create a 3D scan of the room. To do that I need to map 2D image pixels to point cloud points. I will be doing this with a lot of point clouds and corresponding images, so I need to write a script that takes two inputs, 1. a point cloud and 2. an image taken from the same point in the same direction, and outputs a colored point cloud. How should I approach this, and which platforms would be simplest to use?

Here is the math to map a 3D point v to 2D pixel space in the camera image (assuming that v already incorporates the extrinsic camera position and orientation, see note at bottom*):
// Project to tangent space.
vec2 imageCoords = v.xy/v.z;
// Apply radial distortion.
float r2 = dot(imageCoords, imageCoords);
float r4 = r2*r2;
float r6 = r2*r4;
imageCoords *= 1.0 + k1*r2 + k2*r4 + k3*r6;
// Map to pixel space.
vec3 pixelCoords = cameraTransform*vec3(imageCoords, 1);
Where cameraTransform is the 3x3 matrix:
[ fx 0 cx ]
[ 0 fy cy ]
[ 0 0 1 ]
with fx, fy, cx, cy, k1, k2, k3 from TangoCameraIntrinsics.
pixelCoords is declared vec3 but is actually 2D in homogeneous coordinates. The third coordinate is always 1 and so can be ignored for practical purposes.
Note that if you want texture coordinates instead of pixel coordinates, that is just another linear transform that can be premultiplied onto cameraTransform ahead of time (as is any top-to-bottom vs. bottom-to-top scanline addressing).
As for what "platform" (which I loosely interpreted as "language") is simplest, the native API seems to be the most straightforward way to get your hands on camera pixels, though it appears people have also succeeded with Unity and Java.
* Points delivered by TangoXYZij already incorporate the depth camera extrinsic transform. Technically, because the current developer tablet shares the same hardware between depth and color image acquisition, you won't be able to get a color image that exactly matches unless both your device and your scene are stationary. Fortunately in practice, most applications can probably assume that neither the camera pose nor the scene changes enough in one frame time to significantly affect color lookup.
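To make the mapping concrete, here is a minimal, hedged sketch of how the projection above could be used to color a whole cloud. It is not Tango or PCL API code: the intrinsic values are placeholders, the image is assumed to be a tightly packed 8-bit RGB buffer, and every point is assumed to already be in the color camera's frame.
using System;
using System.Collections.Generic;
using System.Numerics;
// Hypothetical container for one colored point; not part of any Tango or PCL API.
struct ColoredPoint
{
    public Vector3 Position;
    public byte R, G, B;
}
static class PointCloudColorizer
{
    // Intrinsics (fx, fy, cx, cy, k1, k2, k3) would come from TangoCameraIntrinsics;
    // the values here are placeholders.
    const float fx = 1042.74f, fy = 1042.97f, cx = 637.27f, cy = 352.93f;
    const float k1 = 0.2285f, k2 = -0.6630f, k3 = 0.6429f;
    // Colors each point by projecting it into the RGB image and sampling the pixel.
    public static List<ColoredPoint> Colorize(Vector3[] points, byte[] rgb, int width, int height)
    {
        var result = new List<ColoredPoint>(points.Length);
        foreach (var v in points)
        {
            if (v.Z <= 0f) continue;                      // behind the camera
            // Project to tangent space and apply radial distortion (same math as above).
            float ix = v.X / v.Z, iy = v.Y / v.Z;
            float r2 = ix * ix + iy * iy;
            float d = 1f + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;
            ix *= d; iy *= d;
            // Map to pixel space with the intrinsic matrix.
            int px = (int)(fx * ix + cx);
            int py = (int)(fy * iy + cy);
            if (px < 0 || px >= width || py < 0 || py >= height) continue;   // outside the image
            int o = (py * width + px) * 3;
            result.Add(new ColoredPoint { Position = v, R = rgb[o], G = rgb[o + 1], B = rgb[o + 2] });
        }
        return result;
    }
}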

This answer is not original; it is simply meant as a convenience for Unity users who would like the correct answer, as provided by @rhashimoto, worked out for them. My contribution (hopefully) is providing code that reduces the usual 16 multiplies and 12 adds (given Unity only does 4x4 matrices) to 2 multiplies and 2 adds by dropping all of the zero terms. I ran a little under a million points through the test, checking each time that my calculations agreed with the basic matrix calculations (agreement defined as the absolute difference between the two results being less than machine epsilon). I'm as comfortable with this as I can be, knowing that @rhashimoto may show up and poke a giant hole in it :-)
If you want to switch back and forth, remember this is C#, so the USEMATRIXMATH define must appear at the beginning of the file.
Given there's only one Tango device right now, and assuming the intrinsics are constant across all of the devices, I just dumped them in as constants:
fx = 1042.73999023438
fy = 1042.96997070313
cx = 637.273986816406
cy = 352.928985595703
k1 = 0.228532999753952
k2 = -0.663019001483917
k3 = 0.642908990383148
Yes, they could be pulled out as named constants, which would make things more readable, and C# is probably smart enough to optimize them away - however, I spent too much of my life in Agner Fog's optimization material and will always be paranoid.
The commented out code at the bottom is for testing the difference, should you desire. You'll have to uncomment some other stuff, and comment out the returns if you want to test the results.
My thanks again to @rhashimoto; this is far, far better than what I had.
I have stayed true to his logic; remember these are pixel coordinates, not UV coordinates. He is correct that you can premultiply the transform to get normalized UV values, but since he schooled me on this once already, I will stick with exactly the math he presented before I fiddle with it too much :-)
static public Vector2 PictureUV(Vector3 tangoDepthPoint)
{
Vector2 imageCoords = new Vector2(tangoDepthPoint.x / tangoDepthPoint.z, tangoDepthPoint.y / tangoDepthPoint.z);
float r2 = Vector2.Dot(imageCoords, imageCoords);
float r4 = r2*r2;
float r6 = r2*r4;
imageCoords *= 1.0f + 0.228532999753952f*r2 + -0.663019001483917f*r4 + 0.642908990383148f*r6;
Vector3 ic3 = new Vector3(imageCoords.x,imageCoords.y,1);
#if USEMATRIXMATH
Matrix4x4 cameraTransform = new Matrix4x4();
cameraTransform.SetRow(0,new Vector4(1042.73999023438f,0,637.273986816406f,0));
cameraTransform.SetRow(1, new Vector4(0, 1042.96997070313f, 352.928985595703f, 0));
cameraTransform.SetRow(2, new Vector4(0, 0, 1, 0));
cameraTransform.SetRow(3, new Vector4(0, 0, 0, 1));
Vector3 pixelCoords = cameraTransform * ic3;
return new Vector2(pixelCoords.x, pixelCoords.y);
#else
//float v1 = 1042.73999023438f * imageCoords.x + 637.273986816406f;
//float v2 = 1042.96997070313f * imageCoords.y + 352.928985595703f;
//float v3 = 1;
return new Vector2(1042.73999023438f * imageCoords.x + 637.273986816406f, 1042.96997070313f * imageCoords.y + 352.928985595703f);
#endif
//float dx = Math.Abs(v1 - pixelCoords.x);
//float dy = Math.Abs(v2 - pixelCoords.y);
//float dz = Math.Abs(v3 - pixelCoords.z);
//if (dx > float.Epsilon || dy > float.Epsilon || dz > float.Epsilon)
// UnityEngine.Debug.Log("Well, that didn't work");
//return new Vector2(v1, v2);
}
As one final note, do note that the code he provided is GLSL - if you're just using this for pretty pictures, use it as-is; the version here is for those who actually need to perform additional processing on the CPU.
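To show how the returned pixel coordinates might be used, here is a small usage sketch (not part of the original answer). It assumes PictureUV lives in the same class, that the color frame is available as a hypothetical Texture2D named colorImage, and that the y coordinate must be flipped because Texture2D.GetPixel counts rows from the bottom while the math above uses a top-left origin.
// Usage sketch: look up the color for one depth point.
Color LookupColor(Vector3 tangoDepthPoint, Texture2D colorImage)
{
    Vector2 px = PictureUV(tangoDepthPoint);               // pixel coordinates, origin at top-left
    int x = Mathf.Clamp((int)px.x, 0, colorImage.width - 1);
    int y = Mathf.Clamp((int)px.y, 0, colorImage.height - 1);
    // GetPixel uses a bottom-left origin, so flip the row index.
    return colorImage.GetPixel(x, colorImage.height - 1 - y);
}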

Related

How to unproject a point on screen to object space coordinates in vulkan?

I need to be able to unproject a screen pixel into object space using Vulkan, but somewhere my math is going wrong.
Here is the shader as it stands today for reference:
void main()
{
//the depth of this pixel is between 0 and 1
vec4 obj_space = vec4( float(gl_FragCoord.x)/ubo.screen_width, float(gl_FragCoord.y)/ubo.screen_height, gl_FragCoord.z, 1.0f);
//this puts us in normalized device coordinates [-1,1 ] range
obj_space.xy = ( obj_space.xy * 2.0f ) -1.0f;
//these two lines will put us in object space coordinates
//mvp_inverse is derived from this in the c++ side:
//glm::inverse(app.three_d_camera->get_projection_matrix() * app.three_d_camera->view_matrix * model);
obj_space = ubo.mvp_inverse * obj_space;
obj_space.xyz /= obj_space.w;
//the resulting position here is wrong
out_color = obj_space;
}
When I output the position as a color, the colors are off. I know I can simply pass the object-space position from the vertex shader to the fragment shader, but I'd like to understand why my math is not working; it will help me understand Vulkan and maybe learn a little math myself.
Thanks!
I'm not entirely sure what your problem is, but let's go over potential problems.
Remember, Vulkan clip space is:
positive y = down,
positive x = right,
positive z = out,
centered at the middle of the screen.
Additionally, despite OpenGL's GLSL docs saying it is centered at the bottom-left corner, in Vulkan gl_FragCoord is centered at the top-left corner.
in this step:
obj_space.xy = ( obj_space.xy * 2.0f ) -1.0f;
obj_space is now:
left x : -1.0
right x : 1.0
top y = -1.0
bottom y = 1.0
out z = 1.0
back z = 0
I'm almost entirely sure you don't mean your object space to have Y be negative at the top. The reason y increases from top to bottom is to match images and textures, which are ordered that way on the CPU, and Vulkan now orders them the same way.
Some other notes:
You claim your inverse is derived from glm::inverse here:
glm::inverse(app.three_d_camera->get_projection_matrix() * app.three_d_camera->view_matrix * model);
But GLM uses OpenGL conventions for depth range and handedness, and unless you force it to the correct coordinate system, it is going to assume right-handed, positive Y up, negative Z out. You'll need to include the following #defines before it works correctly (or physically change your calculations to accommodate this).
#define GLM_FORCE_DEPTH_ZERO_TO_ONE
#define GLM_FORCE_LEFT_HANDED
Additionally you'll need to modify your matrices to account for the negative Y direction. Here is an example of how I've handled this in the past (modifying the perspective matrix directly):
ubo.model = glm::translate(glm::mat4(1.0f), glm::vec3(pos_x,pos_y,pos_z));
ubo.model *= glm::rotate(glm::mat4(1.0f), time * glm::radians(0.0f), glm::vec3(0.0f, 0.0f, 1.0f));
ubo.view = glm::lookAt(glm::vec3(0.0f, 0.0f, -10.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::perspective(glm::radians(45.0f), swapChainExtent.width / (float) swapChainExtent.height, 0.1f, 100.0f);
ubo.proj[1][1] *= -1; // makes the y axis projected to the same as vulkans
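For comparison, here is a minimal CPU-side sketch of the same unprojection using System.Numerics (the names are illustrative; it assumes a Vulkan-style [0,1] depth range, a top-left pixel origin, and that the flipped-Y projection above is already baked into the composed matrix):
using System.Numerics;
static class Unprojector
{
    // Unprojects a pixel (px, py) with depth in [0,1] back through the inverse of the
    // composed model-view-projection matrix (composition order follows System.Numerics'
    // row-vector convention).
    public static Vector3 Unproject(float px, float py, float depth,
                                    float screenWidth, float screenHeight, Matrix4x4 mvp)
    {
        // Pixel -> NDC; Vulkan's clip-space y points down, matching the top-left origin.
        float ndcX = (px / screenWidth) * 2f - 1f;
        float ndcY = (py / screenHeight) * 2f - 1f;
        Matrix4x4 inv;
        if (!Matrix4x4.Invert(mvp, out inv))
            return Vector3.Zero;                           // singular matrix, nothing sensible to return
        // Multiply the homogeneous NDC point by the inverse and divide by w.
        Vector4 obj = Vector4.Transform(new Vector4(ndcX, ndcY, depth, 1f), inv);
        return new Vector3(obj.X, obj.Y, obj.Z) / obj.W;
    }
}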

How to make an object move in the path of an arc?

I'm making a game where there should be a robot throwing ball-shaped objects at another robot.
The balls thrown should fly in the shape of a symmetrical arc. Pretty sure the math-word for this is a parabola.
Both robots are on the x axis.
How can I implement such a thing in my game? I tried different approaches, none worked.
The current system for moving things in my game is like so: every object has x and y coordinates (variables), and dx and dy variables.
Every object has a move() method that gets called every cycle of the game loop. It simply adds dx to x and dy to y.
How can I implement what I described, into this system?
If there is a lot of math involved, please try to explain in a simply way, because I'm not great with math.
My situation: (diagram omitted)
Thanks a lot
You should add velocity to your missiles.
Velocity is a vector, which means it says how fast the missile moves along the x-axis and how fast along the y-axis. Now, instead of using Move(), use something like Update():
void Update()
{
position.X += velocity.X;
position.Y += velocity.Y;
}
Now let's think, what happens to the missile, once it is shot:
In the beginning it has some starting velocity. For example, somebody shot the missile with a speed of 1 m/s in x and -0.5 m/s in y. Then, as it flies, the missile will be pulled toward the ground - its Y velocity will keep growing toward the ground.
void Update()
{
velocity.Y += gravity;
position.X += velocity.X;
position.Y += velocity.Y;
}
This will make your missile move according to physics (excluding air resistance) and will produce a nice-looking parabola.
Edit:
You might ask how to calculate the initial velocity. Let's assume we have a given angle of shot (between the line of shot and the ground) and the initial speed (we may know how fast the missiles are after the shot, just not the X and Y components). Then:
velocity.X = cos(angle) * speed;
velocity.Y = sin(angle) * speed;
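Putting the per-frame update and the initial velocity together, a minimal sketch (all names and constants are illustrative; the sign of gravity and of the initial Y velocity assume screen coordinates where y grows downward) could look like this:
using System;
class Ball
{
    public float X, Y;            // position
    public float Dx, Dy;          // velocity per frame
    const float Gravity = 0.2f;   // positive because y grows downward on screen
    public Ball(float x, float y, float angleRadians, float speed)
    {
        X = x; Y = y;
        Dx = (float)Math.Cos(angleRadians) * speed;
        Dy = -(float)Math.Sin(angleRadians) * speed;   // negative: the shot starts upward
    }
    public void Update()
    {
        Dy += Gravity;   // gravity pulls the ball back down, producing the parabola
        X += Dx;
        Y += Dy;
    }
}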
Adding to Michal's answer, to make sure the missile hits the robot (if you want it to track the robot), you need to adjust its x velocity.
void Update()
{
ball.dy += gravity; // gravity = -9.8 or whatever makes sense in your game
ball.dx = (target.x - ball.x); // this needs to be normalized.
double ballNorm = sqrt(ball.dx*ball.dx + ball.dy*ball.dy);
ball.dx /= ballNorm;
ball.x += ball.dx;
ball.y += ball.dy;
}
This will cause the missile to track your target. Normalizing the x component of your vector ensures that it will never go above a velocity of one. It's not fully "normalizing" the vector, because normally you would have to do this to the y component too. If we didn't normalize here, we would end up with a ball that jumps all the way to your target on the first update. If you want to make your missile travel faster, just scale ballNorm by some amount.
You can get everything you need - max height, flight time, and range - from these few equations:
g = gravitational acceleration (9.81 m/s^2)
v = launch speed in m/s
a = launch angle in degrees
time = v * 2 * sin(a) / g
range = v^2 * sin(a * 2) / g
height = v^2 * sin(a)^2 / (2 * g)
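As a quick worked example of those formulas (a hedged sketch, assuming g = 9.81 m/s^2 and converting the angle to radians for Math.Sin):
using System;
class ProjectileDemo
{
    static void Main()
    {
        const double g = 9.81;                      // gravitational acceleration, m/s^2
        double v = 20.0;                            // launch speed, m/s
        double a = 45.0 * Math.PI / 180.0;          // launch angle, radians
        double time   = 2.0 * v * Math.Sin(a) / g;                      // total flight time
        double range  = v * v * Math.Sin(2.0 * a) / g;                  // horizontal distance
        double height = v * v * Math.Pow(Math.Sin(a), 2) / (2.0 * g);   // peak height
        // For 20 m/s at 45 degrees: roughly 2.88 s, 40.8 m and 10.2 m.
        Console.WriteLine($"time={time:F2}s range={range:F2}m height={height:F2}m");
    }
}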

Triangulate example for iBeacons

I am looking into the possibility of using multiple iBeacons to do a 'rough' indoor position location. The application is a kind of 'museum' setting, and it would be easier to form a grid of locations for the different objects than to use individual beacons (although that might be possible too).
Are there examples or experiences of using multiple beacons to triangulate some kind of location, or some logic to help me on my way to writing it myself?
I have been running some experiments to get a precise position using three beacons.
Results of trilateration
Unfortunately, the results were very disappointing in terms of quality. There were mainly two issues:
In non-controlled environments, where you can find metal and other objects that affect the signal, the received signal strength of the beacons changes so often that it seems impossible to get an error range below 5 meters.
Depending on how the user is handling the receiver device, the readings can change a lot as well. If the user puts his/her hand over the Bluetooth antenna, the algorithm gets low signal strengths as input, and thus the beacons will appear to be very far from the device. See this image for the precise location of the Bluetooth antenna.
Possible solutions
After talking with an Apple engineer who actively discouraged me from going down this route, the option I now feel most inclined to use is brute force: set up a beacon every X meters (X being the maximum error tolerated in the system), then track the position of a given device on this beacon grid by calculating which beacon is closest to the device and assuming the device is at the same position.
Trilateration algorithm
However, for the sake of completeness, I share below the core function of the trilateration algorithm. It's based on the paragraph 3 ("Three distances known") of this article.
- (CGPoint)getCoordinateWithBeaconA:(CGPoint)a beaconB:(CGPoint)b beaconC:(CGPoint)c distanceA:(CGFloat)dA distanceB:(CGFloat)dB distanceC:(CGFloat)dC {
CGFloat W, Z, x, y, y2;
W = dA*dA - dB*dB - a.x*a.x - a.y*a.y + b.x*b.x + b.y*b.y;
Z = dB*dB - dC*dC - b.x*b.x - b.y*b.y + c.x*c.x + c.y*c.y;
x = (W*(c.y-b.y) - Z*(b.y-a.y)) / (2 * ((b.x-a.x)*(c.y-b.y) - (c.x-b.x)*(b.y-a.y)));
y = (W - 2*x*(b.x-a.x)) / (2*(b.y-a.y));
//y2 is a second measure of y to mitigate errors
y2 = (Z - 2*x*(c.x-b.x)) / (2*(c.y-b.y));
y = (y + y2) / 2;
return CGPointMake(x, y);
}
Here is an open source java library that will perform the trilateration/multilateration:
https://github.com/lemmingapex/Trilateration
It uses a popular nonlinear least squares optimizer, the Levenberg-Marquardt algorithm, from Apache Commons Math.
double[][] positions = new double[][] { { 5.0, -6.0 }, { 13.0, -15.0 }, { 21.0, -3.0 }, { 12.42, -21.2 } };
double[] distances = new double[] { 8.06, 13.97, 23.32, 15.31 };
NonLinearLeastSquaresSolver solver = new NonLinearLeastSquaresSolver(new TrilaterationFunction(positions, distances), new LevenbergMarquardtOptimizer());
Optimum optimum = solver.solve();
// the answer
double[] calculatedPosition = optimum.getPoint().toArray();
// error and geometry information
RealVector standardDeviation = optimum.getSigma(0);
RealMatrix covarianceMatrix = optimum.getCovariances(0);
Most scholarly examples, like the one on Wikipedia, deal with exactly three circles and assume perfectly accurate information. These circumstances allow much simpler problem formulations with exact answers, and are usually not satisfactory for practical situations.
For the problem in R2 or R3 Euclidean space with distances that contain measurement error, an area (ellipse) or volume (ellipsoid) of interest is usually obtained instead of a point. If a point estimate is desired instead of a region, the area centroid or volume centroid should be used. R2 space requires at least 3 non-degenerate points and distances to obtain a unique region; similarly, R3 space requires at least 4 non-degenerate points and distances to obtain a unique region.
I looked into this. The term you want is trilateration. (In triangulation you have angles from 3 known points; in trilateration you have distances from 3 known points.) If you Google it you should find several articles, including one on Wikipedia. It involves solving a set of 3 simultaneous equations. The documents I saw were for 3D trilateration - 2D is easier because you can just drop the Z term.
What I found was abstract math. I haven't taken the time yet to map the general algorithm into specific code, but I plan on tackling it at some point.
Note that the results you get will be VERY crude, especially in anything but an empty room. The signals are weak enough that a person, a statue, or anything that blocks line of sight will increase your distance readings pretty significantly. You might even have places in a building where constructive interference (mostly from the walls) makes some places read as much closer than they actually are.
Accurate indoor positioning with iBeacon will be challenging for the following reasons:
As pointed out in earlier comments, the iBeacon signal tends to fluctuate a lot. The reasons include the multipath effect, dynamic obstructions between the phone and the iBeacon when the person is moving, other 2.4GHz interference, and more. So ideally you don't want to trust any single packet's data and should instead average several packets from the same beacon. That requires the phone/beacon distance not to change too much across those packets. Generic BLE beacons (like those from StickNFind) can easily be set to a 10Hz beaconing rate. For iBeacon, however, that will be hard, because
iBeacon's beaconing frequency probably cannot be higher than 1Hz. I will be glad if anyone can point to a source that says otherwise, but all the information I've seen so far confirms this assertion. That actually makes sense, since most iBeacons are battery powered and a high beaconing frequency significantly impacts battery life. Considering that people's average walking speed is 5.3 km/h (~1.5 m/s), even if you use a modest 3 beacon packets for averaging, it will be hard to get ~5m accuracy.
On the other hand, if you could increase the iBeacon frequency to more than 10Hz (which I doubt is possible), then it would be possible to get 5m or better accuracy with a suitable processing method. Trivial solutions based on the inverse-square law, like trilateration, often perform poorly because in practice the distance/RSSI relationship for different beacons is often way off from the inverse-square law, for reason 1 above. But as long as the RSSI is relatively stable for a certain beacon in any given location (which is usually the case), you can use an approach called fingerprinting to achieve higher accuracy. A common method used for fingerprinting is kNN (k-Nearest Neighbors).
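For reference, here is a minimal sketch of the kNN fingerprinting idea (all names are hypothetical and not from any beacon SDK): offline you record an RSSI vector at each known location; online you find the k recorded fingerprints closest to the live RSSI reading and average their positions.
using System;
using System.Linq;
// One calibration sample: a known (x, y) position and the RSSI seen from each beacon there.
record Fingerprint(double X, double Y, double[] Rssi);
static class KnnLocator
{
    // Estimates a position by averaging the k fingerprints whose RSSI vectors are
    // closest (Euclidean distance) to the live reading. Purely illustrative.
    public static (double X, double Y) Locate(Fingerprint[] database, double[] liveRssi, int k = 3)
    {
        var nearest = database
            .OrderBy(f => Distance(f.Rssi, liveRssi))
            .Take(k)
            .ToArray();
        return (nearest.Average(f => f.X), nearest.Average(f => f.Y));
    }
    static double Distance(double[] a, double[] b)
    {
        double sum = 0;
        for (int i = 0; i < a.Length; i++)
        {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.Sqrt(sum);
    }
}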
Update 2014-04-24
Some iBeacons can broadcast at more than 1Hz; Estimote, for example, uses 5Hz as the default. However, according to this link: "This is Apple restriction. IOS returns beacons update every second, no matter how frequently device is advertising." There is another comment there (likely from the Estimote vendor) saying "Our beacons can broadcast much faster and it may improve results and measurement". So whether a higher iBeacon frequency is beneficial is not clear.
For those who need @Javier Chávarri's trilateration function for Android devices (to save some time):
public static Location getLocationWithTrilateration(Location beaconA, Location beaconB, Location beaconC, double distanceA, double distanceB, double distanceC){
double bAlat = beaconA.getLatitude();
double bAlong = beaconA.getLongitude();
double bBlat = beaconB.getLatitude();
double bBlong = beaconB.getLongitude();
double bClat = beaconC.getLatitude();
double bClong = beaconC.getLongitude();
double W, Z, foundBeaconLat, foundBeaconLong, foundBeaconLongFilter;
W = distanceA * distanceA - distanceB * distanceB - bAlat * bAlat - bAlong * bAlong + bBlat * bBlat + bBlong * bBlong;
Z = distanceB * distanceB - distanceC * distanceC - bBlat * bBlat - bBlong * bBlong + bClat * bClat + bClong * bClong;
foundBeaconLat = (W * (bClong - bBlong) - Z * (bBlong - bAlong)) / (2 * ((bBlat - bAlat) * (bClong - bBlong) - (bClat - bBlat) * (bBlong - bAlong)));
foundBeaconLong = (W - 2 * foundBeaconLat * (bBlat - bAlat)) / (2 * (bBlong - bAlong));
//`foundBeaconLongFilter` is a second measure of `foundBeaconLong` to mitigate errors
foundBeaconLongFilter = (Z - 2 * foundBeaconLat * (bClat - bBlat)) / (2 * (bClong - bBlong));
foundBeaconLong = (foundBeaconLong + foundBeaconLongFilter) / 2;
Location foundLocation = new Location("Location");
foundLocation.setLatitude(foundBeaconLat);
foundLocation.setLongitude(foundBeaconLong);
return foundLocation;
}
If you're anything like me and don't like maths you might want to do a quick search for "indoor positioning sdk". There's lots of companies offering indoor positioning as a service.
Shameless plug: I work for indoo.rs and can recommend this service. It also includes routing and such on top of "just" indoor positioning.
My Architect/Manager wrote the following algorithm:
public static Location getLocationWithCenterOfGravity(Location beaconA, Location beaconB, Location beaconC, double distanceA, double distanceB, double distanceC) {
//Every meter there are approx 4.5 points
double METERS_IN_COORDINATE_UNITS_RATIO = 4.5;
//http://stackoverflow.com/a/524770/663941
//Find Center of Gravity
double cogX = (beaconA.getLatitude() + beaconB.getLatitude() + beaconC.getLatitude()) / 3;
double cogY = (beaconA.getLongitude() + beaconB.getLongitude() + beaconC.getLongitude()) / 3;
Location cog = new Location("Cog");
cog.setLatitude(cogX);
cog.setLongitude(cogY);
//Nearest Beacon
Location nearestBeacon;
double shortestDistanceInMeters;
if (distanceA < distanceB && distanceA < distanceC) {
nearestBeacon = beaconA;
shortestDistanceInMeters = distanceA;
} else if (distanceB < distanceC) {
nearestBeacon = beaconB;
shortestDistanceInMeters = distanceB;
} else {
nearestBeacon = beaconC;
shortestDistanceInMeters = distanceC;
}
//http://www.mathplanet.com/education/algebra-2/conic-sections/distance-between-two-points-and-the-midpoint
//Distance between nearest beacon and COG
double distanceToCog = Math.sqrt(Math.pow(cog.getLatitude() - nearestBeacon.getLatitude(),2)
+ Math.pow(cog.getLongitude() - nearestBeacon.getLongitude(),2));
//Convert shortest distance in meters into coordinates units.
double shortestDistanceInCoordinationUnits = shortestDistanceInMeters * METERS_IN_COORDINATE_UNITS_RATIO;
//http://math.stackexchange.com/questions/46527/coordinates-of-point-on-a-line-defined-by-two-other-points-with-a-known-distance?rq=1
//On the line between Nearest Beacon and COG find shortestDistance point apart from Nearest Beacon
double t = shortestDistanceInCoordinationUnits/distanceToCog;
Location pointsDiff = new Location("PointsDiff");
pointsDiff.setLatitude(cog.getLatitude() - nearestBeacon.getLatitude());
pointsDiff.setLongitude(cog.getLongitude() - nearestBeacon.getLongitude());
Location tTimesDiff = new Location("tTimesDiff");
tTimesDiff.setLatitude( pointsDiff.getLatitude() * t );
tTimesDiff.setLongitude(pointsDiff.getLongitude() * t);
//Add t times diff with nearestBeacon to find coordinates at a distance from nearest beacon in line to COG.
Location userLocation = new Location("UserLocation");
userLocation.setLatitude(nearestBeacon.getLatitude() + tTimesDiff.getLatitude());
userLocation.setLongitude(nearestBeacon.getLongitude() + tTimesDiff.getLongitude());
return userLocation;
}
Calculate the centre of gravity of the triangle (3 beacons)
Calculate the shortest distance / nearest beacon
Calculate the distance between that beacon and the centre of gravity
Convert the shortest distance to coordinate units (just a constant he used to tune accuracy; you can experiment by varying the constant)
Calculate the distance delta
Add the delta to the nearest beacon's x,y
After testing it, I found it accurate to within 5 meters.
Please comment with your test results so we can refine it.
I've implemented a very simple fingerprint algorithm for Android 4.4, tested in a relatively 'bad' environment:
nearly 10 Wi-Fi APs nearby,
several other Bluetooth signals nearby.
The accuracy seems to be 5-8 meters, and it depends on how I placed the 3 iBeacon broadcasters.
The algorithm is quite simple, and I think you can implement one yourself; the steps are:
load the indoor map.
sample the map at every candidate positioning point.
record all the sampling data; the data should include:
map coordinates, the visible signals and their RSSI.
When you start positioning, it's just the reverse of the preceding steps.
We are also trying to find the best way to precisely locate someone in a room using iBeacons. The thing is that the beacon signal power is not constant, and it is affected by other 2.4 GHz signals, metal objects, etc., so to achieve maximum precision it is necessary to calibrate each beacon individually once it has been set in its desired position (and to run some field tests to see the signal fluctuations when other Bluetooth devices are present).
We also have some iBeacons from Estimote (the same as in Konrad Dzwinel's video), and they have already developed a tech demo of what can be done with iBeacons. Within their app it is possible to see a radar on which the iBeacons are shown. Sometimes it is pretty accurate, but sometimes it is not (and phone movement does not seem to be considered when calculating positions). Check the demo in the video we made here: http://goo.gl/98hiza
Although in theory 3 iBeacons should be enough to achieve good precision, in real-world situations more beacons may be needed to ensure the precision you are looking for.
The thing that really helped me was this project on Code.Google.com: https://code.google.com/p/wsnlocalizationscala/ it contains lots of code, several trilateration algorithms, all written in C#. It's a big library, but not really meant to be used "out-of-the-box".
Please check the reference https://proximi.io/accurate-indoor-positioning-bluetooth-beacons/
Proximi SDK will take care of the triangulation. This SDK provides libraries for handling all the logic for beacon positioning, triangulation and filtering automatically in the background. In addition to beacons, you can combine IndoorAtlas, Wi-Fi, GPS and cellular positioning.
I found Vishnu Prahbu's solution very useful. I ported it to C#, in case anybody needs it.
public static PointF GetLocationWithCenterOfGravity(PointF a, PointF b, PointF c, float dA, float dB, float dC)
{
//http://stackoverflow.com/questions/20332856/triangulate-example-for-ibeacons
var METERS_IN_COORDINATE_UNITS_RATIO = 1.0f;
//http://stackoverflow.com/a/524770/663941
//Find Center of Gravity
var cogX = (a.X + b.X + c.X) / 3;
var cogY = (a.Y + b.Y + c.Y) / 3;
var cog = new PointF(cogX,cogY);
//Nearest Beacon
PointF nearestBeacon;
float shortestDistanceInMeters;
if (dA < dB && dA < dC)
{
nearestBeacon = a;
shortestDistanceInMeters = dA;
}
else if (dB < dC)
{
nearestBeacon = b;
shortestDistanceInMeters = dB;
}
else
{
nearestBeacon = c;
shortestDistanceInMeters = dC;
}
//http://www.mathplanet.com/education/algebra-2/conic-sections/distance-between-two-points-and-the-midpoint
//Distance between nearest beacon and COG
var distanceToCog = (float)(Math.Sqrt(Math.Pow(cog.X - nearestBeacon.X, 2)
+ Math.Pow(cog.Y - nearestBeacon.Y, 2)));
//Convert shortest distance in meters into coordinates units.
var shortestDistanceInCoordinationUnits = shortestDistanceInMeters * METERS_IN_COORDINATE_UNITS_RATIO;
//http://math.stackexchange.com/questions/46527/coordinates-of-point-on-a-line-defined-by-two-other-points-with-a-known-distance?rq=1
//On the line between Nearest Beacon and COG find shortestDistance point apart from Nearest Beacon
var t = shortestDistanceInCoordinationUnits / distanceToCog;
var pointsDiff = new PointF(cog.X - nearestBeacon.X, cog.Y - nearestBeacon.Y);
var tTimesDiff = new PointF(pointsDiff.X * t, pointsDiff.Y * t);
//Add t times diff with nearestBeacon to find coordinates at a distance from nearest beacon in line to COG.
var userLocation = new PointF(nearestBeacon.X + tTimesDiff.X, nearestBeacon.Y + tTimesDiff.Y);
return userLocation;
}
Alternative Equation
- (CGPoint)getCoordinateWithBeaconA:(CGPoint)a beaconB:(CGPoint)b beaconC:(CGPoint)c distanceA:(CGFloat)dA distanceB:(CGFloat)dB distanceC:(CGFloat)dC {
CGFloat x, y;
x = ( ( (pow(dA,2)-pow(dB,2)) + (pow(b.x,2)-pow(a.x,2)) + (pow(b.y,2)-pow(a.y,2)) ) * (2*c.y-2*b.y) - ( (pow(dB,2)-pow(dC,2)) + (pow(c.x,2)-pow(b.x,2)) + (pow(c.y,2)-pow(b.y,2)) ) *(2*b.y-2*a.y) ) / ( (2*b.x-2*c.x)*(2*b.y-2*a.y)-(2*a.x-2*b.x)*(2*c.y-2*b.y) );
y = ( (pow(dA,2)-pow(dB,2)) + (pow(b.x,2)-pow(a.x,2)) + (pow(b.y,2)-pow(a.y,2)) + x*(2*a.x-2*b.x)) / (2*b.y-2*a.y);
return CGPointMake(x, y);
}

Calculating if or not a 3D eyepoint is behind a 2D plane or upwards

The setup
Draw XY-coordinate axes on a piece of paper. Write a word on it along the X-axis, so that the word's centerpoint is at the origin (half on the positive side of X/Y, the other half on the negative side of X/Y).
Now, if you flip the paper upside down you'll notice that the word is mirrored in relation to both the X- and Y-axis. If you look from behind the paper, it's mirrored in relation to the Y-axis. If you look at it from behind and upside down, it's mirrored in relation to the X-axis.
Ok, I have points in a 2D plane (vertices) that are created in a similar way at the origin, and I need to apply exactly the same rule to them. To make things interesting:
The 2D plane is actually 3D, each point (vertex) being (x, y, 0). Initially the vertices are positioned at the origin and their normal is Pn(0,0,1). => Correctly seen when looked at from point Pn towards the origin.
The vertex plane has its own rotation matrix [Rp] and position P(x,y,z) in the 3D world. The rotation is applied before positioning.
The 3D world is "right handed". The viewer would be looking towards the origin from some distance along the positive Z-axis, but the world is also oriented by rotation matrix [Rw]. [Rw] * (0,0,1) would point directly at the viewer's eye.
From those I need to calculate when the vertex-plane should be mirrored and by which axis. The mirroring itself can be done before applying [Rp] and P by:
Vertices vertices = Get2DPlanePoints();
int MirrorX = 1; // -1 to mirror, 1 NOT to mirror
int MirrorY = 1; // -1 to mirror, 1 NOT to mirror
Matrix WorldRotation = GetWorldRotationMatrix();
MirrorX = GetMirrorXFactor(WorldRotation);
MirrorY = GetMirrorYFactor(WorldRotation);
foreach(Vertex v in vertices)
{
v.X = v.X * MirrorX * MirrorY;
v.Y = v.Y * MirrorY;
}
// Apply rotation...
// Add position...
The question
So I need GetMirrorXFactor() & ..YFactor() functions that return -1 if the viewer's eyepoint is at a greater "X/Y" angle than +-90 degrees relative to the vertex plane's normal after the rotation and world orientation. I have already solved this, but I'm looking for more "elegant" mathematics. I know that rotation matrices somehow contain information about how much is rotated around which axis, and I believe that can be utilized here.
My Solution for MirrorX:
// Matrix multiplications. Vectors are vertical matrices here.
Pnr = [Rp] * Pn // Rotated vertices's normal
Pur = [Rp] * (0,1,0) // Rotated vertices's "up-vector"
Wnr = [Rw] * (0,0,1) // Rotated eye-vector with world's orientation
// = vector pointing directly at the viewer's eye
// Use rotated up-vector as a normal some new plane and project viewer's
// eye on it. dot = dot product between vectors.
Wnrx = Wnr - (Wnr dot Pur) * Pur // "X-projected" eye.
// Calculate angle between eye's X-component and plane's rotated normal.
// ||V|| = V's norm.
angle = arccos( (Wnrx dot Pnr) / ( ||Wnrx|| * ||Pnr|| ) )
if (angle > PI / 2)
MirrorX = -1; // DO mirror
else
MirrorX = 1; // DON'T mirror
Solution for mirrorY can be done in similar way using viewer's up and vertex-plane's right -vectors.
Better solution?
if (([Rp]*(1,0,0)) dot ([Rw]*(1,0,0))) < 0
MirrorX = -1; // DO mirror
else
MirrorX = 1; // DON'T mirror
if (([Rp]*(0,1,0)) dot ([Rw]*(0,1,0))) < 0
MirrorY = -1; // DO mirror
else
MirrorY = 1; // DON'T mirror
Explaining in more detail is difficult without diagrams, but if you have trouble with this solution we can work through some cases.
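For anyone who wants to see the dot-product test spelled out, here is a minimal sketch using System.Numerics (the helper names are illustrative; Rp and Rw are assumed to be available as Matrix4x4 rotation matrices):
using System.Numerics;
static class MirrorTest
{
    // Returns -1 (mirror) when the plane's rotated X axis points away from the
    // world's rotated X axis, otherwise 1.
    public static int GetMirrorXFactor(Matrix4x4 Rp, Matrix4x4 Rw)
    {
        Vector3 planeX = Vector3.TransformNormal(Vector3.UnitX, Rp);
        Vector3 worldX = Vector3.TransformNormal(Vector3.UnitX, Rw);
        return Vector3.Dot(planeX, worldX) < 0f ? -1 : 1;
    }
    // Same test for Y, using the rotated Y axes.
    public static int GetMirrorYFactor(Matrix4x4 Rp, Matrix4x4 Rw)
    {
        Vector3 planeY = Vector3.TransformNormal(Vector3.UnitY, Rp);
        Vector3 worldY = Vector3.TransformNormal(Vector3.UnitY, Rw);
        return Vector3.Dot(planeY, worldY) < 0f ? -1 : 1;
    }
}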

How do I take a 2D point, and project it into a 3D Vector by a perspective camera

I have a 2D point (x,y) and I want to project it to a vector, so that I can perform a ray trace to check if the user clicked on a 3D object. I have written all the other code, except that when I got back to my function to get the vector from the xy coordinates of the mouse, I was not accounting for field of view, and I don't want to guess what the factor would be, as 'voodoo' fixes are not a good idea for a library. Any math-magicians want to help? :-)
Here's my current code, which needs the camera's FOV applied:
sf::Vector3<float> Camera::Get3DVector(int Posx, int Posy, sf::Vector2<int> ScreenSize){
//not using a "wide lens", and will maintain the aspect ratio of the viewport
int window_x = Posx - ScreenSize.x/2;
int window_y = (ScreenSize.y - Posy) - ScreenSize.y/2;
float Ray_x = float(window_x)/float(ScreenSize.x/2);
float Ray_y = float(window_y)/float(ScreenSize.y/2);
sf::Vector3<float> Vector(Ray_x,Ray_y, -_zNear);
// to global cords
return MultiplyByMatrix((Vector/LengthOfVector(Vector)), _XMatrix, _YMatrix, _ZMatrix);
}
You're not too far off. One thing is to make sure your mouse is in -1 to 1 space (not 0 to 1).
Then you create 2 vectors:
Vector3 orig = Vector3(mouse.X,mouse.Y,0.0f);
Vector3 far = Vector3(mouse.X,mouse.Y,1.0f);
You also need to use the inverse of your perspective transform (or view-projection if you want world space):
Matrix ivp = Matrix::Invert(Projection);
Then you do:
Vector3 rayorigin = Vector3::TransformCoordinate(orig,ivp);
Vector3 rayfar = Vector3::TransformCoordinate(far,ivp);
If you want a ray, you also need direction, which is simply:
Vector3 raydir = Normalize(rayfar-rayorigin);
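Pulled together into one place, a hedged C# sketch of the same picking-ray construction with System.Numerics might look like this (names are illustrative; invViewProj would be the inverse of your view * projection matrix if you want a world-space ray):
using System.Numerics;
static class Picking
{
    // Builds a picking ray from a mouse position already mapped to [-1, 1] NDC.
    public static (Vector3 Origin, Vector3 Direction) MouseRay(float ndcX, float ndcY,
                                                               Matrix4x4 invViewProj)
    {
        // Unproject a point on the near plane (depth 0) and one on the far plane (depth 1).
        Vector3 origin = TransformCoordinate(new Vector3(ndcX, ndcY, 0f), invViewProj);
        Vector3 far    = TransformCoordinate(new Vector3(ndcX, ndcY, 1f), invViewProj);
        return (origin, Vector3.Normalize(far - origin));
    }
    // Equivalent of Vector3::TransformCoordinate: transform, then divide by w.
    static Vector3 TransformCoordinate(Vector3 v, Matrix4x4 m)
    {
        Vector4 r = Vector4.Transform(new Vector4(v, 1f), m);
        return new Vector3(r.X, r.Y, r.Z) / r.W;
    }
}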
