I was working on a project where I get analog values from a resistive touchscreen and turn them into intersection points.
Here is an example:
Here is my code for the data collection using an Arduino Uno, and for the construction of the points using a tool called Processing.
#define side1 2
#define side2 3
#define side3 4
#define side4 5
#define contact A0
void setup() {
pinMode(contact, INPUT);
pinMode(side1, OUTPUT);
pinMode(side2, OUTPUT);
pinMode(side3, OUTPUT);
pinMode(side4, OUTPUT);
Serial.begin(9600);
}
void loop() {
int sensorValue1;
int sensorValue2;
int sensorValue3;
int sensorValue4;
// SENSOR VALUE 1:
digitalWrite(side1, LOW);
digitalWrite(side2, HIGH);
digitalWrite(side3, HIGH);
digitalWrite(side4, HIGH);
delay(5);
for (int i = 0; i < 10; i++){
sensorValue1 = analogRead(contact);
}
// SENSOR VALUE 2:
digitalWrite(side2, LOW);
digitalWrite(side3, HIGH);
digitalWrite(side4, HIGH);
digitalWrite(side1, HIGH);
delay(5);
for (int i = 0; i < 10; i++){
sensorValue2 = analogRead(contact);
}
// SENSOR VALUE 3:
digitalWrite(side3, LOW);
digitalWrite(side2, HIGH);
digitalWrite(side4, HIGH);
digitalWrite(side1, HIGH);
delay(5);
for (int i = 0; i < 10; i++){
sensorValue3 = analogRead(contact);
}
// SENSOR VALUE 4:
digitalWrite(side4, LOW);
digitalWrite(side3, HIGH);
digitalWrite(side2, HIGH);
digitalWrite(side1, HIGH);
delay(5);
for (int i = 0; i < 10; i++){
sensorValue4 = analogRead(contact);
}
Serial.print(sensorValue1);
Serial.print(",");
Serial.print(sensorValue2);
Serial.print(",");
Serial.print(sensorValue3);
Serial.print(",");
Serial.print(sensorValue4);
Serial.println();
}
This is the Processing code for the construction of the graph.
import processing.serial.*;
Serial myPort; // The serial port
int maxNumberOfSensors = 4;
float[] sensorValues = new float[maxNumberOfSensors];
float sensorValueX;
float sensorValueX1;
float sensorValueY;
float sensorValueY1;
int scaleValue = 2;
void setup () {
size(600, 600); // set up the window to whatever size you want
//println(Serial.list()); // List all the available serial ports
String portName = "COM5";
myPort = new Serial(this, portName, 9600);
myPort.clear();
myPort.bufferUntil('\n'); // don't generate a serialEvent() until you get a newline (\n) byte
background(255); // set inital background
smooth(); // turn on antialiasing
}
void draw () {
//background(255);
//noFill();
fill(100,100,100,100);
ellipse(height,0, scaleValue*sensorValues[0], scaleValue*sensorValues[0]);
ellipse(0,width, scaleValue*sensorValues[1], scaleValue*sensorValues[1]);
ellipse(height,width, scaleValue*sensorValues[2], scaleValue*sensorValues[2]);
ellipse(0,0, scaleValue*sensorValues[3], scaleValue*sensorValues[3]);
//ellipse(sensorValueY, sensorValueX, 10,10);
//println(sensorValueY,sensorValueX);
sensorValueX = ((sensorValues[3]*sensorValues[3])-(sensorValues[2]*sensorValues[2])+600*600)/2000;
sensorValueX1 = ((sensorValues[0]*sensorValues[0])-(sensorValues[1]*sensorValues[1])+600*600)/2000;
sensorValueY = ((sensorValues[3]*sensorValues[3])-(sensorValues[2]*sensorValues[2])+(600*600))/2000;
sensorValueY1 = ((sensorValues[1]*sensorValues[1])-(sensorValues[0]*sensorValues[0])+(600*600))/2000;
line(0, scaleValue*sensorValueX, height,scaleValue* sensorValueX);
line(scaleValue*sensorValueY, 0, scaleValue*sensorValueY, width);
ellipse(scaleValue*sensorValueY, scaleValue*sensorValueX, 20,20);
line(0, scaleValue*sensorValueX1, height,scaleValue* sensorValueX1);
line(scaleValue*sensorValueY1, 0, scaleValue*sensorValueY1, width);
ellipse(scaleValue*sensorValueY1, scaleValue*sensorValueX1, 20,20);
println(scaleValue*sensorValueX,scaleValue*sensorValueY);
}
void serialEvent (Serial myPort) {
String inString = myPort.readStringUntil('\n'); // get the ASCII string
if (inString != null) { // if it's not empty
inString = trim(inString); // trim off any whitespace
int incomingValues[] = int(split(inString, ",")); // convert to an array of ints
if (incomingValues.length <= maxNumberOfSensors && incomingValues.length > 0) {
for (int i = 0; i < incomingValues.length; i++) {
// map the incoming values (0 to 1023) to an appropriate gray-scale range (0-255):
sensorValues[i] = map(incomingValues[i], 0, 1023, 0, width);
//println(incomingValues[i]+ " " + sensorValues[i]);
}
}
}
}
I was wondering how I could convert the intersection of those points to a coordinate. Example: in the image I showed you, I set the parameters for the dimensions to be (600,600). Is it possible to change that intersection area to a coordinate value? Currently, my code is printing out coordinates, however they are diagonal, such that the x and y values are equal. I want the x and y coordinates to have different quantities so that I can get coordinates for different sides of the square. Can somebody help?
By reading your code I'm assuming that you know the position of all n sensors and the distance from each sensor to a target. So what you're essentially trying to do is trilateration (as mentioned by Nico Schertler), in other words, determining a relative position based on the distance between n points.
Just a quick definition note in case of confusion:
Triangulation = Working with angles
Trilateration = Working with distances
Trilateration requires at least 3 points and distances.
1 sensor gives you the distance the target is away from the sensor
2 sensors give you 2 possible locations the target can be at
3 sensors tell you which of the 2 locations the target is at
The first solution that probably comes to mind is calculating the intersections between 3 sensors, treating them as circles. Given that there might be some error in the distances, the circles might not always intersect, which rules out this solution.
The following code has all been done in Processing.
I took the liberty of making a class Sensor.
class Sensor {
public PVector p; // position
public float d; // distance from sensor to target (radius of the circle)
public Sensor(float x, float y) {
this.p = new PVector(x, y);
this.d = 0;
}
}
Now to calculate and approximate the intersection point between the sensors/circles, do the following:
PVector trilateration(Sensor s1, Sensor s2, Sensor s3) {
// unit vector pointing from s1 to s2 (local x axis)
PVector s = PVector.sub(s2.p, s1.p).div(PVector.sub(s2.p, s1.p).mag());
// projection of (s3 - s1) onto that axis
float a = s.dot(PVector.sub(s3.p, s1.p));
// unit vector perpendicular to s, pointing towards s3 (local y axis)
PVector t = PVector.sub(s3.p, s1.p).sub(PVector.mult(s, a)).div(PVector.sub(s3.p, s1.p).sub(PVector.mult(s, a)).mag());
// projection of (s3 - s1) onto the local y axis
float b = t.dot(PVector.sub(s3.p, s1.p));
// distance between s1 and s2
float c = PVector.sub(s2.p, s1.p).mag();
// intersection point expressed in the local (s, t) frame anchored at s1
float x = (sq(s1.d) - sq(s2.d) + sq(c)) / (c * 2);
float y = ((sq(s1.d) - sq(s3.d) + sq(a) + sq(b)) / (b * 2)) - ((a / b) * x);
// convert back to world coordinates
s.mult(x);
t.mult(y);
return PVector.add(s1.p, s).add(t);
}
Where s1, s2, s3 are any of your 3 sensors, do the following to calculate the intersection point between the given sensors:
PVector target = trilateration(s1, s2, s3);
While it is possible to calculate the intersection between any number of sensors, it becomes more and more complex the more sensors you want to include, especially since you're doing it yourself.
If you're able to use external Java libraries, then I highly recommend using com.lemmingapex.trilateration. Then you'd be able to calculate the intersection point between 4 sensors as follows.
Considering s1, s2, s3, s4 as instances of the previously mentioned class Sensor.
// requires the trilateration library classes (TrilaterationFunction, NonLinearLeastSquaresSolver)
// and Apache Commons Math (LevenbergMarquardtOptimizer, LeastSquaresOptimizer.Optimum) on the classpath
double[][] positions = new double[][] { { s1.p.x, s1.p.y }, { s2.p.x, s2.p.y }, { s3.p.x, s3.p.y }, { s4.p.x, s4.p.y } };
double[] distances = new double[] { s1.d, s2.d, s3.d, s4.d };
NonLinearLeastSquaresSolver solver = new NonLinearLeastSquaresSolver(
new TrilaterationFunction(positions, distances),
new LevenbergMarquardtOptimizer());
Optimum optimum = solver.solve();
double[] target = optimum.getPoint().toArray();
double x = target[0];
double y = target[1];
The following examples show the trilateration() method I wrote, not the library above.
Example 1 - No Sensor Error
The 3 big circles being any 3 sensors and the single red circle being the approximated point.
Example 2 - With Sensor Error
The 3 big circles being any 3 sensors and the single red circle being the approximated point.
What you need to compute is the point that is nearest to a set of circles;
let us denote their centers by (x1,y1), (x2,y2), (x3,y3), (x4,y4) and their radii by r1, r2, r3, r4.
You want to find the (x,y) that minimizes
F(x,y) = Sum_i ( dist((x,y),(xi,yi)) - ri )^2
This can be achieved by using Newton's algorithm. Newton's algorithm works from an "initial guess" (let's say at the center of the screen), improved iteratively by solving a series of linear systems (in this case, with 2 variables, easy to solve).
M P = -G
where M is the (2x2) matrix of the second order derivatives of F with respect to x and y (called the Hessian), and G the vector of the first order derivatives of F with
respect to x and y (the gradient). This gives the "update" vector P, which tells how to move the coordinates:
Then (x,y) is updated by x = x + Px, y = y + Py, and so on (recompute M and G, solve for P, update x and y) until convergence. In your case it will probably converge in a handful of iterations.
Since you only have two variables, the 2x2 linear solve is trivial, and the expressions for F and its derivatives are simple, so you can implement it without needing an external library.
Note1: the Levenberg-Marquardt algorithm mentioned in the other answer is a variant of Newton's algorithm (specialized for sums of squares, like here, that neglects some terms and regularizes the matrix M by adding small numbers to its diagonal coefficients). More on this here.
Note2: a simple gradient descent will also probably work (a bit simpler to implement, since it only uses first order derivatives), but given that you only have two variables, the 2x2 linear solve is trivial, so Newton is probably worth it (it requires a much smaller number of iterations to converge, which may be critical if your system is interactive).
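For illustration, here is a minimal C++ sketch of this scheme, using the sum-of-squares simplification from Note1 (the second order terms of M are neglected, i.e. Gauss-Newton). The sensor positions, radii and the function name are hypothetical placeholders, not values from the question:
    #include <cmath>

    // hypothetical data: 4 circle centers (sensor positions) and measured radii (distances)
    const int N = 4;
    double cx[N] = {   0.0,   0.0, 600.0, 600.0 };
    double cy[N] = {   0.0, 600.0,   0.0, 600.0 };
    double r[N]  = { 400.0, 450.0, 420.0, 380.0 };

    // minimize F(x,y) = Sum_i ( dist((x,y),(cx_i,cy_i)) - r_i )^2
    void solveTarget(double &x, double &y)
    {
        x = 300.0; y = 300.0;                       // initial guess: center of the screen
        for (int iter = 0; iter < 20; iter++)
        {
            double gx = 0.0, gy = 0.0;              // gradient G
            double mxx = 0.0, mxy = 0.0, myy = 0.0; // approximated Hessian M
            for (int i = 0; i < N; i++)
            {
                double dx = x - cx[i], dy = y - cy[i];
                double d  = sqrt(dx*dx + dy*dy);
                if (d < 1e-9) continue;             // sitting exactly on a center
                double e  = d - r[i];               // distance error for circle i
                gx  += 2.0*e*dx/d;
                gy  += 2.0*e*dy/d;
                mxx += 2.0*dx*dx/(d*d);
                mxy += 2.0*dx*dy/(d*d);
                myy += 2.0*dy*dy/(d*d);
            }
            // solve the 2x2 system M * P = -G by hand
            double det = mxx*myy - mxy*mxy;
            if (fabs(det) < 1e-12) break;
            double px = (-gx*myy + gy*mxy) / det;
            double py = (-gy*mxx + gx*mxy) / det;
            x += px; y += py;
            if (px*px + py*py < 1e-12) break;       // converged
        }
    }
Adding a small positive number to mxx and myy before solving turns this into the Levenberg-Marquardt regularization mentioned in Note1.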
I have a velocity vector in latitude, longitude, altitude. I would like to convert it to Cartesian coordinates vx, vy, vz. The format is from the WGS84 standard.
Here is the formula:
//------------------------------------------------------------------------------
template <class T>
TVectorXYZ<T> WGS84::ToCartesian(T latitude, T longitude, T elevation)
//------------------------------------------------------------------------------
{
double sinlat, coslat;
double sinlon, coslon;
sincos_degree(latitude, sinlat, coslat);
sincos_degree(longitude, sinlon, coslon);
const double v = a / sqrt(1 - WGS84::ee * sinlat*sinlat);
TVectorXYZ<T> coord
(
static_cast<T>((v + elevation) * coslat * sinlon),
static_cast<T>(((1 - WGS84::ee) * v + elevation) * sinlat),
static_cast<T>((v + elevation) * coslat * coslon)
);
return coord;
}
OK, based on your previous question and the long comment flow, let's assume your input is:
lon [rad], lat [rad], alt [m] // WGS84 position
vlon [m/s], vlat [m/s], valt [m/s] // speed in the WGS84 lon,lat,alt directions but in [m/s]
And you want as output:
x,y,z // Cartesian position [m]
vx,vy,vz // Cartesian velocity [m/s]
And you have a valid transformation to Cartesian coordinates for positions at your disposal; this is mine:
void WGS84toXYZ(double &x,double &y,double &z,double lon,double lat,double alt) // [rad,rad,m] -> [m,m,m]
{
const double _earth_a=6378137.00000; // [m] WGS84 equator radius
const double _earth_b=6356752.31414; // [m] WGS84 polar radius
const double _earth_e=8.1819190842622e-2; // WGS84 eccentricity
const double _aa=_earth_a*_earth_a;
const double _ee=_earth_e*_earth_e;
double a,b,h,l,c,s;
a=lon;
b=lat;
h=alt;
c=cos(b);
s=sin(b);
// WGS84 from eccentricity
l=_earth_a/sqrt(1.0-(_ee*s*s));
x=(l+h)*c*cos(a);
y=(l+h)*c*sin(a);
z=(((1.0-_ee)*l)+h)*s;
}
And a routine to normalize a vector to unit size:
void normalize(double &x,double &y,double &z)
{
double l=sqrt(x*x+y*y+z*z);
if (l>1e-6) l=1.0/l;
x*=l; y*=l; z*=l;
}
Yes, you can try to derive the formula like #MvG suggests, but from your rookie mistakes I strongly doubt it would lead to a successful result. Instead you can do this:
obtain lon,lat,alt direction vectors for your position (x,y,z)
That is easy: just use some small step increment in the WGS84 position, convert to Cartesian, subtract and normalize to unit vectors. Let's call these direction basis vectors U,V,W.
double Ux,Uy,Uz; // [m]
double Vx,Vy,Vz; // [m]
double Wx,Wy,Wz; // [m]
double da=1.567e-7; // [rad] angular step ~ 1.0 m in lon direction
double dl=1.0; // [m] altitude step 1.0 m
WGS84toXYZ( x, y, z,lon ,lat,alt ); // actual position
WGS84toXYZ(Ux,Uy,Uz,lon+da,lat,alt ); // lon direction North
WGS84toXYZ(Vx,Vy,Vz,lon,lat+da,alt ); // lat direction East
WGS84toXYZ(Wx,Wy,Wz,lon,lat ,alt+dl); // alt direction High/Up
Ux-=x; Uy-=y; Uz-=z;
Vx-=x; Vy-=y; Vz-=z;
Wx-=x; Wy-=y; Wz-=z;
normalize(Ux,Uy,Uz);
normalize(Vx,Vy,Vz);
normalize(Wx,Wy,Wz);
convert velocity from lon,lat,alt to vx,vy,vz
vx = vlon*Ux + vlat*Vx + valt*Wx;
vy = vlon*Uy + vlat*Vy + valt*Wy;
vz = vlon*Uz + vlat*Vz + valt*Wz;
Hope it is clear enough. As usual, be careful about the units (deg/rad and m/ft/km) because units matter a lot.
Btw the U,V,W basis vectors form a NEH reference frame and at the same time are the direction derivatives MvG is mentioning.
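If it helps, the steps above can be chained into a single helper. This is only a sketch that wraps the snippets already shown; it assumes the 3-component WGS84toXYZ and normalize routines from above, and the function name is mine:
    // [rad,rad,m] position and [m/s] speeds along lon,lat,alt -> [m/s] Cartesian velocity
    void WGS84VelocityToXYZ(double &vx, double &vy, double &vz,
                            double lon, double lat, double alt,
                            double vlon, double vlat, double valt)
    {
        const double da = 1.567e-7; // [rad] angular step ~ 1.0 m
        const double dl = 1.0;      // [m] altitude step
        double x, y, z, Ux, Uy, Uz, Vx, Vy, Vz, Wx, Wy, Wz;
        WGS84toXYZ( x,  y,  z, lon     , lat     , alt     ); // actual position
        WGS84toXYZ(Ux, Uy, Uz, lon + da, lat     , alt     ); // lon direction
        WGS84toXYZ(Vx, Vy, Vz, lon     , lat + da, alt     ); // lat direction
        WGS84toXYZ(Wx, Wy, Wz, lon     , lat     , alt + dl); // alt direction
        Ux -= x; Uy -= y; Uz -= z; normalize(Ux, Uy, Uz);     // build unit basis vectors
        Vx -= x; Vy -= y; Vz -= z; normalize(Vx, Vy, Vz);
        Wx -= x; Wy -= y; Wz -= z; normalize(Wx, Wy, Wz);
        vx = vlon*Ux + vlat*Vx + valt*Wx;                     // project speeds onto the basis
        vy = vlon*Uy + vlat*Vy + valt*Wy;
        vz = vlon*Uz + vlat*Vz + valt*Wz;
    }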
[Edit1] more precise conversions
//---------------------------------------------------------------------------
//--- WGS84 transformations ver: 1.00 ---------------------------------------
//---------------------------------------------------------------------------
#ifndef _WGS84_h
#define _WGS84_h
//---------------------------------------------------------------------------
// http://www.navipedia.net/index.php/Ellipsoidal_and_Cartesian_Coordinates_Conversion
//---------------------------------------------------------------------------
// WGS84(a,b,h) = (long,lat,alt) [rad,rad,m]
// XYZ(x,y,z) [m]
//---------------------------------------------------------------------------
const double _earth_a=6378137.00000; // [m] WGS84 equator radius
const double _earth_b=6356752.31414; // [m] WGS84 polar radius
const double _earth_e=8.1819190842622e-2; // WGS84 eccentricity
//const double _earth_e=sqrt(1.0-((_earth_b/_earth_a)*(_earth_b/_earth_a)));
const double _earth_ee=_earth_e*_earth_e;
//---------------------------------------------------------------------------
const double kmh=1.0/3.6; // [km/h] -> [m/s]
//---------------------------------------------------------------------------
void XYZtoWGS84 (double *abh ,double *xyz ); // [m,m,m] -> [rad,rad,m]
void WGS84toXYZ (double *xyz ,double *abh ); // [rad,rad,m] -> [m,m,m]
void WGS84toXYZ_posvel(double *xyzpos,double *xyzvel,double *abhpos,double *abhvel); // [rad,rad,m],[m/s,m/s,m/s] -> [m,m,m],[m/s,m/s,m/s]
void WGS84toNEH (reper &neh ,double *abh ); // [rad,rad,m] -> NEH [m]
void WGS84_m2rad (double &da,double &db,double *abh); // [rad,rad,m] -> [rad],[rad] representing 1m angle step
void XYZ_interpolate (double *pt,double *p0,double *p1,double t); // [m,m,m] pt = p0 + (p1-p0)*t in ellipsoid space t = <0,1>
//---------------------------------------------------------------------------
void XYZtoWGS84(double *abh,double *xyz)
{
int i;
double a,b,h,l,n,db,s;
a=atanxy(xyz[0],xyz[1]);
l=sqrt((xyz[0]*xyz[0])+(xyz[1]*xyz[1]));
// estimate lat
b=atanxy((1.0-_earth_ee)*l,xyz[2]);
// iterate to improve accuracy
for (i=0;i<100;i++)
{
s=sin(b); db=b;
n=divide(_earth_a,sqrt(1.0-(_earth_ee*s*s)));
h=divide(l,cos(b))-n;
b=atanxy((1.0-divide(_earth_ee*n,n+h))*l,xyz[2]);
db=fabs(db-b);
if (db<1e-12) break;
}
if (b>0.5*pi) b-=pi2;
abh[0]=a;
abh[1]=b;
abh[2]=h;
}
//---------------------------------------------------------------------------
void WGS84toXYZ(double *xyz,double *abh)
{
double a,b,h,l,c,s;
a=abh[0];
b=abh[1];
h=abh[2];
c=cos(b);
s=sin(b);
// WGS84 from eccentricity
l=_earth_a/sqrt(1.0-(_earth_ee*s*s));
xyz[0]=(l+h)*c*cos(a);
xyz[1]=(l+h)*c*sin(a);
xyz[2]=(((1.0-_earth_ee)*l)+h)*s;
}
//---------------------------------------------------------------------------
void WGS84toNEH(reper &neh,double *abh)
{
double N[3],E[3],H[3]; // [m]
double p[3],xyzpos[3];
const double da=1.567e-7; // [rad] angular step ~ 1.0 m in lon direction
const double dl=1.0; // [m] altitude step 1.0 m
vector_copy(p,abh);
// actual position
WGS84toXYZ(xyzpos,abh);
// NEH
p[0]+=da; WGS84toXYZ(N,p); p[0]-=da;
p[1]+=da; WGS84toXYZ(E,p); p[1]-=da;
p[2]+=dl; WGS84toXYZ(H,p); p[2]-=dl;
vector_sub(N,N,xyzpos);
vector_sub(E,E,xyzpos);
vector_sub(H,H,xyzpos);
vector_one(N,N);
vector_one(E,E);
vector_one(H,H);
neh._rep=1;
neh._inv=0;
// axis X
neh.rep[ 0]=N[0];
neh.rep[ 1]=N[1];
neh.rep[ 2]=N[2];
// axis Y
neh.rep[ 4]=E[0];
neh.rep[ 5]=E[1];
neh.rep[ 6]=E[2];
// axis Z
neh.rep[ 8]=H[0];
neh.rep[ 9]=H[1];
neh.rep[10]=H[2];
// gpos
neh.rep[12]=xyzpos[0];
neh.rep[13]=xyzpos[1];
neh.rep[14]=xyzpos[2];
neh.orto(1);
}
//---------------------------------------------------------------------------
void WGS84toXYZ_posvel(double *xyzpos,double *xyzvel,double *abhpos,double *abhvel)
{
reper neh;
WGS84toNEH(neh,abhpos);
neh.gpos_get(xyzpos);
neh.l2g_dir(xyzvel,abhvel);
}
//---------------------------------------------------------------------------
void WGS84_m2rad(double &da,double &db,double *abh)
{
// WGS84 from eccentricity
double p[3],rr;
WGS84toXYZ(p,abh);
rr=(p[0]*p[0])+(p[1]*p[1]);
da=divide(1.0,sqrt(rr));
rr+=p[2]*p[2];
db=divide(1.0,sqrt(rr));
}
//---------------------------------------------------------------------------
void XYZ_interpolate(double *pt,double *p0,double *p1,double t)
{
const double mz=_earth_a/_earth_b;
const double _mz=_earth_b/_earth_a;
double p[3],r,r0,r1;
// compute spherical radiuses of input points
r0=sqrt((p0[0]*p0[0])+(p0[1]*p0[1])+(p0[2]*p0[2]*mz*mz));
r1=sqrt((p1[0]*p1[0])+(p1[1]*p1[1])+(p1[2]*p1[2]*mz*mz));
// linear interpolation
r = r0 +(r1 -r0 )*t;
p[0]= p0[0]+(p1[0]-p0[0])*t;
p[1]= p0[1]+(p1[1]-p0[1])*t;
p[2]=(p0[2]+(p1[2]-p0[2])*t)*mz;
// correct radius and rescale back
r/=sqrt((p[0]*p[0])+(p[1]*p[1])+(p[2]*p[2]));
pt[0]=p[0]*r;
pt[1]=p[1]*r;
pt[2]=p[2]*r*_mz;
}
//---------------------------------------------------------------------------
#endif
//---------------------------------------------------------------------------
However, these routines require some basic 3D vector math; see here for the equations:
Understanding 4x4 homogenous transform matrices
Take the formula you use to convert positions from geographic to Cartesian coordinates. That's some vector p(λ,φ,h) ∈ ℝ³, i.e. you turn latitude, longitude and altitude into a three-element vector of x,y,z coordinates. Now compute the partial derivatives of this formula with respect to the three parameters. You will get three vectors, which should be orthogonal to one another. The derivative with respect to longitude λ should be pointing locally east, the one with respect to latitude φ pointing north, the one with respect to altitude h pointing up. Multiply these vectors with the velocities you have to obtain a Cartesian velocity vector.
Observe how the units match: the position is in meters, the first two derivatives are meters per degree, and the velocity would be degrees per second. Or something else, perhaps miles and radians.
All of this is fairly easy for the sphere. For the WGS84 ellipsoid the position formula is a bit more involved, and that complexity will carry through the computation.
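To make this concrete, here is a minimal sketch for the spherical case only (mean radius R, angles and angular rates in radians and rad/s; the function name is mine). The WGS84 version follows the same pattern, just with the ellipsoidal position formula and its derivatives:
    #include <cmath>

    // spherical approximation: p(lon,lat,h) = ((R+h) cos(lat) cos(lon), (R+h) cos(lat) sin(lon), (R+h) sin(lat))
    // lon, lat in [rad], h in [m]; vlon, vlat in [rad/s], vh in [m/s]; output velocity in [m/s]
    void sphericalVelocityToCartesian(double lon, double lat, double h,
                                      double vlon, double vlat, double vh,
                                      double &vx, double &vy, double &vz)
    {
        const double R = 6371000.0;  // [m] mean Earth radius (sphere model)
        double cl = cos(lon), sl = sin(lon);
        double cb = cos(lat), sb = sin(lat);
        // dp/dlon, points locally east, units [m/rad]
        double ex = -(R + h) * cb * sl, ey = (R + h) * cb * cl, ez = 0.0;
        // dp/dlat, points locally north, units [m/rad]
        double nx = -(R + h) * sb * cl, ny = -(R + h) * sb * sl, nz = (R + h) * cb;
        // dp/dh, points up, dimensionless direction
        double ux = cb * cl, uy = cb * sl, uz = sb;
        // chain rule: v = dp/dlon * vlon + dp/dlat * vlat + dp/dh * vh
        vx = ex * vlon + nx * vlat + ux * vh;
        vy = ey * vlon + ny * vlat + uy * vh;
        vz = ez * vlon + nz * vlat + uz * vh;
    }
The first two derivative vectors carry units of meters per radian, so multiplying them by angular rates in rad/s gives m/s, matching the units argument above.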
The firstSensor is my LM335Z output.
int firstSensor = 0;
int secondSensor = 0;
int thirdSensor = 0;
int inByte = 0;
void setup()
{
Serial.begin(9600);
establishContact(); // send a byte to establish contact until receiver responds
}
void loop()
{
if (Serial.available() > 0) {
inByte = Serial.read();
firstSensor = analogRead(0);
delay(10);
secondSensor = analogRead(1);
thirdSensor = analogRead(2);
Serial.print(firstSensor, DEC);
Serial.print(",");
Serial.print(secondSensor, DEC);
Serial.print(",");
Serial.println(thirdSensor, DEC);
}
}
void establishContact() {
while (Serial.available() <= 0) {
Serial.print('A'); // standard handshake: keep sending a byte until the receiver responds
delay(300);
}
}
Based on its datasheet, the temperature output varies at 10 mV/K. But if you have a reference voltage at a known reference temperature, you can use this helpful equation from the datasheet:
V_out = V_ref * (T_out/T_ref), which is equivalent to T_out = T_ref * (V_out/V_ref)
So say your voltage is 2.982 V at 25 degrees C, i.e. 298.15 K (this is suggested in the datasheet); then you can set your equation to:
T_out = (298.15 K) * (V_out / 2.982 V) - 273.15
So assuming you already can convert an analog reading into a voltage*, just plug in the measured voltage and this should give you your temp in degrees C.
*The Arduino has a built-in 10-bit ADC and the maximum voltage it can read is 5 V. Therefore, you can factor in 5 V / 1024 ADC steps = 0.00488 V per ADC step (i.e. V_out = firstSensor*0.00488). So plugging in for V_out, the equation becomes:
T_out = (298.15)(firstSensor*0.001637)-273.15 where 0.001637 = 0.00488/2.982.
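Putting that together, a minimal Arduino-style sketch of the conversion (assuming the 5 V ADC reference and the 2.982 V at 25 °C calibration point used above; the helper name is mine):
    // convert a raw 10-bit ADC reading from the LM335 into degrees C,
    // assuming a 5 V ADC reference and a calibration of 2.982 V at 298.15 K (25 C)
    float lm335ToCelsius(int adcReading)
    {
      float vOut = adcReading * (5.0 / 1024.0); // ADC steps -> volts
      float tKelvin = 298.15 * (vOut / 2.982);  // T_out = T_ref * (V_out / V_ref)
      return tKelvin - 273.15;                  // kelvin -> degrees C
    }

    // usage inside loop(), after firstSensor = analogRead(0):
    //   float tempC = lm335ToCelsius(firstSensor);
    //   Serial.println(tempC);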