Haversine Formula Error - Incorrect Distance - Arduino

I'm currently taking part in a project that requires taking readings from a GPS module and then using them to calculate the distance between the current reading and a fixed waypoint. The GPS works and gives LAT = 54.9289 and LON = -1.368, which should give a distance of about 3,200 meters; however, it gives around 6105. I also have a feeling that 6105 is in km too, haha. I'm wondering if it's not handling the negative numbers correctly, or if I have some variable conflicts in the code. Any light shed on this would be great, thanks.
#include <TinyGPS.h>
#include <SoftwareSerial.h>
#include <rgb_lcd.h>
#include <Wire.h>
//Sets TX And RX Pins
SoftwareSerial GPS(2,3);
TinyGPS gps;
void gpsdump(TinyGPS &gps);
bool feedgps();
void CheckGPS();
void GetCoords();
long lat, lon;
float LAT, LON; // Latitude is gained from GPS and stored in another variable to avoid errors - Should change with changing GPS value - Which would alter distance to waypoint.
float LAT1,LON1;
rgb_lcd lcd;
void setup()
{
// Sets Baud Rate
GPS.begin(9600);
Serial.begin(115200);
}
// Determines The Distance Between Current Location And Waypoint
void GetDistance()
{
// Calculating Distance Between Waypoints
double Distance_Lat; // Distance between Latitude values
double Distance_Lon; // Distance between Longitude values
double Distance_Total = 0;// Total Distance
double val,val2; // Subsidiary variable for holding numbers. - No actual value represented.
double fLAT1,fLAT2;
double LAT2 = 54.900000; // Waypoint Latitude
double LON2 = -1.368072; // Waypoint Longitude
// Initialising Calculation
Distance_Lat = radians(LAT2-LAT1); // Must be done in radians
fLAT1 = radians(LAT1);
fLAT2 = radians(LAT2);
Distance_Lon = radians((LON2)-(LON1));
// Calculating Distance - Using Haversines Formulae
Distance_Total = (sin(Distance_Lat/2.0)*sin(Distance_Lat/2.0));
val = cos(fLAT1);
val = val*(cos(fLAT2));
val = val*(sin(Distance_Lon/2.0));
val = val*(sin(Distance_Lon/2.0));
Distance_Total = Distance_Total + val;
Distance_Total = 2*atan2(sqrt(Distance_Total),sqrt(1.0-Distance_Total));
Distance_Total = Distance_Total*6371.0000; // Converting to meters.
Serial.println("Distance: ");
Serial.println(Distance_Total);
//---------------------------------------------------------------------------------
}
// Returns Latitude And Longitude As Decimal Degrees (DD).
void GetCoords()
{
long lat, lon;
CheckGPS();
Serial.print("Latitude : ");
Serial.print(LAT/1000000,7);
Serial.print(" :: Longitude : ");
Serial.println(LON/1000000,7);
}
void CheckGPS()
{
bool newdata = false;
unsigned long start = millis();
// Every 1 second, print an update
while (millis() - start < 1000)
{
if (feedgps ())
newdata = true;
if (newdata)
gpsdump(gps);
}
}
// Checks If The GPS Has Any Data To Transmit
bool feedgps()
{
while (GPS.available())
if (gps.encode(GPS.read()))
return true;
else
return false;
}
// Transmits GPS Data And Gets Latitude And Longitude Positions.
void gpsdump(TinyGPS &gps)
{ gps.get_position(&lat, &lon);
LAT = lat;
LON = lon;
//Keeps The GPS Fed To Avoid Checksum Errors.
feedgps();
}
void loop()
{
// Function That Returns The GPS Coordinates In DD.
GetCoords();
GetDistance();
}

The haversine formula I'm looking at on Wikipedia right now, https://en.wikipedia.org/wiki/Haversine_formula, has arcsin(sqrt(Distance_Total)) where you have your atan2.
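As a reference point, here is a minimal haversine sketch (my illustration, not the posted code) that returns meters. Two things in the posted code are worth checking against it: multiplying by 6371.0 gives kilometers, not meters (6371 km is the mean Earth radius), and LAT1/LON1 are never assigned from the GPS reading, so the calculation is likely measuring from (0, 0) to your fix, which is roughly 6,100 km and would explain the 6105 you see. Also remember that TinyGPS's get_position() returns millionths of a degree, so divide by 1,000,000 before using the values as decimal degrees.
double haversineMeters(double lat1, double lon1, double lat2, double lon2)
{
  const double R = 6371000.0; // mean Earth radius in meters (6371 km)
  double dLat = radians(lat2 - lat1);
  double dLon = radians(lon2 - lon1);
  double a = sin(dLat / 2.0) * sin(dLat / 2.0)
           + cos(radians(lat1)) * cos(radians(lat2)) * sin(dLon / 2.0) * sin(dLon / 2.0);
  double c = 2.0 * atan2(sqrt(a), sqrt(1.0 - a)); // equivalent to 2*asin(sqrt(a)) for 0 <= a <= 1
  return R * c;
}
// Example: haversineMeters(54.9289, -1.368, 54.900000, -1.368072) is roughly 3200 m.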

Related

How to show distance smaller than cm using the TF-Luna LiDAR sensor

I want to ask about the TF-Luna LiDAR. I've written code that reads the distance in cm from the data array; right now I need some help on how to read the data so it can show values below cm (mm or smaller, but as decimals).
This is the code:
#include <SoftwareSerial.h> //header file of software serial port
SoftwareSerial Serial1(2,3); //define software serial port name as Serial1 and define pin2 as RX and pin3 as TX
/* For Arduinoboards with multiple serial ports like DUEboard, interpret above two pieces of code and
directly use Serial1 serial port*/
float dist; //actual distance measurements of LiDAR
int strength; //signal strength of LiDAR
float temprature;
int check; //save check value
int i;
int uart[9]; //save data measured by LiDAR
const int HEADER=0x59; //frame header of data package
void setup() {
Serial.begin(9600);
Serial1.begin(115200); //set bit rate of serial port connecting LiDAR with Arduino
}
void loop() {
if (Serial1.available()) { //check if serial port has data input
if(Serial1.read() == HEADER) { //assess data package frame header 0x59
uart[0]=HEADER;
if (Serial1.read() == HEADER) { //assess data package frame header 0x59
uart[1] = HEADER;
for (i = 2; i < 9; i++) { //save data in array
uart[i] = Serial1.read();
}
check = uart[0] + uart[1] + uart[2] + uart[3] + uart[4] + uart[5] + uart[6] + uart[7];
if (uart[8] == (check & 0xff)){ //verify the received data as per protocol
dist = uart[2] + uart[3]*256; //calculate distance value
strength = uart[4] + uart[5] * 256; //calculate signal strength value
temprature = uart[6] + uart[7] *256;//calculate chip temprature
temprature = temprature/8 - 256;
Serial.print("dist = ");
Serial.print(dist);//output measure distance value of LiDAR
Serial.print('\t');
Serial.print("strength = ");
Serial.print(strength); //output signal strength value
Serial.print('\t');
Serial.print("Chip Temprature = ");
Serial.print(temprature);
Serial.print(" celcius degree"); //output chip temperature of Lidar
Serial.print('\t');
Serial.print("check");
Serial.println(check);
}
}
}
}
}
I don't believe your code is flawed in any way, but your request is quite impossible. If my understanding is correct, the Luna's maximum resolution is 1 cm, so you shouldn't have any data for mm or decimals past cm. See the specs as listed on DFRobot: https://www.dfrobot.com/product-1995.html
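If the goal is only to display the reading in a smaller unit, a minimal sketch like the following (my illustration, not part of the original code) reformats the existing whole-centimeter value; note that the extra digits never carry sub-centimeter information.
// dist comes from the TF-Luna frame and is a whole number of centimeters
float dist_m = dist / 100.0;   // same measurement shown in meters, e.g. 1.23
Serial.print("dist = ");
Serial.print(dist_m, 2);       // two decimals, but the value only ever steps in 1 cm increments
Serial.println(" m");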

Converting area of intersection to generate a coordinate

I was working on a project where I get analog values from a resistive touchscreen and turn them into intersection points.
Here is an example:
Here is my code for the data collection using an Arduino Uno, and for the construction of the points using a tool called Processing.
#define side1 2
#define side2 3
#define side3 4
#define side4 5
#define contact A0
void setup() {
pinMode(contact, INPUT);
pinMode(side1, OUTPUT);
pinMode(side2, OUTPUT);
pinMode(side3, OUTPUT);
pinMode(side4, OUTPUT);
Serial.begin(9600);
}
void loop() {
int sensorValue1;
int sensorValue2;
int sensorValue3;
int sensorValue4;
// SENSOR VALUE 1:
digitalWrite(side1, LOW);
digitalWrite(side2, HIGH);
digitalWrite(side3, HIGH);
digitalWrite(side4, HIGH);
delay(5);
for (int i = 0; i < 10; i++){
sensorValue1 = analogRead(contact);
}
// SENSOR VALUE 2:
digitalWrite(side2, LOW);
digitalWrite(side3, HIGH);
digitalWrite(side4, HIGH);
digitalWrite(side1, HIGH);
delay(5);
for (int i = 0; i < 10; i++){
sensorValue2 = analogRead(contact);
}
// SENSOR VALUE 3:
digitalWrite(side3, LOW);
digitalWrite(side2, HIGH);
digitalWrite(side4, HIGH);
digitalWrite(side1, HIGH);
delay(5);
for (int i = 0; i < 10; i++){
sensorValue3 = analogRead(contact);
}
// SENSOR VALUE 4:
digitalWrite(side4, LOW);
digitalWrite(side3, HIGH);
digitalWrite(side2, HIGH);
digitalWrite(side1, HIGH);
delay(5);
for (int i = 0; i < 10; i++){
sensorValue4 = analogRead(contact);
}
Serial.print(sensorValue1);
Serial.print(",");
Serial.print(sensorValue2);
Serial.print(",");
Serial.print(sensorValue3);
Serial.print(",");
Serial.print(sensorValue4);
Serial.println();
}
This is the Processing code for the construction of the graph.
import processing.serial.*;
Serial myPort; // The serial port
int maxNumberOfSensors = 4;
float[] sensorValues = new float[maxNumberOfSensors];
float sensorValueX;
float sensorValueX1;
float sensorValueY;
float sensorValueY1;
int scaleValue = 2;
void setup () {
size(600, 600); // set up the window to whatever size you want
//println(Serial.list()); // List all the available serial ports
String portName = "COM5";
myPort = new Serial(this, portName, 9600);
myPort.clear();
myPort.bufferUntil('\n'); // don't generate a serialEvent() until you get a newline (\n) byte
background(255); // set inital background
smooth(); // turn on antialiasing
}
void draw () {
//background(255);
//noFill();
fill(100,100,100,100);
ellipse(height,0, scaleValue*sensorValues[0], scaleValue*sensorValues[0]);
ellipse(0,width, scaleValue*sensorValues[1], scaleValue*sensorValues[1]);
ellipse(height,width, scaleValue*sensorValues[2], scaleValue*sensorValues[2]);
ellipse(0,0, scaleValue*sensorValues[3], scaleValue*sensorValues[3]);
//ellipse(sensorValueY, sensorValueX, 10,10);
//println(sensorValueY,sensorValueX);
sensorValueX = ((sensorValues[3]*sensorValues[3])-(sensorValues[2]*sensorValues[2])+600*600)/2000;
sensorValueX1 = ((sensorValues[0]*sensorValues[0])-(sensorValues[1]*sensorValues[1])+600*600)/2000;
sensorValueY = ((sensorValues[3]*sensorValues[3])-(sensorValues[2]*sensorValues[2])+(600*600))/2000;
sensorValueY1 = ((sensorValues[1]*sensorValues[1])-(sensorValues[0]*sensorValues[0])+(600*600))/2000;
line(0, scaleValue*sensorValueX, height,scaleValue* sensorValueX);
line(scaleValue*sensorValueY, 0, scaleValue*sensorValueY, width);
ellipse(scaleValue*sensorValueY, scaleValue*sensorValueX, 20,20);
line(0, scaleValue*sensorValueX1, height,scaleValue* sensorValueX1);
line(scaleValue*sensorValueY1, 0, scaleValue*sensorValueY1, width);
ellipse(scaleValue*sensorValueY1, scaleValue*sensorValueX1, 20,20);
println(scaleValue*sensorValueX,scaleValue*sensorValueY);
}
void serialEvent (Serial myPort) {
String inString = myPort.readStringUntil('\n'); // get the ASCII string
if (inString != null) { // if it's not empty
inString = trim(inString); // trim off any whitespace
int incomingValues[] = int(split(inString, ",")); // convert to an array of ints
if (incomingValues.length <= maxNumberOfSensors && incomingValues.length > 0) {
for (int i = 0; i < incomingValues.length; i++) {
// map the incoming values (0 to 1023) to an appropriate gray-scale range (0-255):
sensorValues[i] = map(incomingValues[i], 0, 1023, 0, width);
//println(incomingValues[i]+ " " + sensorValues[i]);
}
}
}
}
I was wondering how I could convert the intersection of those points to a coordinate. Example: in the image I showed you, I set the dimensions to (600,600). Is it possible to change that intersection area to a coordinate value? Currently my code is printing out coordinates, however they are diagonal, such that the x and y values are equal. I want the x and y coordinates to have different quantities so that I can get coordinates for different sides of the square. Can somebody help?
By reading your code I'm assuming that you know the position of all n sensors and the distance from each of the n sensors to a target. So what you're essentially trying to do is trilateration (as mentioned by Nico Schertler); in other words, determining a relative position based on the distances between n points.
Just a quick definition note in case of confusion:
Triangulation = Working with angles
Trilateration = Working with distances
Trilateration requires at least 3 points and distances.
1 sensor gives you the distance the target is away from the sensor
2 sensors gives you 2 possible locations the target can be
3 sensors tells you which of the 2 locations the target is at
The first solution that probably comes to mind is calculating the intersections between 3 sensors, treating them as circles. Given that there might be some error in the distances, the circles might not always intersect, which rules out this solution.
The following code has all been done in Processing.
I took the liberty of making a class Sensor.
class Sensor {
public PVector p; // position
public float d; // distance from sensor to target (radius of the circle)
public Sensor(float x, float y) {
this.p = new PVector(x, y);
this.d = 0;
}
}
Now to calculate and approximate the intersection point between the sensors/circles, do the following:
PVector trilateration(Sensor s1, Sensor s2, Sensor s3) {
PVector s = PVector.sub(s2.p, s1.p).div(PVector.sub(s2.p, s1.p).mag());
float a = s.dot(PVector.sub(s3.p, s1.p));
PVector t = PVector.sub(s3.p, s1.p).sub(PVector.mult(s, a)).div(PVector.sub(s3.p, s1.p).sub(PVector.mult(s, a)).mag());
float b = t.dot(PVector.sub(s3.p, s1.p));
float c = PVector.sub(s2.p, s1.p).mag();
float x = (sq(s1.d) - sq(s2.d) + sq(c)) / (c * 2);
float y = ((sq(s1.d) - sq(s3.d) + sq(a) + sq(b)) / (b * 2)) - ((a / b) * x);
s.mult(x);
t.mult(y);
return PVector.add(s1.p, s).add(t);
}
Where s1, s2, s3 are any 3 of your sensors, do the following to calculate the intersection point between the given sensors:
PVector target = trilateration(s1, s2, s3);
While it is possible to calculate the intersection between any number of sensors, it becomes more and more complex the more sensors you want to include, especially since you're doing it yourself. If you're able to use external Java libraries, then I highly recommend using com.lemmingapex.trilateration; you'd then be able to calculate the intersection point between 4 sensors by doing:
Considering s1, s2, s3, s4 as instances of the previously mentioned class Sensor.
double[][] positions = new double[][] { { s1.p.x, s1.p.y }, { s2.p.x, s2.p.y }, { s3.p.x, s3.p.y }, { s4.p.x, s4.p.y } };
double[] distances = new double[] { s1.d, s2.d, s3.d, s4.d };
NonLinearLeastSquaresSolver solver = new NonLinearLeastSquaresSolver(
new TrilaterationFunction(positions, distances),
new LevenbergMarquardtOptimizer());
Optimum optimum = solver.solve();
double[] target = optimum.getPoint().toArray();
double x = target[0];
double y = target[1];
The following examples are examples of the trilateration() method I wrote, not of the library above.
Example 1 - No Sensor Error
The 3 big circles being any 3 sensors and the single red circle being the approximated point.
Example 2 - With Sensor Error
The 3 big circles being any 3 sensors and the single red circle being the approximated point.
What you need to compute is the point that is nearest to a set of circles;
let's denote their centers by (x1,y1), (x2,y2), (x3,y3), (x4,y4) and their radii by r1,r2,r3,r4.
You want to find (x,y) that minimizes
F(x,y) = Sum_i [ ( d( (x,y), (xi,yi) ) - ri )^2 ]
where d is the Euclidean distance.
This can be achieved by using Newton's algorithm. Newton's algorithm works from an "initial guess" (let's say at the center of the screen), improved iteratively by solving a series of linear systems (in this case, with 2 variables, easy to solve).
M P = -G
where M is the (2x2) matrix of the second order derivatives of F with respect to x and y (called the Hessian), and G the vector of the first order derivatives of F with
respect to x and y (the gradient). Solving this system gives the "update" vector P, which tells how to move the coordinates:
(x,y) is updated by x = x + Px, y = y + Py, and so on and so forth (recompute M and G, solve for P, update x and y; recompute M and G, solve for P, update x and y). In your case it will probably converge in a handful of iterations.
Since you have only two variables, the 2x2 linear solve is trivial, and the expressions for F and its derivatives are simple, so you can implement it without needing an external library.
Note 1: the Levenberg-Marquardt algorithm mentioned in the other answer is a variant of Newton's algorithm (specialized for sums of squares, like here; it neglects some terms and regularizes the matrix M by adding small numbers to its diagonal coefficients). More on this here.
Note 2: a simple gradient descent will also probably work (it is a bit simpler to implement, since it only uses first order derivatives), but given that you only have two variables, the 2x2 linear solve is trivial, so Newton is probably worth it (it requires a much smaller number of iterations to converge, which may be critical if your system is interactive).
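For concreteness, here is a minimal gradient-descent sketch along the lines of Note 2 (plain C++ rather than Processing; the sensor positions, radii, step size and iteration count are made-up example values, not taken from the question):
#include <cmath>

// Minimize F(x,y) = sum_i ( d((x,y),(xi,yi)) - ri )^2 by simple gradient descent.
int main()
{
    const int n = 4;
    double cx[] = {0, 600, 600, 0};      // circle centers (e.g. the sensor corners)
    double cy[] = {0, 0, 600, 600};
    double r[]  = {283, 361, 500, 424};  // measured distances (radii), example values

    double x = 300, y = 300;             // initial guess: center of the screen
    const double step = 0.1;             // gradient-descent step size

    for (int it = 0; it < 1000; ++it) {
        double gx = 0, gy = 0;           // gradient of F
        for (int i = 0; i < n; ++i) {
            double dx = x - cx[i], dy = y - cy[i];
            double d = std::sqrt(dx * dx + dy * dy);
            if (d < 1e-9) continue;      // avoid dividing by zero at a circle center
            double e = d - r[i];         // signed radial error
            gx += 2.0 * e * dx / d;      // dF/dx contribution
            gy += 2.0 * e * dy / d;      // dF/dy contribution
        }
        x -= step * gx;
        y -= step * gy;
    }
    // (x,y) now approximates the point nearest to all four circles.
    return 0;
}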

How to convert spherical velocity coordinates into Cartesian

I have a velocity vector in latitude, longitude, altitude; I would like to convert it to Cartesian coordinates vx,vy,vz. The format is from the WGS84 standard.
Here is the formula:
//------------------------------------------------------------------------------
template <class T>
TVectorXYZ<T> WGS84::ToCartesian(T latitude, T longitude, T elevation)
//------------------------------------------------------------------------------
{
double sinlat, coslat;
double sinlon, coslon;
sincos_degree(latitude, sinlat, coslat);
sincos_degree(longitude, sinlon, coslon);
const double v = a / sqrt(1 - WGS84::ee * sinlat*sinlat);
TVectorXYZ<T> coord
(
static_cast<T>((v + elevation) * coslat * sinlon),
static_cast<T>(((1 - WGS84::ee) * v + elevation) * sinlat),
static_cast<T>((v + elevation) * coslat * coslon)
);
return coord;
}
OK, based on your previous question and the long comment flow, let's assume your input is:
lon [rad], lat [rad], alt [m] // WGS84 position
vlon [m/s], vlat [m/s], valt [m/s] // speed in WGS84 lon,lat,alt directions, in [m/s]
And you want as output:
x,y,z // Cartesian position [m]
vx,vy,vz // Cartesian velocity [m/s]
And that you have a valid transformation to Cartesian coordinates for positions at your disposal; this is mine:
void WGS84toXYZ(double &x,double &y,double &z,double lon,double lat,double alt) // [rad,rad,m] -> [m,m,m]
{
const double _earth_a=6378137.00000; // [m] WGS84 equator radius
const double _earth_b=6356752.31414; // [m] WGS84 polar radius
const double _earth_e=8.1819190842622e-2; // WGS84 eccentricity
const double _aa=_earth_a*_earth_a;
const double _ee=_earth_e*_earth_e;
double a,b,h,l,c,s;
a=lon;
b=lat;
h=alt;
c=cos(b);
s=sin(b);
// WGS84 from eccentricity
l=_earth_a/sqrt(1.0-(_ee*s*s));
x=(l+h)*c*cos(a);
y=(l+h)*c*sin(a);
z=(((1.0-_ee)*l)+h)*s;
}
And a routine to normalize a vector to unit size:
void normalize(double &x,double &y,double &z)
{
double l=sqrt(x*x+y*y+z*z);
if (l>1e-6) l=1.0/l;
x*=l; y*=l; z*=l;
}
Yes, you can try to derive the formula like MvG suggests, but given the rookie mistakes I strongly doubt it would lead to a successful result. Instead you can do this:
Obtain the lon, lat, alt direction vectors for your position (x,y,z).
That is easy: just use some small step increment in the WGS84 position, convert to Cartesian, subtract and normalize to unit vectors. Let's call these direction basis vectors U, V, W.
double Ux,Uy,Uz; // [m]
double Vx,Vy,Vz; // [m]
double Wx,Wy,Wz; // [m]
double da=1.567e-7; // [rad] angular step ~ 1.0 m in lon direction
double dl=1.0; // [m] altitude step 1.0 m
WGS84toXYZ( x, y, z,lon ,lat,alt ); // actual position
WGS84toXYZ(Ux,Uy,Uz,lon+da,lat,alt ); // lon direction North
WGS84toXYZ(Vx,Vy,Vz,lon,lat+da,alt ); // lat direction East
WGS84toXYZ(Wx,Wy,Wz,lon,lat ,alt+dl); // alt direction High/Up
Ux-=x; Uy-=y; Uz-=z;
Vx-=x; Vy-=y; Vz-=z;
Wx-=x; Wy-=y; Wz-=z;
normalize(Ux,Uy,Uz);
normalize(Vx,Vy,Vz);
normalize(Wx,Wy,Wz);
Convert the velocity from lon,lat,alt to vx,vy,vz:
vx = vlon*Ux + vlat*Vx + valt*Wx;
vy = vlon*Uy + vlat*Vy + valt*Wy;
vz = vlon*Uz + vlat*Vz + valt*Wz;
Hope it is clear enough. As usual, be careful about the units (deg/rad and m/ft/km) because units matter a lot.
Btw the U,V,W basis vectors form the NEH reference frame and at the same time are the direction derivatives MvG is mentioning.
[Edit1] more precise conversions
//---------------------------------------------------------------------------
//--- WGS84 transformations ver: 1.00 ---------------------------------------
//---------------------------------------------------------------------------
#ifndef _WGS84_h
#define _WGS84_h
//---------------------------------------------------------------------------
// http://www.navipedia.net/index.php/Ellipsoidal_and_Cartesian_Coordinates_Conversion
//---------------------------------------------------------------------------
// WGS84(a,b,h) = (long,lat,alt) [rad,rad,m]
// XYZ(x,y,z) [m]
//---------------------------------------------------------------------------
const double _earth_a=6378137.00000; // [m] WGS84 equator radius
const double _earth_b=6356752.31414; // [m] WGS84 polar radius
const double _earth_e=8.1819190842622e-2; // WGS84 eccentricity
//const double _earth_e=sqrt(1.0-((_earth_b/_earth_a)*(_earth_b/_earth_a)));
const double _earth_ee=_earth_e*_earth_e;
//---------------------------------------------------------------------------
const double kmh=1.0/3.6; // [km/h] -> [m/s]
//---------------------------------------------------------------------------
void XYZtoWGS84 (double *abh ,double *xyz ); // [m,m,m] -> [rad,rad,m]
void WGS84toXYZ (double *xyz ,double *abh ); // [rad,rad,m] -> [m,m,m]
void WGS84toXYZ_posvel(double *xyzpos,double *xyzvel,double *abhpos,double *abhvel); // [rad,rad,m],[m/s,m/s,m/s] -> [m,m,m],[m/s,m/s,m/s]
void WGS84toNEH (reper &neh ,double *abh ); // [rad,rad,m] -> NEH [m]
void WGS84_m2rad (double &da,double &db,double *abh); // [rad,rad,m] -> [rad],[rad] representing 1m angle step
void XYZ_interpolate (double *pt,double *p0,double *p1,double t); // [m,m,m] pt = p0 + (p1-p0)*t in ellipsoid space t = <0,1>
//---------------------------------------------------------------------------
void XYZtoWGS84(double *abh,double *xyz)
{
int i;
double a,b,h,l,n,db,s;
a=atanxy(xyz[0],xyz[1]);
l=sqrt((xyz[0]*xyz[0])+(xyz[1]*xyz[1]));
// estimate lat
b=atanxy((1.0-_earth_ee)*l,xyz[2]);
// iterate to improve accuracy
for (i=0;i<100;i++)
{
s=sin(b); db=b;
n=divide(_earth_a,sqrt(1.0-(_earth_ee*s*s)));
h=divide(l,cos(b))-n;
b=atanxy((1.0-divide(_earth_ee*n,n+h))*l,xyz[2]);
db=fabs(db-b);
if (db<1e-12) break;
}
if (b>0.5*pi) b-=pi2;
abh[0]=a;
abh[1]=b;
abh[2]=h;
}
//---------------------------------------------------------------------------
void WGS84toXYZ(double *xyz,double *abh)
{
double a,b,h,l,c,s;
a=abh[0];
b=abh[1];
h=abh[2];
c=cos(b);
s=sin(b);
// WGS84 from eccentricity
l=_earth_a/sqrt(1.0-(_earth_ee*s*s));
xyz[0]=(l+h)*c*cos(a);
xyz[1]=(l+h)*c*sin(a);
xyz[2]=(((1.0-_earth_ee)*l)+h)*s;
}
//---------------------------------------------------------------------------
void WGS84toNEH(reper &neh,double *abh)
{
double N[3],E[3],H[3]; // [m]
double p[3],xyzpos[3];
const double da=1.567e-7; // [rad] angular step ~ 1.0 m in lon direction
const double dl=1.0; // [m] altitude step 1.0 m
vector_copy(p,abh);
// actual position
WGS84toXYZ(xyzpos,abh);
// NEH
p[0]+=da; WGS84toXYZ(N,p); p[0]-=da;
p[1]+=da; WGS84toXYZ(E,p); p[1]-=da;
p[2]+=dl; WGS84toXYZ(H,p); p[2]-=dl;
vector_sub(N,N,xyzpos);
vector_sub(E,E,xyzpos);
vector_sub(H,H,xyzpos);
vector_one(N,N);
vector_one(E,E);
vector_one(H,H);
neh._rep=1;
neh._inv=0;
// axis X
neh.rep[ 0]=N[0];
neh.rep[ 1]=N[1];
neh.rep[ 2]=N[2];
// axis Y
neh.rep[ 4]=E[0];
neh.rep[ 5]=E[1];
neh.rep[ 6]=E[2];
// axis Z
neh.rep[ 8]=H[0];
neh.rep[ 9]=H[1];
neh.rep[10]=H[2];
// gpos
neh.rep[12]=xyzpos[0];
neh.rep[13]=xyzpos[1];
neh.rep[14]=xyzpos[2];
neh.orto(1);
}
//---------------------------------------------------------------------------
void WGS84toXYZ_posvel(double *xyzpos,double *xyzvel,double *abhpos,double *abhvel)
{
reper neh;
WGS84toNEH(neh,abhpos);
neh.gpos_get(xyzpos);
neh.l2g_dir(xyzvel,abhvel);
}
//---------------------------------------------------------------------------
void WGS84_m2rad(double &da,double &db,double *abh)
{
// WGS84 from eccentricity
double p[3],rr;
WGS84toXYZ(p,abh);
rr=(p[0]*p[0])+(p[1]*p[1]);
da=divide(1.0,sqrt(rr));
rr+=p[2]*p[2];
db=divide(1.0,sqrt(rr));
}
//---------------------------------------------------------------------------
void XYZ_interpolate(double *pt,double *p0,double *p1,double t)
{
const double mz=_earth_a/_earth_b;
const double _mz=_earth_b/_earth_a;
double p[3],r,r0,r1;
// compute spherical radiuses of input points
r0=sqrt((p0[0]*p0[0])+(p0[1]*p0[1])+(p0[2]*p0[2]*mz*mz));
r1=sqrt((p1[0]*p1[0])+(p1[1]*p1[1])+(p1[2]*p1[2]*mz*mz));
// linear interpolation
r = r0 +(r1 -r0 )*t;
p[0]= p0[0]+(p1[0]-p0[0])*t;
p[1]= p0[1]+(p1[1]-p0[1])*t;
p[2]=(p0[2]+(p1[2]-p0[2])*t)*mz;
// correct radius and rescale back
r/=sqrt((p[0]*p[0])+(p[1]*p[1])+(p[2]*p[2]));
pt[0]=p[0]*r;
pt[1]=p[1]*r;
pt[2]=p[2]*r*_mz;
}
//---------------------------------------------------------------------------
#endif
//---------------------------------------------------------------------------
However, they require basic 3D vector math; see here for the equations:
Understanding 4x4 homogenous transform matrices
Take the formula you use to convert positions from geographic to Cartesian coordinates. That's some vector p(λ,φ,h) ∈ ℝ³, i.e. you turn latitude, longitude and altitude into a three-element vector of x,y,z coordinates. Now compute the partial derivatives of this formula with respect to the three parameters. You will get three vectors, which should be orthogonal to one another. The derivative with respect to longitude λ should be pointing locally east, the one with respect to latitude φ pointing north, the one with respect to altitude h pointing up. Multiply these vectors with the velocities you have to obtain a Cartesian velocity vector.
Observe how the units match: the position is in meters, the first two derivatives are meters per degree, and the velocity would be degrees per second. Or something else, perhaps miles and radians.
All of this is fairly easy for the sphere. For the WGS84 ellipsoid the position formula is a bit more involved, and that complexity will carry through the computation.
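To make the derivative idea concrete, here is a minimal sketch for the spherical case only (my illustration; it ignores the WGS84 flattening, and it assumes the velocity components have already been converted to meters per second along the local north/east/up directions, per the units discussion above). The normalized partial derivatives of the position formula are the local east/north/up directions, and the Cartesian velocity is the sum of the velocity components along them:
#include <cmath>

// Spherical approximation: p(lon,lat,h) = (R+h) * (cos(lat)cos(lon), cos(lat)sin(lon), sin(lat)).
// Normalizing the partial derivatives of p gives the local east/north/up unit vectors.
void sphericalVelToCartesian(double lat, double lon,          // position [rad]
                             double vN, double vE, double vU, // velocity [m/s] along north/east/up
                             double &vx, double &vy, double &vz)
{
    double east[3]  = { -std::sin(lon),                  std::cos(lon),                  0.0 };
    double north[3] = { -std::sin(lat) * std::cos(lon), -std::sin(lat) * std::sin(lon),  std::cos(lat) };
    double up[3]    = {  std::cos(lat) * std::cos(lon),  std::cos(lat) * std::sin(lon),  std::sin(lat) };

    vx = vN * north[0] + vE * east[0] + vU * up[0];
    vy = vN * north[1] + vE * east[1] + vU * up[1];
    vz = vN * north[2] + vE * east[2] + vU * up[2];
}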

Appropriate sampling rate for a PIC ADC after converting from an analog voltage

If I'm reading an analog signal from my pressure sensor every 500 ms, my instructor told me that I should make the ADC Timer0 interrupt run at double the rate of what I'm reading on the analog oscilloscope (500 ms), i.e. 2fc. My code is down below.
Should I configure my Timer0 to be 20 Hz, or less, or more?
char temp[5];
unsigned int adc_value;
char uart_rd;
int i;
unsigned int d[10]={0};
int average = 0;
int counter =0;
void interrupt(){
if (INTCON.T0IF) {
INTCON.T0IF = 0 ;// clear T0IF (Timer interrupt flag).
}
TMR0 = 178;
}
void main() {
temp[0]='1';
temp[1]='2';
temp[2]='3';
temp[3]='4';
temp[4]=' ';
OSCCON= 0x77; //8MHz
ANSEL = 0b00000100; //ANS2
CMCON0 = 0X07; //
TRISA = 0b00001100;
UART1_Init(9600);
TMR0 = 178 ;
//CMCON0 = 0X04; // turn off comparator.
OPTION_REG = 0x87; //
INTCON =0xA0;
while(1){
average= ADC_Read(2);
temp[0] = average/1000+48;
temp[1] = (average/100)%10+48;
temp[2] = (average/10)%10+48;
temp[3] = average%10+48;
for (i=0;i<5; i++)
{
UART1_Write(temp[i]);
}
}
}
When you perform sampling on a signal you are not capturing all of its information, but only parts of it, at a given sampling period.
The Nyquist–Shannon sampling theorem states that if you sample above a certain frequency you can recover all the information of a finite-bandwidth signal. That frequency is twice the maximum frequency of the signal's bandwidth.
If you don't comply with that frequency you will suffer from an effect called aliasing.
You can learn more about it here: https://en.wikipedia.org/wiki/Aliasing
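As a rough worked example (my numbers, based on the 500 ms reading mentioned in the question; the true bandwidth of your pressure signal is an assumption here):
#include <cstdio>

// If the fastest feature of the pressure signal repeats about every 500 ms (~2 Hz),
// the Nyquist criterion demands sampling faster than 2 * 2 Hz = 4 Hz, so a 20 Hz
// Timer0 interrupt would be above that minimum.
int main()
{
    double period_s = 0.500;          // observed signal period on the oscilloscope
    double f_max    = 1.0 / period_s; // highest frequency of interest: 2 Hz
    double f_nyq    = 2.0 * f_max;    // minimum sampling rate: 4 Hz

    std::printf("f_max = %.1f Hz, sample faster than %.1f Hz\n", f_max, f_nyq);
    std::printf("a 20 Hz Timer0 tick samples every %.0f ms\n", 1000.0 / 20.0);
    return 0;
}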

How to get Celsius as output from an LM335Z with Arduino?

The firstSensor reading is my LM335Z output.
int firstSensor = 0;
int secondSensor = 0;
int thirdSensor = 0;
int inByte = 0;
void setup()
{
Serial.begin(9600);
establishContact(); // send a byte to establish contact until receiver responds
}
void loop()
{
if (Serial.available() > 0) {
inByte = Serial.read();
firstSensor = analogRead(0);
delay(10);
secondSensor = analogRead(1);
thirdSensor = analogRead(2);
Serial.print(firstSensor, DEC);
Serial.print(",");
Serial.print(secondSensor, DEC);
Serial.print(",");
Serial.println(thirdSensor, DEC);
}
}
void establishContact() {
}
Based on its datasheet, the temperature output will vary at 10mV/K. But if you find a reference voltage at a known reference temperature, you can use this helpful equation from the datasheet:
V_out = V_ref * T_out/T_ref which is equivalent to T_out = T_ref * (V_out/V_ref)
So say your voltage is 2.982 V at 25 degrees C, or 298.15 kelvin (this is suggested in the datasheet); then you can set your equation to:
T_out = (298.15 Kelvin)(V_out/2.982V)-273.15
So assuming you already can convert an analog reading into a voltage*, just plug in the measured voltage and this should give you your temp in degrees C.
*The Arduino has a built-in 10-bit ADC and the maximum voltage it can read is 5v. Therefore, you can factor in 5v/1024 ADC steps = 0.00488V per ADC step. (i.e. V_out = firstSensor*0.00488). So plugging in for V_out, the equation becomes:
T_out = (298.15)(firstSensor*0.001637)-273.15 where 0.001637 = 0.00488/2.982.
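Putting the pieces of this answer together, a minimal sketch of the conversion inside the existing loop might look like this (assuming the default 5 V analog reference and the 2.982 V at 298.15 K calibration point used above; adjust the reference voltage if your sensor reads differently at a known temperature):
int   raw     = analogRead(0);             // firstSensor, 0..1023
float volts   = raw * (5.0 / 1024.0);      // ~0.00488 V per ADC step
float kelvin  = 298.15 * (volts / 2.982);  // T_out = T_ref * (V_out / V_ref)
float celsius = kelvin - 273.15;
Serial.print(celsius);
Serial.println(" C");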
