Transform accelerometer data to object space - math

For context: I'm developing an embedded system with a built-in accelerometer. The device is connected to a smartphone and streams data (including the accelerometer values) to it. The device can be attached to a vehicle / bike / ... in any orientation.
The problem: when I receive the accelerometer data from the device, I would like to transform it into "vehicle space". What I have found is needed so far:
A downward-pointing vector in "device-space" (basically gravity)
A forward vector in "device-space" (pointing in the forward direction of the vehicle)
I have both of these vectors calculated in my application; however, I'm now a little stuck on the maths / implementation part.
What I found that could possibly be a solution is a change of basis; however, I was not able to:
Find confirmation that this is the way to do it
Figure out how to do this in code/pseudo-code
I don't want to include a fat math library for such a "small" task and would rather understand the maths behind it myself.
The current solution in my head, based on long-ago memories of university maths and for which I have no proof, is this (pseudo-code):
val nfv = normalize(forwardVector)
val ndv = normalize(downwardVector)
val fxd = cross(nfv, ndv)
val rotationMatrix = (
    m11: fxd.x, m12: fxd.y, m13: fxd.z,
    m21: ndv.x, m22: ndv.y, m23: ndv.z,
    m31: nfv.x, m32: nfv.y, m33: nfv.z
)
// Then for each "incoming" vector
val transformedVector = rawVector * rotationMatrix
Question: Is this the correct way to do it?
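For what it's worth, here is a self-contained sketch of this change-of-basis approach in C++ (the Vec3/Mat3 helpers are hypothetical, made up for the sketch). One detail worth adding: the measured forward and down vectors will rarely be exactly perpendicular, so the cross product should be normalized and the forward axis re-derived before the three rows form a proper rotation matrix:
#include <cmath>

// Minimal 3-vector helpers (hypothetical, just for this sketch)
struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Rows are the vehicle axes expressed in device space
struct Mat3 { Vec3 row1, row2, row3; };

Mat3 deviceToVehicle(Vec3 forward, Vec3 down) {
    Vec3 f = normalize(forward);
    Vec3 d = normalize(down);
    Vec3 s = normalize(cross(f, d));  // sideways axis; normalized because f and d may not be exactly perpendicular
    f = cross(d, s);                  // re-derive forward so all three axes are orthonormal
    return { s, d, f };
}

// Transform a raw device-space reading into vehicle space:
// each output component is the projection onto one vehicle axis.
Vec3 transform(const Mat3 &m, Vec3 raw) {
    return { dot(m.row1, raw), dot(m.row2, raw), dot(m.row3, raw) };
}
Note that transform here applies matrix * vector; with the rows laid out as above, multiplying on the other side (rawVector * rotationMatrix) would apply the inverse rotation instead.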

Related

LSTM neural network on Arduino 33 sense BLE using tensorflow lite

I have trained my LSTM network and deployed it on the Arduino. My problem is that the input of an LSTM network has the shape (window length of the time series data, features); in my case a window length of 256 and 6-axis IMU data, which means a 2D input. However, all the examples I have seen use only a 1D input.
Here is the Arduino code used to assign the model's input:
input->data.f[0] = aX;
When trying to modify the code to fit my input dimension (256, 6), like this:
input->data.f[0][0] = aX;
I got this error:
fall_detection:142:44: error: invalid types 'float[int]' for array subscript
tflInputTensor->data.f[samplesRead][5] = (gZ + 2000.0) / 4000.0;
It looks like you are setting the tensor input data "manually" by digging into the internal structure of the TF Lite tensor type. I would suggest using the API here:
https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_c
In particular the code snippet here:
float* input = interpreter->typed_input_tensor<float>(0);
// Fill `input`.
gives you the proper pointer. Then you can just cast the pointer for your actual data structure. Something like this:
input = reinterpret_cast<float *>(aX);
I see the potential for confusion with the 2D vs. 1D issue, but it is OK as long as the input tensor is properly shaped. If it has shape <256, 6>, what this means is that the 1D sequence of float values will be "a0,a1,a2,a3,a4,a5,b0,b1,..." where the "a" values are the six values of the first row, the "b" values are the six values of the next row, and so on. The standard C convention for multi-dimensional memory layout works this way. Unless there is something unusual about your aX object, it should be fine. If aX is not laid out in this fashion, you should copy the data so that it is in this layout.
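As a concrete sketch of that layout (the samples array and the interpreter are assumptions here, standing in for however you buffer the IMU window):
float samples[256][6];  // the IMU window, filled elsewhere
float* input = interpreter->typed_input_tensor<float>(0);
// Row-major copy: all 6 axis values of sample 0, then sample 1, and so on
for (int row = 0; row < 256; ++row) {
    for (int axis = 0; axis < 6; ++axis) {
        input[row * 6 + axis] = samples[row][axis];
    }
}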

Source of crackle in phase modulation synthesis

I'm trying to make a simple phase modulation synthesizer based on wavetables and DDS. I have a 12-bit wavetable containing 4096 samples of a sine wave, and I'm using a 32-bit phase accumulator.
Implementing my idea works, but it has a lot of low-level crackle associated with modulating the depth of phase modulation. I'm generating my samples like so:
Modulator = Modulation*SineWavetable[PhaseAc2>>20];
Sample = SineWavetable[(PhaseAc1 + Modulator)>>20];
I thought the crackle could be caused by modulating the "Modulation" parameter a bit too hard/fast, but this doesn't seem to be the problem. Could anybody enlighten me on potential problems with this method of phase modulation?
As ever, thanks!
As it turns out, typecasting is a very big deal here! I was mixing an int32_t (Modulator) with a uint32_t (PhaseAc1), and it was causing strange overflow problems where the phase would momentarily glitch, causing the audible problems. The phase accumulator is now calculated outside of the array index expression and shifted as a single variable, like so:
Modulator = Modulation*SineWavetable[PhaseAc2>>20];
PhaseAc1 += (int32_t)Modulator;
Sample = SineWavetable[(PhaseAc1 + Modulator)>>20];
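A general way to sidestep the signed/unsigned mixing entirely is to keep all phase arithmetic in one unsigned domain and let it wrap modulo 2^32, which is exactly what a phase offset should do. A minimal sketch along those lines (assuming the 4096-entry table and 32-bit accumulators from the question; the names are made up):
#include <cstdint>

constexpr int kTableBits = 12;
constexpr int kTableSize = 1 << kTableBits;  // 4096 samples
constexpr int kShift = 32 - kTableBits;      // top 12 bits of the accumulator index the table
int16_t SineWavetable[kTableSize];

int16_t nextSample(uint32_t &phaseAc1, uint32_t &phaseAc2,
                   uint32_t inc1, uint32_t inc2, int32_t modulation) {
    phaseAc1 += inc1;
    phaseAc2 += inc2;
    int32_t modulator = modulation * SineWavetable[phaseAc2 >> kShift];
    uint32_t phase = phaseAc1 + (uint32_t)modulator;  // well-defined wraparound
    return SineWavetable[phase >> kShift];
}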

Simulated Gravity: Slow down on ground approach

I'm looking for some math, nothing language dependent.
"Standard" gravity for an object in a game would go something like this:
if player.y > ground.y {
    player.velocity.y = player.velocity.y - gravity
}
In the little simulation I'm implementing I would actually like the gravity to weaken, and the velocity to slow, as the player approaches the ground.
I.e.: when the object is 100 m above the ground, it should fall faster than when it's 1 m above the ground. It should land like a feather, in a way.
I imagine the gravity needs to be some kind of function of the distance between the object and the ground.
I've been searching around Google but as I've not done math in a while and I don't know the name of what I'm looking for, I've not had much luck.
(Note: I considered posting on SE: Game Dev, but as it's more about math/programming than game design itself, I thought it would be more appropriate here.)
You're correct in your assumption that gravity needs to be a function. The following snippet (source: http://gafferongames.com/game-physics/integration-basics/) applies more gravity for higher values of x, where State is a struct for position and velocity in a single dimension.
float acceleration( const State &state )
{
    const float k = 10;
    const float b = 1;
    return -k * state.x - b * state.v;
}
You want the reverse of this, which you can achieve by changing the value of b based on distance to the ground, or applying negative acceleration after some threshold.
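For instance, a minimal sketch of the first option (hypothetical constants; reusing the State struct above, with x as the height above the ground and v as the vertical velocity): gravity fades out and damping fades in as the ground approaches, so the object settles in softly:
float acceleration( const State &state )
{
    const float g = 9.8f;          // full-strength gravity far from the ground
    const float falloff = 100.0f;  // height (m) at which gravity reaches full strength
    const float damping = 2.0f;    // braking term, strongest near the ground
    float scale = state.x / falloff;
    if (scale > 1.0f) scale = 1.0f;
    if (scale < 0.0f) scale = 0.0f;
    return -g * scale - damping * state.v * (1.0f - scale);
}
At 100 m this reduces to plain gravity; near the ground, the gravity term vanishes and the damping term bleeds off the remaining velocity.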

"Mathematical state" with functional languages?

I've read some of the discussions here, as well as followed links to other explanations, but I'm still not able to understand the mathematical connection between "changing state" and "not changing state" as it pertains to the functional programming versus non-FP debate. As I understand it, the basic argument goes back to the pure math definition of a function, whereby a function maps each domain member to only one range member. This is then compared to a computer code function: given certain input, it will always produce the same output, i.e., not vary from use to use, i.e., the function's state, as in its domain-to-range mapping behavior, will not change.
Then it gets foggy in my mind. Here's an example. Let's say I want to display closed block-like polygons on an x-y field. In GIS software, I understand, everything is stored as directed, closed graphs, i.e., a square is four vectors, their heads and tails connected. The raw data representation is just the individual Cartesian start and end points of each vector. And of course, there might be a function in the software that "processes" all these coordinate sets. Good. But what about representing each polygon in a mathematical way, e.g., a rectangle in the positive-x, negative-y quadrant might be:
Z = {(x,y) | 3 <= x <= 5, -2 <= y <= -1}
So we'd have many Z-like functions, each one expressing an individual polygon -- and not being a whiz with my matrix math, maybe these "functions" could then be represented as matrices... but I digress.
So with the usual raw vector-data method, I've got one function in my code that "changes state" as it processes each set of coordinates and then draws each polygon (and then deals with polygons changing), while the one-and-only-one-Z-like-function-per-polygon method would seem to hold to the "don't change state" rule exactly. Right? Or am I way off here? It seems like the old-fashioned one-function-processing-raw-coordinate-data approach doesn't violate the domain-range purity law either. I'm confused....
Part of my inspiration came from reading about a new idea of image processing where instead of slamming racks of pixels, each "frame" would be represented by one big function capable of "gnu-plotting" the whole image, edges, colors, gradients, etc. Is this germane? I guess I'm trying to fathom why I would want to represent, say, a street map of polygons (e.g. city blocks) one way or the other. I keep hearing functional language advocates dance around the idea that a mathematical function is pure and safe and good and ultimately Utopian, while the non-FP software function is some sort of sloppy kludge holding us back from Borg-like bliss.
But even more confusing is memory management vis-a-vis FP versus non-FP. What I keep hearing (e.g. parallel programming) is that FP isn't changing a "memory state" as much as, say, a C/C++ program does. Is this like the Google File System where literally everything is just sitting out there in a virtual memory pool, rather than being data moved in and out of databases and memory locations? Somehow all these things are related. Therefore, it seems like the perfect FP program is just one single function (possibly made up of many sub-functions) doing one single task -- although a quick glance at any elisp code seems to be a study of programming schizophrenia on this count.
Referential transparency in programming (and mathematics, logic, etc.) is the principle that the meaning or value of an expression can be determined without needing any non-local context, and that the value of an expression doesn't change. Code like
int x = 0;
int nextX() {
    return x++;
}
violates referential transparency in that nextX() will at one moment return 32 and at the next invocation return 33, and there is no way to determine, based only on local analysis, what nextX() will return at any given location. It is easy in many cases to turn a non-referentially transparent procedure into a referentially transparent function by adding an argument to the procedure. For instance, in the example just given, the addition of a parameter currentX makes nextX referentially transparent:
int nextX( int currentX ) {
    return currentX + 1;
}
This does require, of course, that every time nextX is called, the previous value is available.
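For instance (a trivial illustration of threading the value through by hand):
int x0 = 0;
int x1 = nextX( x0 );  // 1
int x2 = nextX( x1 );  // 2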
For procedures whose entire purpose is to modify state (e.g., the state of the screen), this doesn't make as much sense. For instance, while we could write a method print which is referentially transparent in one sense:
int print( int x ) {
    printf( "%d", x );
    return x;
}
there's still a sort of problem in that the state of the system is modified. Methods that ask about the state of the screen will have different results before and after a call to print, for instance. To make these kinds of procedures referentially transparent, they can be augmented with an argument representing the state of the system. For instance:
// print x to screen, and return the new screen that results
Screen print( int x, Screen screen ) {
    ...
}
// return the contents of screen
ScreenContents returnContentsOfScreen( Screen screen ) {
    ...
}
Now we have referential transparency, though at the expense of having to pass Screen objects around. For instance:
Screen screen0 = getInitialScreen();
Screen screen1 = print( 2, screen0 );
Screen screen2 = print( 3, screen1 );
...
This probably feels like overkill for working with IO, since the intent is, after all, to modify some state (namely, the screen, or filesystem, or …). Most programming languages, as a result, don't make IO methods referentially transparent. Some, like Haskell, however, do. Since doing it as just shown is rather cumbersome, these languages typically have some syntax to make things a bit cleaner. In Haskell, this is accomplished by Monads and do notation (which is really out of scope for this answer). If you're interested in how the Monad concept is used to achieve this, you might be interested in this article: You Could Have Invented Monads! (And Maybe You Already Have.)

Determine true north

I am working on an Arduino device I am building.
I have bought a GPS module and a tilt-sensing compass with an accelerometer.
I wish to determine true north so that I can always point an object towards the sun.
Basically I want the device to always find true north wherever it is.
The GPS will give a position, and the compass will find magnetic north. I guess true north can be obtained while the device is moving, written to RAM, and then retrieved for use when the device is stationary.
But how?
Are you trying to get the most sun for your rotational solar panel? If so, then you can get away with rough position setting between East and West according to your clock (you can improve this by taking the long/lat position into account to calculate sunrise and sunset times; a sketch of this appears after the code below). You will need a lot of astronomy calculations if you want to control both azimuth and elevation precisely. Arduino does not support double, and with single precision you will not have very accurate results (enough for a solar panel tracker, but not enough for a telescope tracking some sky object). My advice would be either to investigate the topic thoroughly, or to take a look at some open-source astronomy software and extract the needed calculations from the source (if licence terms permit). Just to give you a hint, this is a small extract from the PilotLogic TMoon component that ships in the CodeTyphon/Lazarus/FPC installation package:
procedure Sun_Position_Horizontal(date: TDateTime; longitude, latitude: extended; var elevation, azimuth: extended);
var
  pos1: T_Coord;
begin
  pos1 := sun_coordinate(date);
  calc_horizontal(pos1, date, longitude, latitude);
end;

function sun_coordinate(date: TDateTime): t_coord;
var
  l, b, r: extended;
  lambda, t: extended;
begin
  earth_coord(date, l, b, r);
  (* convert earth coordinate to sun coordinate *)
  l := l + 180;
  b := -b;
  (* conversion to FK5 *)
  t := (julian_date(date) - 2451545.0) / 365250.0 * 10;
  lambda := l + (-1.397 - 0.00031 * t) * t;
  l := l - 0.09033 / 3600;
  b := b + 0.03916 / 3600 * (cos_d(lambda) - sin_d(lambda));
  (* aberration *)
  l := l - 20.4898 / 3600 / r;
  (* correction of nutation - is done inside calc_geocentric *)
  { calc_epsilon_phi(date,delta_phi,epsilon); }
  { l := l+delta_phi; }
  (* fill result and convert to geocentric *)
  result.longitude := put_in_360(l);
  result.latitude := b;
  result.radius := r * AU;
  calc_geocentric(result, date);
end;

procedure calc_horizontal(var coord: t_coord; date: TDateTime; longitude, latitude: extended);
var
  h: extended;
begin
  h := put_in_360(star_time(date) - coord.rektaszension - longitude);
  coord.azimuth := arctan2_d(sin_d(h), cos_d(h) * sin_d(latitude) -
    tan_d(coord.declination) * cos_d(latitude));
  coord.elevation := arcsin_d(sin_d(latitude) * sin_d(coord.declination) +
    cos_d(latitude) * cos_d(coord.declination) * cos_d(h));
end;
If your device did not move after installation (which, after rereading your question, is not the case, so you can ignore the rest of this message), then your longitude and latitude would be fixed: you would know them at compile time, or you could enter them manually when the device is first installed. That way GPS would not be needed. You could also find North once at installation time, so you wouldn't need the compass either.
A compass will go haywire when it gets near magnetic material and will not be practical at all. You can calculate the azimuth and elevation of the sun with respect to your location. It is more practical to use digital encoders in your system and make calculated incremental movements. A calibrate button on the Arduino could be used to normalize the parameters using standard tools. For fine-tuning, manual buttons could be provided for up and down movements.
