I am attempting to use output from an accelerometer moving back and forth on a single axis to calculate its current position.
I have tried using Euler integrations, but the velocity and position errors become too large too quickly. After some reading, I am wondering whether RK4 could be applied to this problem to minimise the error?
Cheers,
A.
There is usually some drift in accelerometers. You need a measurement of the actual position to remove static errors.
RK4 might help.
Are you also measuring position, and that's where your assessment of accumulated error comes from?
It might also be due to time increments being too large. Perhaps you've exceeded a stability limit for your system. A finer time step and an implicit integration method might help with those errors.
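For reference, here is a minimal sketch of what a fixed-step double integration of raw samples could look like, using the trapezoidal rule (plain RK4 is awkward with sampled data because it wants acceleration values between samples). The names and the constant sample interval dt are my own assumptions:

#include <cstddef>
#include <vector>

// Velocity and position recovered by double integration.
struct State { double velocity = 0.0; double position = 0.0; };

// Fixed-step trapezoidal integration of discrete accelerometer samples.
// dt is the sample interval, assumed constant.
State integrate(const std::vector<double>& accel, double dt)
{
    State s;
    for (std::size_t i = 1; i < accel.size(); ++i)
    {
        const double vPrev = s.velocity;
        // Average the acceleration across the interval (trapezoidal rule).
        s.velocity += 0.5 * (accel[i - 1] + accel[i]) * dt;
        // Apply the same rule to velocity to update the position.
        s.position += 0.5 * (vPrev + s.velocity) * dt;
    }
    return s;
}

This only reduces integration error; it does nothing about sensor bias, which still needs an external position reference as noted above.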
Well, I want to ask whether the ADXL345 can be used to detect an earthquake occurrence based on its magnitude/intensity level. For more context, I want to use an accelerometer to create a device that can detect the intensity/magnitude level of an earthquake.
I have absolutely no experience in this field, but it looks useful and fascinating.
Questions are:
Is this device able to detect medium-scale earthquakes?
If yes, has anybody done it and is willing to share their experience?
If no to the previous, is there any guide that explains the algorithms, calculations and mechanical plans?
That sensor is not suitable. It has 13-bit resolution at ±16 g full range, i.e. a 32 g span over 2^13 = 8192 codes, which gives a sensitivity of roughly 0.004 g per LSB. In order to detect an earthquake directly below you, you need to resolve approximately a few milli-g (e.g. see here), and even less for earthquakes with an epicentre elsewhere.
You want a sensor that is more sensitive by a factor of roughly 100, and probably with more resolution (a better ADC) too.
(And you should have been able to do this quick google-search analysis yourself ;) )
An accelerometer reading tells you nothing about the actual magnitude of the quake itself; it tells you the size of the quake at your location. Combining location and amplitude will give you a 'weighted' measurement, but that's still useless without a calibration curve. Without knowing what acceleration, at a certain distance, corresponds to what magnitude, you will be unable to tell what the magnitude is.
You can certainly conclude that your measured earthquake has a median amplitude of, say, 2000% of a non-earthquake reading, but you won't be able to turn it into a Richter measurement. To do that, you'd need to take some data during earthquakes of known magnitude and then work out how acceleration, distance and magnitude are related for your device. Alternatively, you could use a scale like the Shindo (just Google it).
OK, so a laser on Earth hits a mirror on the Moon and bounces back. On the ATmega128 microcontroller, we use TIMER1 to capture the clock ticks when the laser shot out and the clock ticks when it returned, subtract, and get a "distance" in clock ticks (16 MHz clock on the ATmega128).
We are supposed to determine how different this measured distance can be from the actual distance and what can cause the difference, as well as compute the maximum error for each legal prescaler of TIMER1.
Looking at TIMER1's registers and the input-capture information in the ATmega128 datasheet, I cannot find any kind of percentage error for the input capture. This seems like a conceptual question, yet we are supposed to pull values out of the air and calculate something?
My question, for anyone who knows the ATmega128: what values are being referred to when determining the error of a distance read from timer ticks? My only guess is that the error grows as you use higher and higher prescalers, because you lose precision as the prescaler gets larger. But again, that is a conceptual answer, and I don't understand how I would calculate anything.
The counters/prescalers can be assumed to be perfect and will not cause any loss of resolution.
Your original clock source will be the predominant source of errors. If you are using an external clock with a crystal, these are usually good to 50 ppm (parts per million) or better. If you are using an internal clock, the error is much higher (1% is not unreasonable for some microcontrollers).
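Separately, for the "max error per legal prescaler" part of the question, here is my own back-of-the-envelope arithmetic (not values from the datasheet): one timer tick spans prescaler/16 MHz, the capture can be off by up to one tick, and the round trip halves the resulting distance error:

#include <cstdio>

int main()
{
    const double F_CPU = 16e6;      // 16 MHz ATmega128 system clock
    const double C = 299792458.0;   // speed of light in m/s
    const int prescalers[] = { 1, 8, 64, 256, 1024 }; // legal TIMER1 prescalers

    for (int n : prescalers)
    {
        const double tick = n / F_CPU;   // seconds per timer tick
        // Up to one tick of quantisation error in the round-trip time;
        // halve the distance because the light covers it twice.
        const double maxError = C * tick / 2.0;
        printf("prescaler %4d: up to %.1f m of distance error\n", n, maxError);
    }
    return 0;
}

At prescaler 1 that works out to roughly 9 m of quantisation error, growing in proportion to the prescaler.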
The whole thing gets tricky if you remember your general relativity (you do have a PhD in physics, right?). The Earth's rotation and gravity come into play with respect to the speed of light and distance...
In a previous question, I had asked: "Why can't I simply negate the source time-domain amplitude values to produce a destructive noise signal?"
One of the posters said that while simply producing an inverse-polarity (negated) signal will work in theory, in practice it is not possible.
So I am asking: what is the fundamental approach (in a sort of semi-technical way) to active noise cancellation?
Secondly, why is most of the literature on this topic in the frequency domain?
It's rather simple.
By the time you send your inverted signal, the noise has already been heard.
You need to look at what frequencies are being generated, and then produce the appropriate inverted signals of those to cancel them out.
Noise cancellation is prediction. Your algorithm has to predict what the sound of the noise will be at some time in the future (that time given by the system and audio time latencies), and then predict what signal will produce the opposite sound at that same point in the future (which your system will distort and delay, so you have to figure in the opposite distortion and delay).
You might be able to use several successive FFTs to determine which frequencies in the noise are not changing, and assume or calculate some probability that they will continue for a short time into the future.
If you know the frequency response curve of the speaker, you might be able to figure out the frequency amplitudes of a signal needed to match some predicted noise spectrum. The phase angle of a sinusoid will change with time. If you know the time delay of your output signal, you might be able to calculate the phase of a sinusoid at some point in the future. If you have a predicted phase of a particular frequency of noise at some time and location, you can add π to that phase angle to estimate the noise-cancelling signal.
If you don't know the frequency response and delay of your system, then you won't know what frequencies, amplitudes or phases of signal to create for cancellation. You might well end up amplifying the noise instead of cancelling it.
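To make the phase arithmetic above concrete, here is a minimal sketch. The names are hypothetical, and it assumes you already have a predicted frequency, amplitude and phase for one noise component, plus a known output latency:

#include <cmath>

// Sample of the cancelling signal to emit *now* so that, after `latency`
// seconds of system delay, it arrives in anti-phase with the predicted noise.
double cancelSample(double freq, double amplitude, double phase,
                    double now, double latency)
{
    const double pi = 3.14159265358979323846;
    // Advance the predicted noise phase to the moment the output will
    // actually be heard, then add pi to invert it.
    const double futurePhase = phase + 2.0 * pi * freq * (now + latency);
    return amplitude * std::sin(futurePhase + pi);
}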
It seems that what's missing is the propagation delay required to intercept and negate a signal. The KISS rule will eventually prove this true. The FFT is a complex calculation, and each N-point iteration introduces error because of the time required to process the signal. To cancel a sound wave, it must be intercepted in advance, processed and inverted. Then the time constant of the transducer must be considered. In my experience, what works is a microphone near the source of the "noise", connected by wire to an amplification device, with a transducer near the location where the noise is to be cancelled.
The basic idea of ANC is to find repetitive sound and play the opposite of it. If the repetitive sound continues to play, we'll be able to cancel it. That goes in direct contradiction to the other answers, but I'll clarify.
Playing the opposite sound means playing it again with a precise power and delay, possibly inverting the waveform. The delay itself varies for each frequency. For example, for a 20 Hz sound we have to replay the inverted sound at a precise multiple of 1/20 = 0.05 s. For 23 Hz, the delay has to be a multiple of 1/23 ≈ 0.0435 s.
Since any waveform can be produced by a sum of sinusoids, one way of doing it would be to only worry about the N biggest sinusoids, measured in power (the square of the amplitude). To find the sinusoids' frequencies and powers we use the Fourier transform, typically via the FFT algorithm.
If we take, for example, N = 8, it means we are trying to eliminate the 8 most powerful wave components. For each of them we store:
wave's amplitude
wave's offset, taking the computer's clock as a base.
Then we constantly play 8 sinusoids, each at the correct power and with the correct delay. The hard part is what happens next: we need to keep listening in order to adapt, but now we are listening to the environment sound plus our own sound. This algorithm is harder to implement, but conceptually it is simple, and one could figure out how to do it on their own.
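To sketch the analysis step in code (a naive DFT is used purely to keep the example self-contained; a real implementation would use an FFT library, and all the names here are mine):

#include <algorithm>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

struct Component { double freq, amplitude, phase; };

// Pick the n most powerful frequency components of one frame of noise.
std::vector<Component> topComponents(const std::vector<double>& frame,
                                     double sampleRate, std::size_t n)
{
    const double pi = 3.14159265358979323846;
    const std::size_t len = frame.size();
    std::vector<Component> comps;

    // Naive DFT over the positive-frequency bins.
    for (std::size_t k = 1; k < len / 2; ++k)
    {
        std::complex<double> bin(0.0, 0.0);
        for (std::size_t i = 0; i < len; ++i)
            bin += frame[i] * std::polar(1.0, -2.0 * pi * k * i / len);
        comps.push_back({ k * sampleRate / len,
                          2.0 * std::abs(bin) / len,  // bin -> real amplitude
                          std::arg(bin) });
    }

    // Keep the n components with the greatest power (amplitude squared).
    std::sort(comps.begin(), comps.end(),
              [](const Component& a, const Component& b)
              { return a.amplitude > b.amplitude; });
    if (comps.size() > n) comps.resize(n);
    return comps;
}

The stored frequency, amplitude and phase of each component are exactly what you need to synthesise the delayed, inverted replay described above.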
So, contrary to what the other answers say, managing the time delay is critical. It is not possible to create an ANC system without doing it. If you only care about the frequency domain, the only thing you could possibly do is filter those frequencies; in an ANC system, that makes no sense.
I am working on a structure from motion application and I am tracking a number of markers placed on the object to determine the rigid structure of the object.
The app is essentially using standard Levenberg-Marquardt optimization over multiple camera views and minimizing the differences between expected marker points and the marker points obtained in 2D from each view.
For each marker point and each view the following function is minimised:
double diff = calculatedXY[index] - observedXY[index];
where the calculatedXY value depends on a number of unknown parameters that need to be found via the optimization, and observedXY is the marker point's position in 2D. In total I have (marker points × views) functions like the one above that I am aiming to minimise.
I have coded up a simulation of the camera seeing all the marker points, but I was wondering how to handle the cases where, while running, points are not visible due to lighting, occlusion, or simply not being in the camera's view. In the real app I will be using a webcam to view the object, so it is likely that not all markers will be visible at once, and depending on how robust my computer vision algorithm is, I might not be able to detect a marker all the time.
I thought of setting the diff value to 0 (so its squared difference contributes 0) in the case where the marker point could not be observed; could this skew the results, however?
Another thing I noticed is that the algorithm does not do as well when presented with too many views; it is more likely to settle on a bad solution. Is this a common problem with bundle adjustment, due to the increased likelihood of hitting a local minimum when presented with too many views?
It is common practice to just leave out the terms corresponding to missing markers. I.e., don't try to minimise calculatedXY - observedXY if there is no observedXY term. There's no need to set anything to zero; you shouldn't even be considering that term in the first place, just skip it (though in your code, setting its error to zero is effectively equivalent).
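In code terms, that just means building the residual list from whatever was actually observed, e.g. (hypothetical names and types):

#include <cstddef>
#include <vector>

struct Residual { double dx, dy; };

// Build residuals only for markers actually observed in this view;
// unobserved markers contribute no term at all.
std::vector<Residual> buildResiduals(const std::vector<bool>& observed,
                                     const std::vector<double>& calcX,
                                     const std::vector<double>& calcY,
                                     const std::vector<double>& obsX,
                                     const std::vector<double>& obsY)
{
    std::vector<Residual> r;
    for (std::size_t i = 0; i < observed.size(); ++i)
        if (observed[i])   // skip missing markers entirely
            r.push_back({ calcX[i] - obsX[i], calcY[i] - obsY[i] });
    return r;
}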
Bundle adjustment can fail terribly if you simply throw a large number of observations at it. Build your solution up incrementally: solve with a few views first, then keep adding.
You might also want to try some kind of 'robust' approach. Instead of using plain least squares, use a loss function. These allow your optimisation to survive even if there are a handful of observations that are incorrect. You can still do this in a Levenberg-Marquardt framework; you just need to incorporate the derivative of your loss function into the Jacobian.
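As an illustration, here is what one common choice, the Huber loss, looks like; the threshold k is something you would tune, and the names are mine:

#include <cmath>

// Huber loss: quadratic for small residuals, linear for large ones,
// so outliers no longer dominate the sum of squares.
double huber(double r, double k)
{
    const double a = std::abs(r);
    return a <= k ? 0.5 * r * r : k * (a - 0.5 * k);
}

// Its derivative, which is what feeds into the Jacobian terms.
double huberDeriv(double r, double k)
{
    return std::abs(r) <= k ? r : k * (r > 0 ? 1.0 : -1.0);
}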
After working for a while developing games, I've been exposed to both variable frame rates (where you work out how much time has passed since the last tick and update actor movement accordingly) and fixed frame rates (where you work out how much time has passed and choose either to tick a fixed amount of time or sleep until the next window comes).
Which method works best for specific situations? Please consider:
Catering to different system specifications;
Ease of development/maintenance;
Ease of porting;
Final performance.
I lean towards a variable framerate model, but internally some systems are ticked on a fixed timestep. This is quite easy to do by using a time accumulator. Physics is one system which is best run on a fixed timestep, and ticked multiple times per frame if necessary to avoid a loss in stability and keep the simulation smooth.
A bit of code to demonstrate the use of an accumulator:
const float STEP = 1.f / 60.f; // fixed simulation timestep, in seconds
float accumulator = 0.f;

void Update(float delta)
{
    // Bank the wall-clock time that has passed since the last update...
    accumulator += delta;

    // ...and spend it in fixed-size steps, so the simulation advances by
    // the same amount per tick no matter how fast frames are coming in.
    while(accumulator >= STEP)
    {
        Simulate(STEP);
        accumulator -= STEP;
    }
}
This is not perfect by any means but presents the basic idea - there are many ways to improve on this model. Obviously there are issues to be sorted out when the input framerate is obscenely slow. However, the big advantage is that no matter how fast or slow the delta is, the simulation is moving at a smooth rate in "player time" - which is where any problems will be perceived by the user.
Generally I don't get into the graphics & audio side of things, but I don't think they are affected as much as Physics, input and network code.
It seems that most 3D developers prefer a variable FPS: the Quake, Doom and Unreal engines all scale up and down based on system performance.
At the very least, you have to compensate for frame rates that are too fast (unlike '80s games, which ran way too fast on '90s hardware).
Your main loop should be parameterized by the timestep anyhow, and as long as it's not too long, a decent integrator like RK4 should handle the physics smoothly. Some types of animation (keyframed sprites) could be a pain to parameterize. Network code will need to be smart as well, to keep players with faster machines from shooting too many bullets, for example, but this kind of throttling needs to be done for latency compensation anyhow (and the animation parameterization would help hide network lag too).
The timing code will need to be modified for each platform, but it's a small, localized change (though some systems make extremely accurate timing difficult; Windows, Mac and Linux seem OK).
Variable frame rates allow for maximum performance. Fixed frame rates allow for consistent performance but will never reach the maximum on all systems (and that seems to be a show-stopper for any serious game).
If you are writing a networked 3D game where performance matters, I'd have to say: bite the bullet and implement variable frame rates.
If it's a 2D puzzle game, you can probably get away with a fixed frame rate, maybe slightly parameterized for super-slow computers and next year's models.
One option that I, as a user, would like to see more often is dynamically changing the level of detail (in the broad sense, not just the technical sense) when framerates vary outside of a certain envelope. If you are rendering at 5 FPS, turn off bump-mapping. If you are rendering at 90 FPS, increase the bells and whistles a bit, and give the user some prettier images to spend their CPU and GPU on.
If done right, the user should get the best experience out of the game without having to go into the settings screen and tweak things themselves, and you, as a level designer, would have to worry less about keeping the polygon count the same across different scenes.
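A minimal sketch of that idea (the thresholds, the detail scale, and the names are all made up):

int detailLevel = 5; // 0 = lowest detail, 10 = all the bells and whistles

// Call once per frame with the measured frame rate; nudge the detail
// level whenever we leave the target envelope.
void AdjustDetail(float fps)
{
    if (fps < 30.0f && detailLevel > 0)
        --detailLevel;   // struggling: drop bump-mapping and friends
    else if (fps > 90.0f && detailLevel < 10)
        ++detailLevel;   // headroom to spare: prettier images
}

Some hysteresis (the gap between 30 and 90 here) is what keeps the level from oscillating every other frame.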
Of course, I say this as a user of games, and not a serious one at that -- I've never attempted to write a nontrivial game.
The main problem I've encountered with variable-length frame times is floating-point precision, and variable frame times can surprise you in how they bite you.
If, for example, you're adding frame time * velocity to a position, and the frame time gets very small while the position is largish, your objects can slow down or stop moving, because the whole delta is lost to precision. You can compensate for this using a separate error accumulator, but it's a pain.
Having fixed frame times (or at least a lower bound on frame length) allows you to control how much FP error you need to take into account.
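A tiny demonstration of the effect (the numbers are contrived, but the rounding is real):

#include <cstdio>

int main()
{
    // A largish position and a very small per-frame delta.
    float position = 100000.0f;
    const float velocity = 1.0f;     // units per second
    const float frameTime = 0.001f;  // a 1 ms frame

    const float before = position;
    position += velocity * frameTime;

    // At this magnitude a float's spacing (ULP) is about 0.0078, so the
    // 0.001 increment is under half a ULP and rounds away entirely.
    printf("moved: %s\n", position == before ? "no" : "yes"); // prints "no"
    return 0;
}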
My experience is fairly limited to somewhat simple games (developed with SDL and C++), but I have found it quite easy to just implement a static frame rate. Are you working with 2D or 3D games? I would assume that more complex 3D environments would benefit more from a variable frame rate, and that the difficulty would be greater.