Balancing quadcopter using Arduino

I am doing a project on a self-balancing quadcopter with autonomous control. I am using an Arduino Mega 2560 and an MPU6050. I have obtained the roll and pitch angles from the MPU6050 without the help of the DMP, and applied a complementary filter to remove the noise due to vibration.
I have also configured the BLDC motors and can run them with a FlySky transmitter and receiver, with the help of Arduino interrupts. For balancing I am focusing on only one axis (roll) for now, and I have built a balancing stand that allows free movement of the roll axis by the motors.
For the control part, I am implementing a PID algorithm. I started with only the Kp term, hoping to get a rough balance first and then move on to the Ki and Kd terms. But unfortunately, with Kp alone the quadcopter oscillates aggressively and never settles.
Some of my queries are:
Is a single PID loop enough, or do we have to add another?
What tuning method can I use to find Kp, Ki and Kd, other than trial and error?
I programmed my ESCs for 1000 to 2000 microseconds. My PID input angles will be within the range ±180°. Can I directly set the PID output limits to -1000 to 1000, to -180 to 180, or to some other range?
The code can be read at https://github.com/antonkewin/quadcopter/blob/master/quadpid.ino

Since it's not provided, I am assuming that:
The loop time is at most 4 ms (the shorter, the better).
The sensor noise has been reduced to an acceptable level.
The MPU-6050's gyro and accelerometer data are being combined to get angles in degrees.
If the above points are not taken care of, it will not balance itself.
Initially you can get away without tuning Ki, so let's focus on Kp and Kd:
Keep increasing Kp until the quad starts to oscillate rapidly, then set Kp to half of that value.
With Kp set, start experimenting with Kd values; Kd will try to dampen the overshoots caused by Kp.
Fiddle with these two values and tune them for perfection.
Note: the more accurate your gyro data is, the higher you can set your Kp.
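As a concrete starting point, here is a minimal single-axis (roll) PD loop sketch. It is an illustration under assumptions, not the code from the linked repository: readRollAngle(), the pin numbers, the gains, the base throttle and the 4 ms loop period are all placeholders to replace with your own values.

// Minimal single-axis (roll) PD loop for a quadcopter on a balancing stand.
// Assumptions: readRollAngle() returns the complementary-filtered roll angle
// in degrees; pins, gains and base throttle are example values only.
#include <Servo.h>

Servo escLeft, escRight;

const float Kp = 1.0f;            // placeholder gains, to be tuned on the stand
const float Kd = 0.3f;
const int   baseThrottle = 1300;  // microseconds, just enough to lift on the stand
const float dt = 0.004f;          // 4 ms loop period (250 Hz)

float prevError = 0.0f;

float readRollAngle() {
  // Replace with the complementary-filter output from the MPU6050.
  return 0.0f;
}

void setup() {
  escLeft.attach(9);              // example pins
  escRight.attach(10);
}

void loop() {
  unsigned long start = micros();

  float error  = 0.0f - readRollAngle();   // setpoint: level (0 degrees)
  float dError = (error - prevError) / dt;
  prevError = error;

  float output = Kp * error + Kd * dError;
  output = constrain(output, -400, 400);   // limit the correction, not the full ESC range

  escLeft.writeMicroseconds(constrain(baseThrottle + (int)output, 1000, 2000));
  escRight.writeMicroseconds(constrain(baseThrottle - (int)output, 1000, 2000));

  while (micros() - start < 4000) { }      // hold the 4 ms loop period
}

Regarding the output-limit question: one common convention (not the only one) is to keep the PID output in microseconds and constrain it to a correction band around a base throttle, as above, rather than mapping it onto ±1000 or ±180; the width of that band is itself a tuning choice.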

Related

Arduino accelerometer MPU-6050

This might sound like a very silly question, so I apologize if this is something very simple, but I just cannot get my head around it. I am trying to understand what the data provides in terms of real-time information, for example with the MPU-6050:
Gyroscope: a 16-bit data register with a range of (0 <-> 65535)
There is a selection of ranges (±250, ±500, ±1000, and ±2000°/sec)
If the range is set to ±250°/sec, is the reading 360/65535 = 0.0054 resolution?
What does °/sec mean? If the sensor does not move and reads zero and is then turned quickly, does that mean it will read the angle at the set range? For example, if the range was set to ±2000°/sec and it was moved 200°, would the reading move from 0 to (2/65535 * 200) and keep sending this value once the sensor stopped moving?
Accelerometer: a 16-bit data register with a range of (0 <-> 65535)
There is a selection of ranges (±2g, ±4g, ±8g and ±16g)
If the sensor is not moving, completely flat, will the reading be 0?
If the sensor is shocked at 2g, will the max reading be 65535 (if set to ±2g, with a resolution of 2/65535)?
If the sensor is shocked at 16g, will the max reading be 65535 (if set to ±16g, with a resolution of 16/65535)?
There are two main documents regarding the MPU6050, and those are the datasheet and the register map.
The gyro measurements are stored in the GYRO_XOUT, GYRO_YOUT and GYRO_ZOUT parameters, as you can see in the register map document, page 31. Each parameter is stored as a two's-complement signed 16-bit value split into two 8-bit registers, GYRO_xOUT_H and GYRO_xOUT_L.
On the same page you can see the sensitivity for each full-scale range. For example, if your FSR is ±250°/sec and you want to measure 1°/sec, the GYRO_xOUT parameter should read 131 counts.
The accelerometer-related registers can be seen in the same document, page 29. The idea is the same: two 8-bit registers form a two's-complement signed 16-bit value, and there are sensitivity values for each FSR.
Regarding your question in the comments: if you rotate the device 125° in one second, at constant rotation speed, you should read 16375 in the rotation registers during the movement. This value comes from 131 counts/(°/sec) * 125°/sec = 16375 counts.
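To make the scaling concrete, here is a small standalone sketch (mine, not from the answer above): it combines the _H/_L register bytes into a signed value and divides by the sensitivity taken from the register map tables.

#include <cstdint>
#include <cstdio>

// Combine the _H and _L register bytes into one two's-complement signed 16-bit value.
int16_t combineBytes(uint8_t high, uint8_t low) {
    return static_cast<int16_t>((static_cast<uint16_t>(high) << 8) | low);
}

// Gyro: raw counts -> degrees per second.
// Sensitivities from the register map: +/-250 deg/s -> 131, +/-500 -> 65.5,
// +/-1000 -> 32.8, +/-2000 -> 16.4 LSB per (deg/s).
float gyroToDps(int16_t raw, float lsbPerDps) { return raw / lsbPerDps; }

// Accelerometer: raw counts -> g.
// Sensitivities: +/-2g -> 16384, +/-4g -> 8192, +/-8g -> 4096, +/-16g -> 2048 LSB per g.
float accelToG(int16_t raw, float lsbPerG) { return raw / lsbPerG; }

int main() {
    int16_t gyroRaw = 16375;  // the example value from the answer above
    std::printf("%.1f deg/s\n", gyroToDps(gyroRaw, 131.0f));                // 125.0 deg/s at +/-250 FSR
    std::printf("%.2f g\n", accelToG(combineBytes(0x40, 0x00), 16384.0f));  // 0x4000 counts -> 1.00 g
    return 0;
}

Note also that the registers are signed (-32768..32767, not 0..65535), and that a sensor lying flat and still reads about +1 g on the Z axis (gravity), not 0.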

Generate signals with 0.1 Hz resolution using AD9833 via Arduino Uno

I would like to generate a frequency with a resolution of 0.1 Hz, over the range 0.0 up to 1000.0 Hz (for example 23.1 Hz, 100.5 Hz and 999.7 Hz). I have found that the AD9833 can generate the kind of signal I need, but its notes are a bit confusing to me.
The specification can be obtained HERE.
I would appreciate your help with the Arduino code: say, to generate a 123.4 Hz signal entered via the Serial Monitor from the Arduino, and have it displayed as such on the oscilloscope?
Thank you.
Looking at the notes, it appears that programming this chip will be non-trivial. If you don't require frequencies all the way down to 0 Hz, this job can be done much more easily with a standard Windows sound card. (Sound cards are AC-coupled, so won't go below a few Hz.) For one example, my Daqarta software can generate frequencies (with any waveform you want) at a resolution better than 0.001 Hz. The maximum frequency will be a bit less than half the sound card's sample rate... typically 20 kHz at the default 48000 Hz sample rate.
You don't have to buy Daqarta to get this capability; the Generator function will continue to work after the trial period... free, forever.
UPDATE: You don't mention what sort of waveforms you need, but note that if you can use square waves you may be able to do the whole job with the Arduino alone. The idea is to set up a timer to produce interrupts at some desired sample rate. On each interrupt you add a step value to an accumulator, and send the MSB of the accumulator to an output pin. You control the output frequency by changing the step value. This is essentially a 1-bit version of the phase accumulator approach used by the AD9833 (and by the Daqarta Generator). The frequency resolution is controlled by the sample rate and the size of the accumulator. You can easily get much better than 0.1 Hz resolution.
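To make the phase-accumulator idea more tangible, here is a rough sketch under assumptions (Arduino Uno at 16 MHz, Timer1 in CTC mode at a 20 kHz sample rate, output on pin 8); it is an illustration of the technique, not a tested signal generator.

// 1-bit phase-accumulator square-wave generator (the idea described above).
// Assumptions: Arduino Uno (16 MHz), output on pin 8 (PORTB bit 0),
// Timer1 compare interrupt at 20 kHz. Pin and sample rate are example choices.
const uint32_t SAMPLE_RATE = 20000UL;   // interrupts per second
volatile uint32_t phaseAccum = 0;       // 32-bit phase accumulator
volatile uint32_t phaseStep  = 0;       // added on every interrupt

// step = freq * 2^32 / SAMPLE_RATE; the accumulator's frequency resolution
// is SAMPLE_RATE / 2^32, far finer than the 0.1 Hz asked for.
void setFrequency(float freqHz) {
  uint32_t step = (uint32_t)(freqHz * 4294967296.0 / SAMPLE_RATE);
  noInterrupts();                       // a 32-bit write is not atomic on AVR
  phaseStep = step;
  interrupts();
}

ISR(TIMER1_COMPA_vect) {
  phaseAccum += phaseStep;
  // The MSB of the accumulator drives the output pin: a 50% duty square wave.
  if (phaseAccum & 0x80000000UL) PORTB |=  _BV(PB0);
  else                           PORTB &= ~_BV(PB0);
}

void setup() {
  pinMode(8, OUTPUT);
  Serial.begin(9600);
  noInterrupts();
  TCCR1A = 0;
  TCCR1B = _BV(WGM12) | _BV(CS10);      // CTC mode, no prescaler
  OCR1A  = F_CPU / SAMPLE_RATE - 1;     // 799 -> 20 kHz at 16 MHz
  TIMSK1 = _BV(OCIE1A);
  interrupts();
  setFrequency(123.4);                  // example start frequency
}

void loop() {
  if (Serial.available()) {             // type a new frequency into the Serial Monitor
    float f = Serial.parseFloat();
    if (f > 0) setFrequency(f);
  }
}

The output edges are quantized to the 20 kHz sample clock, so there is up to one sample period of jitter on each edge; that is the practical trade-off of the 1-bit approach.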

I don't really understand FFT and sample rates

I'm really confused over here. I am an AI programmer working on a game that is designed to detect beats in songs, and some more. I have no previous knowledge about audio and am just reading through whatever material I can find. While I got the FFT working, I simply don't understand the way samples are translated into different frequencies.
Question 1: what does each frequency stand for? With the algorithm I have, I can transform, for example, 1024 samples into 512 outputs. Are they a description of the strength of each part of the spectrum at the current moment? It doesn't really make sense to me, since what I remember is that there are 20,000 Hz in a 44.1 kHz audio recording. So how do 512 spectrum samples explain what is happening in that moment?
Question 2: from what I read, a sample is a number that represents the sound wave at that moment. However, I also read that by squaring both the left channel and the right channel and adding them together you get the current power level. Both of these seem incoherent to my understanding, and I am really baffled, so please explain away.
DFT output
The output is a complex representation (Re, Im) of the phasor of each basis function (usually a sine wave). The first item is the DC offset, so skip it. All the others are multiples of the same fundamental frequency (sampling rate / N). The output is symmetric (if the input is real only), so use just the first half of the results. Often the power spectrum is used:
Amplitude = sqrt(Re^2 + Im^2)
which is the amplitude of the basis function. If the phase is needed, then
phase = atan2(Im, Re)
Beware: DFT results depend strongly on the input signal's shape, frequency and phase shift relative to your basis functions. That causes the output to oscillate around the correct value and to produce wide peaks instead of sharp ones for frequencies that do not land exactly on a bin, not to mention aliasing.
frequencies
If your sampling rate is 44100 Hz, the maximum output frequency is half of that, which means the highest frequency present in the data is 22050 Hz. The DFT output (once you ignore the mirrored second half of the results) does not contain this frequency, so:
for 4 samples, the DFT output frequencies are { -, 11025 } Hz
for 8 samples, the frequencies are { -, 5512.5, 11025, 16537.5 } Hz
The output frequency is linear in the result's index, so if you have N = 512 samples:
do the DFT on them
take the first N/2 = 256 results
the i-th result represents the frequency f = i * samplerate / N Hz
where i = { 1, ..., (N/2)-1 }, skipping i = 0 (see the sketch right after this list)
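As a concrete (and deliberately naive, O(N^2)) illustration of that bin-to-frequency mapping; a real application would use an FFT library, and the 1 kHz test tone, N and the sample rate below are just example choices:

// Naive DFT magnitude spectrum of a real-valued block of N samples,
// showing the i * samplerate / N mapping described above.
#include <cmath>
#include <cstdio>
#include <vector>

static const double PI = 3.14159265358979323846;

int main() {
    const int    N          = 512;
    const double sampleRate = 44100.0;

    // Test input: a 1 kHz sine (nearest bin is about 1000 * N / sampleRate = 11.6).
    std::vector<double> x(N);
    for (int n = 0; n < N; ++n)
        x[n] = std::sin(2.0 * PI * 1000.0 * n / sampleRate);

    // Bins 1 .. N/2-1: skip i = 0 (DC); the upper half mirrors the lower half.
    for (int i = 1; i < N / 2; ++i) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; ++n) {
            double angle = 2.0 * PI * i * n / N;
            re += x[n] * std::cos(angle);
            im -= x[n] * std::sin(angle);
        }
        double amplitude = std::sqrt(re * re + im * im) * 2.0 / N;  // scaled magnitude
        double freqHz    = i * sampleRate / N;                      // bin centre frequency
        if (amplitude > 0.1)                                        // print only the strong bins
            std::printf("bin %3d: %8.1f Hz  amplitude %.2f\n", i, freqHz, amplitude);
    }
    return 0;
}

Because 1 kHz does not land exactly on a bin (1000 * 512 / 44100 is about 11.6), the energy leaks into several neighbouring bins; that is the wide-peak effect mentioned above.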
[The original answer showed screenshots here: one of the author's utility apps combining a 2-channel sound generator (top left), a 2-channel oscilloscope (top right) and a 2-channel spectral analyzer switched to a linear frequency scale (bottom), plus images comparing DCT and DFT output and showing how the DFT output depends on the input signal frequency and on aliasing by the sampling rate.]
more channels
Summing the power of the channels is safer. If you just add the channel samples, you could miss some data. For example, let the left channel play a 1 kHz sine wave and the right channel its exact opposite: if you just sum them, the result is zero, yet you can still hear the sound (unless you are exactly in the middle between the speakers). If you analyse each channel independently, you need to compute a DFT per channel; but if you use a power sum of the channels (or an abs sum), you can obtain the frequencies for all channels at once, though of course you then need to scale the amplitudes.
[Notes]
The bigger the N, the nicer the result (fewer aliasing artifacts, and bins closer to the maximum frequency). For detecting specific frequencies, FIR filter detectors are more precise and faster.
I strongly recommend reading up on the DFT and all its sublinks, and also on plotting real-time data on a (Qwt) oscilloscope.

Clear out the ECG signal

I have a raw ECG signal that contains complex values (real and imaginary) over time. Now I have to clean that signal up: remove the noise and flatten the signal.
The algorithm I know of for this is the fast Fourier transform (FFT), but it doesn't flatten the signal; it produces a correct Fourier transform, but the signal is not flat, it has high values on both ends. How can I do that?
I am doing this in Java, but I am not asking for code, just for a hint about the idea, or an algorithm.
Thanks!
The FFT doesn't flatten a signal; it translates the signal from the time domain to the frequency domain. If your signal is purely real, the FT is symmetric, so you see similar high peaks at both ends: this is the very low frequency part of your signal.
To filter the signal, you can execute the FT, apply some function to the result of the transform (for example, attenuate the high and the very low frequency regions), and execute the backward FT to return to the time domain.
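For illustration only (the question mentions Java, but the idea is language-independent; this sketch is C++ with a naive O(N^2) DFT, and the cutoff frequencies and test signal are arbitrary example values, not recommended ECG settings):

// Filter-in-the-frequency-domain sketch: forward transform, zero the unwanted
// bands, inverse transform. A real implementation would use an FFT library.
#include <algorithm>
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

static const double PI = 3.14159265358979323846;

// Forward (dir = -1) or inverse (dir = +1) DFT of a complex vector.
std::vector<std::complex<double>> dft(const std::vector<std::complex<double>>& in, int dir) {
    const int N = (int)in.size();
    std::vector<std::complex<double>> out(N);
    for (int k = 0; k < N; ++k) {
        std::complex<double> sum(0.0, 0.0);
        for (int n = 0; n < N; ++n)
            sum += in[n] * std::polar(1.0, dir * 2.0 * PI * k * n / N);
        out[k] = (dir > 0) ? sum / (double)N : sum;   // scale on the inverse
    }
    return out;
}

// Zero the bins below lowHz (baseline wander) and above highHz (noise).
// min(k, N-k) maps each bin, including the mirrored upper half, to its frequency.
std::vector<std::complex<double>> bandpass(const std::vector<std::complex<double>>& x,
                                           double sampleRate, double lowHz, double highHz) {
    const int N = (int)x.size();
    std::vector<std::complex<double>> spec = dft(x, -1);
    for (int k = 0; k < N; ++k) {
        double f = std::min(k, N - k) * sampleRate / N;
        if (f < lowHz || f > highHz) spec[k] = 0.0;
    }
    return dft(spec, +1);
}

int main() {
    const int    N  = 512;
    const double fs = 250.0;                               // assumed sampling rate
    std::vector<std::complex<double>> sig(N);
    for (int n = 0; n < N; ++n) {
        double t = n / fs;
        sig[n] = 5.0                                       // constant offset
               + 2.0 * std::sin(2.0 * PI * 0.3 * t)        // slow baseline wander
               + 1.0 * std::sin(2.0 * PI * 10.0 * t);      // the component we want to keep
    }
    std::vector<std::complex<double>> clean = bandpass(sig, fs, 1.0, 40.0);
    std::printf("sample 100: before %.2f  after %.2f\n",
                sig[100].real(), clean[100].real());
    return 0;
}

The hard (brick-wall) cutoffs are only for demonstration; smoother weighting of the bins, or a time-domain filter, is usually preferable in practice.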

Detecting and fixing overflows

We have a particle detector hard-wired to use 16-bit and 8-bit buffers. Every now and then, there are certain [predicted] peaks of particle flux passing through it; that's okay. What is not okay is that these fluxes usually reach magnitudes above the capacity of the buffers to store them, so overflows occur. On a chart, it looks like the flux suddenly drops and begins growing again. Can you propose a [mostly] accurate method of detecting the data points suffering from an overflow?
P.S. The detector is physically inaccessible, so fixing it the 'right way' by replacing the buffers doesn't seem to be an option.
Update: Some clarifications, as requested. We use Python at the data processing facility; the technology used in the detector itself is pretty obscure (treat it as if it were developed by a completely unrelated third party), but it is definitely unsophisticated, i.e. not running a 'real' OS, just some low-level code to record the detector readings and to respond to remote commands like power cycle. Memory corruption and other problems are not an issue right now. The overflows occur simply because the designer of the detector used 16-bit buffers for counting the particle flux, and sometimes the flux exceeds 65535 particles per second.
Update 2: As several readers have pointed out, the intended solution would have something to do with analyzing the flux profile to detect sharp declines (e.g. by an order of magnitude) and to separate them from normal fluctuations. Another question arises: can restorations (points where the original flux drops back below the overflow level) be detected by simply running the correction program against the flux profile reversed along the x axis?
#include <cstdint>
#include <vector>

// Interpret a wrapped 16-bit difference as a signed step in [-32768, 32767],
// without relying on implementation-defined narrowing behaviour.
static int32_t sign_extend(uint16_t d)
{
    return (d < 32768u) ? (int32_t)d : (int32_t)d - 65536;
}

// Unwrap a wrapped 16-bit counter series into 32-bit values.
std::vector<int32_t> unwrap(const std::vector<int16_t>& x)
{
    std::vector<int32_t> y(x.size());
    if (x.empty()) return y;
    y[0] = x[0];
    for (size_t i = 1; i < x.size(); ++i)
    {
        // Works fine as long as the "real" values of x[i] and x[i-1] differ by
        // less than 1/2 of the span of allowable values of x's storage type
        // (= 32768 in the case of int16). Otherwise there is ambiguity.
        y[i] = y[i - 1] + sign_extend((uint16_t)(x[i] - x[i - 1]));
    }
    return y;
}
// Exercise for the reader: write similar code to unwrap 8-bit arrays
// to a 16-bit or 32-bit array.
Of course, ideally you'd fix the detector software to max out at 65535 to prevent wraparound of the sort that is causing your grief. I understand that this isn't always possible, or at least isn't always possible to do quickly.
When the particle flux exceeds 65535, does it do so quickly, or does the flux gradually increase and then gradually decrease? This makes a difference in what algorithm you might use to detect this. For example, if the flux goes up slowly enough:
true flux    measurement
 5000          5000
10000         10000
30000         30000
50000         50000
70000          4464
90000         24464
60000         60000
30000         30000
10000         10000
then you'll tend to see a large negative drop at the times when you have overflowed, a much larger negative drop than you'll see at any other time. This can serve as a signal that you've overflowed. To find the end of the overflow period, you could look for a large jump back up to a value not too far from 65535.
All of this depends on the maximum true flux that is possible and on how rapidly the flux rises and falls. For example, is it possible to get more than 128k counts in one measurement period? Is it possible for one measurement to be 5000 and the next measurement to be 50000? If the data is not well behaved enough, you may be able to make only a statistical judgment about when you have overflowed.
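Building on the unwrap() sketch above, here is one way (illustrative only, and in C++ rather than the Python used at the processing facility) to flag which readings are suspected overflows. It assumes the true flux never changes by 32768 or more between consecutive measurements (the same condition noted in the unwrap code); restorations (Update 2) are handled automatically, since the flag clears once the reconstructed value falls back under 65536.

#include <cstdint>
#include <cstdio>
#include <vector>

// Flag which raw readings have probably overflowed: reconstruct the flux with
// sign-extended differences (as in unwrap() above), then mark every sample
// whose reconstructed value exceeds 65535.
std::vector<bool> flagOverflows(const std::vector<uint16_t>& raw) {
    std::vector<bool> overflowed(raw.size(), false);
    if (raw.empty()) return overflowed;
    int64_t reconstructed = raw[0];
    for (size_t i = 1; i < raw.size(); ++i) {
        uint16_t diff = (uint16_t)(raw[i] - raw[i - 1]);            // wraps modulo 65536
        int step = (diff < 32768u) ? (int)diff : (int)diff - 65536; // sign-extend the step
        reconstructed += step;
        overflowed[i] = reconstructed > 65535;
    }
    return overflowed;
}

int main() {
    // The measurement column from the example above (true flux peaked at 90000).
    std::vector<uint16_t> raw = {5000, 10000, 30000, 50000, 4464, 24464, 60000, 30000, 10000};
    std::vector<bool> bad = flagOverflows(raw);
    for (size_t i = 0; i < raw.size(); ++i)
        std::printf("%5d %s\n", (int)raw[i], bad[i] ? "overflowed" : "ok");
    return 0;
}

As the other answers point out, this only works when the data is well behaved: if the flux can jump by half the counter span or more between measurements, the reconstruction becomes ambiguous and only a statistical judgment is possible.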
Your question needs to provide more information about your implementation - what language/framework are you using?
Data overflows in software (which is what I think you're talking about) are bad practice and should be avoided. What you are seeing (strange data output) is only one possible side effect of data overflows, and it is merely the tip of the iceberg of the sorts of issues you can run into.
You could quite easily experience more serious issues like memory corruption, which can cause programs to crash loudly, or worse, obscurely.
Is there any validation you can do to prevent the overflows from occurring in the first place?
I really don't think you can fix it without fixing the underlying buffers. How are you supposed to tell the difference between the sequences of values (0, 1, 2, 1, 0) and (0, 1, 65538, 1, 0)? You can't.
How about using an HMM where the hidden state is whether you are in an overflow and the emissions are observed particle flux?
The tricky part would be coming up with the probability models for the transitions (which will basically encode the time-scale of peaks) and for the emissions (which you can build if you know how the flux behaves and how overflow affects measurement). These are domain-specific questions, so there probably aren't ready-made solutions out there.
But once you have the model, everything else (fitting your data, quantifying uncertainty, simulation, etc.) is routine.
You can only do this if the actual jumps between successive values are much smaller than 65536. Otherwise, an overflow-induced valley artifact is indistinguishable from a real valley; you can only guess. You can try to match overflows to their corresponding restorations by analysing the signal simultaneously from the left and from the right (assuming there is a recognizable baseline).
Other than that, all you can do is adjust your experiment by repeating it with different original particle flows, so that the real valleys do not move but the artifact ones move to the point of overflow.
