50 Hz sine lookup table using PWM - ATmega

Can someone please guide me on how to generate a lookup table for a 50 Hz sine wave using PWM on an ATmega32?
This is what I have done so far, but I'm confused about what to do next:
50 Hz sine wave, so a 20 ms period
256 samples (number of divisions)
step I need to advance = 20 ms / 256 = 0.078125 ms (period of the PWM signal)
angle step = 360 / 256 = 1.40625 degrees
The amplitude of the sine wave should be 1.

I think you are starting from the wrong end and getting lost because of that.
Ignoring the lookup table, can you generate a 50 Hz PWM signal using explicit calls to sin()? Good. Now the lookup table saves you those expensive sin() calls. sin is a periodic function, so you need to store only one period (*). How many points that is depends on your digital output frequency, which is going to be much higher than 50 Hz. How much higher defines the number of points in your lookup table.
To fill your lookup table, you don't send the result of your PWM function to the digital output; you write it to the lookup table instead. To use the lookup table, you don't call the expensive function; you just copy the table entries straight to your output.
(*) There is one common optimization: a sine function has a lot of repetition. You don't need to store the second half, which is just the inverse of the first half, and the second quarter is just the first quarter mirrored.
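For illustration, here is a minimal sketch of that approach, assuming 8-bit PWM on Timer0 of an ATmega32 (OCR0 as the duty-cycle register, OC0 on PB3) and a 256-entry table; the table size, register choice, clock, and crude busy-wait update are my assumptions, not the only way to do it.
#define F_CPU 8000000UL        /* assumed clock; adjust to your board */
#include <avr/io.h>
#include <util/delay.h>
#include <stdint.h>
#include <math.h>

#define TABLE_SIZE 256

/* One full sine period as PWM duty cycles (0..255), centred on 128. */
static uint8_t sine_table[TABLE_SIZE];

/* Fill the table once at startup - the only place sin() is called. */
static void build_sine_table(void)
{
    for (int i = 0; i < TABLE_SIZE; i++) {
        double angle = 2.0 * 3.14159265358979 * i / TABLE_SIZE;  /* 360/256 deg per step */
        sine_table[i] = (uint8_t)(127.5 + 127.5 * sin(angle));
    }
}

/* Output the next table entry; with 256 steps per 20 ms sine period each
   step lasts 20 ms / 256 = 78.125 us. */
static void next_sample(void)
{
    static uint8_t index = 0;
    OCR0 = sine_table[index++];          /* uint8_t index wraps after 255 */
}

int main(void)
{
    build_sine_table();
    /* Timer0 in fast PWM mode, non-inverting output on OC0, no prescaler. */
    TCCR0 = (1 << WGM00) | (1 << WGM01) | (1 << COM01) | (1 << CS00);
    DDRB |= (1 << PB3);                  /* OC0 pin on the ATmega32 */
    for (;;) {
        next_sample();
        _delay_us(78);                   /* crude stand-in for a timer interrupt */
    }
}
With the quarter-wave optimization from (*) you would store only TABLE_SIZE/4 entries and reconstruct the other three quarters by mirroring the index and reflecting the value around the midpoint.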

Related

Trying to understand how a series of arrays is being mapped in an AVR routine

I'm trying to port an Arduino AVR routine either to ESP32/8266 or a Python script and would appreciate help understanding how to crack the operation of this program. I'm self-teaching and am only looking to get something that works - pretty isn't required. This is a hobby and I am the only audience. The basic operations are understood (99% certain ;)) - there are 4 arrays in total: Equilarg and Nodefactor contain 10 rows of 37 values; startSecs contains the epoch time values for the start of each year (2022-2032); and Speed contains 37 values.
I believe each row of the Equilarg and Nodefactor arrays corresponds to the year, but I can't work out how the specific element is pulled from each of the three 37-element arrays.
Here is the operating code:
// currentTide calculation function, takes a DateTime object from the real time clock.
float TideCalc::currentTide(DateTime now)
{
    // Calculate difference between current year and starting year.
    YearIndx = now.year() - startYear;
    // Calculate hours since start of current year. Hours = seconds / 3600
    currHours = (now.unixtime() - pgm_read_dword_near(&startSecs[YearIndx])) / float(3600);
    // Shift currHours to Greenwich Mean Time
    currHours = currHours + adjustGMT;
    // **************Calculate current tide height**********
    // Initialize results variable, units of feet.
    // (This is 3.35 if it matters to understanding how it works)
    tideHeight = Datum;
    for (int harms = 0; harms < 37; harms++)
    {
        // Step through each harmonic constituent, extract the relevant
        // values of Nodefactor, Amplitude, Equilibrium argument, Kappa
        // and Speed.
        currNodefactor = pgm_read_float_near(&Nodefactor[YearIndx][harms]);
        currAmp = pgm_read_float_near(&Amp[harms]);
        currEquilarg = pgm_read_float_near(&Equilarg[YearIndx][harms]);
        currKappa = pgm_read_float_near(&Kappa[harms]);
        currSpeed = pgm_read_float_near(&Speed[harms]);
        // Calculate each component of the overall tide equation.
        // The currHours value is assumed to be in hours from the start of
        // the year, in the Greenwich Mean Time zone, not the local time zone.
        tideHeight = tideHeight + currNodefactor * currAmp
            * cos((currSpeed * currHours + currEquilarg - currKappa) * DEG_TO_RAD);
    }
    //***************End of Tide Height calculation**********
    // Output of tideCalc is the tide height, units of feet.
    return tideHeight;
}
I've made several attempts to reverse engineer it by running the code on an AVR board, trapping the input values, and then working backwards, but I'm just not seeing a basic part or two. In this instance, knowing "kinda" what's going on falls short.
pgm_read_float_near reads a float value from flash memory. It needs the address of the value; we give it the address of the indexed value when we use &Amp[harms], for example. Both Nodefactor and Equilarg are doubly indexed - first by year and then by harmonic - while the other three are indexed by the harmonic alone.
It sounds like this is a Fourier series curve fit for the tide height. So they're summing up a series of cosine values, each with different amplitude, frequency, and phase.
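To map that onto the code: each pass through the loop adds one such cosine term, so in the notation of the variable names above the function is effectively evaluating
tideHeight = Datum + sum over the 37 constituents of
    Nodefactor[year][i] * Amp[i] * cos( (Speed[i]*currHours + Equilarg[year][i] - Kappa[i]) * DEG_TO_RAD )
so Nodefactor*Amp is the amplitude of each term, Speed is its frequency (in degrees per hour, since currHours is in hours and the sum is converted with DEG_TO_RAD), and Equilarg - Kappa is its phase.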
As @Tom suggests, copy the code to a plain C file, write a little dummy pgm_read_float_near routine, and see how it works on your PC. Many times I write and debug algorithms on a "big" computer, and later plop the code into the Arduino.
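A minimal sketch of what that could look like on a PC - the table sizes and numbers below are placeholders (1 year and 3 harmonics instead of 10 and 37), not the real tide constants:
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define DEG_TO_RAD 0.017453292519943295   /* pi / 180 */

/* On the PC there is no flash/RAM split, so the pgm_read_* dummies
   simply dereference the pointer they are given. */
static float pgm_read_float_near(const float *addr) { return *addr; }
static uint32_t pgm_read_dword_near(const uint32_t *addr) { return *addr; }  /* for startSecs in the full routine */

/* Placeholder tables: 1 year x 3 harmonics instead of 10 x 37. */
static const float Nodefactor[1][3] = {{1.0f, 0.9f, 1.1f}};
static const float Equilarg[1][3]   = {{10.0f, 200.0f, 330.0f}};
static const float Amp[3]   = {1.2f, 0.4f, 0.1f};
static const float Kappa[3] = {100.0f, 250.0f, 30.0f};
static const float Speed[3] = {28.98f, 30.0f, 15.04f};   /* degrees per hour */

int main(void)
{
    float tideHeight = 3.35f;          /* Datum */
    float currHours  = 12.5f;          /* hours since start of year, GMT */
    int YearIndx = 0;
    for (int harms = 0; harms < 3; harms++) {
        float currNodefactor = pgm_read_float_near(&Nodefactor[YearIndx][harms]);
        float currAmp        = pgm_read_float_near(&Amp[harms]);
        float currEquilarg   = pgm_read_float_near(&Equilarg[YearIndx][harms]);
        float currKappa      = pgm_read_float_near(&Kappa[harms]);
        float currSpeed      = pgm_read_float_near(&Speed[harms]);
        tideHeight += currNodefactor * currAmp
            * cosf((currSpeed * currHours + currEquilarg - currKappa) * DEG_TO_RAD);
    }
    printf("tide height: %f ft\n", tideHeight);
    return 0;
}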
Have fun!

Equation to distribute items unevenly

I'm writing a JavaScript program that sends a list of MIDI signals over a specified period of time.
If the signals are sent evenly, it's easy to determine how long to wait in between each signal: it's just the total duration divided by the number of signals.
However, I want to be able to offer a setting where the signals aren't sent equally: either the signals are sent with increasing or decreasing speed. In either case, the number of signals and the total amount of time remain the same.
Here's a picture to visualize what I'm talking about
Is there a simple logarithmic/exponential function where I can compute what these values are? I'm especially hoping it might be possible to use the same equation for both, simply changing a variable.
Thank you so much!
Since you do not give any method to get a pulse value, from the previous value or any other way, I assume we are free to come up with our own.
In both of your cases, it looks like you start with an initial time interval: let's call it a. Then the next interval is that value multiplied by a constant ratio: let's call it r. In the first decreasing case, your value of r is between zero and one (it looks like around 0.6), while in the second case your value of r is greater than one (around 1.6). So your time intervals, in Python notation, are
a, a*r, a*r**2, a*r**3, ...
Then the time of each signal is the sum of a geometric series,
a * (1 - r**n) / (1 - r)
where n is the number of the pulse (1 for the first, 2 for the second, etc.). That formula is valid if r is not one; if r is one, the sequence is just an evenly spaced signal and the nth signal is given at time
a * n
This is not a "fixed result" since you have two degrees of freedom--you can choose values of a and of r.
If you want to spread the signals more evenly, just bring r closer to one. A value of one is perfectly even, a value farther from one is more clumped at one end. One disadvantage of this method is that if the signal intervals are decreasing then the signals will completely stop at some point, namely at
a / (1 - r)
If you have signals already sent or received and you want to find the value of r, just find the time intervals between three consecutive signals; r is the time interval between the 2nd and 3rd signals divided by the time interval between the 1st and 2nd signals. If you want to see whether this model is a good one for a given set of signals, check the value of r at multiple signals - if the value of r is nearly constant, then this is a good model.
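If it helps, here is a small sketch of that idea in C (the question is in JavaScript, but the arithmetic is the same). It picks r, then derives the first interval a so that n signals exactly fill the requested total time - that last step is my addition, but it follows directly from the geometric-sum formula above.
#include <stdio.h>
#include <math.h>

/* Fill times[0..n-1] with the send time of each of n signals spread over
   totalTime, with consecutive intervals scaled by the ratio r (r = 1 gives
   even spacing). The first interval is a = totalTime*(1-r)/(1-r^n). */
static void signal_times(double totalTime, int n, double r, double *times)
{
    double a = (fabs(r - 1.0) < 1e-12)
             ? totalTime / n                              /* even spacing */
             : totalTime * (1.0 - r) / (1.0 - pow(r, n)); /* first interval */
    double t = 0.0;
    double interval = a;
    for (int i = 0; i < n; i++) {
        t += interval;          /* time of the (i+1)-th signal */
        times[i] = t;
        interval *= r;          /* next interval is r times the previous one */
    }
}

int main(void)
{
    double times[8];
    signal_times(10.0, 8, 0.6, times);   /* 8 signals over 10 s, speeding up */
    for (int i = 0; i < 8; i++)
        printf("signal %d at %.3f s\n", i + 1, times[i]);
    return 0;
}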

I don't really understand FFT and sample rates

I'm really confused over here. I am an AI programmer working on a game that is designed to detect beats in songs, and some more. I have no previous knowledge about audio and am just reading through whatever material I can find. While I got the FFT working and such, I simply don't understand the way samples are transformed into different frequencies.
Question 1: what does each frequency stand for? For the algorithm I've got, I can transform, for example, 1024 samples into 512 outcomes. So are they a description of the strength of each part of the spectrum at the current second? It doesn't really make sense, since what I remember is that there are frequencies up to 20,000 Hz in a 44.1 kHz audio recording. So how do 512 spectrum samples explain what is happening in that moment?
Question 2: from what I read, a sample is a number that represents the sound wave at that moment. However, I read that by squaring both the left channel and the right channel and adding them together you get the current power level. Both of these seem incoherent to my understanding, and I am really baffled, so please explain away.
DFT output
The output is a complex representation of phasors (Re, Im, frequency) of the basis functions (usually sine waves). The first item is the DC offset, so skip it. All the others are multiples of the same fundamental frequency (sampling rate / N). The output is symmetric (if the input is real only), so use just the first half of the results. Often the power spectrum is used:
Amplitude=sqrt(Re^2+Im^2)
which is the amplitude of the basis function. If the phase is needed, then
phase=atan2(Im,Re)
Beware: DFT results are strongly dependent on the input signal's shape, frequency, and phase shift relative to your basis functions. That causes the output to oscillate around the correct value and produce wide peaks instead of sharp ones for single frequencies, not to mention aliasing.
frequencies
If you have a 44100 Hz sampling rate, then the max output frequency is half of it, meaning the biggest frequency present in the data is 22050 Hz. The first half of the DFT output, however, does not contain this frequency, so if you ignore the mirrored second half of the results then:
for 4 samples the DFT output frequencies are { -, 11025 } Hz
for 8 samples the frequencies are { -, 5512.5, 11025, 16537.5 } Hz
The output frequency is linear in its index from the start, so if you have N = 512 samples:
do the DFT on them
take the first N/2 = 256 results
the i-th result represents frequency f = i*samplerate/N Hz
where i = { 1, ..., (N/2)-1 } ... skipping i = 0 (the DC offset); see the sketch below
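To make that bookkeeping concrete, a small sketch, assuming the DFT output is already sitting in separate Re[] and Im[] arrays (the array names and the sample rate are placeholders):
#include <stdio.h>
#include <math.h>

#define N 512                    /* number of input samples / DFT size */
#define SAMPLE_RATE 44100.0

/* Assumed to hold the N complex DFT outputs (filled by your FFT routine). */
double Re[N], Im[N];

void print_power_spectrum(void)
{
    for (int i = 1; i < N / 2; i++) {               /* skip i = 0, the DC offset */
        double freq      = i * SAMPLE_RATE / N;     /* centre frequency of bin i */
        double amplitude = sqrt(Re[i] * Re[i] + Im[i] * Im[i]);
        double phase     = atan2(Im[i], Re[i]);
        printf("%8.1f Hz  amp %10.4f  phase %+6.3f rad\n", freq, amplitude, phase);
    }
}

int main(void)
{
    /* In a real program Re[]/Im[] would come from the FFT of your audio frame;
       they are left at zero here just so the sketch compiles and runs. */
    print_power_spectrum();
    return 0;
}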
The image shows one of my utility apps put together from:
2-channel sound generator (top left)
2-channel oscilloscope (top right)
2-channel spectral analyzer (bottom) ... switched to a linear frequency scale to make obvious what I mean in the text above
Zoom the image to see the settings ... I made it as close to the real devices as I could.
Here is a DCT and DFT comparison:
Here is the DFT output's dependency on the input signal frequency (aliasing by the sampling rate):
more channels
Summing the power of the channels is safer. If you just add the channels, you could miss some data. For example, suppose the left channel is playing a 1 kHz sine wave and the right channel the exact opposite; if you just sum them, the result is zero, but you can still hear the sound (if you are not exactly in the middle between the speakers). If you analyze each channel independently, you need to calculate a DFT for each channel, but if you use a power sum of the channels (or an abs sum), then you can obtain the frequencies for all channels at once; of course, you need to scale the amplitudes.
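A tiny sketch of just that pre-DFT combining step, as I understand the suggestion, with made-up array names:
#include <math.h>

#define N 1024   /* samples per analysis frame */

/* left[] and right[] are the per-channel time-domain samples; mono[] is what
   would be fed to a single DFT. A plain sum can cancel out-of-phase content,
   while the power (or abs) sum cannot. */
void combine_channels(const double left[N], const double right[N], double mono[N])
{
    for (int i = 0; i < N; i++)
        mono[i] = sqrt(left[i] * left[i] + right[i] * right[i]);
        /* or: mono[i] = fabs(left[i]) + fabs(right[i]); */
}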
[Notes]
The bigger N is, the nicer the result (fewer aliasing artifacts and closer to the max frequency). For detecting specific frequencies, FIR filter detectors are more precise and faster.
I strongly recommend reading DFT and all the sublinks there, and also this: plotting real time data on a (qwt) oscilloscope.

Clear out the ECG signal

I have a raw ECG signal that contains complex values (real and imaginary) over time. Now I have to clean that signal up, remove the noise, and flatten it.
The algorithm I know of to do this is the fast Fourier transform (FFT), but it doesn't flatten the signal; it generates a correct Fourier transform, but the result is not flat - it has high values on both sides. How can I do that?
I am doing this in Java, but I'm not asking for the code, just for a hint about the idea, or an algorithm.
Thanks!
The FFT doesn't flatten a signal; it translates the signal from the time domain to the frequency domain. If your signal is purely real, the FT is symmetric - so you see similar high peaks at both ends - and this is the very low frequency part of your signal.
To filter a signal, you can execute an FT, apply some function to the result of the transform - for example, attenuate the high and very low frequency regions - and execute a backward FT to return to the time domain.
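A rough, self-contained sketch of that filtering idea in C (a plain O(N^2) DFT stands in for whatever FFT library you would actually use, and the test signal and cutoff bins are made up):
#include <stdio.h>
#include <math.h>

#define N 256                 /* number of samples (placeholder) */
#define PI 3.14159265358979323846

/* Plain O(N^2) DFT / inverse DFT, just to keep the sketch self-contained. */
static void dft(const double inRe[N], const double inIm[N],
                double outRe[N], double outIm[N], int inverse)
{
    double sign = inverse ? 1.0 : -1.0;
    for (int k = 0; k < N; k++) {
        outRe[k] = outIm[k] = 0.0;
        for (int n = 0; n < N; n++) {
            double a = sign * 2.0 * PI * k * n / N;
            outRe[k] += inRe[n] * cos(a) - inIm[n] * sin(a);
            outIm[k] += inRe[n] * sin(a) + inIm[n] * cos(a);
        }
        if (inverse) { outRe[k] /= N; outIm[k] /= N; }
    }
}

int main(void)
{
    double sigRe[N], sigIm[N], specRe[N], specIm[N], outRe[N], outIm[N];

    /* Made-up test signal: slow baseline drift + a mid-frequency component
       of interest + fast noise. */
    for (int n = 0; n < N; n++) {
        sigRe[n] = 0.8 * sin(2 * PI * 1 * n / N)     /* baseline wander      */
                 + 1.0 * sin(2 * PI * 12 * n / N)    /* signal of interest   */
                 + 0.3 * sin(2 * PI * 100 * n / N);  /* high-frequency noise */
        sigIm[n] = 0.0;
    }

    dft(sigRe, sigIm, specRe, specIm, 0);            /* forward transform */

    /* Zero out the very low and very high frequency bins (and their mirror
       images), keeping only bins LOW..HIGH - the cutoffs are placeholders. */
    int LOW = 5, HIGH = 40;
    for (int k = 0; k < N; k++) {
        int f = (k <= N / 2) ? k : N - k;            /* bin distance from DC */
        if (f < LOW || f > HIGH) { specRe[k] = 0.0; specIm[k] = 0.0; }
    }

    dft(specRe, specIm, outRe, outIm, 1);            /* back to the time domain */

    for (int n = 0; n < 8; n++)
        printf("%d: %f -> %f\n", n, sigRe[n], outRe[n]);
    return 0;
}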

Converting Real and Imaginary FFT output to Frequency and Amplitude

I'm designing a real-time audio analyser to be embedded on an FPGA chip. The finished system will read in a live audio stream and output frequency and amplitude pairs for the X most prevalent frequencies.
I've managed to implement the FFT so far, but its current output is just the real and imaginary parts for each window, and what I want to know is: how do I convert this into the frequency and amplitude pairs?
I've been doing some reading on the FFT, and I see how they can be turned into a magnitude and phase relationship, but I need a format that someone without knowledge of complex mathematics could read!
Thanks
Thanks for these quick responses!
The output from the FFT I'm getting at the moment is a continuous stream of real and imaginary pairs. I'm not sure whether to break these up into packets of the same size as my input packets (64 values), and treat them as an array, or deal with them individually.
The sample rate I have no problem with. As I configured the FFT myself, I know that it's running off the global clock of 50 MHz. As for the array index (if the output is an array, of course...), I have no idea.
If we say that the output is a series of one-dimensional arrays of 64 complex values:
1) How do I find the array index [i]?
2) Will each array return a single frequency part, or a number of them?
Thank you so much for all your help! I'd be lost without it.
Well, the bad news is, there's no way around needing to understand complex numbers. The good news is, just because they're called complex numbers doesn't mean they're, y'know, complicated. So first, check out the wikipedia page, and for an audio application I'd say, read down to about section 3.2, maybe skipping the section on square roots: http://en.wikipedia.org/wiki/Complex_number
What that's telling you is that if you have a complex number, a + bi, you can picture it as living in the x,y plane at location (a,b). To get the magnitude and phase, all you have to do is find two quantities:
The distance from the origin of the plane, which is the magnitude, and
The angle from the x-axis, which is the phase.
The magnitude is simple enough: sqrt(a^2 + b^2).
The phase is equally simple: atan2(b,a).
The FFT result will give you an array of complex values. Twice the magnitude (the square root of the sum of the squared complex components) of each array element is an amplitude - the factor of two accounts for the energy that is split between a bin and its complex-conjugate mirror bin, and depending on your FFT's scaling convention you may also need to divide by the FFT length. Or take the log magnitude if you want a dB scale. The array index will give you the center of the frequency bin with that amplitude. You need to know the sample rate and length to get the frequency of each array element or bin.
f[i] = i * sampleRate / fftLength
for the first half of the array (the other half is just duplicate information in the form of complex conjugates for real audio input).
The frequency of each FFT result bin may be different from any actual spectral frequencies present in the audio signal, due to windowing or so-called spectral leakage. Look up frequency estimation methods for the details.
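A small sketch of that conversion in C, using the 64-point frames from the question; the sample rate here is a placeholder (it is the rate at which audio samples enter the FFT, not the 50 MHz FPGA clock), and the 2/N scaling assumes an unscaled FFT core:
#include <stdio.h>
#include <math.h>

#define FFT_LENGTH 64            /* size of each FFT frame (from the question) */
#define SAMPLE_RATE 48000.0      /* placeholder: rate at which samples enter the FFT */

/* Convert one frame of complex FFT output (re[], im[]) into
   (frequency, amplitude) pairs for the first half of the bins. */
void print_freq_amp(const double re[FFT_LENGTH], const double im[FFT_LENGTH])
{
    for (int i = 1; i < FFT_LENGTH / 2; i++) {       /* skip bin 0 (DC) */
        double freq = i * SAMPLE_RATE / FFT_LENGTH;  /* bin centre frequency */
        /* The 2/N scaling recovers the amplitude of a real input sinusoid;
           adjust if your FFT core already scales its output. */
        double amp  = 2.0 * sqrt(re[i] * re[i] + im[i] * im[i]) / FFT_LENGTH;
        printf("%8.1f Hz : %f\n", freq, amp);
    }
}

int main(void)
{
    double re[FFT_LENGTH] = {0}, im[FFT_LENGTH] = {0};
    re[4] = 16.0;                /* fake a peak in bin 4 just to have some output */
    print_freq_amp(re, im);
    return 0;
}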
